Initial commit
176  skills/draft-github-issues/SKILL.md  Normal file
@@ -0,0 +1,176 @@
---
name: draft-github-issues
description: Drafts GitHub issues as YAML files from plans or requirements. Use when converting plans to issue format or structuring multiple related issues with parent-child relationships. Needs git repository with remote (for repo detection) and optional plan file or verbal requirements. Trigger with phrases like 'draft issues [from file-path]', 'create issue draft', 'draft github issues for [description]'.
allowed-tools: "Read, Write, Edit, Task(analyst)"
---

Base directory for this skill: {baseDir}

## Workflow

**Draft Mode:** Generate YAML from plan/requirements → save to tmp/issues/
**Refine Mode:** Analyze and improve existing YAML draft

Publishing happens separately via `publish-github-issues` skill.

<drafting>

## Draft Issues from Requirements

**Input:** Plan file path, text description, or verbal requirement

**Process:**

1. Parse requirement into logical issues (pattern: data → logic → UI)
2. Determine issue set name from requirement (ask only if ambiguous)
3. Detect repository from git remote (ask if not found: format owner/repo)
4. Generate outcome-focused titles and acceptance criteria
5. Evaluate each issue for technical context (see analyst usage below)
6. Save YAML to `./tmp/issues/{name}-{timestamp}.yml`
7. Report which issues were enriched with analyst context

**Output:**

```
Draft saved: ./tmp/issues/user-search-20251102-143022.yml

Enriched 3 of 4 issues with technical context.

Next: Review file, then refine or publish using publish-github-issues skill.
```

</drafting>
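The draft filename convention from step 6 above can be sketched as a small helper; `build_draft_path` is an illustrative name, not part of the skill:

```python
from datetime import datetime
from pathlib import Path
from typing import Optional


def build_draft_path(issue_set_name: str, now: Optional[datetime] = None) -> Path:
    """Build a tmp/issues/{name}-{timestamp}.yml path for a draft."""
    now = now or datetime.now()
    # Timestamp matches the 20251102-143022 style shown in the output example.
    timestamp = now.strftime("%Y%m%d-%H%M%S")
    return Path("tmp/issues") / f"{issue_set_name}-{timestamp}.yml"
```

For example, `build_draft_path("user-search", datetime(2025, 11, 2, 14, 30, 22))` yields `tmp/issues/user-search-20251102-143022.yml`.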

<analyst_usage>

## When to Use Analyst Subagent

**Invoke analyst for:**

- Multiple systems/models (integration)
- Performance/scale requirements (>100 records, <200ms, etc.)
- Security keywords (auth, permissions, tenant, isolation)
- Background jobs, async processing, queues
- New functionality in unfamiliar domain

**Skip analyst for:**

- Standard CRUD (add field, basic form)
- UI-only changes (text, styling, layout)
- Copying existing pattern explicitly

**Analyst request:** "Provide technical breadcrumbs: primary domain, similar patterns, integration points, gotchas (3-5 bullets)"

**Technical Context Format in issue body:**

```markdown
### Technical Context
- Primary domain: [models/controllers]
- Similar pattern: [existing feature]
- Integration points: [connections]
- Consider: [gotcha/constraint]
```

</analyst_usage>

<refinement>

## Refine Draft (Optional)

**Input:** Path to draft YAML file

**Process:**

1. Read and parse YAML
2. Analyze each issue:
   - Titles outcome-focused (WHAT not HOW)?
   - Acceptance criteria specific and testable?
   - Parent-child relationships logical?
   - Labels appropriate?
   - Technical context present where valuable?
3. Apply improvements directly to file
4. Report changes made

**Output:**

```
Refined tmp/issues/user-search-20251102.yml

Changes:
- Issue #2: Changed title from "Implement SearchService" to "Enable search functionality"
- Issue #3: Added specific acceptance criteria for error handling
- Issue #4: Added technical context (was missing analyst breadcrumbs)

File updated.
```

</refinement>

<yaml_format>

## YAML Structure

See `{baseDir}/references/YAML-FORMAT.md` for the complete specification.

**Quick reference:**

- `repository` (required): owner/repo
- `defaults` (optional): labels, milestone
- `issues` (required): array with ref, title, body
- Per-issue optional: parent_ref, milestone, labels

</yaml_format>

## Examples

### Draft from Plan File

**User:** "Draft issues from docs/plans/paddle-integration.md"

```
Reading docs/plans/paddle-integration.md...
Analyzing requirements...
Invoking analyst for technical context (3 of 3 issues)...

Draft saved: tmp/issues/paddle-integration-20251105.yml

Enriched 3 of 3 issues with technical context.

Next: Review file, then publish using publish-github-issues skill.
```

### Draft from Verbal Requirements

**User:** "Draft issues for adding user authentication with OAuth providers"

```
Detecting repository: myorg/myapp
Generating issues...
Invoking analyst for security context...

Draft saved: tmp/issues/user-auth-20251105.yml

Enriched 2 of 3 issues with technical context.

Next: Review file, then publish using publish-github-issues skill.
```

### Refine Draft

**User:** "Refine tmp/issues/paddle-integration-20251105.yml"

```
Reading tmp/issues/paddle-integration-20251105.yml...
Analyzing structure and content...

Refined tmp/issues/paddle-integration-20251105.yml

Changes:
- Issue #2: Changed title to be more outcome-focused
- Issue #2: Added specific acceptance criteria for webhook events
- Issue #3: Added technical context about data migration risks

File updated.

Next: Review changes, then publish using publish-github-issues skill.
```
211  skills/draft-github-issues/references/YAML-FORMAT.md  Normal file
@@ -0,0 +1,211 @@

# YAML Format Specification

Complete reference for GitHub issue YAML files.

## Top-Level Structure

```yaml
repository: owner/repo # REQUIRED
project: 6 # OPTIONAL - GitHub project number

defaults: # OPTIONAL - Default values for all issues
  labels: [label1, label2]
  milestone: "Milestone Name"

issues: # REQUIRED - Array of issue definitions
  - ref: issue1
    title: "Issue Title"
    body: "Issue body"
  - ref: issue2
    # ... more issues
```

## Required Fields

### `repository`

- Format: `owner/repo` (e.g., `myorg/myapp`)
- The GitHub repository where issues will be created

### `issues`

- Array of issue objects
- At least one issue required

### Per-Issue Fields

Each issue requires:

**`ref`** (string)

- Unique identifier for this issue within the YAML file
- Used for parent-child linking via `parent_ref`
- Not sent to GitHub (internal reference only)
- Recommended: lowercase with hyphens (e.g., `parent-issue`, `login-feature`)

**`title`** (string)

- Issue title displayed on GitHub
- Keep concise (< 80 characters recommended)
- Should be outcome-focused (WHAT, not HOW)
- Examples:
  - ✅ "Enable search functionality"
  - ❌ "Implement SearchService class"

**`body`** (string, multiline)

- Issue description in Markdown
- Use `|` for multiline content
- Supports GitHub Flavored Markdown
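The required-field rules above can be checked mechanically. A minimal sketch of such a validator, operating on an already-parsed YAML document (a plain dict); the function name and error wording are illustrative, not part of the specification:

```python
def validate_issue_file(doc: dict) -> list:
    """Return a list of problems; an empty list means the checks above pass."""
    errors = []
    # repository: must be owner/repo with both halves non-empty.
    repo = doc.get("repository", "")
    if repo.count("/") != 1 or not all(repo.split("/")):
        errors.append("repository must be in owner/repo format")
    # issues: at least one issue object required.
    issues = doc.get("issues") or []
    if not issues:
        errors.append("issues must contain at least one issue")
    # Per-issue: ref, title, body required; refs must be unique for linking.
    seen_refs = set()
    for i, issue in enumerate(issues):
        for field in ("ref", "title", "body"):
            if not issue.get(field):
                errors.append(f"issue {i}: missing required field '{field}'")
        ref = issue.get("ref")
        if ref in seen_refs:
            errors.append(f"issue {i}: duplicate ref '{ref}'")
        if ref:
            seen_refs.add(ref)
    return errors
```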

## Optional Fields

### Top-Level Optional

**`project`** (integer)

- GitHub project number (not project name)
- All created issues added to this project

**`defaults`** (object)

- Default values applied to all issues
- Can be overridden per-issue
- Supported: `labels`, `milestone`

### Per-Issue Optional

**`parent_ref`** (string)

- Reference to parent issue's `ref`
- Creates parent-child relationship
- Parent issue must be defined BEFORE child in YAML

**`milestone`** (string)

- Milestone name (exact match required)
- Overrides default milestone if specified

**`labels`** (array of strings)

- Labels to apply
- Overrides default labels if specified
- Labels don't need to exist (GitHub auto-creates)
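The defaults/override rule and the parent-before-child ordering rule above can be sketched as two small helpers; the names are illustrative:

```python
def resolve_issue(doc: dict, issue: dict) -> dict:
    """Apply file-level defaults unless the issue overrides them."""
    defaults = doc.get("defaults", {})
    return {
        "labels": issue.get("labels", defaults.get("labels", [])),
        "milestone": issue.get("milestone", defaults.get("milestone")),
    }


def check_parent_order(issues: list) -> list:
    """parent_ref must point at a ref defined earlier in the file."""
    errors, seen = [], set()
    for issue in issues:
        parent = issue.get("parent_ref")
        if parent is not None and parent not in seen:
            errors.append(f"{issue.get('ref')}: parent_ref '{parent}' not defined earlier")
        seen.add(issue.get("ref"))
    return errors
```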

## Issue Body Format

Standard format for issue bodies:

```markdown
## Overview
Brief description of what needs to be accomplished (the outcome).

## Acceptance Criteria
- [ ] Specific, testable criterion
- [ ] Another testable criterion
- [ ] Final criterion

## Technical Context
- Primary domain: [models/controllers involved]
- Similar pattern: [existing feature to reference]
- Integration points: [connections to other systems]
- Consider: [gotchas, constraints, or performance notes]
```

**Notes:**

- **Overview**: Outcome-focused (what users/system can do after)
- **Acceptance Criteria**: Specific, testable, observable
- **Technical Context**: Optional - add when issue involves integration, performance, security, or unfamiliar domains

## Complete Example

```yaml
repository: myorg/myapp
project: 6

defaults:
  labels: [enhancement]
  milestone: "v2.0"

issues:
  # Parent issue
  - ref: search-feature
    title: "Enable search functionality"
    milestone: "v2.1" # Override default
    labels: [enhancement, search]
    body: |
      ## Overview
      Add full-text search to allow users to find posts and comments quickly.

      ## Acceptance Criteria
      - [ ] Search bar visible in header on all pages
      - [ ] Results page displays matching posts and comments
      - [ ] Results are paginated (20 per page)
      - [ ] Search works across post titles, bodies, and comments

      ## Technical Context
      - Primary domain: Posts, Comments models; SearchController
      - Similar pattern: Existing filter functionality in app/controllers/posts_controller.rb
      - Consider: PostgreSQL full-text search vs external service (start simple)

  # Child issue 1
  - ref: search-indexing
    parent_ref: search-feature
    title: "Build search indexing"
    body: |
      ## Overview
      Create database indices to support full-text search.

      ## Acceptance Criteria
      - [ ] Migration adds search columns to posts and comments
      - [ ] Background job updates search indices on content changes
      - [ ] Search query returns results in < 200ms for typical queries

      ## Technical Context
      - Primary domain: Posts, Comments models; db/migrate
      - Similar pattern: Existing index patterns in schema.rb
      - Consider: Use PostgreSQL tsvector type, GIN index

  # Child issue 2
  - ref: search-ui
    parent_ref: search-feature
    title: "Build search UI"
    body: |
      ## Overview
      Create user interface for search functionality.

      ## Acceptance Criteria
      - [ ] Search bar component in header
      - [ ] Results page shows post/comment matches with highlighting
      - [ ] Pagination controls (prev/next, page numbers)
      - [ ] Empty state when no results found

      ## Technical Context
      - Primary domain: SearchController, app/views/search
      - Similar pattern: Pagination in app/views/posts/index.html.erb
      - Integration points: Uses search indexing from #search-indexing
```
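When a publish step later walks a file like the one above in order, each `ref` resolves to a real issue number by the time any child that references it is created. A sketch of that bookkeeping (the issue numbers here are simulated, not fetched from GitHub):

```python
def assign_numbers(issues: list, start: int = 1) -> dict:
    """Simulate creating issues in file order, mapping each ref to an issue number.

    Because parents appear before children in the YAML, a child's parent_ref
    is always resolvable by the time the child is created.
    """
    ref_to_number = {}
    number = start
    for issue in issues:
        ref_to_number[issue["ref"]] = number
        parent = issue.get("parent_ref")
        if parent is not None:
            # The parent's number is already known at this point.
            assert parent in ref_to_number
        number += 1
    return ref_to_number
```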

## Title Guidelines

Titles should describe the outcome, not implementation:

**Good (outcome-focused):**

- "Enable search functionality"
- "Users can filter posts by category"
- "Support OAuth authentication"

**Bad (implementation-focused):**

- "Implement SearchService class"
- "Add filter method to PostsController"
- "Install Devise gem"

## Technical Context Guidelines

Add Technical Context section when issue involves:

- **Integration**: Multiple systems/models working together
- **Performance**: Scale requirements (>100 records, <200ms response)
- **Security**: Auth, permissions, tenant isolation
- **Background processing**: Jobs, queues, async work
- **Unfamiliar domains**: New functionality in unknown territory

Skip Technical Context for:

- Standard CRUD (add field, basic form)
- UI-only changes (text, styling, layout)
- Copying existing pattern explicitly

**Format (3-5 bullets):**

- Primary domain: Where code lives
- Similar pattern: Existing feature to reference
- Integration points: Connections to other parts
- Consider: Gotchas, constraints, performance notes
91  skills/draft-github-issues/scripts/create_issues.sh  Executable file
@@ -0,0 +1,91 @@

#!/bin/bash
# Create GitHub issues from YAML file using gh CLI
# Usage: ./create_issues.sh path/to/issues.yml

set -e

YAML_FILE="$1"

if [ -z "$YAML_FILE" ]; then
  echo "Error: YAML file path required"
  echo "Usage: $0 path/to/issues.yml"
  exit 1
fi

if [ ! -f "$YAML_FILE" ]; then
  echo "Error: File not found: $YAML_FILE"
  exit 1
fi

# Check gh CLI is installed and authenticated
if ! command -v gh &> /dev/null; then
  echo "Error: gh CLI not found. Install: https://cli.github.com"
  exit 1
fi

if ! gh auth status &> /dev/null; then
  echo "Error: gh not authenticated. Run: gh auth login"
  exit 1
fi

# Get repo from git remote (fallback to current directory check)
REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner 2>/dev/null || echo "")
if [ -z "$REPO" ]; then
  echo "Error: Not in a GitHub repository or remote not configured"
  exit 1
fi

echo "Creating issues in $REPO from $YAML_FILE"
echo ""

# This script expects Claude to parse the YAML and call gh commands
# for each issue. The script serves as a wrapper and validator.
#
# Claude will:
# 1. Parse YAML to extract issues
# 2. Create parent issues first (store their numbers)
# 3. Create child issues, updating body to reference parent number
# 4. Handle labels, milestones, assignees
#
# This script just validates environment and provides helper functions

# Helper: Create single issue
# Args: title body [labels] [milestone] [assignees]
create_issue() {
  local title="$1"
  local body="$2"
  local labels="$3"
  local milestone="$4"
  local assignees="$5"

  # Build arguments as an array so titles and bodies containing quotes,
  # newlines, or shell metacharacters are passed through safely
  # (string-building plus eval would break on them or allow injection).
  local args=(issue create --repo "$REPO" --title "$title" --body "$body")

  if [ -n "$labels" ]; then
    args+=(--label "$labels")
  fi

  if [ -n "$milestone" ]; then
    args+=(--milestone "$milestone")
  fi

  if [ -n "$assignees" ]; then
    args+=(--assignee "$assignees")
  fi

  gh "${args[@]}"
}

# Export function for Claude to use
export -f create_issue
export REPO

echo "Environment validated. Ready to create issues."
echo "Repository: $REPO"
echo ""

# Note: This script is invoked by Claude with specific gh commands
# based on parsed YAML structure. Claude handles:
# - YAML parsing
# - Issue ordering (parents before children)
# - Reference resolution (ref -> issue numbers)
# - Error handling and reporting
465  skills/prompt-architecting/SKILL.md  Normal file
@@ -0,0 +1,465 @@

---
name: prompt-architecting
description:
  "Optimizes or analyzes prompts using proven strategies. Use when user
  provides a prompt for optimization/analysis, or before you generate any prompt content
  yourself (commands, skills, agents, docs, instructions). Two modes: OPTIMIZE (returns
  optimized prompt), CONSULT (analysis only). Trigger phrases: 'optimize prompt',
  'analyze this prompt', 'make concise'."
allowed-tools:
  - Read
---

# prompt-architecting

Optimizes unstructured prompts into constrained, strategy-driven instructions that prevent over-generation and verbosity.

## When to Use

Trigger phrases: "optimize this prompt", "make instructions concise", "architect this prompt", "what's the best way to prompt for X"

## Your Task

When this skill is used, you are acting as a **prompt architect**. Your job:

Transform an unstructured task request into a constrained, strategy-driven prompt that prevents over-generation and verbosity.

**You will receive:**

- Task description (what needs to be generated)
- Content (the actual content to analyze/optimize)
- Output type (skill / docs / plan / instructions / other)
- Complexity level (low / medium / high - or infer it)
- Constraints (max length, format, audience, exclusions - or none)
- Architecture reference (optional file path to architecture specification)
- Mode (optional: "consult" or "optimize", defaults to "optimize")

**You will output (mode=optimize):**

- Analysis of complexity and bloat risks
- 2-3 selected strategies with rationale
- Optimized, constrained prompt (ready to use)
- Summary of constraints applied

**You will output (mode=consult):**

- Analysis of complexity and bloat risks
- Recommended strategies with rationale
- Optimization potential assessment
- Recommendations (no optimized prompt)

## Analysis Mode

ULTRATHINK: Prioritize accuracy over speed. Take time to:

- **Semantic before syntactic**: What is this actually trying to accomplish?
- Identify content type (agent/workflow vs docs/skills)
- Distinguish bloat from necessary procedural detail
- **Research-informed structure decisions**: Formatting impacts performance significantly (40%+ variance in template studies); practical experience suggests enumeration helps for 3+ sequential steps
- Select appropriate reduction target (40-50% vs 60%+)
- Choose optimal strategy combination (1-3 max)

**Core principle (research-validated)**: Prompt formatting significantly impacts LLM performance (up to 40% variance, sometimes 300%+). Structure provides cognitive anchors that reduce hallucination and improve thoroughness. Default to enumeration for 3+ steps; use natural language only when structure adds no clarity.

**Key research findings**:

- Numbered lists help LLMs understand sequential steps and address each point thoroughly
- Structure reduces ambiguity about task sequence
- LLMs mirror the formatting structure you provide
- Enumeration works not because of the numbers, but because of pattern consistency and correct ordering

Careful analysis prevents over-optimization and vague instructions.

## Process

Follow these steps:

1. Read reference materials (strategies, anti-patterns, examples from references/)
2. Analyze the task (complexity, bloat risks, structure needs, target length)
3. Select 1-3 strategies based on complexity score and bloat risks
4. Generate output (optimized prompt or consultation analysis depending on mode)

### Step 1: Read Reference Materials

ALWAYS start by reading these files (progressive disclosure loads them on-demand):

- `~/.claude/skills/prompt-architecting/references/STRATEGIES.md` (15 prompting strategies catalog)
- `~/.claude/skills/prompt-architecting/references/ANTI-PATTERNS.md` (basic bloat patterns)
- `~/.claude/skills/prompt-architecting/references/ADVANCED-ANTI-PATTERNS.md` (workflow/agent patterns - read if optimizing workflows)
- `~/.claude/skills/prompt-architecting/references/EXAMPLES.md` (basic case studies)
- `~/.claude/skills/prompt-architecting/references/ADVANCED-EXAMPLES.md` (workflow/agent case studies - read if optimizing workflows)

IF architecture_reference provided:

- Read the architecture file at the provided path
- Understand required sections, patterns, and structural requirements
- This will guide refactoring in Step 4

### Step 2: Analyze the Task

Evaluate and explicitly STATE:

**FIRST: Safety checks (prevent harmful optimization)**

Check if content should NOT be optimized:

- Already optimal pattern? (deliverable-first + natural/appropriate structure + right complexity + appropriate notation)
- Callable entity description at correct structure? (context + triggers present)
- Agent/workflow at 40-50% of bloated version with specificity intact?
- Technical notation serving clear purpose? (API specs, standard conventions, precision needed)
- User requests preservation?

**If any YES**: STATE "Optimization not recommended: [reason]" and use mode=consult to provide analysis only.

- **Semantic analysis** (SECOND - most important):

  - What is the core job this is trying to accomplish?
  - Can it be expressed as a single coherent task in natural language?
  - Test: Describe in one sentence using "then/if/when" connectors
  - Example: "Read file, optimize with skill (preserving header if present), write back"
  - **If YES**: Consider natural language reframing instead of formal structure
  - **If NO**: Task may need structured breakdown

- **Complexity scoring** (determines structure level):

  Calculate score based on these factors:

  - 1-2 steps with obvious sequence? → **-1 point**
  - 3-4 steps where sequence matters? → **+1 point**
  - 5+ sequential steps? → **+2 points**
  - Has user approval gates (WAIT, AskUserQuestion)? → **+3 points**
  - Has 2+ terminal states (different end conditions)? → **+2 points**
  - Has 3-way+ conditional branching? → **+2 points**
  - Simple if/else conditionals only? → **+0 points**
  - Skill invocation just for data gathering? → **+0 points**
  - Skill invocation affects control flow decisions? → **+1 point**

  **Score interpretation** (research-aligned):

  - Score ≤ 0: Natural language framing acceptable (1-2 simple steps)
  - Score 1-2: Numbered enumeration helps (research: improves thoroughness, reduces ambiguity)
  - Score 3-4: Moderate structure (enumeration + opening mandate, no EXECUTION RULES)
  - Score ≥ 5: Full formal structure (complete EFC pattern with EXECUTION RULES)

- **Bloat risks**: What specifically could cause over-generation? (edge cases, theoretical coverage, defensive documentation, etc.)

- **Workflow structure assessment**:

  - Count sequential steps in input (look for "Step 1", "Step 2", numbered workflow)
  - Check for skill/agent invocations (are they just data gathering or control flow?)
  - Check for user approval gates (AskUserQuestion, explicit WAIT/STOP)
  - Check for multiple terminal states (different ways the task can end)
  - Check if input already has Execution Flow Control (opening mandate, data flow notation, EXECUTION RULES)
  - **Structure determination** based on complexity score:
    - "Complexity score: [X]. Recommendation: [Natural language / Light structure / Moderate structure / Full EFC]"

- **Target length**: Calculate optimal word/line count based on complexity and output type

- **Architecture compliance** (if architecture_reference provided):
  - Compare input structure to architecture requirements
  - Identify missing required sections
  - Identify structural misalignments
  - State: "Architecture: [compliant/partial/non-compliant]. Missing: [list]. Misaligned: [list]."

**Dimensional analysis (if optimization appropriate)**

Evaluate each dimension:

**Dimension 1 (Verbosity):**

- Bloat indicators present? (adjective-heavy, scope inflation, vague quality statements, meta-discussion, filler, repetition)
- Current word count vs. ideal for task? (>2x ideal = bloat)
- State: "Verbosity: [bloated/concise/appropriate]. Reduction needed: [yes/no]"

**Dimension 2 (Structure):**

- Complexity score already calculated above
- Current structure level: [none/minimal/moderate/full]
- Appropriate structure level for score: [natural/light/moderate/full]
- Structure mismatch? [over-structured/under-structured/appropriate]
- State: "Complexity score: [X]. Current structure: [level]. Needed: [level]. Mismatch: [yes/no]"

**Dimension 3 (Notation):**

- Technical notation assessment:
  - CAPS labels as action markers? (CHECK:, PARSE:, etc.)
  - → notation for data flow? (→ variable_name)
  - Variable naming conventions? (work_file_status, requirement_data)
  - Function call syntax? (tool({params}))
  - Sub-step enumeration? (a/b/c)
  - Defensive meta-instructions? ("DO NOT narrate")
  - Count indicators (3+ = over-technical)
- Does notation serve precision purpose? (API specs, schemas, standard conventions)
- Cognitive load test: Does notation make it easier or harder to understand?
- State: "Technical notation: [X indicators]. Purpose: [precision/ceremony]. Cognitive load: [helps/hurts/neutral]. Assessment: [over-technical/appropriate]"

**Callable entity check** (if description field):

- Contextual "when" conditions: present/vague/missing
- Trigger phrases (quoted literals): present/weak/missing
- Delegation signals if subagent: present/missing/N/A
- Integration points: present/missing/N/A
- Structure: complete/context-only/triggers-only/missing

**Workflow pattern detection** (if skill/agent invocations):

- High-risk stopping patterns present? (CAPS + → + variables + remote rules + warnings)
- Classification: high-risk / optimal / standard
- Stopping risk: yes/no
- Note: High-risk patterns are a Dimension 3 problem (over-technical notation)

**Complexity guidelines:**

| Level  | Target Length | Notes                                     |
| ------ | ------------- | ----------------------------------------- |
| Low    | 100-200w      | Straightforward tasks                     |
| Medium | 200-400w      | Moderate scope                            |
| High   | 400-600w      | Use progressive disclosure to references/ |
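The scoring rubric and its interpretation bands above can be sketched as a pair of functions; the boolean flags are assumptions about how a caller would describe the workflow being analyzed:

```python
def complexity_score(steps: int, approval_gates: bool = False,
                     terminal_states: int = 1, branching_3way: bool = False,
                     skill_affects_control_flow: bool = False) -> int:
    """Score a workflow per the rubric: step count, gates, terminal states, branching."""
    score = 0
    if steps <= 2:
        score -= 1          # 1-2 steps with obvious sequence
    elif steps <= 4:
        score += 1          # 3-4 steps where sequence matters
    else:
        score += 2          # 5+ sequential steps
    if approval_gates:
        score += 3          # user approval gates (WAIT, AskUserQuestion)
    if terminal_states >= 2:
        score += 2          # multiple terminal states
    if branching_3way:
        score += 2          # 3-way+ conditional branching
    if skill_affects_control_flow:
        score += 1          # skill invocation drives control flow
    return score


def structure_level(score: int) -> str:
    """Map a score to the recommended structure band."""
    if score <= 0:
        return "natural language"
    if score <= 2:
        return "numbered enumeration"
    if score <= 4:
        return "moderate structure"
    return "full formal structure"
```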

### Step 3: Select Strategies

**MANDATORY EXCLUSIONS (based on Step 2 safety checks):**

- If already optimal: STOP - recommend no optimization
- If complexity score ≤ 0: NEVER use EFC, Decomposition, or Template-Based
- If callable entity: MUST use Callable Entity Preservation, MAX 1 additional strategy
- If technical notation serves precision purpose: PRESERVE notation, optimize other dimensions only

**MANDATORY SELECTIONS (based on Step 2 dimensional analysis):**

**For Dimension 1 (Verbosity problems):**

- MUST select: Constraint-Based (hard word limits)
- SHOULD select: Negative Prompting (if specific bloat patterns identified)
- MAY select: Progressive Disclosure (if complex topic with separable details)

**For Dimension 2 (Structure mismatch):**

- If over-structured (score ≤ 0 but has formal structure):
  - MUST select: Natural Language Reframing
- If under-structured (score ≥ 3 but vague prose):
  - Score 3-4: Moderate structure (organized natural, no heavy formality)
  - Score ≥ 5: Goal + Capabilities pattern
  - MAY select: Decomposition, Directive Hierarchy

**For Dimension 3 (Over-technical notation):**

- If over-technical detected (3+ indicators, cognitive load hurts):
  - MUST select: Technical → Natural Transformation
  - This often solves stopping risk simultaneously
  - May be SUFFICIENT optimization (don't over-optimize)

**For Callable Entities (detected in Step 2):**

- MUST select: Callable Entity Preservation
- Focus on preserving/adding both layers (context + triggers)

**For High-Risk Workflows (detected in Step 2):**

- MUST select: Technical → Natural Transformation (removes stopping risk)
- Preserve appropriate structure level (based on complexity score)
- Remove ceremony (CAPS, →, variables, warnings)

**STRATEGY COUNT LIMIT: 1-3 strategies max**

- 1 strategy: Simple reframing or notation simplification
- 2 strategies: Most common (address 2 dimensions or primary + constraint)
- 3 strategies: Complex only (rarely needed)

**NEVER exceed 3 strategies** (over-optimization risk)

**COMPLEMENTARY CHECK:**

- Verify selected strategies don't conflict (see compatibility matrix in STRATEGIES.md)
- If a conflict is detected, choose the most important strategy and drop conflicting ones
### Step 4: Generate Output
|
||||
|
||||
Based on mode:
|
||||
|
||||
**IF mode=consult:**
|
||||
|
||||
- DO NOT generate optimized prompt
|
||||
- Output analysis, recommended strategies, and optimization potential
|
||||
- If architecture provided, include architecture compliance assessment
|
||||
- Use consult output format (see Output Format section below)
|
||||
|
||||
**IF mode=optimize (default):**
|
||||
|
||||
**Primary principle**: Reduce cognitive load while preserving intent.
|
||||
|
||||
**Apply selected strategies based on dimensions:**
|
||||
|
||||
**For Dimension 1 (Verbosity):**
|
||||
|
||||
- Remove adjectives and quality statements
|
||||
- Set hard word/line limits
|
||||
- Use template structure to bound scope
|
||||
- Exclude known bloat patterns explicitly
|
||||
- Target: 60%+ reduction for docs OK, 40-50% for agents/workflows
|
||||
|
||||
**For Dimension 2 (Structure):**

Match structure to complexity score:

- **Score ≤ 0 (Natural language)**:

  - Rewrite as clear prose with connectors ("then", "if", "when")
  - Avoid enumeration unless truly necessary
  - Embed conditionals inline naturally
  - Example: "Read the file, optimize content, and write back"

- **Score 1-2 (Light structure)**:

  - Simple numbered steps or sections without heavy formality
  - Natural language throughout
  - No CAPS labels, no → notation
  - Example: "1. Read file 2. Optimize 3. Write result"

- **Score 3-4 (Organized natural with structure)**:

  - Logical sections or phases (Setup, Analysis, Execution, etc.)
  - Natural language within structured organization
  - NO CAPS/→/variables
  - Completion criterion inline or immediately after
  - Example: See "appropriately optimized" examples in OPTIMIZATION-SAFETY-GUIDE.md

- **Score ≥ 5 (Goal + Capabilities + organized workflow)**:
  - Goal statement (ultimate outcome upfront)
  - Capabilities declaration (tools/skills needed)
  - Organized workflow with natural language
  - Clear terminal conditions
  - STILL use natural notation (no CAPS/→/variables)

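The score-to-structure mapping above is mechanical enough to sketch in code. A minimal illustration (Python; the tier names and boundaries come from the tiers above, while the function itself is hypothetical):

```python
def structure_tier(score: int) -> str:
    """Map a complexity score to the structure tier described above."""
    if score <= 0:
        return "natural language"      # prose with connectors, no enumeration
    if score <= 2:
        return "light structure"       # simple numbered steps, no CAPS/arrows
    if score <= 4:
        return "organized natural"     # sections/phases, natural language inside
    return "goal + capabilities"       # goal statement, capabilities, workflow

print(structure_tier(-1))  # natural language
print(structure_tier(3))   # organized natural
```

The boundaries are inclusive on the high side, so a score of exactly 2 still gets light structure.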
**For Dimension 3 (Over-technical notation):**

Remove ceremonial technical notation:

- CAPS labels → natural section headers or action verbs
- → notation → implicit data flow or prose
- Variable names → eliminate or minimize
- Function call syntax → natural tool mentions
- Sub-step enumeration → consolidate to prose
- Defensive warnings → remove entirely (trust structure)
- Detached EXECUTION RULES sections → integrate inline or remove

While preserving:

- Appropriate structure level (based on complexity score)
- All requirements and dependencies
- Tool invocations (mentioned naturally)
- Terminal conditions (integrated naturally)

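As a rough illustration, the Dimension 3 symptoms listed above can be scanned for mechanically. A hedged sketch (Python; the patterns are heuristics invented here, not a definitive rule set):

```python
import re

# Heuristic patterns for the ceremonial-notation symptoms described above.
CEREMONY_PATTERNS = {
    "CAPS label": re.compile(r"^\s*[A-Z][A-Z _]{3,}:", re.MULTILINE),  # e.g. "EXECUTION RULES:"
    "arrow notation": re.compile(r"→"),
    "defensive warning": re.compile(r"DO NOT STOP|NEVER return", re.IGNORECASE),
}

def ceremony_report(prompt: str) -> list[str]:
    """Return the names of ceremonial-notation symptoms found in a prompt."""
    return [name for name, pattern in CEREMONY_PATTERNS.items() if pattern.search(prompt)]

sample = "WORKFLOW:\n1. READ: file → content\nDO NOT STOP early."
print(ceremony_report(sample))  # ['CAPS label', 'arrow notation', 'defensive warning']
```

A report like this only flags candidates; whether a given CAPS label or arrow is ceremony or genuinely load-bearing still requires the dimensional analysis above.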
**For Callable Entities:**

- Preserve both contextual "when" AND trigger phrases
- Add missing layer if incomplete
- Minimal optimization (10-20% max)
- Focus on invocation clarity

**General optimization:**

- Set hard word/line limits (Constraint-Based)
- Specify structure (Template-Based or Output Formatting if applicable)
- Exclude known bloat patterns (Negative Prompting if applicable)
- Embed selected strategies naturally into the instructions

**If architecture_reference provided:**

- Refactor content to align with architecture requirements
- Add missing required sections (Purpose, Workflow, Output, etc.)
- Preserve required patterns (asset references, error formats, etc.)
- Optimize content WITHIN the architectural structure (don't remove structure)

**FINAL SAFETY CHECK before returning:**

Verify the optimized version:

- [ ] Clarity test: Is it clearer than the original? (lower cognitive load)
- [ ] Intent test: Core requirements preserved?
- [ ] Complexity match: Structure appropriate for the score?
- [ ] Notation appropriate: Natural unless technical notation serves a precision purpose?
- [ ] No new problems: No stopping points, lost triggers, or introduced ambiguity?
- [ ] Executable: Would an LLM follow this successfully?
- [ ] Reduction appropriate: 40-50% for agents/workflows, 60%+ for docs
- [ ] Strategy count: 1-3, complementary

**If any check FAILS**: Revise the optimization or recommend consult mode only.

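Two of the checks above (reduction target and strategy count) are quantifiable. A minimal sketch of how they might be verified (Python; the thresholds are taken from this checklist, the function names and output-type labels are assumptions):

```python
def reduction_ok(output_type: str, original_words: int, optimized_words: int) -> bool:
    """Check the 'reduction appropriate' box: 40-50% for agents/workflows, 60%+ for docs."""
    reduction = 1 - optimized_words / original_words
    if output_type in ("agent", "command"):   # procedural content needs detail preserved
        return 0.40 <= reduction <= 0.50
    return reduction >= 0.60                  # docs tolerate aggressive cuts

def strategy_count_ok(strategies: list[str]) -> bool:
    """Check the 'strategy count' box: 1-3 strategies, never more."""
    return 1 <= len(strategies) <= 3

print(reduction_ok("agent", 1450, 800))                               # True (~45% reduction)
print(strategy_count_ok(["Constraint-Based", "Negative Prompting"]))  # True
```

The remaining checks (clarity, intent, executability) are judgment calls that cannot be reduced to arithmetic; this sketch only automates the numeric guardrails.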
## Output Format

You MUST structure your response based on mode:

**For mode=optimize:**

```markdown
## Analysis

- Task complexity: {low|medium|high}
- Primary bloat risks: {2-3 specific risks identified}
- Architecture compliance: {if architecture_reference provided}
- Target length: {calculated word/line count based on complexity}

## Selected Strategies

- **{Strategy Name}**: {1 sentence why chosen for this specific task}
- **{Strategy Name}**: {1 sentence why chosen}
- **{Strategy Name}** (if needed): {1 sentence why chosen}

## Optimized Prompt

{The actual constrained prompt - this gets used directly by the executor}

{Prompt should be 3-10 sentences with:

- Clear scope boundaries
- Specific word/line limits
- Explicit structure (sections, format)
- DO NOT clauses if bloat risks identified
- Reference to examples if anchoring strategy used
- Architecture alignment directives (if architecture provided)}

## Constraints Applied

- Word/line limits: {specific counts}
- Structure: {template or format specified}
- Exclusions: {what NOT to include}
- Architecture: {required sections/patterns preserved if applicable}
- Other: {any additional constraints}
```

**For mode=consult:**

```markdown
## Analysis

- Task complexity: {low|medium|high}
- Primary bloat risks: {2-3 specific risks identified}
- Architecture compliance: {if architecture_reference provided}
- Target length: {calculated word/line count based on complexity}

## Recommended Strategies

- **{Strategy Name}**: {1 sentence why recommended for this specific task}
- **{Strategy Name}**: {1 sentence why recommended}
- **{Strategy Name}** (if needed): {1 sentence why recommended}

## Optimization Potential

- Content reduction: {estimated percentage - e.g., "40-50% reduction possible"}
- Structural changes needed: {list if architecture provided - e.g., "Add Output Format section, refactor steps"}
- Bloat removal: {specific areas - e.g., "Remove verbose library comparisons, consolidate examples"}

## Recommendations

{2-4 sentence summary of what should be done to optimize this content}
```

## Errors

**"Missing required parameter"**: Task description or content not provided. Cannot analyze without content to optimize. Provide: task description, content, output type.

**"Invalid architecture_reference path"**: File path doesn't exist or isn't readable. Verify the path exists: ~/.claude/skills/skill-generator/references/SKILL-ARCHITECTURE.md

**"Unsupported output type"**: Output type not recognized. Supported: skill, docs, plan, instructions, command, agent.

**"Mode parameter invalid"**: Mode must be "optimize" or "consult". Defaults to "optimize" if not specified.

267
skills/prompt-architecting/references/ADVANCED-ANTI-PATTERNS.md
Normal file
@@ -0,0 +1,267 @@

# Advanced Anti-Patterns: Workflow & Agent Optimization

**CRITICAL**: For detailed stopping point analysis, see `/Users/brandoncasci/.claude/tmp/workflow-optimization-spec.md`

**CRITICAL**: For safety guidelines and dimensional analysis, see `OPTIMIZATION-SAFETY-GUIDE.md`

**KEY INSIGHT**: Most stopping-risk patterns are caused by over-technical notation (Dimension 3). Simplifying notation while preserving appropriate structure solves the problem.

---

Advanced patterns for optimizing multi-step workflows and agent prompts.

## Pattern 6: Numbered Steps Without Execution Mandate

### ❌ Verbose

```
You are optimizing a Claude Code prompt file. Follow this workflow exactly:

## Step 1: Read File

Read the file at the path provided by the user. If no path provided, ask for it.

## Step 2: Parse Structure

- Detect YAML front matter (content between `---` markers at file start)
- If front matter exists, extract `name` field
- Separate front matter from content body

## Step 3: Optimize Content

Use the prompt-architecting skill with:
- Task description: "Optimize this prompt"
- Current content: {content body without front matter}

Wait for skill to return optimized prompt. DO NOT implement optimization yourself.

## Step 4: Analyze Dependencies

Check if description has dependencies by searching codebase.

## Step 5: Present Results

Show optimization results and ask for approval.

## Step 6: Replace File

Write optimized content back to file.
```

**Problem**: Numbered steps imply sequence but don't mandate complete execution. The LLM may stop after Step 3 (skill returns result), treating it as a deliverable. There is no guarantee that all steps execute sequentially or that step N uses step N-1's output.

### ✅ Optimized

```
Execute this 6-step workflow completely. Each step produces input for the next:

WORKFLOW:
1. READ: Use Read tool on $1 → content
2. PARSE: Extract front matter + body from content → {front_matter, body}
3. OPTIMIZE: Run your prompt-architecting skill to optimize body → optimized_body
4. ANALYZE: Use Grep to check dependencies in front_matter → risk_level
5. PRESENT: Show optimized_body + risk_level → STOP, WAIT for user approval
6. WRITE: If approved, use Write tool to save optimized_body + front_matter to $1 → done

EXECUTION RULES:
- Complete steps 1-5 without stopping
- STOP only at step 5 (user approval required)
- Proceed to step 6 only if user approves (yes/1/2)
- Task incomplete until step 6 completes or user cancels

Each step's output feeds the next. Do not stop early.
```

**Strategies applied**: Execution Flow Control, Decomposition, Directive Hierarchy, Output Formatting

**Key improvements**:
- Opening mandate: "Execute this 6-step workflow completely"
- Explicit data flow: "Step X → output Y"
- Clear terminal states: "STOP only at step 5"
- Completion guarantee: "Task incomplete until step 6"
- Prevents premature stopping after async operations (skill invocations)

---

## Pattern 7: Removing Procedural Detail as "Bloat" (Agent/Workflow Prompts)

### ❌ Over-optimized

```
## Process

### For New Features

1. Read scratchpad if prompted
2. Understand requirement (ULTRATHINK): Core request, acceptance criteria, constraints
3. Find patterns (MANDATORY):
   - Read relevant CLAUDE.md files
   - Search similar implementations
   - Check test structure
   - For gem-backed features: Verify gem capabilities FIRST
4. Determine approach:
   - Existing pattern → point to specific files
   - New pattern → recommend architecture fitting codebase style
5. Synthesize: Which files, patterns to follow, architecture rationale

### For Bug Fixes

1. Read scratchpad if prompted
2. Analyze bug nature: Where manifests? User impact? Larger architectural issue?
3. Evaluate ALL test levels (ULTRATHINK):
   - System: UI/JavaScript/user-visible bugs
   - Integration: Request/response/multi-component
   - Unit: Business logic/model behavior
```

**Problem**:
- "Read relevant CLAUDE.md files" is vague (which ones? just root? subdirectories?)
- Pattern-finding detail appears only in "New Features" mode, removed from "Bug Fixes"
- Agent doesn't know if bug-fix mode needs the same rigor as new features
- Lost specificity: "ALL files (root + subdirectories)", "# AI: comments", specific checklist items
- Aggressive 60%+ reduction created ambiguity

### ✅ Properly optimized

```
## Research Checklist

For ALL modes, check:
- ALL CLAUDE.md files (root + subdirectories)
- Similar implementations in codebase
- # AI: comments in existing code
- Test structure
- **For gem-backed features**: Gem capabilities before custom code

## Process

### For New Features

1. Read scratchpad if prompted: "Read scratchpad for context: [path]"
2. Understand requirement (ULTRATHINK): Core request, acceptance criteria, constraints
3. Find patterns (see Research Checklist above)
4. Determine approach:
   - Existing pattern → point to specific files
   - New pattern → recommend architecture fitting codebase style
5. Synthesize: Which files, patterns to follow, architecture rationale

### For Bug Fixes (from issue-diagnosis)

ULTRATHINK MODE: Think comprehensively about best solution.

1. Read scratchpad if prompted
2. Analyze bug nature: Where manifests? User impact? Larger architectural issue?
3. Research context (see Research Checklist above)
4. Evaluate ALL test levels (ULTRATHINK):
   - System: UI/JavaScript/user-visible bugs
   - Integration: Request/response/multi-component
   - Unit: Business logic/model behavior
   - Don't settle for "good enough" - recommend all appropriate tests
```

**Strategies applied**: Execution Flow Control + DRY refactoring, Agent/Workflow Guidelines

**Key improvements**:
- Extracted a shared "Research Checklist" - eliminates repetition without losing detail
- Preserved ALL specificity: "ALL CLAUDE.md files (root + subdirectories)", "# AI: comments"
- Applied to all modes - bug fixes get the same rigor as new features
- DRY refactoring instead of deletion - saves ~40 words while maintaining clarity
- 40-50% reduction (appropriate for agents) vs 60%+ (too aggressive)

**When this pattern applies**:

- Optimizing agent prompts or workflow commands
- Multiple modes/sections with similar procedural steps
- Procedural detail appears repetitive but is actually necessary
- Target reduction is 60%+ (too aggressive for agents)

**How to avoid**:

- Extract shared checklists instead of deleting detail
- Preserve specific qualifiers: "ALL", "MANDATORY", "root + subdirectories"
- Target 40-50% reduction for agents (not 60%+)
- Ask: "Does removing this create vagueness?" If yes, refactor instead

---

## Pattern 8: Defensive Meta-Commentary and Stop-Awareness

### ❌ Creates stopping risk through negative priming

```markdown
**Step 3: OPTIMIZE** → optimized_body

- Use Skill tool: Skill(skill="prompt-architecting")
- WAIT for skill output (contains multiple sections)
- EXTRACT text under "## Optimized Prompt" heading → optimized_body
- → DO NOT STOP - this is NOT the end - continue to Step 6 after Step 4

**CRITICAL REMINDERS:**

- The Skill tool (Step 3) returns structured output with multiple sections
- You MUST extract the "## Optimized Prompt" section and store as optimized_body
- Receiving skill output is NOT a completion signal - it's just data for Step 6
- NEVER return control to caller after Step 3 - continue to Steps 4 and 6
- The ONLY valid stopping points are: Step 5 (waiting for user) or Step 6 (done writing)
- If you find yourself returning results without calling Write tool, you failed
```

**Problem**:

- Each "DO NOT STOP" warning creates a decision point: "Should I stop here?"
- "This is NOT the end" reinforces that ending is a possibility
- The CRITICAL REMINDERS section acknowledges the failure mode, normalizing it
- "If you find yourself returning results... you failed" describes the exact unwanted behavior
- Defensive commentary creates stop-awareness, making premature stopping MORE likely

**Psychological mechanism** (Ironic Process Theory):

- Telling someone "don't think about X" makes them think about X
- Repeatedly saying "DO NOT STOP" primes stopping behavior
- Meta-commentary about failure normalizes and increases failure

### ✅ Trust structure, eliminate stop-awareness

```markdown
Your job is to update the file with the optimized prompt from your skill.

Read the file, extract any front matter. Run the prompt-architecting skill on the content body. Check for dependencies if front matter exists. Ask user for approval if dependencies found. Write the optimized content back to the file.
```

**Or, if complexity requires structure:**

```markdown
Execute this workflow completely:

1. READ: Use Read(file_path) → content
2. OPTIMIZE: Run prompt-architecting skill on content → optimized_content
3. CHECK: If front matter exists, search for dependencies → risk_level
4. APPROVE: If risk_level high, ask user → approval
5. WRITE: Save optimized_content to file → done

Task completes at step 5.
```

**Strategies applied**: Natural Language Reframing (first example) or moderate Execution Flow Control without defensive warnings (second example)

**Key improvements**:

- No "DO NOT STOP" warnings anywhere
- No CRITICAL REMINDERS section discussing failure modes
- No meta-commentary about what might go wrong
- Structure implies continuation naturally
- Task framing makes completion criteria obvious

**When this pattern applies**:

- Any workflow with skill/agent invocations
- Multi-step processes where premature stopping is a risk
- Prompts that have been "fixed" by adding more warnings

**How to avoid**:

- Frame positively: "Continue to X" instead of "DO NOT STOP"
- Remove all meta-commentary about failures
- If you need warnings, the structure is wrong - simplify instead
- Trust natural language and clear structure over defensive reminders
- Test: If the prompt mentions stopping/failure, you're creating the problem

188
skills/prompt-architecting/references/ADVANCED-EXAMPLES.md
Normal file
@@ -0,0 +1,188 @@

# Advanced Optimization Examples: Workflows & Agents

Advanced case studies showing prompt-architecting for multi-step workflows and agent prompts.

## Example 5: Multi-Step Workflow Command

### Original Request

```
User: "Create a command that optimizes prompt files by using the prompt-architecting skill, checks for front matter dependencies, and writes the optimized version back"
```

### Without Optimization

Claude generates:

```
You are optimizing a Claude Code prompt file. Follow this workflow exactly:

## Step 1: Read File
Read the file at the path provided by the user.

## Step 2: Parse Structure
Detect YAML front matter and separate from content body.

## Step 3: Optimize Content
Use the prompt-architecting skill with the content.
Wait for skill to return optimized prompt.

## Step 4: Analyze Dependencies
Check if description has dependencies.

## Step 5: Present Results
Show optimization results and ask for approval.

## Step 6: Replace File
Write optimized content back to file.
```

**Result**: The LLM stops after Step 3 (skill returns result), treating skill output as the deliverable. It never proceeds to dependency analysis, approval, or file writing. The user receives optimization output, but the file is never modified.

### With prompt-architecting

**Subagent analysis:**

- Task: Multi-step workflow command (6 sequential steps)
- Complexity: High (skill invocation, dependency checking, user approval, file I/O)
- Risk: Premature stopping after Step 3 (async skill call), skipped steps, no execution guarantee
- Bloat risks: Verbose step descriptions, missing flow control
- Optimal strategies: **Execution Flow Control** (critical), Decomposition, Directive Hierarchy, Constraint-Based

**Optimized prompt:**

```
Execute this 6-step workflow completely. Each step produces input for the next:

WORKFLOW:
1. READ: Use Read tool on $1 → content
2. PARSE: Extract front matter + body from content → {front_matter, body}
3. OPTIMIZE: Run your prompt-architecting skill to optimize body → optimized_body
4. ANALYZE: Use Grep to check dependencies in front_matter → risk_level
5. PRESENT: Show optimized_body + risk_level → STOP, WAIT for user approval
6. WRITE: If approved, use Write tool to save optimized_body + front_matter to $1 → done

EXECUTION RULES:
- Complete steps 1-5 without stopping
- STOP only at step 5 (user approval required)
- Proceed to step 6 only if user approves (yes/1/2)
- Task incomplete until step 6 completes or user cancels

Each step's output feeds the next. Do not stop early.
```

**Result**: Clear execution mandate, explicit data flow between steps, guaranteed completion through step 5, a proper stop at the approval gate, and the file gets written after approval.

**Why Execution Flow Control was critical:**

1. **Prevents premature stopping**: The mandate ensures execution continues after the Step 3 skill invocation
2. **Explicit dependencies**: "Step X → output Y" shows each step consumes the previous output
3. **Clear terminal states**: "STOP only at step 5" prevents arbitrary stopping
4. **Completion guarantee**: "Task incomplete until..." creates an obligation to finish

**Lessons from this example:**

- Numbered steps alone don't guarantee sequential execution
- Skill invocations are natural stopping points - must mandate continuation
- Multi-step workflows need an opening mandate + terminal state specification
- Data flow notation (→) makes dependencies explicit and prevents skipping

---

## Example 6: Agent Prompt with Multiple Modes

### Original Request

```
User: "Optimize this analyst agent prompt that has ~1,450 words with sections for new features, bug fixes, and gem verification"
```

### Without Agent/Workflow Guidelines

Claude generates:

- 1,450w → 560w (61% reduction - too aggressive)
- Removes procedural detail to hit the 60% target
- Creates vague instructions: "Read relevant CLAUDE.md files" (which ones?)
- Pattern-finding detail only in "New Features", removed from "Bug Fixes"
- Agent doesn't know if bug fixes need the same rigor as features
- Lost specificity: "ALL files (root + subdirectories)", "# AI: comments"

**Result**: Concise but vague. The agent has unclear guidance for bug-fix mode.

### With Agent/Workflow Guidelines

**Subagent analysis:**

- Task: Optimize agent prompt with multiple modes
- Complexity: High (1,450 words, 3 modes: new features, bug fixes, gems)
- Risk: Over-optimization removes necessary procedural detail
- Bloat risks: Verbose YAML examples (90+ lines), Rails conventions, repetitive pattern-finding
- Optimal strategies: **Agent/Workflow Guidelines** (preserve procedural detail), DRY refactoring, Progressive Disclosure, Constraint-Based

**Optimized prompt:**

```
You are a requirements and architecture analyst. Tools: Read, Grep, Glob (read-only).

Follow output structure from @.claude/protocols/agent-output-protocol.md

## Core Philosophy

ULTRATHINK: Prioritize correctness over speed. AI implements fast regardless of approach. Strategic decisions matter most.

## Research Checklist

For ALL modes, check:
- ALL CLAUDE.md files (root + subdirectories)
- Similar implementations in codebase
- # AI: comments in existing code
- Test structure
- **For gem-backed features**: Gem capabilities before custom code

## Process

### For New Features

1. Read scratchpad if prompted: "Read scratchpad for context: [path]"
2. Understand requirement (ULTRATHINK): Core request, acceptance criteria, constraints
3. Find patterns (see Research Checklist above)
4. Determine approach:
   - Existing pattern → point to specific files
   - New pattern → recommend architecture fitting codebase style
5. Synthesize: Which files, patterns to follow, architecture rationale

### For Bug Fixes (from issue-diagnosis)

ULTRATHINK MODE: Think comprehensively about best solution.

1. Read scratchpad if prompted
2. Analyze bug nature: Where manifests? User impact? Larger architectural issue?
3. Research context (see Research Checklist above)
4. Evaluate ALL test levels (ULTRATHINK):
   - System: UI/JavaScript/user-visible bugs
   - Integration: Request/response/multi-component
   - Unit: Business logic/model behavior
   - Don't settle for "good enough" - recommend all appropriate tests
...

[Verbose YAML examples moved to references/analyst-examples.md]
```

**Result**: 1,450w → 650w (55% reduction - appropriate for agents). Preserved procedural detail while eliminating repetition via DRY refactoring.

**Why Agent/Workflow Guidelines were critical:**

1. **Recognized agent context**: Applied the 40-50% target instead of 60%+
2. **DRY refactoring over deletion**: Extracted a "Research Checklist" - eliminated repetition without losing specificity
3. **Preserved procedural detail**: "ALL CLAUDE.md files (root + subdirectories)", not "relevant files"
4. **All modes get rigor**: Bug fixes reference the same Research Checklist as new features
5. **Aggressive optimization where appropriate**: 90-line YAML examples → references/

**Lessons from this example:**

- Agent prompts need execution detail - a different standard than docs
- DRY refactoring beats deletion - extract shared sections instead of removing
- Target 40-50% for agents (not 60%+) - they need procedural clarity
- Preserve specificity: "ALL", "MANDATORY", "root + subdirectories"
- Recognize when detail is necessary vs when it's bloat

194
skills/prompt-architecting/references/ANTI-PATTERNS.md
Normal file
@@ -0,0 +1,194 @@

# Anti-Patterns: Verbose → Concise

Real examples of prompt bloat and their optimized versions.

## Pattern 1: Over-Elaborated Context

### ❌ Verbose

```
I need you to create comprehensive documentation that covers all aspects of
user authentication in our system. This should include detailed explanations
of how the system works, what technologies we're using, best practices for
implementation, common pitfalls to avoid, security considerations, edge cases,
error handling strategies, and example code showing different use cases. Make
sure it's thorough and covers everything a developer might need to know.
```

### ✅ Optimized

```
Write auth docs. Structure: [Setup - 100w] [Usage - 150w] [Error handling - 100w]
[One example - code only]. MAX 400 words total. Audience: Mid-level dev familiar
with JWT. DO NOT: Include security theory, framework comparisons, or "best practices"
sections.
```

**Strategies applied**: Constraint-Based, Output Formatting, Negative Prompting

---

## Pattern 2: Defensive Over-Coverage

### ❌ Verbose

```
Create a skill for processing CSV files. It should handle reading CSV files,
parsing them, validating the data, transforming it, handling errors, dealing
with different encodings, managing large files, streaming if needed, and
outputting to various formats. Include comprehensive documentation, examples
for each use case, edge case handling, performance considerations, and detailed
setup instructions.
```

### ✅ Optimized

```
Create CSV processing skill. SKILL.md (300w max): [Problem - 50w] [Workflow:
read → parse → transform → output - 100w] [One example - code only - 100w].
Move to references/: encoding edge cases, streaming for large files, pandas vs
csv module comparison.
```

**Strategies applied**: Constraint-Based, Progressive Disclosure, Template-Based

---

## Pattern 3: Meta-Discussion Instead of Instructions

### ❌ Verbose

```
I think we should probably create some documentation for the API endpoints.
It would be good to explain what each endpoint does, what parameters they take,
what they return, and maybe some examples of how to use them. We should also
probably mention error codes and authentication requirements. Does that sound good?
```

### ✅ Optimized

```
Document REST API endpoints. Format per endpoint: [Method + Path] [Purpose - 1
sentence] [Parameters - table] [Response - JSON schema] [Auth required - yes/no]
[Error codes - list only]. One example request/response per endpoint.
```

**Strategies applied**: Output Formatting, Template-Based, Density Optimization

---

## Pattern 4: Unclear Success Criteria

### ❌ Verbose

```
Help me understand how the deployment process works in this codebase and create
some documentation about it so other developers can figure it out too.
```

### ✅ Optimized

```
Document deployment process. Target: New dev can deploy to staging in <10 min.
Structure: [Prerequisites - bullet list] [Steps - numbered, 1 sentence each]
[Verification - how to confirm success] [Rollback - 2 sentences]. MAX 300 words.
```

**Strategies applied**: Audience-Targeted, Template-Based, Constraint-Based

---

## Pattern 5: Solution Seeking Instead of Problem Stating

### ❌ Verbose

```
We need a really robust, production-grade, enterprise-level authentication
system with all the features and best practices implemented following industry
standards and security guidelines...
```

### ✅ Optimized

```
Implement auth. Requirements: JWT tokens, refresh flow, role-based access.
MUST: Secure token storage, HTTPS only. SHOULD: Remember me option. OUT OF SCOPE:
OAuth, SSO, 2FA. Match existing pattern in: src/auth/session.ts
```

**Strategies applied**: Directive Hierarchy, Negative Prompting, Anchoring

---

## Common Bloat Indicators

**Red flags in prompts:**
- "comprehensive", "robust", "enterprise-grade", "production-ready"
- "all aspects", "everything", "fully cover"
- "best practices", "industry standards"
- Multiple questions without priority
- Hypothetical edge cases ("what if...", "we might need...")

**Optimization checklist:**
1. Remove adjectives (comprehensive, robust, etc.)
2. Set word/line limits
3. Specify structure explicitly
4. Use DO NOT for known over-generation
5. Define success criteria concretely
6. Defer details to references where possible

**Decision tree:**
|
||||
|
||||
- Adjective-heavy? → Constraint-Based
|
||||
- No structure? → Template-Based or Output Formatting
|
||||
- Known bloat patterns? → Negative Prompting
|
||||
- 1-2 very simple steps (sequence obvious)? → Natural language acceptable
|
||||
- 3+ steps where sequence matters? → Enumeration helps (research: improves thoroughness and reduces ambiguity)
|
||||
- Complex task with branching? → Execution Flow Control (appropriate level)
|
||||
- Numbered steps but overly formal? → Simplify notation, keep enumeration for clarity
|
||||
- Agent/workflow with repeated procedural steps? → DRY refactoring (extract shared checklist)
|
||||
- Procedural detail appears as bloat? → Preserve specificity, target 40-50% reduction
|
||||
- Need examples? → Few-Shot or Anchoring
|
||||
|
||||
---
|
||||
|
||||
## Pattern 8: Destroying Callable Entity Triggers
|
||||
|
||||
### ❌ Over-optimized
|
||||
```
|
||||
# Before (complete)
|
||||
description: Reviews code for security, bugs, performance when quality assessment needed. When user says "review this code", "check for bugs", "analyze security".
|
||||
|
||||
# Over-optimized (WRONG - lost triggers)
|
||||
description: Code review assistant
|
||||
```
|
||||
|
||||
### ✅ Correct
|
||||
```
|
||||
# Minimal acceptable optimization
|
||||
description: Reviews code for security, bugs, performance when quality assessment needed. When user says "review code", "check bugs", "analyze security".
|
||||
```
|
||||
|
||||
**Why**: Trigger phrases are functional pattern-matching signals for model-invocation, not decorative examples. Preserve both contextual "when" AND literal trigger phrases.
|
||||
|
||||
**See OPTIMIZATION-SAFETY-GUIDE.md Part 4 for callable entity preservation rules.**
|
||||
|
||||
---
|
||||
|
||||
## Pattern 9: Over-Technical Notation Creating Cognitive Load
|
||||
|
||||
### ❌ Over-technical
|
||||
```
|
||||
Execute this workflow:
|
||||
1. READ: Load file → content
|
||||
2. PARSE: Extract(content) → {fm, body}
|
||||
3. OPTIMIZE: Run skill(body) → optimized
|
||||
a. Pass parameters: {content: body, mode: "optimize"}
|
||||
b. Extract result → optimized
|
||||
c. DO NOT STOP - continue to step 4
|
||||
4. WRITE: Save(optimized) → done
|
||||
|
||||
EXECUTION RULES:
|
||||
- DO NOT STOP at step 3 when skill returns
|
||||
```
|
||||
|
||||
### ✅ Organized natural
|
||||
```
|
||||
Your job: Optimize the file and write it back.
|
||||
|
||||
Read the file and parse structure. Optimize the content using the skill. Write the result back. The file edit is the deliverable.
|
||||
```
|
||||
|
||||
**Why**: Technical notation (CAPS + → + variables + function syntax + warnings) increases cognitive load and creates stopping risk. Organized natural language with appropriate structure is clearer for LLM audiences.
|
||||
|
||||
**See OPTIMIZATION-SAFETY-GUIDE.md Part 3 and Part 6 for detailed analysis.**
|
||||
219
skills/prompt-architecting/references/EXAMPLES.md
Normal file
219
skills/prompt-architecting/references/EXAMPLES.md
Normal file
@@ -0,0 +1,219 @@
# Real-World Optimization Examples

Case studies showing prompt-architecting in action.

## Example 1: Skill Generation

### Original Request

```
User: "Create a skill for processing PDFs"
```

### Without Optimization

Claude generates:

- 1,200-line SKILL.md with 15 subsections
- Complete PDF specification history
- 8 different extraction libraries compared
- 20 edge cases documented inline
- Security considerations essay
- Performance tuning guide
- Complete API reference

### With prompt-architecting

**Subagent analysis:**

- Task: Skill creation
- Complexity: Medium (PDF parsing is known domain)
- Risk: Over-documentation of libraries, edge cases
- Optimal strategies: Constraint-Based, Template-Based, Progressive Disclosure

**Optimized prompt:**

```
Create PDF processing skill. SKILL.md MAX 300 words:
- When to use (50w)
- Core workflow: upload → extract text/images → output (100w)
- One example showing pypdf2 usage (code only, 100w)
- Move to references/: library comparison (pypdf2 vs pdfplumber), OCR for scanned PDFs, encoding issues

Match this style: {paste concise skill example from building-skills}
```

**Result:** 280-word SKILL.md, 2 reference files, actionable and maintainable

---

## Example 2: Documentation Generation

### Original Request

```
User: "Document the API endpoints in src/api/"
```

### Without Optimization

Claude writes:

- Comprehensive REST API guide
- HTTP protocol primer
- Authentication deep-dive
- Rate limiting theory
- Pagination best practices
- Error handling philosophy
- 40 pages of markdown

### With prompt-architecting

**Subagent analysis:**

- Task: API documentation
- Complexity: Low (structured data)
- Risk: Theory instead of reference
- Optimal strategies: Output Formatting, Template-Based, Negative Prompting

**Optimized prompt:**

```
Document API endpoints. Format per endpoint:
## [METHOD] /path
Purpose: {1 sentence}
Auth: {required|optional|none}
Params: {table: name, type, required, description}
Response: {JSON schema only}
Errors: {codes list}
Example: {curl + response}

DO NOT: Include HTTP theory, auth implementation details, or pagination strategy essays.
Target: API reference, not guide.
```

**Result:** Clean reference docs, 8 endpoints in 4 pages, instantly usable

---

## Example 3: Plan Creation

### Original Request

```
User: "Plan the implementation of user notification system"
```

### Without Optimization

Claude creates:

- 15-page plan with every possible notification type
- Complete microservices architecture
- Email, SMS, push, in-app, webhook notifications
- Queue theory and message broker comparison
- Scaling strategy for 1M users
- 30 implementation tasks spanning 6 months

### With prompt-architecting

**Subagent analysis:**

- Task: Implementation plan
- Complexity: High (but scope unclear)
- Risk: Over-engineering, scope creep
- Optimal strategies: Decomposition, Directive Hierarchy, Constraint-Based

**Optimized prompt:**

```
Create notification system plan. Scope: Email + in-app only (MVP).

MUST cover:
- Data model (users, notifications, preferences)
- Notification creation flow
- Delivery mechanism (async job)
- User preferences UI

SHOULD cover:
- Read/unread state
- Basic templates

OUT OF SCOPE: SMS, push, webhooks, scaling >10K users

Structure: [Problem] [Phase 1 tasks] [Phase 2 tasks] [Success criteria]
Target: 2-week implementation for 2 devs
```

**Result:** Focused 4-page plan, 12 actionable tasks, clear scope, deliverable

---

## Example 4: Code Review Request

### Original Request

```
User: "Review this code and tell me what you think"
```

### Without Optimization

Claude provides:

- Line-by-line analysis
- Code style commentary
- Design pattern suggestions
- Performance speculation
- Security theoretical concerns
- Refactoring proposal (3 alternatives)
- 800 lines of feedback

### With prompt-architecting

**Subagent analysis:**

- Task: Code review (but criteria unknown)
- Complexity: Medium
- Risk: Unfocused feedback overload
- Optimal strategies: Constraint-Based, Audience-Targeted, Decomposition

**Optimized prompt:**

```
Review code for: (1) Bugs/errors, (2) Security issues, (3) Performance bottlenecks.

Format:
## Issues Found
- [SEVERITY] Location: Description + fix suggestion

## Summary
- {Count by severity}
- {Top priority item}

DO NOT: Comment on style, suggest refactorings, or discuss alternative patterns unless directly related to bugs/security/performance.

Audience: Code works, need to ship, focused review only.
```

**Result:** 15-line review, 2 bugs found, 1 security fix, actionable

---

**For advanced workflow and agent optimization examples, see ADVANCED-EXAMPLES.md**

---

## Lessons Learned

**Unspecified scope = maximal scope** (Examples 1-3): Without constraints, Claude assumes comprehensive coverage. Fix: Set MAX length and explicit boundaries.

**Complexity triggers research mode** (Examples 1, 2): Unfamiliar topics trigger defensive over-documentation. Fix: Progressive Disclosure - overview now, details in references.

**Ambiguous success = everything** (Example 3): "Help me understand" lacks a definition of done. Fix: Define success concretely ("New dev deploys in <10 min").

**Implicit = inclusion** (Examples 2, 4): Unexcluded edge cases get included. Fix: Negative Prompting to exclude known bloat.

**Workflow patterns** (see ADVANCED-EXAMPLES.md): Numbered steps don't mandate completion after async operations. Fix: Execution Flow Control.

**Meta-lesson**: Every optimization uses 2-3 strategies, never just one. Pair Constraint-Based with structure (Template/Format) or exclusion (Negative). For workflows with dependencies, Execution Flow Control is mandatory.
2325
skills/prompt-architecting/references/OPTIMIZATION-SAFETY-GUIDE.md
Normal file
2325
skills/prompt-architecting/references/OPTIMIZATION-SAFETY-GUIDE.md
Normal file
File diff suppressed because it is too large
Load Diff
249
skills/prompt-architecting/references/STRATEGIES.md
Normal file
249
skills/prompt-architecting/references/STRATEGIES.md
Normal file
@@ -0,0 +1,249 @@
# Prompting Strategies Catalog

Reference for prompt-architect. Each strategy includes when to use and an example pattern.

# IMPORTANT: Read Safety Guide First

Before selecting strategies, read OPTIMIZATION-SAFETY-GUIDE.md to understand:
- When NOT to optimize
- Three dimensions of optimization (verbosity, structure, notation)
- Over-optimization risks
- Natural language vs technical strategy decision criteria
- Callable entity preservation requirements
- Strategy combination limits (1-3 max)
- Cognitive load as the core metric

This ensures trustworthy optimization that reduces cognitive load while preserving intent.

---

## 1. Constraint-Based Prompting

**When**: Task scope clear but tends toward over-generation
**Pattern**: Set hard boundaries on length/scope
**Example**: `Generate auth docs. MAX 300 words. Cover only: setup, usage, errors.`

## 2. Progressive Disclosure

**When**: Complex topics where details can be separated
**Pattern**: Overview in main doc, details in references
**Example**: `Write skill overview (100w), then separate reference docs for: API specs, edge cases, examples.`

## 3. Template-Based

**When**: Output needs consistent structure
**Pattern**: Provide fill-in-the-blank format
**Example**: `Follow: [Problem] [Solution in 3 steps] [One example] [Common pitfall]`

## 4. Directive Hierarchy

**When**: Mixed priority requirements
**Pattern**: Use MUST/SHOULD/MAY tiers
**Example**: `MUST: Cover errors. SHOULD: Include 1 example. MAY: Reference advanced patterns.`

## 5. Negative Prompting

**When**: Known tendency to add unwanted content
**Pattern**: Explicitly exclude behaviors
**Example**: `Write deploy guide. DO NOT: framework comparisons, history, "best practices" essays.`

## 6. Few-Shot Learning

**When**: Abstract requirements but concrete examples exist
**Pattern**: Show 2-3 examples of desired output
**Example**: `Good doc: [150w example]. Bad doc: [verbose example]. Follow "good" pattern.`

## 7. Decomposition

**When**: Complex multi-step tasks
**Pattern**: Break into numbered discrete subtasks
**Example**: `Step 1: Identify 3 use cases. Step 2: 50w description each. Step 3: 1 code example each.`

## 8. Comparative/Contrastive

**When**: Need to show the difference between good/bad
**Pattern**: Side-by-side ❌/✅ examples
**Example**: `❌ "Comprehensive guide covering everything..." ✅ "Setup: npm install. Use: auth.login()."`

## 9. Anchoring

**When**: Have a reference standard to match
**Pattern**: Provide an example to emulate
**Example**: `Match style/length of this: [paste 200w reference doc]`

## 10. Output Formatting

**When**: Structure more important than content discovery
**Pattern**: Specify exact section structure
**Example**: `Format: ## Problem (50w) ## Solution (100w) ## Example (code only)`

## 11. Density Optimization

**When**: Content tends toward fluff/filler
**Pattern**: Maximize information per word
**Example**: `Write as Hemingway: short sentences, concrete nouns, active voice. Every sentence advances understanding.`

## 12. Audience-Targeted

**When**: Reader expertise level known
**Pattern**: Specify what to skip based on audience
**Example**: `Audience: Senior dev who knows React. Skip basics, focus on gotchas and our implementation.`

## 13. Execution Flow Control

**When**: Complex workflows requiring state management, branching control, or approval gates
**Pattern**: Mandate complete execution with explicit flow control and dependencies
**Example**:

```markdown
Execute this workflow completely:
1. READ: Use Read tool on $1 → content
2. PARSE: Extract front matter + body → {front_matter, body}
3. OPTIMIZE: Use prompt-architecting skill → optimized_body
4. PRESENT: Show optimized_body → STOP, WAIT for approval

EXECUTION RULES:
- Stop only at step 4 (user approval required)
- Task incomplete until approval received
```

**Indicators**:
- REQUIRED: User approval gates, multiple terminal states, 3-way+ branching, complex state tracking
- NOT REQUIRED: Simple sequential tasks, linear flow, skill invocations for data only

**Anti-pattern**: Using EFC for simple tasks that can be expressed as "Do X, then Y, then Z"

See OPTIMIZATION-SAFETY-GUIDE.md for the complete Execution Flow Control pattern, language guidelines, and agent/workflow optimization standards.

## 14. Natural Language Reframing

**When**: 1-2 step tasks where the sequence is obvious or trivial
**Pattern**: Rewrite as clear prose when enumeration adds no clarity
**Example**:

Input (over-enumerated):
```markdown
1. Read the file at the provided path
2. Write it back with modifications
```

Output (natural language):
```markdown
Read the file and write it back with modifications.
```

**Research findings**: Enumeration helps for 3+ steps (improves thoroughness, reduces ambiguity, provides cognitive anchors). Only skip enumeration when:
- 1-2 very simple steps
- Sequence is completely obvious
- Structure would add no clarity

**Indicators for natural language**:
- Task is genuinely 1-2 steps (not 3+ steps disguised as one job)
- Sequence is trivial/obvious
- No need for the LLM to address each point thoroughly
- Enumeration would be redundant

**Why research matters**: Studies show prompt formatting impacts performance by up to 40%. Numbered lists help LLMs:
- Understand sequential steps clearly
- Address each point thoroughly and in order
- Reduce task sequence ambiguity
- Provide cognitive anchors that reduce hallucination

**Anti-pattern**: Avoiding enumeration for 3+ step tasks. Research shows structure helps more than it hurts for multi-step instructions.

**Revised guidance**: Default to enumeration for 3+ steps. Use natural language only when complexity truly doesn't justify structure.

---

## 15. Technical → Natural Transformation

**When**: Over-technical notation detected (3+ indicators) and the cognitive load test shows notation hurts understanding

**Indicators**:
- CAPS labels as action markers (CHECK:, PARSE:, VALIDATE:)
- → notation for data flow (→ variable_name)
- Variable naming conventions (work_file_status, requirement_data)
- Function call syntax (tool({params}))
- Sub-step enumeration (a/b/c when prose would work)
- Defensive meta-instructions ("DO NOT narrate", "continue immediately")

**Pattern**: Keep the appropriate structure level (based on complexity score), simplify notation to organized natural language

**Transformation**:
- CAPS labels → natural section headers or action verbs
- → notation → implicit data flow or prose
- Variable names → eliminate or minimize
- Function call syntax → natural tool mentions
- Sub-step enumeration → consolidate to prose
- Defensive warnings → remove (trust structure)

**Example**:

Before (over-technical):
```
1. CHECK: Verify status → work_file_status
   a. Use Bash `git branch` → branch_name
   b. Check if file exists
   c. DO NOT proceed if exists
2. PARSE: Extract data → requirement_data
```

After (organized natural):
```
## Setup

Get current branch name and check if work file already exists. If it exists, stop and tell user to use /dev-resume.

Parse the requirement source...
```

**Why this works**:
- Preserves appropriate structure (complexity still warrants organization)
- Removes ceremonial notation that creates cognitive load
- Eliminates stopping risk (no CAPS/→/variables creating boundaries)
- Natural language is clearer for LLM audiences
- Reduces cognitive load significantly

**Often solves multiple problems simultaneously**:
- Dimension 3 (notation clarity)
- Stopping risk (no false completion boundaries)
- Cognitive load reduction

**May be sufficient optimization alone** - don't over-optimize by adding more strategies.

**See OPTIMIZATION-SAFETY-GUIDE.md Part 3 for detailed examples and Part 6 for stopping risk relationship.**

---

## Strategy Selection Guide

**FIRST**: Calculate complexity score (see SKILL.md Step 2). Let the score guide structure level.

**New addition**: Technical → Natural Transformation (applies across all complexity levels when notation is over-technical)

**By complexity score** (research-informed):

- **Score ≤ 0**: Natural Language Reframing acceptable (1-2 trivial steps). Add Constraint-Based if word limits needed.
- **Score 1-2**: Use numbered enumeration (research: 3+ steps benefit from structure). Add Template-Based or Constraint-Based. Avoid heavy EFC.
- **Score 3-4**: Moderate structure (enumeration + opening mandate). Add Decomposition or Template-Based. No EXECUTION RULES yet.
- **Score ≥ 5**: Full EFC pattern (mandate + EXECUTION RULES). Add Decomposition + Directive Hierarchy.
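The score thresholds above can be sketched as a simple lookup (illustrative only; the function name and return labels are hypothetical, while the thresholds come from this guide):

```python
def structure_level(score: int) -> str:
    """Map a prompt-complexity score to the structure level suggested above."""
    if score <= 0:
        return "natural language reframing"
    if score <= 2:
        return "numbered enumeration"
    if score <= 4:
        return "moderate structure (enumeration + opening mandate)"
    return "full EFC (mandate + EXECUTION RULES)"
```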

**By output type**:

**For skills**: Constraint-Based + Template-Based primary. Add Progressive Disclosure (move details to references/).

**For documentation**: Output Formatting + Density Optimization primary. Add Audience-Targeted or Negative Prompting conditionally.

**For plans**: Template-Based + Decomposition primary. Add Directive Hierarchy for priority tiers.

**For simple workflows** (can be described as a single job): Natural Language Reframing primary. Avoid enumeration and formal structure.

**For complex workflows** (approval gates, multiple terminal states): Execution Flow Control (appropriate level based on score) + Decomposition. Apply agent/workflow optimization guidelines (40-50% reduction, preserve procedural detail). See OPTIMIZATION-SAFETY-GUIDE.md for specifics.

**General complexity-based**:

- Low: 1-2 strategies (Natural Language Reframing or Constraint-Based + Output Formatting)
- Medium: 2 strategies (Template-Based + Constraint-Based or light EFC)
- High: 2-3 strategies max (full EFC + Decomposition, or Natural Language + Progressive Disclosure)

**Rule**: 1-3 strategies optimal. More than 3 = over-optimization risk.
114
skills/publish-github-issues/SKILL.md
Normal file
114
skills/publish-github-issues/SKILL.md
Normal file
@@ -0,0 +1,114 @@
---
name: publish-github-issues
description: Publishes GitHub issues from YAML files using gh CLI. Use when publishing draft issues to GitHub or user provides YAML file path in tmp/issues/. Needs YAML file with valid issue definitions and gh CLI authenticated. Trigger with phrases like 'publish issues [file-path]', 'create github issues from [file-path]', 'publish to github'.
allowed-tools: "Read, Bash(gh:*), AskUserQuestion"
---

Base directory for this skill: {baseDir}

## Workflow

1. Get YAML file path (ask if not provided)
2. Read and validate YAML structure
3. Create issues in GitHub (parents before children)
4. Link child issues to parents using parent_ref
5. Add issues to project (if specified)
6. Ask user to archive YAML to processed/

<yaml_validation>

## YAML Structure

**Required fields:**

- `repository` (format: `owner/repo`)
- `issues` (array with at least one entry)
- Each issue: `ref`, `title`, `body`

**Optional fields:**

- Top-level: `project` (integer), `defaults` (labels/milestone)
- Per-issue: `parent_ref`, `milestone`, `labels` (array)

</yaml_validation>
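A minimal validation sketch of the rules above (assumes the YAML has already been parsed into a dict, e.g. with PyYAML's `safe_load`; the function name and error messages are illustrative):

```python
def validate_draft(data: dict) -> list[str]:
    """Check the required fields described above; return error messages (empty list = valid)."""
    errors = []
    repo = data.get("repository", "")
    if not isinstance(repo, str) or "/" not in repo:
        errors.append("repository must use owner/repo format")
    issues = data.get("issues") or []
    if not issues:
        errors.append("issues must contain at least one entry")
    refs = {issue.get("ref") for issue in issues}
    for i, issue in enumerate(issues):
        for field in ("ref", "title", "body"):
            if not issue.get(field):
                errors.append(f"issues[{i}]: missing required field '{field}'")
        # parent_ref, when present, must point at another issue's ref
        parent = issue.get("parent_ref")
        if parent is not None and parent not in refs:
            errors.append(f"issues[{i}]: parent_ref '{parent}' matches no issue ref")
    return errors
```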

<issue_creation>

## Creating Issues

**Order:** Create parent issues first, store their numbers, then create children.

**For each issue:**

```bash
gh issue create \
  --repo {repository} \
  --title "{title}" \
  --body "{body}" \
  [--milestone "{milestone}"] \
  [--label "label1,label2"]
```

**Parent-child linking:**
When an issue has `parent_ref`, look up the parent's issue number and add it to the body:

```
Depends on: #{parent_number}
```

**Output:** Report each created issue: `✓ #{number}: {title}`

</issue_creation>
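The parents-before-children ordering can be sketched as a small topological sort over `ref`/`parent_ref` (illustrative helper; assumes refs are unique and parent chains are acyclic):

```python
def creation_order(issues: list[dict]) -> list[dict]:
    """Order issues so every parent is created before any issue that references it."""
    by_ref = {issue["ref"]: issue for issue in issues}
    ordered, seen = [], set()

    def visit(issue: dict) -> None:
        if issue["ref"] in seen:
            return
        parent = issue.get("parent_ref")
        if parent in by_ref:  # create the parent first
            visit(by_ref[parent])
        seen.add(issue["ref"])
        ordered.append(issue)

    for issue in issues:
        visit(issue)
    return ordered
```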

<project_assignment>

## Adding to Project (Optional)

If YAML has `project` field (project number), add each created issue:

```bash
gh project item-add {project_number} \
  --owner {org} \
  --url {issue_url}
```

</project_assignment>

<archiving>

## Archive YAML

**Important:** Only archive if ALL issues created successfully.

1. Ask user: "Move YAML to processed/? (Y/n)"
2. If yes: Create `tmp/issues/processed/` if needed, move file
3. On partial failure: Keep file in tmp/issues/ for retry

</archiving>

<error_handling>

## Error Handling

**gh not found:** `which gh` fails → Install gh CLI
**Not authenticated:** `gh auth status` fails → Run `gh auth login`
**API rate limit:** Wait ~1 hour, check `gh api rate_limit`
**Partial failure:** Report succeeded/failed issues, do not archive
**YAML parse error:** Report line number and field

</error_handling>

## Example Output

```
Creating 3 issues in owner/repo...

✓ #188: Implement manual trial subscription management
✓ #189: Integrate Paddle billing system (child of #188)
✓ #190: Migrate manual trial tenants to Paddle (child of #188, #189)

All issues created successfully.
Added 3 issues to project "My Project".

Move YAML to processed/? (Y/n)
```
313
skills/saas-pricing-strategy/SKILL.md
Normal file
313
skills/saas-pricing-strategy/SKILL.md
Normal file
@@ -0,0 +1,313 @@
---
name: saas-pricing-strategy
description: Advises on SaaS pricing strategy using Daniel Priestley's oversubscription principles and Patrick Campbell's value-based framework. Use when defining pricing tiers, selecting value metrics, positioning against competitors, or creating pricing page copy for any SaaS product.
---

# SaaS Pricing Strategy

Apply proven pricing frameworks from Daniel Priestley (demand generation) and Patrick Campbell (value-based pricing) to optimize SaaS pricing strategy.

## When to Use This Skill

- Defining or revising pricing tiers
- Selecting value metrics (per-seat, usage-based, flat-rate, hybrid)
- Competitive positioning and market analysis
- Writing pricing page copy
- Planning pricing experiments or A/B tests
- Evaluating pricing model changes

## Core Pricing Philosophy

### Daniel Priestley's Oversubscription Principle

#### Demand > Supply = Pricing Power

DO NOT compete on price. Compete on demand generation.

Key tenets:

- Transparent capacity constraints create urgency
- Waiting lists signal value and scarcity
- Price reflects perceived value, not just costs
- Market positioning matters more than feature comparison

### Patrick Campbell's Value-Based Framework

#### Price on value delivered, not cost incurred

Key principles:

1. **Value metric alignment**: What you charge for should match what customers value
2. **Buyer persona intimacy**: Different segments have different willingness to pay
3. **Continuous iteration**: Pricing is ongoing optimization, not a one-time decision
4. **3-tier sweet spot**: Too few tiers miss willingness-to-pay variance; too many create choice overload

## Value Metric Selection

### Common Value Metrics

**Per-Seat (Per-User)**

- Best for: Collaboration tools, team software, platforms where value scales with team size
- Pros: Predictable, simple, aligns with organizational growth
- Cons: Can discourage adding users, ceiling effect for small teams

**Usage-Based (Consumption)**

- Best for: Infrastructure, APIs, data processing, services with variable usage
- Pros: Fair pricing ("pay for what you use"), no ceiling on revenue
- Cons: Unpredictable billing, complex to explain, requires usage tracking

**Flat-Rate (All-You-Can-Eat)**

- Best for: Simple products, low variance in usage, commoditized markets
- Pros: Simplest to communicate, no metering overhead
- Cons: Leaves money on the table with power users, doesn't scale with value

**Feature-Based (Good-Better-Best)**

- Best for: Products with clear feature differentiation, tiered capabilities
- Pros: Upgrade path is clear, captures different willingness to pay
- Cons: Feature bloat temptation, can feel arbitrary

**Hybrid (Combination)**

- Best for: Complex products where multiple dimensions drive value
- Pros: Captures more value, serves diverse segments
- Cons: More complex to communicate and implement

### Decision Framework for Value Metric

Ask these questions:

1. What metric correlates most strongly with customer value received?
2. What's simple enough for customers to predict their costs?
3. What aligns incentives (not penalizing desired behavior)?
4. What grows naturally as customer success grows?
5. What can you reliably measure and bill for?

## Tiered Pricing Structure

### Optimal Tier Count

**3-4 tiers is ideal**

- 2 tiers: Not enough choice, hard to capture variance
- 3-4 tiers: Sweet spot for conversion
- 5+ tiers: Analysis paralysis, decision fatigue

### Tier Differentiation Strategies

**Capacity Limits** (quantity-based)

- Users, seats, projects, API calls, storage, transactions
- Example: "Up to 5 users" vs "Up to 25 users"

**Feature Access** (capability-based)

- Advanced features, integrations, customization, priority support
- Example: "Basic reports" vs "Custom dashboards + API access"

**Service Level** (support-based)

- Response time, dedicated support, onboarding, account management
- Example: "Email support" vs "24/7 phone + dedicated CSM"

**Usage Rights** (commercial terms)

- Commercial use, white-labeling, resale rights, SLA guarantees
- Example: "Personal use" vs "Commercial use + SLA"

### Pricing Tier Psychology

**Anchor with highest price**: Show the Enterprise tier first or prominently to make the mid-tier seem reasonable

**Highlight recommended tier**: Use a "Most Popular" or "Best Value" badge on the target tier (usually the middle)

**Price gaps should increase**: $30 → $60 → $120 feels better than $30 → $50 → $70

**Round numbers for simplicity**: $99/mo feels gimmicky for B2B; use $100/mo
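The increasing-gap ladder above ($30 → $60 → $120) doubles the price at each step; as a quick sketch (a hypothetical helper, not part of either framework):

```python
def tier_prices(base: float, tiers: int = 3, ratio: float = 2.0) -> list[float]:
    """Price ladder with widening gaps: each tier multiplies the previous price by `ratio`."""
    return [base * ratio ** i for i in range(tiers)]
```

With `ratio > 1` the absolute gap between adjacent tiers grows, which is the psychology the guideline above recommends.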

## Pricing Page Best Practices

### Page Structure

1. **Lead with value, not features**
   - Wrong: "Unlimited clients, 5GB storage, custom fields"
   - Right: "Save 10 hours per week on reporting"

2. **Show annual savings option**
   - Offer 2 months free for annual billing (17% discount)
   - Improves cash flow and reduces churn

3. **Transparent tier comparison**
   - Feature comparison table with clear differentiators
   - Use checkmarks, not excessive text
   - Highlight recommended tier

4. **Social proof by tier**
   - Include customer counts, testimonials, use case examples per tier
   - "Perfect for teams of 5-10" or "Used by 500+ companies like yours"

5. **Remove friction**
   - Free trial (14-30 days)
   - No credit card required for trial (increases signups)
   - Easy upgrade/downgrade path
   - Money-back guarantee if appropriate
||||
|
||||
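The annual-savings math in point 2 is worth making explicit: N free months on a 12-month term is an effective discount of N/12, so 2 free months ≈ 17%. A minimal sketch (the function name and defaults are illustrative):

```python
def annual_discount(monthly_price: float, free_months: int = 2) -> tuple[float, float]:
    """Annual price with N free months, plus the effective discount rate
    relative to paying month-to-month for 12 months."""
    annual_price = monthly_price * (12 - free_months)
    discount = free_months / 12
    return annual_price, discount

price, discount = annual_discount(100)
print(f"${price:.0f}/yr ({discount:.0%} off)")  # $1000/yr (17% off)
```

Framing the same number as "2 months free" rather than "16.7% off" is usually the stronger pitch, since months are concrete and percentages invite comparison shopping.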
### Copy Framework

**Headline**: Focus on outcome transformation

- Good: "Close deals faster with intelligent CRM"
- Bad: "Affordable CRM software"

**Subhead**: Address primary objection

- "Simple enough to start today. Powerful enough to scale with you."

**CTA Language**:

- Entry tier: "Start Free Trial"
- Mid tier: "Start Free Trial" or "Get Started"
- Top tier: "Start Free Trial" or "Schedule Demo"
- Enterprise: "Contact Sales" or "Let's Talk"

### Priestley-Aligned Demand Tactics

**Capacity Signaling**:

- "We're currently onboarding X new customers per month"
- "Join X+ companies already using [Product]"
- "Limited spots available for [special program]"

**Demand Indicators**:

- Show number of customers/users
- Display recent signups (with permission)
- Highlight waitlist count or assessment completions

## Competitive Positioning

### Market Research Checklist

Before setting prices, research:

1. **Direct competitors**: What do they charge? How do they tier?
2. **Adjacent solutions**: What alternatives exist? (spreadsheets, consultants, etc.)
3. **Customer budget**: What's typical spend for this category?
4. **Switching costs**: How hard is it to leave the current solution?
5. **Perceived value gap**: How much better are you, quantifiably?

### Positioning Strategies

**Price leadership** (lowest price)

- Only if the cost advantage is sustainable
- Risks a race to the bottom
- Attracts price-sensitive, high-churn customers

**Value leadership** (best value)

- Sweet spot for most SaaS
- Middle-market pricing with a superior product/service
- "We're not the cheapest, but we're worth it"

**Premium positioning** (highest price)

- Requires defensible differentiation
- Attracts the best customers, lower churn
- "You get what you pay for"

## Pricing Experiments & Iteration

### What to Test

1. **Tier names**: Functional vs. aspirational (Starter vs. Essential)
2. **Anchor pricing**: Show the Enterprise price to make Professional seem reasonable
3. **Feature bundling**: Which features drive upgrades?
4. **Annual vs. monthly default**: Does showing annual first increase LTV?
5. **Trial length**: 14 days vs. 30 days conversion rates
6. **CTA copy**: "Start Free Trial" vs. "Get Started Free"

### Metrics to Track

- **Conversion rate by tier**: Which tier converts best from trial?
- **Time to upgrade**: How long before customers outgrow their tier?
- **Churn by tier**: Do certain tiers retain better?
- **Revenue per customer by tier**: LTV analysis
- **Failed payment recovery rate**: Billing issue resolution
- **Price sensitivity**: At what price point do signups drop?

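For the LTV analysis above, a common first-pass formula is monthly revenue per user divided by monthly churn rate. This is a deliberate simplification — it ignores discounting, expansion revenue, and non-constant churn — and the tier figures below are hypothetical:

```python
def ltv(arpu_monthly: float, monthly_churn_rate: float) -> float:
    """First-pass lifetime value: monthly ARPU / monthly churn rate.
    Equivalent to ARPU times average customer lifetime in months."""
    return arpu_monthly / monthly_churn_rate

# Hypothetical tiers: a pricier tier that also churns less
# compounds into a much higher LTV, not just a higher ARPU.
for tier, arpu, churn in [("Starter", 30, 0.05), ("Pro", 100, 0.02)]:
    print(f"{tier}: ${ltv(arpu, churn):,.0f} LTV")
```

Comparing this number across tiers is what makes "churn by tier" actionable: a tier can convert well and still be the worst performer once retention is priced in.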
### Iteration Cadence

- Review pricing metrics: Monthly
- Minor adjustments (copy, positioning): Quarterly
- Major structural changes (tiers, value metric): Annually
- Always communicate changes in advance and grandfather existing customers when appropriate

## Common Pitfalls to Avoid

❌ **Competing on price alone**: Race to the bottom, attracts the worst customers

✓ **Compete on demand and positioning**: Create scarcity, demonstrate value

❌ **Too many tiers**: Analysis paralysis kills conversions

✓ **3-4 tiers maximum**: Clear upgrade path, easy decision

❌ **Grandfathering forever**: Prevents necessary price increases

✓ **Communicate value, migrate gradually**: Give notice, explain benefits

❌ **Feature-based differentiation only**: Customers don't buy features

✓ **Outcome-based positioning**: "Save X hours/week" or "Increase revenue by Y%"

❌ **Hiding pricing**: "Contact us" for all tiers reduces trust

✓ **Transparent pricing**: Builds trust, qualifies leads naturally

❌ **Set and forget**: Pricing is not a one-time decision

✓ **Continuous optimization**: Treat pricing like product development

## Decision Framework

When evaluating pricing changes, ask:

1. **Does this align with customer value perception?** (Campbell principle)
2. **Does this create healthy demand/supply tension?** (Priestley principle)
3. **Is it simple to understand and predict?** (Complexity kills conversion)
4. **Does it scale with customer success?** (Value metric alignment)
5. **Can we test it without disrupting existing customers?** (Iteration safety)

If yes to all five, proceed with the experiment. If no to any, revisit the approach.

## Workflow

When asked to help with pricing:

1. **Understand the context**
   - What does the product do?
   - Who are the customers?
   - What competitors exist and how do they price?
   - What constraints exist (market, budget, positioning)?

2. **Apply the frameworks**
   - Recommend a value metric using the decision framework
   - Suggest tier structure (3-4 tiers)
   - Position against competition using Priestley's demand principles
   - Apply Campbell's value-based approach

3. **Deliverable options**
   - Pricing strategy document
   - Pricing page copy
   - Competitive analysis
   - A/B test plan
   - Pricing tier structure with rationale

4. **Next steps**
   - Set up tracking and metrics
   - Plan communication strategy
   - Schedule quarterly pricing review