Initial commit

Zhongwei Li
2025-11-30 08:52:57 +08:00
commit 8a338b17fd
15 changed files with 3492 additions and 0 deletions

@@ -0,0 +1,178 @@
---
description: "Review scored content and provide improvement suggestions"
---
# Critic Review
## Mission
Review all scored content variations, provide specific improvement suggestions for pieces scoring 15-25/30, and filter out content below the quality threshold (< 20/30).
## Process
Follow the Critic agent instructions (`agents/critic.md`) to:
1. **Read scored content** from content-drafts.md
2. **Evaluate quality** (factual accuracy, framework alignment, engagement potential)
3. **Generate critiques** with strengths, weaknesses, and specific suggestions
4. **Apply Pass/Fail verdicts** (< 20 = FAIL, ≥ 20 = PASS)
5. **Update content-drafts.md** with critique sections
## Execution Steps
### Step 1: Read Scored Content
**Input Source**: `content-drafts.md` with complete scores
**Focus On**:
- Content scoring 15-25/30 (improvement candidates)
- Content scoring < 20/30 (automatic FAIL)
- Content scoring 20+/30 (PASS but optimize)
### Step 2: Quality Evaluation
For each piece, assess:
#### Factual Accuracy (Non-Negotiable)
- [ ] All claims verifiable against themes-memory.md source stories
- [ ] No hallucinated examples or fictional scenarios
- [ ] Numbers, dates, names match source material
**If Inaccurate**: Mark FAIL regardless of score
#### Framework Alignment
- [ ] Gap Selling: Problem clear, emotional stakes present, solution evident
- [ ] Biases: Claimed biases actually activated
- [ ] Decision Framework: Hook strong, value clear, CTA present
#### Engagement Potential
- [ ] First line grabs attention
- [ ] Logical flow
- [ ] Emotional resonance
- [ ] Actionable insight
- [ ] Platform-appropriate style
### Step 3: Generate Structured Critique
For each piece:
```markdown
**Critic Notes:**
**Strengths:**
- {Specific element that works, with reference}
- {Second strength}
- {Third strength}
**Weaknesses:**
- {Specific issue with explanation}
- {Second weakness and why it matters}
- {Third weakness}
**Suggestions:**
- {Concrete edit: "Change X to Y because..."}
- {Second suggestion with specific line reference}
- {Third suggestion}
**Verdict:** {✅ PASS or ❌ FAIL}
```
### Step 4: Apply Pass/Fail Logic
**FAIL if ANY of these true**:
- Total score < 20/30
- Factually inaccurate
- Hallucinated information
- Gap Selling < 6/10 (problem unclear)
- Decision Framework < 6/10 (weak hook or no value)
**PASS if ALL of these true**:
- Total score ≥ 20/30
- Factually accurate
- All frameworks adequately addressed
- Engagement potential present
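Expressed as code, the verdict is FAIL-first: any single disqualifier fails the piece. A minimal Python sketch, assuming the scores have already been parsed into a dict (field names are illustrative, not a defined schema):
```python
def verdict(piece: dict) -> str:
    """Apply the FAIL-first verdict rules to one scored piece.

    Expected shape (illustrative): {"total": 24, "gap": 7, "decision": 8,
    "factually_accurate": True, "hallucination": False}
    """
    fail = (
        piece["total"] < 20
        or not piece["factually_accurate"]
        or piece["hallucination"]
        or piece["gap"] < 6       # problem unclear
        or piece["decision"] < 6  # weak hook or no value
    )
    return "❌ FAIL" if fail else "✅ PASS"
```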
### Step 5: Update content-drafts.md
Add critique section after scores for each variation.
## Validation Checklist
Before marking review complete:
- [ ] All scored content reviewed
- [ ] Factual accuracy verified for each piece
- [ ] Specific strengths identified (3 per piece)
- [ ] Specific weaknesses identified (3 per piece)
- [ ] Actionable suggestions provided (3 per piece)
- [ ] Pass/Fail verdicts assigned
- [ ] content-drafts.md updated with critiques
## Common Improvement Patterns
**If Gap Selling Low** (< 6/10):
- Make problem more explicit
- Increase emotional stakes
- Strengthen future-state value
**If Bias Score Low** (< 5):
- Add before/after structure (Contrast)
- Include numbers/credentials (Authority)
- Reference crowd behavior (Social Proof)
- Give free value (Reciprocation)
**If Decision Framework Low** (< 6/10):
- Strengthen opening hook
- Add actionable insight
- Make CTA explicit and low-friction
## Example Output
```
✅ Critic Review Complete
Variations Reviewed: 25
Pass/Fail Distribution:
- PASS: 21 pieces (84%)
- FAIL: 4 pieces (16%)
Common Strengths:
- Strong vulnerability/authenticity (18/25 pieces)
- Effective contrast before/after structure (15/25 pieces)
- Clear problem statements (20/25 pieces)
Common Weaknesses:
- CTAs often philosophical rather than actionable (12/25 pieces)
- Emotional stakes could be more vivid (8/25 pieces)
- Some hooks predictable (6/25 pieces)
Top Recommendations:
1. Convert philosophical CTAs to specific actions
2. Amplify stakes language for emotional impact
3. Test open-loop questions vs bold statements for hooks
Output File: content-drafts.md (updated with critiques)
Next Step: Run /content-select-best to choose top piece
```
## Error Handling
**If critique seems subjective**:
- Reference specific lines and concrete issues
- Justify suggestions with framework principles
- Avoid "I think" or "I feel" language
**If uncertain about factual accuracy**:
- Cross-check against themes-memory.md source stories
- Flag for user verification
- Mark as uncertain rather than guessing
## Next Steps
After successful review:
1. Review critiques in content-drafts.md
2. Run `/content-select-best` to select top piece
3. Or continue with full pipeline if running `/content-full-pipeline`

@@ -0,0 +1,146 @@
---
description: "Extract stories from Linear tasks and identify themes for content generation"
---
# Extract Stories from Linear
## Mission
Connect to Linear, extract stories from the tasks specified by the user, identify recurring themes, and output structured theme data to `themes-memory.md`.
## Process
Follow the Story Extractor agent instructions (`agents/story-extractor.md`) to systematically:
1. **Connect to Linear MCP** and fetch task details
2. **Analyze story content** for problems, insights, and emotional hooks
3. **Identify recurring themes** (minimum 5)
4. **Structure theme extraction** with all required sections
5. **Write to themes-memory.md** in the poasting repository
6. **Update Linear tasks** with extraction confirmation
## Execution Steps
### Step 1: Verify Prerequisites
Check that required tools are available:
- [ ] Linear MCP installed and configured
- [ ] LINEAR_API_KEY set in .env file
- [ ] themes-memory.md path accessible: `./poasting/themes-memory.md`
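A preflight sketch for these checks. It can confirm the API key and paths but not the MCP installation itself, and the `.env` parsing is deliberately naive:
```python
import os
from pathlib import Path

def check_prerequisites() -> list[str]:
    """Return human-readable problems; an empty list means ready."""
    problems = []
    env_file = Path(".env")
    has_key = "LINEAR_API_KEY" in os.environ or (
        env_file.exists() and "LINEAR_API_KEY" in env_file.read_text()
    )
    if not has_key:
        problems.append("LINEAR_API_KEY not found in environment or .env")
    if not Path("./poasting").is_dir():
        problems.append("./poasting directory missing (themes-memory.md target)")
    return problems
```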
### Step 2: Fetch Stories from Linear
Use Linear MCP commands to retrieve tasks POA-5 through POA-14:
```javascript
// List all target issues (replace the placeholders with the actual issue IDs)
mcp__linear__list_issues({
  "team": "YOUR_TEAM_ID",
  "filter": {
    "id": {
      "in": ["POA-{X}", "POA-{Y}"]
    }
  }
})

// For each task, fetch full details including comments
mcp__linear__get_issue({
  "id": "POA-X",
  "include_comments": true
})
```
**Rate Limit**: 100 requests/minute. Batch requests when possible.
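When more per-issue calls are needed than the rate limit comfortably allows, chunk them. A sketch, where `fetch_issue` is a hypothetical wrapper around the `mcp__linear__get_issue` call above:
```python
import time

def fetch_in_batches(issue_ids, fetch_issue, batch_size=50, pause_s=60):
    """Fetch issues in rate-limit-friendly chunks (100 req/min cap)."""
    results = []
    for i in range(0, len(issue_ids), batch_size):
        batch = issue_ids[i : i + batch_size]
        results.extend(fetch_issue(issue_id) for issue_id in batch)
        if i + batch_size < len(issue_ids):
            time.sleep(pause_s)  # wait out the rate-limit window
    return results
```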
### Step 3: Extract and Structure Themes
For each identified theme, create structured output with:
- Theme name
- Source task IDs
- Problem statement
- Emotional hook
- Key insight
- 5 content angles (Bold, Story, Problem-Solution, Data, Emotional)
- Source story excerpt
**Minimum Output**: 5 distinct themes
### Step 4: Write to themes-memory.md
**Location**: `./poasting/themes-memory.md`
**Action**:
- If file doesn't exist, create with header
- If file exists, APPEND new themes (preserve existing)
- Add extraction metadata timestamp
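A create-or-append sketch for this step. The `## Theme:` heading matches the marker the full pipeline later greps for; the file header and the metadata line are assumptions:
```python
from datetime import date
from pathlib import Path

THEMES_FILE = Path("./poasting/themes-memory.md")
HEADER = "# Themes Memory\n"  # assumed header, used only on first creation

def append_theme(name: str, body_md: str) -> None:
    """Append one theme section, preserving existing content."""
    if not THEMES_FILE.exists():
        THEMES_FILE.write_text(HEADER)
    entry = (
        f"\n## Theme: {name}\n{body_md}\n"
        f"_Extracted: {date.today().isoformat()}_\n"
    )
    with THEMES_FILE.open("a") as f:
        f.write(entry)
```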
### Step 5: Update Linear Tasks
Add confirmation comment to each processed task:
```
✅ Theme extracted: {Theme Name}
Extracted to themes-memory.md on {YYYY-MM-DD}
```
## Validation Checklist
Before marking extraction complete:
- [ ] Minimum 5 themes extracted
- [ ] Each theme has all required sections
- [ ] Themes are distinct (>70% different)
- [ ] Source stories quoted accurately
- [ ] themes-memory.md written successfully
- [ ] Linear tasks updated with confirmations
- [ ] No hallucinated information
- [ ] Emotional authenticity preserved
## Example Output
```
✅ Story Extraction Complete
Themes Extracted: n
Source Tasks: POA-X to POA-Y
Output File: themes-memory.md
Themes Identified:
1. First Money From Code (POA-5, POA-8)
2. Personal Pain → Product (POA-6, POA-9, POA-12)
3. Learning Intensity vs Environment (POA-7, POA-10)
4. Affordable Loss Decision Making (POA-5, POA-11)
5. The Quit Day (POA-5, POA-13)
6. Family-Influenced Products (POA-6, POA-14)
7. Creative Reactivation After Engineering (POA-8, POA-10)
Linear tasks updated with extraction confirmations.
Ready for content generation: /content-generate-drafts {theme}
```
## Error Handling
**If Linear MCP fails**:
- Check .env for LINEAR_API_KEY
- Verify Linear MCP installation
- Confirm team ID is correct
**If rate limit hit**:
- Batch remaining requests
- Wait 60 seconds before retry
**If stories are sparse**:
- Extract what's available
- Flag for user follow-up in Linear comment
## Next Steps
After successful extraction:
1. Review themes in themes-memory.md
2. Select theme for content generation
3. Run `/content-generate-drafts {theme-name}` to create variations
**Or run full pipeline**: `/content-full-pipeline` to execute all stages end-to-end

@@ -0,0 +1,493 @@
---
description: "Execute end-to-end content generation pipeline from Linear stories to ready-to-post content"
---
# Full Content Generation Pipeline
## Mission
Execute the complete content generation workflow end-to-end: Extract stories from Linear → Generate variations → Score all → Critic review → Select best → Output ONE ready-to-post piece.
## Overview
This command orchestrates all 5 stages of the content generation system:
1. **Story Extraction** (`/content-extract-stories`)
2. **Draft Generation** (`/content-generate-drafts` for each theme)
3. **Automated Scoring** (`/content-score-all`)
4. **Critic Review** (`/content-critic-review`)
5. **Best Selection** (`/content-select-best`)
**Expected Output**: ONE piece in `content-ready.md` scoring 25+/30, ready for human approval and posting.
**Expected Duration**: < 3 minutes for 5 themes → 25 variations → 1 selected piece
## Process
### Stage 1: Story Extraction
**Command**: `/content-extract-stories`
**Actions**:
- Connect to Linear MCP
- Fetch tasks POA-X through POA-Y
- Analyze story content and identify themes
- Extract minimum 5 distinct themes
- Write to `themes-memory.md`
- Update Linear tasks with extraction confirmations
**Success Criteria**:
- [ ] Minimum 5 themes extracted
- [ ] themes-memory.md populated with structured themes
- [ ] Linear tasks updated with confirmation comments
- [ ] No hallucinated information
**Failure Handling**:
- If Linear MCP fails: Check .env for LINEAR_API_KEY, verify MCP installation
- If < 5 themes: Alert user, proceed with available themes
- If rate limit hit: Batch requests, wait 60s, retry
**Stage Output**: `themes-memory.md` with 5+ themes
---
### Stage 2: Draft Generation
**Command**: `/content-generate-drafts {theme}` for EACH theme
**Actions**:
- For each theme in themes-memory.md:
- Spawn 5 parallel Draft Generator sub-agents
- Each sub-agent targets DIFFERENT bias combinations:
- Bold Statement (Contrast, Authority)
- Story Hook (Curiosity, Liking)
- Problem-Solution (Social Proof, Reciprocation)
- Data-Driven (Authority, Reason-Respecting)
- Emotional (Liking, Stress-Influence, Lollapalooza)
- Generate 5 unique variations per theme
- Write to `content-drafts.md`
**Success Criteria**:
- [ ] 5 variations per theme (25 total for 5 themes)
- [ ] Each variation >70% different from others
- [ ] All variations follow Hook-Content-CTA structure
- [ ] Bias targeting explicit and diverse
**Failure Handling**:
- If variation count < 5 per theme: Regenerate missing variations
- If similarity > 30% between variations: Regenerate duplicates
- If bias targeting unclear: Review draft-generator.md specs
**Stage Output**: `content-drafts.md` with 25 variations (5 themes × 5 variations)
---
### Stage 3: Automated Scoring
**Command**: `/content-score-all`
**Actions**:
- Read all variations from content-drafts.md
- Apply 3-framework scoring to each:
- **Gap Selling** (0-10): Problem clarity + Impact + Solution value
- **Cognitive Biases** (count): Activated biases + Lollapalooza bonus
- **Decision Framework** (0-10): Hook strength + Content value + CTA clarity
- Calculate total score (Gap + Biases + Decision = XX/30)
- Update content-drafts.md with complete score breakdowns
**Success Criteria**:
- [ ] All 25 variations scored
- [ ] Score breakdowns complete (subscore details)
- [ ] Minimum 80% of content scores 20+/30
- [ ] Total scores calculated correctly
**Failure Handling**:
- If < 50% score 20+/30: Alert quality issue, consider regenerating
- If scoring formulas inconsistent: Review scorer.md rubrics
- If subscore missing: Re-run scoring for affected variations
**Stage Output**: `content-drafts.md` with complete scores for all variations
---
### Stage 4: Critic Review
**Command**: `/content-critic-review`
**Actions**:
- Review all variations scoring 15+/30
- Provide specific improvement suggestions
- Verify factual accuracy against Linear stories
- Assign ✅ PASS or ❌ FAIL verdict
- Filter out < 20/30 pieces
- Update content-drafts.md with critic notes
**Success Criteria**:
- [ ] All variations scoring 15+/30 reviewed
- [ ] Critic notes complete (Strengths, Weaknesses, Suggestions)
- [ ] PASS/FAIL verdicts assigned
- [ ] Factual accuracy verified (no hallucinations)
**Failure Handling**:
- If no PASS content: Alert pipeline failure, regenerate with constraints
- If factual inaccuracies found: Flag for correction, re-verify
- If critic notes missing: Re-run review for affected variations
**Stage Output**: `content-drafts.md` with critic verdicts and notes
---
### Stage 5: Best Selection
**Command**: `/content-select-best`
**Actions**:
- Filter for PASS content (20+/30 scores)
- Rank by total score (descending)
- Apply tie-breaker rules if needed:
1. Gap Selling subscore
2. Hook strength
3. Lollapalooza effect (5+ biases)
4. Bias diversity
5. Theme novelty
6. Human judgment flag
- Validate selection against quality gates
- Format for content-ready.md with posting instructions
- Document top 3 runner-ups
**Success Criteria**:
- [ ] ONE piece selected (not zero, not multiple)
- [ ] Selected piece scores 20+/30 (ideally 25+/30)
- [ ] content-ready.md overwritten with formatted output
- [ ] Posting instructions included
- [ ] Runner-ups documented
**Failure Handling**:
- If no PASS content: STOP pipeline, alert user
- If tie-breakers don't resolve: Flag for human decision
- If content-ready.md already contains a piece: Prompt to archive or overwrite
**Stage Output**: `content-ready.md` with ONE ready-to-post piece
---
## Execution Flow
```
START
[Stage 1: Extract Stories]
├─ Linear MCP → POA-5 to POA-14
├─ Identify themes (min 5)
└─ Output: themes-memory.md
[Stage 2: Generate Drafts]
├─ For each theme (5 themes):
│ ├─ Spawn 5 parallel sub-agents
│ ├─ Generate 5 variations (different bias combos)
│ └─ Total: 25 variations
└─ Output: content-drafts.md
[Stage 3: Score All]
├─ Apply Gap Selling (0-10)
├─ Count Cognitive Biases
├─ Apply Decision Framework (0-10)
└─ Output: content-drafts.md with scores
[Stage 4: Critic Review]
├─ Review 20+/30 content
├─ Provide improvement suggestions
├─ Assign PASS/FAIL verdicts
└─ Output: content-drafts.md with critic notes
[Stage 5: Select Best]
├─ Rank PASS content
├─ Apply tie-breakers
├─ Validate selection
└─ Output: content-ready.md (ONE piece)
[Human Approval Required]
├─ Review content-ready.md
├─ Post to Twitter/X
└─ Capture metrics after 48 hours
END
```
## Pre-Execution Checklist
Before running full pipeline, verify:
**Infrastructure**:
- [ ] Linear MCP installed and configured
- [ ] LINEAR_API_KEY set in .env file
- [ ] All framework docs accessible (gap_selling.md, bias_checklist_munger.md, effective-decision-making-framework.md)
- [ ] File paths correct (themes-memory.md, content-drafts.md, content-ready.md)
**Content Readiness**:
- [ ] Linear tasks POA-5 to POA-14 have story content
- [ ] Stories contain sufficient detail for theme extraction
- [ ] content-ready.md is empty (or existing content archived to content-posted.md)
**System Resources**:
- [ ] Network connectivity stable (for Linear API calls)
- [ ] Sufficient API quota (Linear rate limit: 100 req/min)
- [ ] Execution environment ready (no blocking processes)
## Validation Checkpoints
### After Stage 1 (Story Extraction)
```bash
# Verify themes extracted
grep -c "## Theme:" /home/rpiplewar/fast_dot_ai/poasting/themes-memory.md
# Should be >= 5
```
### After Stage 2 (Draft Generation)
```bash
# Verify variations generated
grep -c "### Variation" /home/rpiplewar/fast_dot_ai/poasting/content-drafts.md
# Should be >= 25 (5 themes × 5 variations)
```
### After Stage 3 (Automated Scoring)
```bash
# Verify scores added
grep -c "TOTAL:" /home/rpiplewar/fast_dot_ai/poasting/content-drafts.md
# Should match variation count
```
### After Stage 4 (Critic Review)
```bash
# Verify verdicts assigned
grep -c "Verdict: PASS\|FAIL" /home/rpiplewar/fast_dot_ai/poasting/content-drafts.md
# Should match variations with 15+/30 scores
```
### After Stage 5 (Best Selection)
```bash
# Verify single piece selected
grep -c "# Content Ready to Post" /home/rpiplewar/fast_dot_ai/poasting/content-ready.md
# Should be exactly 1
```
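The five checkpoints can also run as one script. A Python equivalent of the greps above, with markers and minimums copied from the checkpoints (Stage 5 expects exactly one match):
```python
from pathlib import Path

BASE = Path("/home/rpiplewar/fast_dot_ai/poasting")

# stage label: (file, marker, minimum expected count)
CHECKS = {
    "1 themes extracted":  ("themes-memory.md",  "## Theme:",                5),
    "2 drafts generated":  ("content-drafts.md", "### Variation",           25),
    "3 scores added":      ("content-drafts.md", "TOTAL:",                  25),
    "4 verdicts assigned": ("content-drafts.md", "Verdict:",                 1),  # floor; true target is the 15+/30 count
    "5 piece selected":    ("content-ready.md",  "# Content Ready to Post",  1),
}

def run_checkpoints() -> None:
    for stage, (fname, marker, minimum) in CHECKS.items():
        count = (BASE / fname).read_text().count(marker)
        ok = (count == minimum) if stage.startswith("5") else (count >= minimum)
        print(f"Stage {stage}: {count} matches -> {'OK' if ok else 'CHECK'}")
```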
## Error Handling & Recovery
### Pipeline Failure Scenarios
**Stage 1 Failure (Story Extraction)**:
```
❌ Stage 1 Failed: Could not extract stories from Linear
Possible Causes:
- Linear MCP not configured
- LINEAR_API_KEY missing or invalid
- Network connectivity issues
- Tasks POA-5 to POA-14 not accessible
Recovery:
1. Check .env file for LINEAR_API_KEY
2. Verify the Linear MCP installation with a test mcp__linear__list_issues call
3. Confirm network connectivity
4. Retry: /content-extract-stories
Pipeline STOPPED at Stage 1. Fix issues before retrying full pipeline.
```
**Stage 2 Failure (Draft Generation)**:
```
❌ Stage 2 Failed: Insufficient variations generated
Possible Causes:
- Theme quality low (not enough content generation potential)
- Draft generator specs unclear
- Agent spawning failed
Recovery:
1. Review themes in themes-memory.md for clarity
2. Manually run: /content-generate-drafts {theme} for each theme
3. Verify draft-generator.md agent specs
4. Check for variation diversity (>70% different)
Pipeline STOPPED at Stage 2. Complete draft generation before continuing.
```
**Stage 3 Failure (Automated Scoring)**:
```
❌ Stage 3 Failed: Scoring incomplete or inconsistent
Possible Causes:
- Framework docs inaccessible
- Scoring formulas incorrect
- Subscore calculation errors
Recovery:
1. Verify framework docs accessible (gap_selling.md, bias_checklist_munger.md, effective-decision-making-framework.md)
2. Review scorer.md agent specs
3. Manually run: /content-score-all
4. Compare sample scores vs manual evaluation
Pipeline STOPPED at Stage 3. Complete scoring before continuing.
```
**Stage 4 Failure (Critic Review)**:
```
❌ Stage 4 Failed: No PASS content after critic review
Possible Causes:
- Content quality below 20/30 threshold
- Critic too harsh (scoring too low)
- Theme selection poor
Recovery:
1. Review content-drafts.md scores (check if consistently low)
2. If scores 15-19/30: Adjust scoring weights in scorer.md
3. If scores < 15/30: Regenerate content with stronger constraints
4. Consider revising themes for better content potential
Pipeline STOPPED at Stage 4. Fix quality issues before continuing.
```
**Stage 5 Failure (Best Selection)**:
```
❌ Stage 5 Failed: No content available for selection
Possible Causes:
- No PASS content from Stage 4
- All content < 20/30 threshold
- Selection criteria too strict
Recovery:
1. Review Critic verdicts in content-drafts.md
2. If close to threshold (18-19/30): Consider relaxing to 18+/30 minimum
3. If far below threshold: Regenerate content from Stage 2
4. Review theme quality and bias targeting
Pipeline STOPPED at Stage 5. Fix quality issues or regenerate content.
```
### Partial Pipeline Recovery
**If pipeline stopped at Stage X**, resume from that stage:
```bash
# Resume from Stage 2 (Draft Generation)
/content-generate-drafts {theme} # For each remaining theme
# Resume from Stage 3 (Automated Scoring)
/content-score-all
# Resume from Stage 4 (Critic Review)
/content-critic-review
# Resume from Stage 5 (Best Selection)
/content-select-best
```
**No need to re-run completed stages**: the pipeline is idempotent at each stage.
## Success Output
```
✅ Full Content Generation Pipeline Complete
⏱️ Execution Time: 2m 34s
📊 Pipeline Stats:
- Themes Extracted: 7
- Variations Generated: 35 (7 themes × 5 variations)
- Content Scored: 35/35
- PASS Content: 24/35 (68.6%)
- FAIL Content: 11/35 (31.4%)
🏆 Best Content Selected:
- Theme: First Money From Code
- Variation: Bold Statement
- Score: 28/30 (EXCELLENT)
- Ranking: #1 of 24 PASS pieces
📁 Output Files Updated:
✓ themes-memory.md (7 themes)
✓ content-drafts.md (35 variations with scores)
✓ content-ready.md (1 ready-to-post piece)
📋 Next Steps:
1. Review content-ready.md
2. Perform final quality check
3. Post to Twitter/X at optimal time (8:30 AM or 5:30 PM IST)
4. Capture metrics after 48 hours
5. Move to content-posted.md with metrics
🚀 Ready for human review and posting!
```
## Performance Metrics
**Target Benchmarks**:
- Execution Time: < 3 minutes
- Themes Extracted: ≥ 5
- Variations Generated: ≥ 25 (5 themes × 5 variations)
- PASS Content: ≥ 80% (20/25)
- Selected Score: ≥ 25/30 (ideally 27+/30)
**Quality Thresholds**:
- < 50% PASS content: Quality issue, regenerate with constraints
- Selected score < 23/30: Consider regenerating or improving themes
- Selected score ≥ 28/30: Excellent, high viral potential
## Integration Notes
This command represents the complete automated content generation system. It requires:
- All 6 agent instruction documents (agents/*.md)
- All 5 individual stage commands (commands/content-*.md)
- Linear MCP integration
- All 3 framework docs (docs/frameworks/*.md)
After execution, human approval is required before posting. Performance tracking begins after posting to content-posted.md.
## Advanced Options
**Regenerate Pipeline with Filters**:
```bash
# Regenerate specific theme only
/content-generate-drafts "First Money From Code"
/content-score-all
/content-critic-review
/content-select-best
# Regenerate with stronger bias constraints
# (Modify draft-generator.md to require 5+ biases per variation)
/content-full-pipeline
# Test pipeline without Linear extraction (use existing themes)
# Skip Stage 1, start from Stage 2
```
**Performance Optimization**:
- Parallel theme processing: Spawn all theme agents simultaneously
- Batch Linear API calls: Group requests to avoid rate limits
- Cache framework docs: Load once, reuse across agents
- Incremental updates: Only re-score modified variations
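A sketch of the parallel-theme idea, where `generate_drafts_for_theme` is a hypothetical stand-in for whatever spawns the five per-theme sub-agents:
```python
from concurrent.futures import ThreadPoolExecutor

def generate_all_drafts(themes, generate_drafts_for_theme):
    """Run every theme concurrently instead of one at a time."""
    with ThreadPoolExecutor(max_workers=max(1, len(themes))) as pool:
        # one list of variations per theme, returned in input order
        return list(pool.map(generate_drafts_for_theme, themes))
```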
## CRITICAL RULES
1. **No File Proliferation**: Use ONLY 4 pipeline files (themes-memory.md, content-drafts.md, content-ready.md, content-posted.md)
2. **ONE Piece in content-ready.md**: Always exactly ONE, never zero or multiple
3. **Minimum Score Threshold**: 20+/30 to PASS, ideally 25+/30 for selection
4. **No Hallucination**: All content must be traceable to Linear stories
5. **Framework Completeness**: All 3 frameworks applied to every variation
6. **Human Approval Required**: Never auto-post, always require review
## Validation Checklist
Before marking pipeline complete:
- [ ] All 5 stages completed successfully
- [ ] themes-memory.md has ≥5 themes
- [ ] content-drafts.md has ≥25 variations with complete scores
- [ ] content-ready.md has EXACTLY ONE piece scoring 20+/30
- [ ] Execution time < 3 minutes
- [ ] No file proliferation (only 4 pipeline files used)
- [ ] Linear tasks updated with extraction confirmations
- [ ] No hallucinated information in any stage
- [ ] Human approval workflow clear

@@ -0,0 +1,157 @@
---
description: "Generate 5 content variations for a theme using parallel multi-agent approach"
---
# Generate Content Drafts
## Arguments
`$ARGUMENTS` - Theme name from themes-memory.md
## Mission
Generate 5 unique content variations for the specified theme by spawning 5 parallel sub-agents, each targeting different cognitive bias combinations and content structures.
## Process
Follow the Draft Generator agent instructions (`agents/draft-generator.md`) to:
1. **Read theme details** from themes-memory.md
2. **Spawn 5 parallel sub-agents** (CRITICAL: simultaneous, not sequential)
3. **Collect sub-agent outputs** (5 variations)
4. **Write to content-drafts.md** with proper formatting
## Execution Steps
### Step 1: Read Theme from themes-memory.md
**Target Theme**: $ARGUMENTS
**Extract**:
- Theme name and problem statement
- Emotional hook and key insight
- All 5 content angles
- Source story excerpts
**If theme not found**: List available themes and ask user to retry with correct name.
### Step 2: Spawn 5 Parallel Sub-Agents
**CRITICAL**: Use a single message with 5 Task tool calls to spawn all sub-agents simultaneously.
Each sub-agent receives a self-contained prompt with:
- Full theme details
- Specific bias activation strategy
- Content structure requirements
- Hook-Content-CTA framework
- Character limits (280 characters for a single tweet, or a 4-6 tweet thread)
**Sub-Agent Assignments**:
1. **Bold Statement Generator**
- Biases: Contrast-Misreaction + Authority-Misinfluence
- Structure: Shocking opening → Contrast → Authority evidence → CTA
2. **Story Hook Generator**
- Biases: Curiosity Tendency + Liking/Loving Tendency
- Structure: Open-loop question → Personal story → Vulnerability → Resolution
3. **Problem-Solution Generator**
- Biases: Social-Proof + Reciprocation + Reward/Punishment
- Structure: Problem statement → Social proof → Solution value → Free insight
4. **Data-Driven Generator**
- Biases: Authority + Reason-Respecting + Availability-Misweighing
- Structure: Surprising stat → Reasoning → Concrete example → Implication
5. **Emotional Lollapalooza Generator**
- Biases: Liking + Stress-Influence + 5+ biases converging
- Structure: Emotional hook → Stress creation → Relief → Multi-bias activation
### Step 3: Quality Check Each Variation
Verify:
- [ ] Content factually accurate (matches source stories)
- [ ] Target biases clearly activated
- [ ] Structure follows assigned format
- [ ] Character limits respected
- [ ] No meta-commentary (pure content)
### Step 4: Write to content-drafts.md
**Location**: `/home/rpiplewar/fast_dot_ai/poasting/content-drafts.md`
**Format**:
```markdown
## Theme: {Theme Name}
**Source:** {Linear Task ID}
### Variation 1: Bold Statement
**Content:**
{Generated content}
**Biases Targeted:** Contrast-Misreaction, Authority-Misinfluence
**Scores:**
[To be filled by Scorer agent]
---
### Variation 2: Story Hook
...
```
**Action**: APPEND to file (don't overwrite existing content from other themes)
## Validation Checklist
Before marking generation complete:
- [ ] All 5 variations generated
- [ ] Each variation targets DIFFERENT bias combinations
- [ ] All content factually accurate
- [ ] Variations >70% structurally different
- [ ] content-drafts.md updated successfully
- [ ] No meta-commentary in output
- [ ] Character limits respected
- [ ] Hook-Content-CTA structure followed
## Example Output
```
✅ Draft Generation Complete
Theme: First Money From Code
Variations Generated: 5
1. Bold Statement (Contrast + Authority)
2. Story Hook (Curiosity + Liking)
3. Problem-Solution (Social Proof + Reciprocation)
4. Data-Driven (Authority + Reason-Respecting)
5. Emotional Lollapalooza (6 biases)
Output File: content-drafts.md
Next Step: Run /content-score-all to apply framework scoring
```
## Error Handling
**If theme not found**:
- List available themes from themes-memory.md
- Ask user to specify correct theme name
**If sub-agent fails**:
- Retry that specific sub-agent only
- Don't re-run all 5
**If variations too similar**:
- Regenerate similar variation with stronger bias differentiation
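A textual proxy for the similarity check, using difflib: flag any pair whose wording overlaps more than 30% (structural difference ultimately needs agent judgment; this only catches near-duplicate text):
```python
from difflib import SequenceMatcher
from itertools import combinations

def too_similar(variations: list[str], max_ratio: float = 0.30) -> list[tuple[int, int]]:
    """Return index pairs whose text similarity exceeds the cutoff."""
    flagged = []
    for (i, a), (j, b) in combinations(enumerate(variations), 2):
        if SequenceMatcher(None, a, b).ratio() > max_ratio:
            flagged.append((i, j))
    return flagged
```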
## Next Steps
After successful generation:
1. Review variations in content-drafts.md
2. Run `/content-score-all` to apply automated scoring
3. Or continue with full pipeline if running `/content-full-pipeline`

@@ -0,0 +1,136 @@
---
description: "Apply automated 3-framework scoring to all content variations"
---
# Score All Content Variations
## Mission
Apply automated framework-based scoring (Gap Selling + Munger Biases + Decision Framework) to all content variations in content-drafts.md, calculating total scores with detailed breakdowns.
## Process
Follow the Scorer agent instructions (`agents/scorer.md`) to:
1. **Read all content variations** from content-drafts.md
2. **Apply 3-framework scoring** with mathematical formulas
3. **Calculate total scores** (out of 30)
4. **Update content-drafts.md** with complete score breakdowns
## Execution Steps
### Step 1: Read Content from content-drafts.md
**Location**: `/home/rpiplewar/fast_dot_ai/poasting/content-drafts.md`
**Identify**:
- All content variations awaiting scoring
- Look for `[To be filled by Scorer agent]` placeholders
- Extract content text and bias targeting info
### Step 2: Score Each Variation
**For each content piece, calculate**:
#### Framework 1: Gap Selling (0-10 points)
- **Problem Clarity** (0-3): Is problem explicit and relatable?
- **Emotional Impact** (0-3): Is pain point vivid and resonant?
- **Solution Value** (0-4): Is future state compelling and actionable?
#### Framework 2: Cognitive Biases (0-10+ points)
- **Count activated biases** from Munger's 25
- **Lollapalooza bonus**: +2 if 5+ biases converge
- **List specific biases** detected
#### Framework 3: Decision Framework (0-10 points)
- **Hook Strength** (0-3): Does first line grab attention?
- **Content Value** (0-4): Are insights actionable and transferable?
- **CTA Clarity** (0-3): Is next step crystal clear?
**Total Score = Gap (0-10) + Biases (0-10+) + Decision (0-10)**
### Step 3: Assign Pass/Fail Verdict
**Quality Thresholds**:
- **< 20**: ❌ FAIL (filter out)
- **20-24**: ✅ PASS (needs improvement)
- **25-27**: ✅ PASS (GOOD)
- **28-30**: ✅ PASS (EXCELLENT)
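The arithmetic for Steps 2-3 as a sketch, assuming one point per activated bias plus the Lollapalooza bonus:
```python
def total_score(gap: int, biases: list[str], decision: int) -> tuple[int, str]:
    """Combine the three framework scores and map the total to a band."""
    bias_points = len(biases) + (2 if len(biases) >= 5 else 0)  # Lollapalooza bonus
    total = gap + bias_points + decision
    if total < 20:
        band = "❌ FAIL"
    elif total <= 24:
        band = "✅ PASS (needs improvement)"
    elif total <= 27:
        band = "✅ PASS (GOOD)"
    else:
        band = "✅ PASS (EXCELLENT)"
    return total, band
```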
### Step 4: Update content-drafts.md
**Replace** `[To be filled by Scorer agent]` with:
```markdown
**Scores:**
- Gap Selling: X/10 (Problem: X/3, Impact: X/3, Solution: X/4)
- Biases Activated: Y (List: Bias1, Bias2, Bias3...)
- Decision Framework: Z/10 (Hook: X/3, Value: X/4, CTA: X/3)
- **TOTAL: XX/30** {✅ PASS or ❌ FAIL}
```
## Validation Checklist
Before marking scoring complete:
- [ ] All variations scored
- [ ] All subscores documented with reasoning
- [ ] Bias lists include specific bias names (not just count)
- [ ] Lollapalooza bonus applied where applicable (5+ biases)
- [ ] Pass/Fail verdicts assigned (< 20 = FAIL, ≥ 20 = PASS)
- [ ] content-drafts.md updated with complete scores
- [ ] Scoring formulas followed exactly per scorer.md
## Example Output
```
✅ Scoring Complete
Variations Scored: 25 (5 themes × 5 variations)
Score Distribution:
- EXCELLENT (28-30): 3 pieces
- GOOD (25-27): 8 pieces
- PASS (20-24): 10 pieces
- FAIL (< 20): 4 pieces
Pass Rate: 84% (21/25)
Highest Scoring:
1. Theme: First Money From Code, Variation 1 (Bold Statement) - 28/30
2. Theme: The Quit Day, Variation 5 (Lollapalooza) - 27/30
3. Theme: Personal Pain → Product, Variation 2 (Story Hook) - 26/30
Output File: content-drafts.md (updated with all scores)
Next Step: Run /content-critic-review for quality feedback
```
## Error Handling
**If scoring logic unclear**:
- Refer to detailed rubrics in `agents/scorer.md`
- Use examples as reference
- Document edge cases for future refinement
**If scores seem inaccurate**:
- Cross-check against manual evaluation
- Verify detection patterns
- Adjust formulas if systematic drift detected
## Accuracy Targets
**Goal**: Within ±2 points of manual expert evaluation (about a 7% margin on the 30-point scale)
**If accuracy drifts**:
- Review scoring logic against actual performance
- Adjust detection patterns
- Document refinements in scorer.md
## Next Steps
After successful scoring:
1. Review score distribution in content-drafts.md
2. Run `/content-critic-review` to get improvement suggestions
3. Or continue with full pipeline if running `/content-full-pipeline`

@@ -0,0 +1,259 @@
---
description: "Select the best content piece from scored variations and format for content-ready.md"
---
# Select Best Content for Posting
## Mission
Rank all PASS content (20+/30 scores), apply tie-breaker rules, and select EXACTLY ONE best piece for human review in `content-ready.md`.
## Process
Follow the Selector agent instructions (`agents/selector.md`) to systematically:
1. **Filter for PASS content** (20+/30 scores with ✅ PASS verdict)
2. **Rank by total score** (descending order)
3. **Apply tie-breaker rules** if multiple pieces tied for first
4. **Validate selection** against quality gates
5. **Format for content-ready.md** with posting instructions
6. **Document runner-ups** (top 3 alternatives)
## Execution Steps
### Step 1: Read Scored Content
**Input Source**: `content-drafts.md` with completed scores and critic verdicts
**Required Data**:
- Total scores (Gap + Biases + Decision = XX/30)
- Critic verdict (✅ PASS or ❌ FAIL)
- Subscore breakdowns (Gap subscore, Hook subscore, Bias count)
- Content text and theme information
### Step 2: Filter and Rank
**Filter Criteria**:
- Total score ≥ 20/30
- Critic verdict: ✅ PASS
- Factually accurate (verified by Critic)
**Ranking**:
- Primary: Sort by total score (descending)
- Expected: 12-20 PASS pieces from 25 total variations
**Quality Alert**:
- If < 5 PASS pieces: Alert user that quality is below threshold
- Consider regenerating content with stronger constraints
### Step 3: Apply Tie-Breaker Rules
**If multiple pieces have same total score**, apply tie-breakers in order:
1. **Gap Selling Subscore**: Higher Gap score wins (problem clarity most important)
2. **Hook Strength**: Higher Hook subscore wins (first line = 80% of engagement)
3. **Lollapalooza Effect**: Content with 5+ biases wins (exponential persuasive power)
4. **Bias Diversity**: More unique biases wins (broader psychological appeal)
5. **Theme Novelty**: Less-used theme wins (prevents theme fatigue)
6. **Human Judgment**: If still tied, flag for manual decision
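Tie-breakers 1-4 encode directly into a sort key; 5 and 6 need context outside the scored record, so they stay manual. A sketch with assumed field names:
```python
def rank_candidates(pieces: list[dict]) -> list[dict]:
    """Filter to PASS content, then rank by score and tie-breakers 1-4.

    Assumed shape per piece: {"total": 26, "verdict": "PASS",
    "gap": 8, "hook": 3, "biases": ["Contrast", "Authority"]}
    """
    passing = [p for p in pieces if p["verdict"] == "PASS" and p["total"] >= 20]
    # Tie-breakers 5 (theme novelty) and 6 (human judgment) apply manually
    # if the top entries are still identical after this sort.
    return sorted(
        passing,
        key=lambda p: (
            p["total"],             # primary: total score
            p["gap"],               # 1. Gap Selling subscore
            p["hook"],              # 2. hook strength
            len(p["biases"]) >= 5,  # 3. Lollapalooza (5+ biases)
            len(set(p["biases"])),  # 4. bias diversity
        ),
        reverse=True,
    )
```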
### Step 4: Validate Selection
Before finalizing, verify:
**Quality Gates**:
- [ ] Selected piece scores 20+/30 (ideally 25+/30)
- [ ] All three frameworks adequately addressed (no subscore < 5)
- [ ] Factually accurate (verified by Critic)
- [ ] High engagement potential (hook + emotional resonance)
**Platform Appropriateness**:
- [ ] Twitter/X character limits respected (280 single or thread)
- [ ] Tone matches platform (conversational, punchy)
- [ ] Format clean (line breaks, readability)
**Strategic Fit**:
- [ ] Theme aligns with user's positioning
- [ ] Message supports broader narrative
- [ ] Timing appropriate (seasonally relevant)
### Step 5: Write to content-ready.md
**Location**: `/home/rpiplewar/fast_dot_ai/poasting/content-ready.md`
**Action**: OVERWRITE file (only ONE piece should exist)
**Format**:
```markdown
# Content Ready to Post
**Date Generated:** {ISO Timestamp}
**Theme:** {Theme Name}
**Source:** {Linear Task ID}
**Total Score:** {XX/30}
---
## Content
{Content exactly as it should be posted}
---
## Scoring Breakdown
**Gap Selling:** X/10 (Problem: X/3, Impact: X/3, Solution: X/4)
**Cognitive Biases:** Y (List: Bias1, Bias2, Bias3...)
**Decision Framework:** Z/10 (Hook: X/3, Value: X/4, CTA: X/3)
**TOTAL: XX/30**
---
## Why This Piece?
**Ranking Position:** #1 of {total_pass_count} PASS pieces
**Key Strengths:**
- {Strength from Critic notes}
- {Strength from Critic notes}
- {Strength from Critic notes}
**Winning Elements:**
- {Why this beat other contenders}
- {Specific tie-breaker if applicable}
- {Strategic fit reasoning}
---
## Posting Instructions
**Optimal Timing:**
- **Best Times (IST)**: 8:30 AM or 5:30 PM (high engagement windows)
- **Avoid**: Late night (11 PM - 6 AM) or midday lull (12 PM - 2 PM)
**Format Check:**
- [ ] Character count: {count} (within 280 for single, or thread format)
- [ ] Line breaks clean
- [ ] No typos or formatting issues
**Pre-Post Checklist:**
- [ ] Read aloud for flow
- [ ] Verify factual accuracy one final time
- [ ] Check for unintended meanings or misinterpretations
- [ ] Confirm tone matches brand voice
**After Posting:**
1. Copy final posted version to content-posted.md
2. Add posting timestamp
3. Set reminder to capture metrics after 48 hours
4. Monitor engagement in first 2 hours for immediate feedback
---
## Alternatives (Top 3 Runner-Ups)
### Runner-Up #2: {Score}
**Theme:** {Theme Name}
**Content Preview:** {First 50 characters}...
**Why Not Selected:** {Reasoning}
### Runner-Up #3: {Score}
**Theme:** {Theme Name}
**Content Preview:** {First 50 characters}...
**Why Not Selected:** {Reasoning}
### Runner-Up #4: {Score}
**Theme:** {Theme Name}
**Content Preview:** {First 50 characters}...
**Why Not Selected:** {Reasoning}
---
**Generated by:** content-gen plugin v1.0
**Selection Criteria:** Highest total score + tie-breaker rules
**Human Approval Required:** YES (review before posting)
```
## Validation Checklist
Before marking selection complete:
- [ ] ONE piece selected (not zero, not multiple)
- [ ] Selected piece scores 20+/30 (ideally 25+/30)
- [ ] All tie-breakers applied correctly if needed
- [ ] content-ready.md overwritten with formatted output
- [ ] Posting instructions included
- [ ] Top 3 runner-ups documented
- [ ] Selection reasoning documented
## Example Output
```
✅ Best Content Selected
Selected: Theme A, Variation 1
Score: 28/30 (EXCELLENT)
Ranking: #1 of 18 PASS pieces
Key Strengths:
- Exceptional problem clarity (Gap: 9/10)
- Strong emotional hook activating 6 biases
- Clear, actionable CTA
Output: content-ready.md
Status: Ready for human review and approval
Runner-Ups:
#2: Theme C, Var 5 (27/30)
#3: Theme B, Var 2 (26/30)
#4: Theme A, Var 3 (25/30)
Next Step: Review content-ready.md and post when ready
```
## Error Handling
**If no PASS content**:
```
❌ Selection Failed: No content scored 20+/30
Action Required:
1. Review scorer settings (may be too harsh)
2. Regenerate content with stronger constraints
3. Review theme quality (may lack content generation potential)
Pipeline STOPPED at selection phase.
```
**If tie-breakers don't resolve**:
```
⚠️ Human Decision Required
Two pieces tied after all 6 automated tie-breakers.
Both pieces displayed in content-ready.md.
User must manually select.
```
**If content-ready.md already has content**:
```
⚠️ Warning: content-ready.md already contains content
Options:
1. Archive existing content to content-posted.md first
2. Overwrite with new selection (confirm Y/N)
3. Cancel selection
Current content in content-ready.md should be posted or archived before generating new content.
```
## Next Steps
After successful selection:
1. Review content-ready.md
2. Perform final quality check
3. Post to Twitter/X at optimal time
4. Capture metrics after 48 hours
5. Move to content-posted.md with metrics
**Or run full pipeline**: `/content-full-pipeline` to execute all stages end-to-end