Initial commit
Zhongwei Li, 2025-11-30 08:37:06 +08:00, commit 0467556b9c
22 changed files with 10168 additions and 0 deletions

commands/blog-analyse.md
# Blog Analysis & Constitution Generator
Reverse-engineer blog constitution from existing content by analyzing articles, patterns, and style.
## Usage
```bash
/blog-analyse
```
**Optional**: Specify content directory if detection fails or you want to override:
```bash
/blog-analyse "content"
/blog-analyse "posts"
/blog-analyse "articles/en"
```
## What This Command Does
Analyzes existing blog content to automatically generate `.spec/blog.spec.json`.
**Opposite of `/blog-setup`**:
- `/blog-setup` = Create constitution → Generate content
- `/blog-analyse` = Analyze content → Generate constitution
### Analysis Process
1. **Content Discovery** (Phase 1)
- Scan for content directories (articles/, content/, posts/, etc.)
- If multiple found → ask user which to analyze
- If none found → ask user to specify path
- Count total articles
2. **Language Detection** (Phase 2)
- Detect i18n structure (en/, fr/, es/ subdirectories)
- Or detect language from frontmatter
- Count articles per language
3. **Tone & Style Analysis** (Phase 3)
- Analyze sample of 10 articles
- Detect tone: expert, pédagogique, convivial, corporate
- Extract voice patterns (do/don't)
4. **Metadata Extraction** (Phase 4)
- Detect blog name (from package.json, README, config)
- Determine context/audience from keywords
- Identify objective (education, leads, community, etc.)
5. **Constitution Generation** (Phase 5)
- Create comprehensive `.spec/blog.spec.json`
- Include detected metadata
- Validate JSON structure
- Generate analysis report
6. **CLAUDE.md Generation** (Phase 6)
- Create CLAUDE.md in content directory
- Document blog.spec.json as source of truth
- Include voice guidelines from constitution
- Explain tone and validation workflow
**Time**: 10-15 minutes
**Output**: `.spec/blog.spec.json` + `[content_dir]/CLAUDE.md` + analysis report
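For orientation, the Phase 1 directory scan boils down to a few lines of shell. This is a sketch of the idea, not the agent's actual implementation; the directory names are the common defaults listed above.
```bash
# Count Markdown articles in common content locations (illustrative only)
for dir in articles content posts blog src/content _posts; do
  [ -d "$dir" ] || continue
  count=$(find "$dir" -name '*.md' -o -name '*.mdx' | wc -l)
  [ "$count" -gt 0 ] && echo "$dir: $count articles"
done
```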
## Prerequisites
**Required**:
- Existing blog content (.md or .mdx files)
- At least 3 articles (more = better analysis)
- Consistent writing style across articles
**Optional but Recommended**:
- `jq` or `python3` for JSON validation
- Frontmatter in articles (for language detection)
- README.md or package.json (for blog name detection)
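Either tool gives a quick syntax check on the generated constitution, for example:
```bash
# Both commands exit non-zero if the JSON is malformed
jq empty .spec/blog.spec.json
# or, without jq:
python3 -m json.tool .spec/blog.spec.json > /dev/null
```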
## Instructions
Create a new subagent conversation with the `analyzer` agent.
**Provide the following prompt**:
```
You are analyzing existing blog content to reverse-engineer a blog constitution.
**Task**: Complete content analysis and generate blog.spec.json
**Content Directory**: [Auto-detect OR use user-specified: $CONTENT_DIR]
Execute ALL phases (1-6) from your instructions:
**Phase 1: Content Discovery**
- Scan common directories: articles/, content/, posts/, blog/, src/content/, _posts/
- If multiple directories found with content:
- Display list with article counts
- Ask user: "Which directory should I analyze?"
- Wait for user response
- If no directories found:
- Ask user: "Please specify your content directory path:"
- Wait for user response
- Validate path exists
- If single directory found:
- Use it automatically
- Inform user: "✅ Found content in: [directory]"
- Detect i18n structure (language subdirectories)
- Count total articles
**Phase 2: Language Detection**
- If i18n structure: list language directories and count articles per language
- If single structure: detect language from frontmatter or ask user
- Determine primary language
**Phase 3: Tone & Style Analysis**
- Sample 10 articles (diverse selection across languages if applicable)
- Read frontmatter + first 500 words of each
- Analyze tone indicators:
- Expert: technical terms, docs refs, assumes knowledge
- Pédagogique: step-by-step, explanations, analogies
- Convivial: conversational, personal, casual
- Corporate: professional, ROI focus, formal
- Score each tone based on indicators
- Select highest scoring tone (or ask user if unclear)
- Extract voice patterns:
- voice_do: positive patterns (active voice, code examples, data-driven, etc.)
- voice_dont: anti-patterns (passive voice, vague claims, buzzwords, etc.)
**Phase 4: Metadata Extraction**
- Detect blog name from:
- package.json "name" field
- README.md first heading
- config files (hugo.toml, gatsby-config.js, etc.)
- Or use directory name as fallback
- Generate context string from article keywords/themes
- Determine objective based on content type:
- Tutorials → Educational
- Analysis/opinions → Thought leadership
- CTAs/products → Lead generation
- Updates/discussions → Community
**Phase 5: Constitution Generation**
- Create .spec/blog.spec.json with:
```json
{
"version": "1.0.0",
"blog": {
"name": "[detected]",
"context": "[generated]",
"objective": "[determined]",
"tone": "[detected]",
"languages": ["[detected]"],
"content_directory": "[detected]",
"brand_rules": {
"voice_do": ["[extracted patterns]"],
"voice_dont": ["[extracted anti-patterns]"]
}
},
"workflow": {
"review_rules": {
"must_have": ["[standard rules]"],
"must_avoid": ["[standard anti-patterns]"]
}
},
"analysis": {
"generated_from": "existing_content",
"articles_analyzed": [count],
"total_articles": [count],
"confidence": "[percentage]",
"generated_at": "[timestamp]"
}
}
```
- Validate JSON with jq or python3
- Generate analysis report with:
- Content discovery summary
- Language analysis results
- Tone detection (with confidence %)
- Voice guidelines with examples
- Blog metadata
- Next steps suggestions
**Phase 6: CLAUDE.md Generation for Content Directory**
- Read configuration from blog.spec.json:
- content_directory
- blog name
- tone
- languages
- voice guidelines
- Create CLAUDE.md in content directory with:
- Explicit statement: blog.spec.json is "single source of truth"
- Voice guidelines (DO/DON'T) extracted from constitution
- Tone explanation with specific behaviors
- Article structure requirements from constitution
- Validation workflow documentation
- Commands that use constitution
- Instructions for updating constitution
- Important notes about never deviating from guidelines
- Expand variables ($BLOG_NAME, $TONE, etc.) in template
- Inform user that CLAUDE.md was created
**Important**:
- ALL analysis scripts must be in /tmp/ (non-destructive)
- If user interaction needed (directory selection, tone confirmation), WAIT for response
- Be transparent about confidence levels
- Provide examples from actual content to support detections
- Clean up temporary files after analysis
Display the analysis report and constitution location when complete.
```
## Expected Output
### Analysis Report
```markdown
# Blog Analysis Report
Generated: 2025-10-12 15:30:00
## Content Discovery
- **Content directory**: articles/
- **Total articles**: 47
- **Structure**: i18n (language subdirectories)
## Language Analysis
- **Languages**:
- en: 25 articles
- fr: 22 articles
- **Primary language**: en
## Tone & Style Analysis
- **Detected tone**: pédagogique (confidence: 78%)
- **Tone indicators found**:
- Step-by-step instructions (18 articles)
- Technical term explanations (all articles)
- Code examples with commentary (23 articles)
- Clear learning objectives (15 articles)
## Voice Guidelines
### DO (Positive Patterns)
- ✅ Clear, actionable explanations (found in 92% of articles)
- ✅ Code examples with inline comments (found in 85% of articles)
- ✅ Step-by-step instructions (found in 76% of articles)
- ✅ External links to official documentation (found in 68% of articles)
- ✅ Active voice and direct language (found in 94% of articles)
### DON'T (Anti-patterns)
- ❌ Jargon without explanation (rarely found)
- ❌ Vague claims without data (avoid, found in 2 articles)
- ❌ Complex sentences over 25 words (minimize, found in some)
- ❌ Passive voice constructions (minimize)
## Blog Metadata
- **Name**: Tech Insights
- **Context**: Technical blog for software developers and DevOps engineers
- **Objective**: Educate and upskill developers on cloud-native technologies
## Files Generated
✅ Constitution: `.spec/blog.spec.json`
✅ Content Guidelines: `articles/CLAUDE.md` (uses constitution as source of truth)
## Next Steps
1. **Review**: Check `.spec/blog.spec.json` for accuracy
2. **Refine**: Edit voice guidelines if needed
3. **Test**: Generate new article: `/blog-generate "Test Topic"`
4. **Validate**: Run quality check: `/blog-optimize "article-slug"`
---
**Note**: This constitution was reverse-engineered from your existing content.
You can refine it manually at any time.
```
### Generated Constitution
**File**: `.spec/blog.spec.json`
```json
{
"version": "1.0.0",
"blog": {
"name": "Tech Insights",
"context": "Technical blog for software developers and DevOps engineers",
"objective": "Educate and upskill developers on cloud-native technologies",
"tone": "pédagogique",
"languages": ["en", "fr"],
"content_directory": "articles",
"brand_rules": {
"voice_do": [
"Clear, actionable explanations",
"Code examples with inline comments",
"Step-by-step instructions",
"External links to official documentation",
"Active voice and direct language"
],
"voice_dont": [
"Jargon without explanation",
"Vague claims without data",
"Complex sentences over 25 words",
"Passive voice constructions",
"Unsourced technical claims"
]
}
},
"workflow": {
"review_rules": {
"must_have": [
"Executive summary with key takeaways",
"Minimum 3-5 credible source citations",
"Actionable insights (3-5 specific recommendations)",
"Code examples for technical topics",
"Clear structure with H2/H3 headings"
],
"must_avoid": [
"Unsourced or unverified claims",
"Keyword stuffing (density >2%)",
"Vague or generic recommendations",
"Missing internal links",
"Images without descriptive alt text"
]
}
},
"analysis": {
"generated_from": "existing_content",
"articles_analyzed": 10,
"total_articles": 47,
"confidence": "78%",
"generated_at": "2025-10-12T15:30:00Z"
}
}
```
## Interactive Prompts
### Multiple Directories Found
```
Found directories with content:
1) articles/ (47 articles)
2) content/ (12 articles)
3) posts/ (8 articles)
Which directory should I analyze? (1-3):
```
### No Directory Found
```
❌ No content directories found.
Please specify your content directory path:
(e.g., articles, content, posts, blog):
```
### Tone Detection Unclear
```
⚠️ Tone detection inconclusive
Detected indicators:
- Expert: 35%
- Pédagogique: 42%
- Convivial: 38%
- Corporate: 15%
Which tone best describes your content?
1) Expert (technical, authoritative)
2) Pédagogique (educational, patient)
3) Convivial (friendly, casual)
4) Corporate (professional, formal)
Choice (1-4):
```
### Small Sample Warning
```
⚠️ Only 2 articles found in articles/
Analysis may not be accurate with small sample.
Continue anyway? (y/n):
```
## Use Cases
### Migrate Existing Blog
You have an established blog and want to use Blog Kit:
```bash
# Analyze existing content
/blog-analyse
# Review generated constitution
cat .spec/blog.spec.json
# Test with new article
/blog-generate "New Topic"
# Validate existing articles
/blog-optimize "existing-article"
```
### Multi-Author Blog
Ensure consistency across multiple authors:
```bash
# Analyze to establish baseline
/blog-analyse
# Share .spec/blog.spec.json with team
# All new articles will follow detected patterns
# Generate new content
/blog-copywrite "new-article" # Enforces constitution
```
### Refactor Content Style
Want to understand current style before changing it:
```bash
# Analyze current style
/blog-analyse
# Review tone and voice patterns
# Decide what to keep/change
# Edit .spec/blog.spec.json manually
# Generate new articles with updated constitution
```
### Hugo/Gatsby/Jekyll Migration
Adapting Blog Kit to existing static site generator:
```bash
# Analyze content/ directory (Hugo/Gatsby)
/blog-analyse "content"
# Or analyze _posts/ (Jekyll)
/blog-analyse "_posts"
# Constitution will include content_directory
# All commands will use correct directory
```
## Comparison: Setup vs Analyse
| Feature | `/blog-setup` | `/blog-analyse` |
|---------|---------------|-----------------|
| **Input** | User answers prompts | Existing articles |
| **Process** | Manual configuration | Automated analysis |
| **Output** | Fresh constitution | Reverse-engineered constitution |
| **Use Case** | New blog | Existing blog |
| **Time** | 2-5 minutes | 10-15 minutes |
| **Accuracy** | 100% (user defined) | 70-90% (depends on sample) |
| **Customization** | Full control | Review and refine needed |
## Troubleshooting
### "No content directories found"
**Cause**: No common directories with .md files
**Solution**: Specify your content path:
```bash
/blog-analyse "path/to/your/content"
```
### "Tone detection inconclusive"
**Cause**: Mixed writing styles or small sample
**Solution**: Agent will ask you to select tone manually
### "Only X articles found, continue?"
**Cause**: Content directory has very few articles
**Solution**:
- Add more articles first (recommended)
- Or continue with warning (may be inaccurate)
### "Cannot detect blog name"
**Cause**: No package.json, README.md, or config files
**Solution**: Agent will use directory name as fallback
You can edit `.spec/blog.spec.json` manually afterward
### "Language detection failed"
**Cause**: No frontmatter with `language:` field
**Solution**: Agent will ask you to specify primary language
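A quick way to check whether your articles declare a language (assuming a `language:` or `lang:` frontmatter key):
```bash
grep -rE '^(language|lang):' articles/ | head -5
```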
## Tips for Better Analysis
### Before Analysis
1. **Consistent Frontmatter**: Ensure articles have YAML frontmatter
2. **Sufficient Sample**: At least 5-10 articles for accurate detection
3. **Recent Content**: Analysis prioritizes newer articles
4. **Clean Structure**: Organize by language if multi-language
### After Analysis
1. **Review Constitution**: Check `.spec/blog.spec.json` for accuracy
2. **Refine Guidelines**: Edit voice_do/voice_dont if needed
3. **Test Generation**: Generate test article to verify tone
4. **Iterate**: Re-run analysis if you add more content
### For Best Results
- **Diverse Sample**: Include different article types
- **Representative Content**: Use typical articles, not outliers
- **Clear Style**: Consistent writing voice improves detection
- **Good Metadata**: Complete frontmatter helps detection
## Integration with Workflow
### Complete Adoption Workflow
```bash
# 1. Analyze existing content
/blog-analyse
# 2. Review generated constitution
cat .spec/blog.spec.json
vim .spec/blog.spec.json # Refine if needed
# 3. Validate existing articles
/blog-optimize "article-1"
/blog-optimize "article-2"
# 4. Check translation coverage (if i18n)
/blog-translate
# 5. Generate new articles
/blog-generate "New Topic"
# 6. Maintain consistency
/blog-copywrite "new-article" # Enforces constitution
```
## Advanced Usage
### Analyze Specific Language
If you have i18n structure and want to analyze only one language:
```bash
# Analyze only English articles
/blog-analyse "articles/en"
```
**Note**: Constitution will have `content_directory: "articles/en"` which may not work for other languages. Edit manually to `"articles"` after analysis.
### Compare Multiple Analyses
Analyze different content sets to compare:
```bash
# Analyze primary content
/blog-analyse "articles"
mv .spec/blog.spec.json .spec/articles-constitution.json
# Analyze legacy content
/blog-analyse "old-posts"
mv .spec/blog.spec.json .spec/legacy-constitution.json
# Compare differences
diff .spec/articles-constitution.json .spec/legacy-constitution.json
```
### Re-analyze After Growth
As your blog grows, re-analyze to update constitution:
```bash
# Backup current constitution
cp .spec/blog.spec.json .spec/blog.spec.backup.json
# Re-analyze with more articles
/blog-analyse
# Compare changes
diff .spec/blog.spec.backup.json .spec/blog.spec.json
```
---
**Ready to analyze?** Let Blog Kit learn from your existing content and generate the perfect constitution automatically.

commands/blog-copywrite.md
# Blog Copywriting (Spec-Driven)
Rewrite or create content with strict adherence to blog constitution and brand voice guidelines.
## Usage
```bash
/blog-copywrite "topic-name"
```
**Example**:
```bash
/blog-copywrite "nodejs-tracing"
```
**Note**: Provide the sanitized topic name (same as article filename).
## Prerequisites
**Optional Files**:
- `.spec/blog.spec.json` - Blog constitution (highly recommended)
- `articles/[topic].md` - Existing article to rewrite (optional)
- `.specify/seo/[topic]-seo-brief.md` - SEO structure (optional)
**If no constitution exists**: Agent will use generic professional tone.
## What This Command Does
Delegates to the **copywriter** subagent for spec-driven content creation:
- **Constitution-First**: Loads and applies `.spec/blog.spec.json` requirements
- **Tone Precision**: Matches exact tone (expert/pédagogique/convivial/corporate)
- **Voice Compliance**: Enforces `voice_do` and avoids `voice_dont`
- **Review Rules**: Ensures `must_have` items and avoids `must_avoid`
- **Quality Focus**: Spec-perfect copy over marketing optimization
**Time**: 20-40 minutes
**Output**: `articles/[topic].md` (overwrites existing with backup)
## Difference from /blog-marketing
| Feature | /blog-marketing | /blog-copywrite |
|---------|----------------|-----------------|
| Focus | Conversion & CTAs | Spec compliance |
| Voice | Engaging, persuasive | Constitution-driven |
| When to use | New articles | Rewrite for brand consistency |
| Constitution | Optional influence | Mandatory requirement |
| CTAs | 2-3 strategic | Only if in spec |
| Tone freedom | High | Zero (follows spec exactly) |
**Use /blog-copywrite when**:
- Existing content violates brand voice
- Need perfect spec compliance
- Building content library with consistent voice
- Rewriting AI-generated content to match brand
**Use /blog-marketing when**:
- Need conversion-focused content
- Want CTAs and social proof
- Creating new promotional articles
## Instructions
Create a new subagent conversation with the `copywriter` agent.
**Provide the following prompt**:
```
You are creating spec-driven copy for a blog article.
**Topic**: $ARGUMENTS
**Your task**: Write (or rewrite) content that PERFECTLY matches blog constitution requirements.
Follow your Three-Phase Process:
1. **Constitution Deep-Load** (5-10 min):
- Load .spec/blog.spec.json (if exists, otherwise use generic tone)
- Extract: blog.name, blog.context, blog.objective, blog.tone, blog.languages
- Internalize brand_rules.voice_do (guidelines to follow)
- Internalize brand_rules.voice_dont (anti-patterns to avoid)
- Load workflow.review_rules (must_have, must_avoid)
2. **Spec-Driven Content Creation** (20-40 min):
- Apply tone exactly as specified:
* expert → Technical, authoritative, assumes domain knowledge
* pédagogique → Educational, patient, step-by-step
* convivial → Friendly, conversational, relatable
* corporate → Professional, business-focused, ROI-oriented
- Check if article exists (articles/$ARGUMENTS.md):
* If YES: Load structure, preserve data, rewrite for spec compliance
* If NO: Load SEO brief (.specify/seo/$ARGUMENTS-seo-brief.md) for structure
* If neither: Create logical structure based on topic
- Write content following:
* Every voice_do guideline applied
* Zero voice_dont violations
* All must_have items included
* No must_avoid patterns
- Backup existing article if rewriting:
```bash
if [ -f "articles/$ARGUMENTS.md" ]; then
cp "articles/$ARGUMENTS.md" "articles/$ARGUMENTS.backup-$(date +%Y%m%d-%H%M%S).md"
fi
```
3. **Spec Compliance Validation** (10-15 min):
- Generate validation script: /tmp/validate-voice-$$.sh
- Check voice_dont violations (jargon, passive voice, vague claims)
- Verify voice_do presence (guidelines applied)
- Validate must_have items (summary, citations, insights)
- Check must_avoid patterns (keyword stuffing, unsourced claims)
- Calculate tone metrics (sentence length, technical terms, etc.)
**Output Location**: Save final article to `articles/$ARGUMENTS.md`
**Important**:
- Constitution is LAW - no creative liberty that violates specs
- If constitution missing, warn user and use professional tone
- Always backup before overwriting existing content
- Include spec compliance notes in frontmatter
Begin copywriting now.
```
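As a rough illustration of the Phase 3 validation, a generated script might grep the article for banned phrases. The patterns below are placeholders; real ones come from `voice_dont` in `.spec/blog.spec.json`.
```bash
#!/usr/bin/env bash
# Sketch of /tmp/validate-voice-$$.sh (illustrative patterns only)
ARTICLE="articles/$1.md"
for phrase in "cutting-edge" "best-in-class" "game-changer"; do
  hits=$(grep -ci "$phrase" "$ARTICLE" || true)   # count matching lines, case-insensitive
  [ "$hits" -gt 0 ] && echo "voice_dont violation: '$phrase' ($hits lines)"
done
```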
## Expected Output
After completion, verify that `articles/[topic].md` exists and contains:
**Perfect Tone Match**:
- Expert: Technical precision, industry terminology
- Pédagogique: Step-by-step, explained jargon, simple language
- Convivial: Conversational, personal pronouns, relatable
- Corporate: Professional, business value, ROI focus
**Voice Compliance**:
- All `voice_do` guidelines applied throughout
- Zero `voice_dont` violations
- Consistent brand voice from intro to conclusion
**Review Rules Met**:
- All `must_have` items present (summary, citations, insights)
- No `must_avoid` patterns (keyword stuffing, vague claims)
- Meets minimum quality thresholds
**Spec Metadata** (in frontmatter):
```yaml
---
tone: "pédagogique"
spec_version: "1.0.0"
constitution_applied: true
---
```
## When to Use This Command
Use `/blog-copywrite` when you need to:
- **Rewrite off-brand content**: Fix articles that don't match voice
- **Enforce consistency**: Make all articles follow same spec
- **Convert AI content**: Transform generic AI output to branded copy
- **Create spec-perfect drafts**: New articles with zero voice violations
- **Audit compliance**: Rewrite to pass quality validation
**For marketing-focused content**: Use `/blog-marketing` instead.
## Constitution Required?
**With Constitution** (`.spec/blog.spec.json`):
```
✅ Exact tone matching
✅ Voice guidelines enforced
✅ Review rules validated
✅ Brand-perfect output
```
**Without Constitution**:
```
⚠️ Generic professional tone
⚠️ No brand voice enforcement
⚠️ Basic quality only
⚠️ Recommend running /blog-setup first
```
**Best practice**: Always create constitution first:
```bash
/blog-setup
```
## Tone Examples
### Expert Tone
```markdown
Distributed consensus algorithms fundamentally trade latency for
consistency guarantees. Raft's leader-based approach simplifies
implementation complexity compared to Paxos, achieving similar
safety properties while maintaining comprehensible state machine
replication semantics.
```
### Pédagogique Tone
```markdown
Think of consensus algorithms as voting systems for computers. When
multiple servers need to agree on something, they use a "leader" to
coordinate. Raft makes this simpler than older methods like Paxos,
while keeping your data safe and consistent.
```
### Convivial Tone
```markdown
Here's the thing about getting computers to agree: it's like
herding cats. Consensus algorithms are your herding dog. Raft is
the friendly retriever that gets the job done without drama,
unlike Paxos which is more like a border collie—effective but
complicated!
```
### Corporate Tone
```markdown
Organizations requiring distributed system reliability must
implement robust consensus mechanisms. Raft provides enterprise-
grade consistency with reduced operational complexity compared to
traditional Paxos implementations, optimizing both infrastructure
costs and engineering productivity.
```
## Quality Validation
After copywriting, validate quality:
```bash
/blog-optimize "topic-name"
```
This will check:
- Spec compliance
- Frontmatter correctness
- Markdown quality
- SEO elements
Fix any issues and re-run `/blog-copywrite` if needed.
## Backup and Recovery
Copywriter automatically backs up existing articles:
```bash
# List backups
ls articles/*.backup-*
# Restore from backup
cp articles/topic.backup-20250112-143022.md articles/topic.md
# Clean old backups (keep last 3)
ls -t articles/*.backup-* | tail -n +4 | xargs rm
```
## Tips
1. **Constitution first**: Create `.spec/blog.spec.json` before copywriting
2. **Be specific with voice**: Clear `voice_do` / `voice_dont` = better output
3. **Test tone**: Try each tone to find your brand's fit
4. **Iterate gradually**: Start generic, refine constitution, re-copywrite
5. **Validate after**: Always run `/blog-optimize` to check compliance
## Troubleshooting
### "No constitution found"
```bash
# Create constitution
/blog-setup
# Or copy example
mkdir -p .spec
cp examples/blog.spec.example.json .spec/blog.spec.json
# Then run copywriting
/blog-copywrite "topic-name"
```
### "Tone doesn't match"
```bash
# Check constitution tone setting
cat .spec/blog.spec.json | grep '"tone"'
# Update if needed, then re-run
/blog-copywrite "topic-name"
```
### "Voice violations"
```bash
# Review voice_dont guidelines
cat .spec/blog.spec.json | grep -A5 '"voice_dont"'
# Update guidelines if too strict
# Then re-run copywriting
```
## Workflow Integration
### Full Workflow with Copywriting
```bash
# 1. Setup (one-time)
/blog-setup
# 2. Research
/blog-research "topic"
# 3. SEO Brief
/blog-seo "topic"
# 4. Spec-Driven Copy (instead of marketing)
/blog-copywrite "topic"
# 5. Validate Quality
/blog-optimize "topic"
# 6. Publish
```
### Rewrite Existing Content
```bash
# Fix off-brand article
/blog-copywrite "existing-topic"
# Validate compliance
/blog-optimize "existing-topic"
# Compare before/after
diff articles/existing-topic.backup-*.md articles/existing-topic.md
```
---
**Ready to create spec-perfect copy?** Provide the topic name and execute this command.

commands/blog-generate.md
# Generate Blog Article
Complete end-to-end blog article generation workflow with specialized AI agents.
## Usage
```bash
/blog-generate "Your article topic here"
```
**Example**:
```bash
/blog-generate "Best practices for implementing observability in microservices"
```
## What This Command Does
Orchestrates three specialized agents in sequence to create a comprehensive, SEO-optimized blog article:
1. **Research Intelligence Agent** → Comprehensive research with 5-7 sources
2. **SEO Specialist Agent** → Keyword analysis and content structure
3. **Marketing Specialist Agent** → Final article with CTAs and engagement
**Total Time**: 30-45 minutes
**Token Usage**: ~200k tokens (agents isolated, main thread stays clean)
## Pre-flight Checks
Before starting the workflow, run system checks:
```bash
# Run the bundled preflight check script
bash scripts/preflight-check.sh || exit 1
```
**This checks**:
- ✅ curl (required for WebSearch/WebFetch)
- ⚠️ python3 (recommended for JSON validation)
- ⚠️ jq (optional for JSON parsing)
- 📁 Creates `.specify/` and `articles/` directories if missing
- 📄 Checks for blog constitution (`.spec/blog.spec.json`)
**If checks fail**: Install missing required tools before proceeding.
**If constitution exists**: Agents will automatically apply brand rules!
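Conceptually, the preflight script reduces to checks like these (the bundled `scripts/preflight-check.sh` is authoritative; this is only a sketch):
```bash
command -v curl >/dev/null 2>&1    || { echo "❌ curl is required"; exit 1; }
command -v python3 >/dev/null 2>&1 || echo "⚠️ python3 recommended for JSON validation"
command -v jq >/dev/null 2>&1      || echo "⚠️ jq optional for JSON parsing"
mkdir -p .specify articles
[ -f .spec/blog.spec.json ] && echo "📄 Constitution found - brand rules will apply"
```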
---
## Workflow
### Phase 1: Deep Research (15-20 min)
**Agent**: `research-intelligence`
**What It Does**:
- Decomposes your topic into 3-5 sub-questions
- Executes 5-7 targeted web searches
- Evaluates and fetches credible sources
- Cross-references findings
- Generates comprehensive research report
**Output**: `.specify/research/[topic]-research.md`
**Your Task**: Create a subagent conversation with the research-intelligence agent.
```
Prompt for subagent:
You are conducting deep research on the following topic for a blog article:
**Topic**: $ARGUMENTS
Follow your Three-Phase Process:
1. Strategic Planning - Decompose the topic into sub-questions
2. Autonomous Retrieval - Execute searches and gather sources
3. Synthesis - Generate comprehensive research report
Save your final report to: .specify/research/[SANITIZED-TOPIC]-research.md
Where [SANITIZED-TOPIC] is the topic converted to lowercase with spaces replaced by hyphens.
Begin your research now.
```
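One plausible way to derive [SANITIZED-TOPIC] in shell (assuming ASCII topics; accented characters would need extra handling):
```bash
TOPIC="Best practices for implementing observability in microservices"
# Lowercase, replace runs of non-alphanumerics with hyphens, trim edges
echo "$TOPIC" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//'
# → best-practices-for-implementing-observability-in-microservices
```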
---
**CHECKPOINT**: Wait for research agent to complete. Verify that the research report exists and contains quality sources before proceeding.
---
### Phase 2: SEO Optimization (5-10 min)
**Agent**: `seo-specialist`
**What It Does**:
- Extracts target keywords from research
- Analyzes search intent
- Creates content structure (H2/H3 outline)
- Generates headline options
- Provides SEO recommendations
**Output**: `.specify/seo/[topic]-seo-brief.md`
**Your Task**: Create a subagent conversation with the seo-specialist agent.
```
Prompt for subagent:
You are creating an SEO content brief based on completed research.
**Research Report Path**: .specify/research/[SANITIZED-TOPIC]-research.md
Read the research report and follow your Four-Phase Process:
1. Keyword Analysis - Extract and validate target keywords
2. Search Intent - Determine what users want
3. Content Structure - Design H2/H3 outline with headline options
4. SEO Recommendations - Provide optimization guidance
Save your SEO brief to: .specify/seo/[SANITIZED-TOPIC]-seo-brief.md
Begin your analysis now.
```
---
**CHECKPOINT**: Review the SEO brief with the user.
Ask the user:
1. Is the target keyword appropriate for your goals?
2. Do the headline options resonate with your audience?
3. Does the content structure make sense?
4. Any adjustments needed before writing the article?
If user approves, proceed to Phase 3. If changes requested, regenerate SEO brief with adjustments.
---
### Phase 3: Content Creation (10-15 min)
**Agent**: `marketing-specialist`
**What It Does**:
- Loads research report and SEO brief (token-efficiently)
- Writes engaging introduction with hook
- Develops body content following SEO structure
- Integrates social proof (stats, quotes, examples)
- Places strategic CTAs (2-3 throughout)
- Polishes for readability and conversion
- Formats with proper frontmatter
**Output**: `articles/[topic].md`
**Your Task**: Create a subagent conversation with the marketing-specialist agent.
```
Prompt for subagent:
You are writing the final blog article based on research and SEO brief.
**Research Report**: .specify/research/[SANITIZED-TOPIC]-research.md
**SEO Brief**: .specify/seo/[SANITIZED-TOPIC]-seo-brief.md
Read both files (using token-efficient loading strategy from your instructions) and follow your Three-Phase Process:
1. Context Loading - Extract essential information only
2. Content Creation - Write engaging article following SEO structure
3. Polish - Refine for readability, engagement, and SEO
Save your final article to: articles/[SANITIZED-TOPIC].md
Begin writing now.
```
---
**CHECKPOINT**: Final review with user.
Display the completed article path and ask:
1. Would you like to review the article?
2. Any sections need revision?
3. Ready to publish or need changes?
**Options**:
- ✅ Approve and done
- 🔄 Request revisions (specify sections)
- ✨ Regenerate specific parts
---
## Error Handling
If any phase fails:
1. **Display error clearly**: "Phase [X] failed: [error message]"
2. **Show progress**: "Phases 1 and 2 completed successfully. Retrying Phase 3..."
3. **Offer retry**: "Would you like to retry [Phase X]?"
4. **Preserve work**: Don't delete outputs from successful phases
5. **Provide options**:
- Retry automatically
- Skip to next phase
- Abort workflow
## Output Structure
After successful completion, you'll have:
```
.specify/
├── research/
│ └── [topic]-research.md # 5k tokens, 5-7 sources
└── seo/
└── [topic]-seo-brief.md # 2k tokens, keywords + structure
articles/
└── [topic].md # Final article, fully optimized
```
## Tips for Success
1. **Be Specific**: Detailed topics work better
- ✅ "Implementing observability in Node.js microservices with OpenTelemetry"
- ❌ "Observability"
2. **Review Checkpoints**: Don't skip the review steps
- SEO brief sets article direction
- Early feedback saves time
3. **Use Subagent Power**: Each agent has full context window
- They can process 50k-150k tokens each
- Main thread stays under 1k tokens
4. **Iterate If Needed**: Use individual commands for refinement
- `/blog-research` - Redo research only
- `/blog-seo` - Regenerate SEO brief
- `/blog-marketing` - Rewrite article
## Philosophy
This workflow follows the **"Burn tokens in workers, preserve main thread"** pattern:
- **Agents**: Process massive amounts of data in isolation
- **Main thread**: Stays clean with only orchestration commands
- **Result**: Unlimited processing power without context rot
## Next Steps
After generating article:
1. Review for accuracy and brand voice
2. Add any custom sections or examples
3. Optimize images and add alt text
4. Publish and promote
5. Track performance metrics
---
**Ready to start?** Provide your topic and I'll begin the workflow.

commands/blog-geo.md
# Blog GEO Optimization
Create GEO (Generative Engine Optimization) content brief based on completed research using the GEO Specialist agent.
## Usage
```bash
/blog-geo "topic-name"
```
**Example**:
```bash
/blog-geo "nodejs-tracing"
```
**Note**: Provide the sanitized topic name (same as used in research filename).
## What is GEO?
**GEO (Generative Engine Optimization)** is the academic and industry-standard term for optimizing content for AI-powered search engines. Formally introduced in **November 2023** by researchers from **Princeton University, Georgia Tech, Allen Institute for AI, and IIT Delhi**.
**Target Platforms**:
- ChatGPT (with web search)
- Perplexity AI
- Google AI Overviews
- Gemini
- Claude (with web access)
- Bing Copilot
**Proven Results**:
- **30-40% visibility improvement** in AI responses
- **1,200% growth** in AI-sourced traffic (July 2024 - February 2025)
- **27% conversion rate** from AI traffic vs 2.1% from standard search
- **3.2x more citations** for content updated within 30 days
**Source**: Princeton Study + 29 industry research papers (2023-2025)
### GEO vs SEO
| Aspect | SEO | GEO |
|--------|-----|-----|
| **Target** | Search crawlers | Large Language Models |
| **Goal** | SERP ranking | AI citation & source attribution |
| **Focus** | Keywords, backlinks | E-E-A-T, citations, quotations |
| **Optimization** | Meta tags, H1 | Quotable facts, statistics, sources |
| **Success Metric** | Click-through rate | Citation frequency |
| **Freshness** | Domain-dependent | Critical (3.2x impact) |
**Why Both Matter**: Traditional SEO gets you found via Google/Bing. GEO gets you cited by AI assistants.
**Top 3 GEO Methods** (Princeton Study):
1. **Cite Sources**: 115% visibility increase for lower-ranked sites
2. **Add Quotations**: Especially effective for People & Society topics
3. **Include Statistics**: Most beneficial for Law/Government content
## Prerequisites
**Required**: Research report must exist at `.specify/research/[topic]-research.md`
If research doesn't exist, run `/blog-research` first.
## What This Command Does
Delegates to the **geo-specialist** subagent to create comprehensive GEO content brief:
- Applies Princeton Top 3 methods (cite sources, add quotations, include statistics)
- Assesses source authority and E-E-A-T signals
- Optimizes content structure for AI parsing
- Identifies quotable statements for AI citations
- Ensures comprehensive topic coverage
- Provides AI-readable formatting recommendations
- Recommends schema markup for discoverability (near-essential)
**Time**: 10-15 minutes
**Output**: `.specify/geo/[topic]-geo-brief.md`
## Instructions
Create a new subagent conversation with the `geo-specialist` agent.
**Provide the following prompt**:
```
You are creating a GEO (Generative Engine Optimization) content brief based on completed research.
**Research Report Path**: .specify/research/$ARGUMENTS-research.md
Read the research report and follow your Four-Phase GEO Process:
1. **Source Authority Analysis + Princeton Methods** (5-7 min):
- **Apply Top 3 Princeton Methods** (30-40% visibility improvement):
* Cite Sources (115% increase for lower-ranked sites)
* Add Quotations (best for People & Society domains)
* Include Statistics (best for Law/Government topics)
- Assess E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness)
- Check content freshness (3.2x more citations for 30-day updates)
- Score overall authority potential (X/10)
2. **Structured Content Optimization** (7-10 min):
- Create AI-parseable H1/H2/H3 outline
- Extract key facts as quotable statements
- Structure sections as questions where appropriate
- Recommend schema.org markup (Article, HowTo, FAQPage) - near-essential
3. **Context and Depth Assessment** (7-10 min):
- Verify comprehensive topic coverage
- Identify gaps to fill
- Ensure technical terms are defined
- Recommend multi-perspective coverage (pros/cons, use cases)
4. **AI Citation Optimization** (5-7 min):
- Identify 5-7 quotable key statements
- Ensure facts are clear and self-contained
- Highlight unique value propositions
- Add date/version indicators for freshness
**Output Location**: Save your GEO brief to `.specify/geo/$ARGUMENTS-geo-brief.md`
**Important**: If research quality is insufficient (< 3 credible sources) or topic structure is ambiguous, use the User Decision Cycle to involve the user.
Begin your analysis now.
```
## Expected Output
After completion, verify that `.specify/geo/[topic]-geo-brief.md` exists and contains:
- **Authority Assessment**: Credibility score + improvement recommendations
- **AI-Optimized Outline**: Clear H1/H2/H3 structure with question-format headings
- **Quotable Statements**: 5-7 key facts that AI can cite
- **Context Analysis**: Topic coverage assessment + gaps identified
- **Schema Recommendations**: Article, HowTo, FAQPage, etc.
- **Metadata Guidance**: Title, description, tags optimized for AI understanding
- **Citation Strategy**: Unique value propositions + formatting recommendations
- **GEO Checklist**: 20+ criteria for AI discoverability
## Review Checklist
Before proceeding to content creation, review:
1. **Authority**: Are sources credible enough for AI citation?
2. **Structure**: Is the outline clear and AI-parseable?
3. **Quotables**: Are key statements citation-worthy?
4. **Depth**: Does coverage satisfy comprehensive AI queries?
5. **Unique Value**: What makes this content worth citing?
## How GEO Brief Guides Content
The marketing agent will use your GEO brief to:
- **Structure Content**: Follow AI-optimized H2/H3 outline
- **Embed Quotables**: Place key statements prominently
- **Add Context**: Define terms, provide examples
- **Apply Schema**: Implement recommended markup
- **Cite Sources**: Properly attribute external research
- **Format for AI**: Use lists, tables, clear statements
**Result**: Content optimized for BOTH human readers AND AI citation.
## Next Steps
After GEO brief is approved:
1. **Proceed to writing**: Run `/blog-marketing` to create final article
2. **Or continue full workflow**: If this was part of `/blog-generate`, the orchestrator will proceed automatically
**Note**: For complete AI optimization, consider running BOTH `/blog-seo` (traditional search) AND `/blog-geo` (AI search).
## When to Use This Command
Use `/blog-geo` when you need to:
- Optimize content for AI-powered search engines
- Maximize likelihood of AI citation
- Ensure content is authoritative and comprehensive
- Structure content for easy AI parsing
- Create AI-discoverable content brief only (without writing article)
**For full workflow**: Use `/blog-generate` (which can include GEO phase).
## Comparison: SEO vs GEO Briefs
| Feature | SEO Brief | GEO Brief |
|---------|-----------|-----------|
| **Keywords** | Primary + secondary + LSI | Natural language topics |
| **Structure** | H2/H3 for readability | H2/H3 as questions for AI |
| **Focus** | SERP ranking factors | Citation worthiness |
| **Meta** | Title tags, descriptions | Schema markup, structured data |
| **Success** | Click-through rate | AI citation frequency |
| **Length** | Word count targets | Comprehensiveness targets |
| **Links** | Backlink strategy | Source attribution strategy |
**Recommendation**: Create BOTH briefs for comprehensive discoverability.
## Tips for Maximum GEO Impact
### 1. Authority Signals
- Cite 5-7 credible sources in research
- Include expert quotes
- Add author bio with credentials
- Link to authoritative external sources
### 2. AI-Friendly Structure
- Use questions as H2 headings ("What is X?", "How to Y?")
- Place key facts in bulleted lists
- Add tables for comparisons
- Include FAQ section
### 3. Quotable Statements
- Make claims clear and self-contained
- Provide context so quotes make sense alone
- Use precise language (avoid ambiguity)
- Bold or highlight key data points
### 4. Comprehensive Coverage
- Answer related questions
- Address common misconceptions
- Provide examples for abstract concepts
- Include pros/cons and alternatives
### 5. Freshness Indicators
- Date published/updated
- Version numbers (if applicable)
- "As of [date]" for time-sensitive info
- Indicate currency of information
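If your frontmatter carries an `updated:` field (an assumption; adapt the key to your setup), refreshing it can be scripted:
```bash
# GNU sed shown; on macOS use: sed -i '' -E ...
sed -i -E "s/^updated:.*/updated: $(date +%Y-%m-%d)/" articles/nodejs-tracing.md
```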
## Requesting Changes
If GEO brief needs adjustments, you can:
- Request deeper coverage on specific topics
- Ask for additional quotable statements
- Adjust authority recommendations
- Modify content structure
- Request different schema markup
Just provide feedback and re-run the command with clarifications.
## Error Handling
If GEO analysis fails:
- Verify research report exists
- Check research has 3+ credible sources
- Ensure research contains sufficient content
- Try providing more specific guidance about target audience
### Common Issues
**"Insufficient source authority"**
- Research needs more credible sources
- Add academic papers, official docs, or expert blogs
- Re-run `/blog-research` with better sources
**"Topic structure ambiguous"**
- Agent will ask for user decision
- Clarify whether to focus on depth or breadth
- Specify target audience technical level
**"Missing context for AI understanding"**
- Research may be too technical without explanations
- Add definitions and examples
- Ensure prerequisites are stated
## Integration with Full Workflow
### Option 1: GEO Only
```bash
# Research → GEO → Write
/blog-research "topic"
/blog-geo "topic"
/blog-marketing "topic" # Marketing agent uses GEO brief
```
### Option 2: SEO + GEO (Recommended)
```bash
# Research → SEO → GEO → Write
/blog-research "topic"
/blog-seo "topic" # Traditional search optimization
/blog-geo "topic" # AI search optimization
/blog-marketing "topic" # Marketing agent uses BOTH briefs
```
### Option 3: Full Automated
```bash
# Generate command can include GEO
/blog-generate "topic" # Optionally include GEO phase
```
**Note**: Marketing agent is smart enough to merge SEO and GEO briefs when both exist.
## Real-World GEO Examples
### What Works Well for AI Citation
**Clear Definitions**
> "Distributed tracing is a method of tracking requests across microservices to identify performance bottlenecks and failures."
**Data Points with Context**
> "According to a 2024 study by Datadog, applications with tracing experience 40% faster incident resolution compared to those relying solely on logs."
**Structured Comparisons**
| Feature | Logging | Tracing |
|---------|---------|---------|
| Scope | Single service | Cross-service |
| Use case | Debugging | Performance |
**Question-Format Headings**
> ## How Does OpenTelemetry Compare to Proprietary Solutions?
**Actionable Recommendations**
> "Start with 10% sampling in production environments to minimize overhead while maintaining visibility into application behavior."
### What Doesn't Work
**Vague Claims**
> "Tracing is important for modern applications."
**Keyword Stuffing**
> "Node.js tracing nodejs tracing best practices nodejs application tracing guide..."
**Buried Facts**
> Long paragraphs with key information not highlighted
**Outdated Information**
> Content without publication/update dates
**Unsourced Statistics**
> "Most developers prefer X" (without citation)
## Success Metrics
Track these indicators after publication:
1. **AI Citation Rate**: Monitor if content is cited by ChatGPT, Perplexity, etc.
2. **Source Attribution**: Frequency of being named as source in AI responses
3. **Query Coverage**: Number of related queries your content answers
4. **Freshness**: How recently updated (AI systems prefer recent)
5. **Authority Signals**: Backlinks from other authoritative sites
**Tools**: No established GEO tracking tools yet. Manual testing:
- Ask ChatGPT about your topic → check if you're cited
- Search in Perplexity → verify source attribution
- Use Claude with web access → monitor citations
## Future-Proofing
GEO best practices are evolving. Focus on fundamentals:
1. **Accuracy**: Factual correctness is paramount
2. **Authority**: Build credibility gradually
3. **Structure**: Clear, organized content
4. **Comprehensiveness**: Thorough topic coverage
5. **Freshness**: Regular updates
These principles will remain valuable regardless of how AI search evolves.
---
**Ready to optimize for AI search?** Provide the topic name (from research filename) and execute this command.
## Additional Resources
- **GEO Research**: Check latest posts on AI search optimization
- **Schema.org**: Reference for structured data markup
- **OpenAI/Anthropic**: Monitor changes to citation behavior
- **Perplexity Blog**: Insights on source selection algorithms
---
## Research Foundation
This GEO command is based on comprehensive research from:
**Academic Foundation**:
- Princeton University, Georgia Tech, Allen Institute for AI, IIT Delhi (November 2023)
- Presented at ACM SIGKDD Conference (August 2024)
- GEO-bench benchmark study (10,000 queries across diverse domains)
**Key Research Findings**:
- 30-40% visibility improvement through Princeton's Top 3 methods
- 1,200% growth in AI-sourced traffic (July 2024 - February 2025)
- 27% conversion rate from AI traffic vs 2.1% from standard search
- 3.2x more citations for content updated within 30 days
- 115% visibility increase for lower-ranked sites using citations
**Industry Analysis**:
- Analysis of 17 million AI citations (Ahrefs study)
- Platform-specific citation patterns (ChatGPT, Perplexity, Google AI Overviews)
- 29 cited research studies (2023-2025)
- Case studies: 800-2,300% traffic increases, real conversion data
For full research report, see: `.specify/research/gso-geo-comprehensive-research.md`

commands/blog-marketing.md
# Blog Marketing & Content Creation
Write final blog article based on research and SEO brief using the Marketing Specialist agent.
## Usage
```bash
/blog-marketing "topic-name"
```
**Example**:
```bash
/blog-marketing "nodejs-tracing"
```
**Note**: Provide the sanitized topic name (same as used in research and SEO filenames).
## Prerequisites
**Required Files**:
1. Research report: `.specify/research/[topic]-research.md`
2. SEO brief: `.specify/seo/[topic]-seo-brief.md`
If either doesn't exist, run `/blog-research` and `/blog-seo` first.
## What This Command Does
Delegates to the **marketing-specialist** subagent to create final, polished article:
- Loads research and SEO brief (token-efficiently)
- Writes engaging introduction with hook
- Develops body content following SEO structure
- Integrates social proof (stats, quotes, examples)
- Places strategic CTAs (2-3 throughout)
- Creates FAQ section with schema optimization
- Writes compelling conclusion
- Polishes for readability and conversion
- Formats with proper frontmatter
**Time**: 10-15 minutes
**Output**: `articles/[topic].md`
## Instructions
Create a new subagent conversation with the `marketing-specialist` agent.
**Provide the following prompt**:
```
You are writing the final blog article based on research and SEO brief.
**Research Report**: .specify/research/$ARGUMENTS-research.md
**SEO Brief**: .specify/seo/$ARGUMENTS-seo-brief.md
Read both files using your token-efficient loading strategy (documented in your instructions) and follow your Three-Phase Process:
1. **Context Loading** (3-5 min):
- Extract ONLY essential information from research (key findings, quotes, sources)
- Extract ONLY essential information from SEO brief (keywords, structure, meta)
- Build mental model of target audience and goals
2. **Content Creation** (20-30 min):
- Write engaging introduction (150-200 words)
* Hook (problem/question/stat)
* Promise (what reader will learn)
* Credibility signal
- Develop body content following SEO brief structure
* Each H2 section with clear value
* H3 subsections for depth
* Mix of paragraphs, lists, and formatting
- Integrate social proof throughout
* Statistics from research
* Expert quotes
* Real-world examples
- Place 2-3 strategic CTAs
* Primary CTA (after intro or in conclusion)
* Secondary CTAs (mid-article)
- Create FAQ section (if in SEO brief)
* Direct, concise answers (40-60 words each)
- Write compelling conclusion
* Summary of 3-5 key takeaways
* Reinforce main message
* Strong final CTA
3. **Polish** (5-10 min):
- Readability check (varied sentences, active voice, short paragraphs)
- Engagement review (questions, personal pronouns, power words)
- SEO compliance (keyword placement, structure, links)
- Conversion optimization (CTAs, value prop, no friction)
**Output Location**: Save your final article to `articles/$ARGUMENTS.md`
**Important**: Use proper markdown frontmatter format with all required fields (title, description, keywords, author, date, etc.).
Begin writing now.
```
## Expected Output
After completion, verify that `articles/[topic].md` exists and contains:
- Complete frontmatter (title, description, keywords, author, date, etc.)
- Engaging introduction with hook and promise
- All H2/H3 sections from SEO brief
- Primary keyword in title, intro, headings
- Secondary keywords distributed naturally
- Social proof integrated (5-7 citations)
- 2-3 well-placed CTAs
- FAQ section (if in SEO brief)
- Conclusion with key takeaways
- Sources/references section
- Internal linking suggestions
- Target word count achieved (±10%)
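A quick spot-check of these items from the shell (topic name follows the running example; adjust to yours):
```bash
head -n 15 articles/nodejs-tracing.md | grep -E '^(title|description|keywords|author|date):'
grep -c '^## ' articles/nodejs-tracing.md   # number of H2 sections
wc -w articles/nodejs-tracing.md            # word count vs target
```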
## Quality Checklist
Before finalizing, review:
1. **Accuracy**: Facts match research sources?
2. **Brand Voice**: Tone appropriate for audience?
3. **Readability**: Easy to scan and understand?
4. **SEO**: Keywords natural, not forced?
5. **Engagement**: Interesting and actionable?
6. **CTAs**: Clear and compelling?
7. **Formatting**: Proper markdown, good structure?
## Next Steps
After article is generated:
1. **Review**: Read through for quality and accuracy
2. **Refine**: Request changes if needed (specific sections)
3. **Enhance**: Add custom examples, images, diagrams
4. **Publish**: Copy to your blog/CMS
5. **Promote**: Share on social media, newsletters
6. **Track**: Monitor performance metrics
## When to Use This Command
Use `/blog-marketing` when you need to:
- Rewrite article with different angle
- Adjust tone or style
- Add/remove sections
- Improve specific parts (intro, conclusion, CTAs)
- Write only (without research/SEO phases)
**For full workflow**: Use `/blog-generate` instead.
## Tips
1. **Review intro carefully**: First impression matters
2. **Check CTA placement**: Natural or forced?
3. **Verify sources cited**: All major claims backed?
4. **Test readability**: Ask someone to scan it
5. **Compare to SEO brief**: Did it follow structure?
## Requesting Revisions
If article needs changes, be specific:
- "Make introduction more engaging with a stronger hook"
- "Add more technical depth to section on [topic]"
- "Reduce jargon in [section name]"
- "Strengthen conclusion CTA"
Provide clear feedback and re-run with adjustments.
## Common Adjustments
**Too Technical**: "Simplify language for non-experts"
**Too Basic**: "Add more technical depth and examples"
**Wrong Tone**: "Make more conversational/professional"
**Missing CTAs**: "Add stronger calls-to-action"
**Too Long**: "Reduce to [X] words, keeping core value"
## Error Handling
If content creation fails:
- Verify both research and SEO files exist
- Check file paths are correct
- Ensure SEO brief has complete structure
- Review research for sufficient content
---
**Ready to start?** Provide the topic name and execute this command.

commands/blog-optimize-images.md
# Blog Image Optimization
Optimize article images with automated compression, format conversion (WebP), and reference updates.
## Usage
```bash
/blog-optimize-images "language/article-slug"
```
**Examples**:
```bash
/blog-optimize-images "en/nodejs-best-practices"
/blog-optimize-images "fr/microservices-logging"
```
**Note**: Provide the language code and article slug (path relative to `articles/`).
## Prerequisites
**Required**:
- Article exists at `articles/[language]/[slug]/article.md`
- Images referenced in article (`.png`, `.jpg`, `.jpeg`, `.gif`, `.bmp`, `.tiff`)
- ffmpeg installed (for conversion)
**Install ffmpeg**:
```bash
# macOS
brew install ffmpeg
# Windows (with Chocolatey)
choco install ffmpeg
# Windows (manual)
# Download from: https://ffmpeg.org/download.html
# Ubuntu/Debian
sudo apt-get install ffmpeg
```
## What This Command Does
Delegates to the **quality-optimizer** subagent (Phase 4) for image optimization:
- **Discovers Images**: Scans article for image references
- **Backs Up Originals**: Copies to `images/.backup/` (preserves uncompressed)
- **Converts to WebP**: 80% quality, optimized for web
- **Updates References**: Changes `.png``.webp` in article.md
- **Reports Results**: Shows size reduction and file locations
**Time**: 10-20 minutes (depends on image count/size)
**Output**: Optimized images in `images/`, backups in `images/.backup/`
## Instructions
Create a new subagent conversation with the `quality-optimizer` agent.
**Provide the following prompt**:
```
You are optimizing images for a blog article.
**Article Path**: articles/$ARGUMENTS/article.md
Execute ONLY Phase 4 (Image Optimization) from your instructions:
1. **Image Discovery**:
- Scan article for image references
- Extract image paths from markdown: `grep -E '!\[.*\]\(.*\.(png|jpg|jpeg|gif|bmp|tiff)\)' article.md`
- Build list of images to process
2. **Generate Optimization Script** (`/tmp/optimize-images-$$.sh`):
- Create image optimization script in /tmp/
- Include backup logic (copy originals to images/.backup/)
- Include conversion logic (to WebP, 80% quality)
- Include reference update logic (sed replacements in article.md)
- Make script executable
3. **Execute Script**:
- Run the optimization script
- Capture output and errors
- Verify all images processed successfully
4. **Validation**:
- Check all originals backed up to images/.backup/
- Verify all WebP files created in images/
- Confirm article.md references updated
- Calculate total size reduction
**Important**:
- All scripts must be in /tmp/ (never pollute project)
- Backup originals BEFORE conversion
- Use ffmpeg (cross-platform: Windows, macOS, Linux)
- 80% quality for WebP conversion (hardcoded)
- Update ALL image references in article.md
- Report file size reductions
Begin image optimization now.
```
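For orientation, the generated script reduces to something like the sketch below. Treat it as illustrative: paths are examples, and the agent's real script adds error handling and per-file reporting.
```bash
#!/usr/bin/env bash
# Sketch of /tmp/optimize-images-$$.sh (illustrative)
set -euo pipefail
IMG_DIR="articles/en/my-article/images"    # example path
mkdir -p "$IMG_DIR/.backup"
for src in "$IMG_DIR"/*.png "$IMG_DIR"/*.jpg; do
  [ -e "$src" ] || continue
  cp "$src" "$IMG_DIR/.backup/"                                   # back up original first
  ffmpeg -y -i "$src" -c:v libwebp -quality 80 "${src%.*}.webp"   # convert at 80% quality
done
# Point article references at the .webp files (GNU sed; BSD sed needs -i '')
sed -i -E 's#images/(\.backup/)?([^.)]+)\.(png|jpe?g)#images/\2.webp#g' \
  "articles/en/my-article/article.md"
```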
## Expected Output
After completion, verify:
**Backup Directory Created**:
```bash
ls articles/en/my-article/images/.backup/
# screenshot.png (original)
# diagram.png (original)
```
**Optimized Images Created**:
```bash
ls articles/en/my-article/images/
# screenshot.webp (optimized, 80% quality)
# diagram.webp (optimized, 80% quality)
```
**Article References Updated**:
```markdown
# Before:
![Screenshot](images/.backup/screenshot.png)
# After:
![Screenshot](images/screenshot.webp)
```
**Size Reduction Report**:
```
Optimization Results:
- screenshot.png: 2.4MB → 512KB (79% reduction)
- diagram.png: 1.8MB → 420KB (77% reduction)
Total savings: 3.3MB (78% reduction)
```
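You can reproduce the totals yourself with `du` (numbers will differ per image set):
```bash
du -ch articles/en/my-article/images/.backup/*.png | tail -1   # originals total
du -ch articles/en/my-article/images/*.webp | tail -1          # optimized total
```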
## Supported Image Formats
### Source Formats (will be converted)
- `.png` - Portable Network Graphics
- `.jpg` / `.jpeg` - JPEG images
- `.gif` - Graphics Interchange Format (first frame)
- `.bmp` - Bitmap images
- `.tiff` - Tagged Image File Format
### Target Format
- `.webp` - WebP (80% quality, optimized)
### Compression Settings
**Hardcoded** (cannot be changed via command):
- **Quality**: 80%
- **Format**: WebP
- **Method**: ffmpeg (cross-platform)
**Why 80%?**
- Excellent visual quality
- Significant file size reduction (30-70%)
- Broad browser support
- Optimal for web performance
## Image Workflow
### 1. Add Images to Article
Place originals in `.backup/` first:
```bash
cp ~/Downloads/screenshot.png articles/en/my-article/images/.backup/
```
Reference in article (use `.backup/` path initially):
```markdown
![Architecture Screenshot](images/.backup/screenshot.png)
```
### 2. Run Optimization
```bash
/blog-optimize-images "en/my-article"
```
### 3. Verify Results
Check backups:
```bash
ls articles/en/my-article/images/.backup/
# screenshot.png
```
Check optimized:
```bash
ls articles/en/my-article/images/
# screenshot.webp
```
Check article updated:
```bash
grep "screenshot" articles/en/my-article/article.md
# ![Architecture Screenshot](images/screenshot.webp)
```
## Multi-Language Support
Images are per-language/per-article:
```bash
# English article images
/blog-optimize-images "en/my-topic"
# → articles/en/my-topic/images/
# French article images
/blog-optimize-images "fr/my-topic"
# → articles/fr/my-topic/images/
```
**Sharing images across languages**:
```markdown
# In French article, link to English image
![Diagram](/en/my-topic/images/diagram.webp)
```
## Re-Optimization
If you need to re-optimize:
### Restore from Backup
```bash
# Copy backups back to main images/
cp articles/en/my-article/images/.backup/* articles/en/my-article/images/
# Update article references to use .backup/ again
# (BSD/macOS sed shown; on GNU/Linux drop the empty '' after -i)
sed -i '' 's|images/\([^.]*\)\.webp|images/.backup/\1.png|g' articles/en/my-article/article.md
```
### Run Optimization Again
```bash
/blog-optimize-images "en/my-article"
```
## Troubleshooting
### "ffmpeg not found"
```bash
# Install ffmpeg
brew install ffmpeg # macOS
choco install ffmpeg # Windows (Chocolatey)
sudo apt-get install ffmpeg # Linux
# Verify installation
ffmpeg -version
```
### "No images to optimize"
```bash
# Check article has image references
grep "!\[" articles/en/my-article/article.md
# Check image files exist
ls articles/en/my-article/images/.backup/
```
### "Images not updating in article"
```bash
# Check current references
grep "images/" articles/en/my-article/article.md
# Manually fix if needed (BSD/macOS sed shown; on GNU/Linux drop the '')
sed -i '' 's|\.png|.webp|g' articles/en/my-article/article.md
```
### "Permission denied"
```bash
# Make optimization script executable
chmod +x /tmp/optimize-images-*.sh
# Or run agent again (it recreates script)
```
## Performance Tips
### Before Optimization
1. **Use descriptive names**: `architecture-diagram.png` not `img1.png`
2. **Keep high quality**: Optimization preserves visual quality
3. **Remove unused images**: Delete unreferenced images first
### After Optimization
1. **Verify backups exist**: Check `.backup/` directory
2. **Test image loading**: Preview article to ensure images load
3. **Monitor file sizes**: Typical reduction 30-70%
4. **Commit both**: Commit `.backup/` and optimized images (or just optimized)
## Integration with Workflows
### New Article with Images
```bash
# 1. Create article
/blog-marketing "en/my-topic"
# 2. Add images to .backup/
cp ~/images/*.png articles/en/my-topic/images/.backup/
# 3. Reference in article.md
# ![Image](images/.backup/image.png)
# 4. Optimize
/blog-optimize-images "en/my-topic"
# 5. Validate
/blog-optimize "en/my-topic"
```
### Update Existing Article Images
```bash
# 1. Add new images to .backup/
cp ~/new-image.png articles/en/my-topic/images/.backup/
# 2. Reference in article
# ![New Image](images/.backup/new-image.png)
# 3. Re-optimize (only new images affected)
/blog-optimize-images "en/my-topic"
```
## Storage Considerations
### What to Commit to Git
**Option 1: Commit Both** (recommended for collaboration)
```gitignore
# .gitignore - allow both
# (images/.backup/ and images/*.webp committed)
```
**Option 2: Commit Only Optimized**
```gitignore
# .gitignore - exclude backups
articles/**/images/.backup/
```
**Option 3: Commit Only Backups** (not recommended)
```gitignore
# .gitignore - exclude optimized
articles/**/images/*.webp
# (requires re-optimization on each machine)
```
### Large Images
For very large originals (>10MB):
1. Store backups externally (CDN, cloud storage)
2. Document source URL in article frontmatter
3. Only commit optimized `.webp` files
```yaml
---
title: "My Article"
image_sources:
- original: "https://cdn.example.com/screenshot.png"
optimized: "images/screenshot.webp"
---
```
## Script Cleanup
Optimization scripts are temporary:
```bash
# List optimization scripts
ls /tmp/optimize-images-*.sh
# Remove manually if needed
rm /tmp/optimize-images-*.sh
# Or let OS auto-cleanup on reboot
```
---
**Ready to optimize images?** Provide the language/slug path and execute this command.
commands/blog-optimize.md Normal file
@@ -0,0 +1,289 @@
# Blog Quality Optimization
Validate article quality with automated checks for frontmatter, markdown formatting, and spec compliance.
## Usage
### Single Article (Recommended)
```bash
/blog-optimize "lang/article-slug"
```
**Examples**:
```bash
/blog-optimize "en/nodejs-tracing"
/blog-optimize "fr/microservices-logging"
```
**Token usage**: ~10k-15k tokens per article
### Global Validation (⚠️ High Token Usage)
```bash
/blog-optimize
```
**⚠️ WARNING**: This will validate ALL articles in your content directory.
**Token usage**: 50k-500k+ tokens (depending on article count)
**Cost**: Can be expensive (e.g., 50 articles = ~500k tokens)
**Duration**: 20-60 minutes for large blogs
**Use case**: Initial audit, bulk validation, CI/CD pipelines
**Recommendation**: Validate articles individually unless you need a full audit.
## Prerequisites
**Required**: Article must exist at `articles/[topic].md`
If article doesn't exist, run `/blog-generate` or `/blog-marketing` first.
## What This Command Does
Delegates to the **quality-optimizer** subagent to validate article quality:
- **Spec Compliance**: Validates against `.spec/blog.spec.json` requirements
- **Frontmatter Structure**: Checks required fields and format
- **Markdown Quality**: Validates syntax, headings, links, code blocks
- **SEO Elements**: Checks meta description, keywords, internal links
- **Readability**: Analyzes sentence length, paragraph size, passive voice
- **Brand Voice**: Validates against `voice_dont` anti-patterns
**Time**: 10-15 minutes
**Output**: `.specify/quality/[topic]-validation.md`
## Instructions
Create a new subagent conversation with the `quality-optimizer` agent.
**Provide the following prompt**:
```
You are validating the quality of a blog article.
**Article Path**: articles/$ARGUMENTS.md
Follow your Three-Phase Process:
1. **Spec Compliance Validation** (5-7 min):
- Generate validation script in /tmp/validate-spec-$$.sh
- Load .spec/blog.spec.json (if exists)
- Check frontmatter required fields
- Validate review_rules compliance (must_have, must_avoid)
- Check brand voice anti-patterns (voice_dont)
- Run script and capture results
2. **Markdown Quality Validation** (5-10 min):
- Generate validation script in /tmp/validate-markdown-$$.sh
- Check heading hierarchy (one H1, proper nesting)
- Validate link syntax (no broken links)
- Check code blocks (properly closed, language tags)
- Verify images have alt text
- Run script and capture results
3. **SEO and Performance Validation** (3-5 min):
- Generate validation script in /tmp/validate-seo-$$.sh
- Check meta description length (150-160 chars)
- Validate keyword presence in critical locations
- Count internal links (minimum 3 recommended)
- Calculate readability metrics
- Run script and capture results
**Output Location**: Save comprehensive validation report to `.specify/quality/$ARGUMENTS-validation.md`
**Important**:
- All scripts must be generated in /tmp/ (never pollute project directory)
- Scripts are non-destructive (read-only operations)
- Provide actionable fixes for all issues found
- Include metrics and recommendations in report
Begin validation now.
```
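To make the checks concrete, here is a minimal sketch of the kind of check a generated script contains. It assumes the frontmatter keeps the description on a single `description:` line; the path is a hypothetical example:

```bash
#!/bin/bash
# Sketch of a single read-only check from /tmp/validate-seo-$$.sh
ARTICLE="articles/en/nodejs-tracing.md"     # assumption: path passed in by the agent

# Meta description length (target: 150-160 chars)
desc=$(awk -F': ' '/^description:/ {print $2; exit}' "$ARTICLE" | tr -d '"')
len=${#desc}
if [ "$len" -ge 150 ] && [ "$len" -le 160 ]; then
  echo "✅ Meta description length OK ($len chars)"
else
  echo "⚠️ Meta description is $len chars (target: 150-160)"
fi

# Heading hierarchy: exactly one H1
h1=$(grep -c '^# ' "$ARTICLE")
[ "$h1" -eq 1 ] && echo "✅ Exactly one H1" || echo "❌ Found $h1 H1 headings"
```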
## Expected Output
After completion, verify that `.specify/quality/[topic]-validation.md` exists and contains:
**Passed Checks Section**:
- List of all successful validations
- Green checkmarks for passing items
**Warnings Section**:
- Non-critical issues that should be addressed
- Improvement suggestions
**Critical Issues Section**:
- Must-fix problems before publishing
- Clear descriptions of what's wrong
**Metrics Dashboard**:
- Frontmatter completeness
- Content structure statistics
- SEO metrics
- Readability scores
**Recommended Fixes**:
- Prioritized list (critical first)
- Code snippets for fixes
- Step-by-step instructions
**Validation Scripts**:
- List of generated scripts in /tmp/
- Instructions for manual review/cleanup
## Interpreting Results
### ✅ All Checks Passed
```
✅ Passed Checks (12/12)
No issues found! Article is ready to publish.
```
**Next Steps**: Review the article one final time and publish.
### ⚠️ Warnings Only
```
✅ Passed Checks (10/12)
⚠️ Warnings (2)
- Only 2 internal links (recommend 3+)
- Keyword density 2.3% (slightly high)
```
**Next Steps**: Address warnings if possible, then publish (warnings are optional improvements).
### ❌ Critical Issues
```
✅ Passed Checks (8/12)
⚠️ Warnings (1)
❌ Critical Issues (3)
- Missing required frontmatter field: category
- 2 images without alt text
- Unclosed code block
```
**Next Steps**: Fix all critical issues before publishing. Re-run `/blog-optimize` after fixes.
## Review Checklist
Before considering article complete:
1. **Frontmatter Complete**?
- All required fields present
- Meta description 150-160 chars
- Valid date format
2. **Content Quality**?
- Proper heading hierarchy
- No broken links
- All images have alt text
- Code blocks properly formatted
3. **SEO Optimized**?
- Keyword in title and headings
- 3+ internal links
- Meta description compelling
- Readable paragraphs (<150 words)
4. **Spec Compliant**?
- Meets all `must_have` requirements
- Avoids all `must_avoid` patterns
- Follows brand voice guidelines
## Next Steps
After validation is complete:
### If All Checks Pass ✅
```bash
# Publish the article
# (copy to your CMS or commit to git)
```
### If Issues Found ⚠️ ❌
```bash
# Fix issues manually or use other commands
# If content needs rewriting:
/blog-marketing "topic-name" # Regenerate with fixes
# If SEO needs adjustment:
/blog-seo "topic-name" # Regenerate SEO brief
# After fixes, re-validate:
/blog-optimize "topic-name"
```
## When to Use This Command
Use `/blog-optimize` when you need to:
- ✅ **Before publishing**: Final quality check
- ✅ **After manual edits**: Validate that changes didn't break anything
- ✅ **Updating old articles**: Check compliance with current standards
- ✅ **Troubleshooting**: Identify specific issues in an article
- ✅ **Learning**: See what makes a quality article
**For full workflow**: `/blog-generate` includes optimization as final step (optional).
## Tips
1. **Run after each major edit**: Catch issues early
2. **Review validation scripts**: Learn what good quality means
3. **Keep constitution updated**: Validation reflects your current standards
4. **Fix critical first**: Warnings can wait, critical issues block publishing
5. **Use metrics to improve**: Track quality trends over time
## Requesting Re-validation
After fixing issues:
```bash
/blog-optimize "topic-name"
```
The agent will re-run all checks and show improvements:
```
Previous: ❌ 3 critical, ⚠️ 2 warnings
Current: ✅ All checks passed!
Improvements:
- Fixed missing frontmatter field ✅
- Added alt text to all images ✅
- Closed unclosed code block ✅
```
## Validation Script Cleanup
Scripts are generated in `/tmp/` and can be manually removed:
```bash
# List validation scripts
ls /tmp/validate-*.sh
# Remove all validation scripts
rm /tmp/validate-*.sh
# Or let OS auto-cleanup on reboot
```
## Error Handling
If validation fails:
- **Check article exists**: Verify path `articles/[topic].md`
- **Check constitution valid**: Run `bash scripts/validate-constitution.sh`
- **Review script output**: Check `/tmp/validate-*.sh` for errors
- **Try with simpler article**: Test validation on known-good article
---
**Ready to validate?** Provide the topic name and execute this command.
commands/blog-research.md Normal file
@@ -0,0 +1,155 @@
# Blog Research (ACTION)
Execute comprehensive research for blog article topic using the Research Intelligence specialist agent.
**100% ACTION**: This command generates an actionable article draft, not just a research report.
## Usage
```bash
/blog-research "Your article topic"
```
**Example**:
```bash
/blog-research "Implementing distributed tracing in Node.js with OpenTelemetry"
```
## What This Command Does
Delegates to the **research-intelligence** subagent to conduct deep, multi-source research AND generate article draft:
- Decomposes topic into 3-5 sub-questions
- Executes 5-7 targeted web searches
- Evaluates sources for credibility and relevance
- Cross-references findings
- Generates comprehensive research report with citations
- **Transforms findings into actionable article draft** (NEW)
**Time**: 15-20 minutes
**Outputs**:
- `.specify/research/[topic]-research.md` (research report)
- `articles/[topic]-draft.md` (article draft) ✅ **ACTIONABLE**
## Instructions
Create a new subagent conversation with the `research-intelligence` agent.
**Provide the following prompt**:
```
You are conducting deep research on the following topic for a blog article:
**Topic**: $ARGUMENTS
Follow your Four-Phase Process as documented in your agent instructions:
1. **Strategic Planning** (5-10 min):
- Decompose the topic into sub-questions
- Plan source strategy
- Define success criteria
2. **Autonomous Retrieval** (10-20 min):
- Execute targeted searches
- Evaluate and fetch sources
- Cross-reference findings
- Apply quality filters
3. **Synthesis** (5-10 min):
- Generate structured research report
- Include executive summary
- Document key findings with citations
- Note contradictions/debates
- Provide actionable insights
4. **Draft Generation** (10-15 min) ✅ NEW - ACTION PHASE:
- Transform research findings into article draft
- Create introduction with hook from research
- Structure 3-5 main sections based on sub-questions
- Integrate all findings into narrative
- Include 5-7 source citations
- Add concrete examples from sources
- Write key takeaways summary
- Target 1,500-2,000 words
**Output Locations**:
- Research report: `.specify/research/[SANITIZED-TOPIC]-research.md`
- Article draft: `articles/[SANITIZED-TOPIC]-draft.md` ✅ **ACTIONABLE**
**Sanitization Rules**:
- Convert to lowercase
- Replace spaces with hyphens
- Remove special characters
- Example: "Node.js Tracing" → "nodejs-tracing"
Begin your research now.
```
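A shell sketch of those sanitization rules, useful for reproducing filenames by hand (the agent applies the same logic internally):

```bash
# Reproduce the topic → filename sanitization
slugify() {
  echo "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed 's/[^a-z0-9 -]//g' \
    | tr -s ' ' '-'
}

slugify "Node.js Tracing"   # → nodejs-tracing
```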
## Expected Outputs
After completion, verify that **TWO files** exist:
### 1. Research Report (`.specify/research/[topic]-research.md`)
- ✅ Executive summary with key takeaways
- ✅ Findings organized by sub-questions
- ✅ Minimum 5-7 credible sources cited
- ✅ Evidence with proper attribution
- ✅ Contradictions or debates (if any)
- ✅ Actionable insights (3-5 points)
- ✅ References section with full citations
### 2. Article Draft (✅ **ACTIONABLE** - `articles/[topic]-draft.md`)
- ✅ Title and meta description
- ✅ Introduction with hook from research (stat/quote/trend)
- ✅ 3-5 main sections based on sub-questions
- ✅ All research findings integrated into narrative
- ✅ 5-7 source citations in References section
- ✅ Concrete examples from case studies/sources
- ✅ Key takeaways summary at end
- ✅ 1,500-2,000 words
- ✅ Frontmatter with status: "draft"
## Next Steps
After research completes:
1. **Review BOTH outputs**:
- Check research report quality and coverage
- **Review article draft** for accuracy and structure ✅
2. **Refine draft (optional)**:
- Edit `articles/[topic]-draft.md` manually if needed
- Or regenerate with more specific instructions
3. **Proceed to SEO**: Run `/blog-seo` to create content brief
4. **Or continue full workflow**: If this was part of `/blog-generate`, the orchestrator will proceed automatically
## When to Use This Command
Use `/blog-research` when you need to:
- Redo research with different focus
- Update research with newer sources
- Add depth to existing research
- Research only (without SEO/writing)
**For full workflow**: Use `/blog-generate` instead.
## Tips
1. **Be specific**: Detailed topics yield better research
2. **Check sources**: Review citations for quality
3. **Verify recency**: Ensure sources are recent (if topic is current)
4. **Note gaps**: If research misses something, you can request follow-up
## Error Handling
If research fails:
- Check if topic is clear and researchable
- Verify web search is available
- Try narrowing or broadening the topic
- Check `.specify/research/` directory exists
---
**Ready to start?** Provide your topic above and execute this command.
commands/blog-seo.md Normal file
@@ -0,0 +1,148 @@
# Blog SEO Optimization
Create SEO content brief based on completed research using the SEO Specialist agent.
## Usage
```bash
/blog-seo "topic-name"
```
**Example**:
```bash
/blog-seo "nodejs-tracing"
```
**Note**: Provide the sanitized topic name (same as used in research filename).
## Prerequisites
**Required**: Research report must exist at `.specify/research/[topic]-research.md`
If research doesn't exist, run `/blog-research` first.
## What This Command Does
Delegates to the **seo-specialist** subagent to create comprehensive SEO content brief:
- Extracts target keywords from research
- Analyzes search intent
- Creates content structure (H2/H3 outline)
- Generates 5-7 headline options
- Provides SEO recommendations
- Identifies internal linking opportunities
**Time**: 5-10 minutes
**Output**: `.specify/seo/[topic]-seo-brief.md`
## Instructions
Create a new subagent conversation with the `seo-specialist` agent.
**Provide the following prompt**:
```
You are creating an SEO content brief based on completed research.
**Research Report Path**: .specify/research/$ARGUMENTS-research.md
Read the research report and follow your Four-Phase Process:
1. **Keyword Analysis** (3-5 min):
- Extract keyword candidates from research
- Validate with web search (if available)
- Select 1 primary + 3-5 secondary keywords
- Identify 5-7 LSI keywords
2. **Search Intent Determination** (5-7 min):
- Analyze top-ranking articles (if WebSearch available)
- Classify intent (Informational/Navigational/Transactional)
- Determine content format
3. **Content Structure Creation** (7-10 min):
- Generate 5-7 headline options
- Create H2/H3 outline covering all research topics
- Write meta description (155 chars max)
- Identify internal linking opportunities
4. **SEO Recommendations** (3-5 min):
- Content length guidance
- Keyword density targets
- Image optimization suggestions
- Schema markup recommendations
- Featured snippet opportunities
**Output Location**: Save your SEO brief to `.specify/seo/$ARGUMENTS-seo-brief.md`
Begin your analysis now.
```
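Keyword density (Phase 4) can also be spot-checked mechanically once a draft exists. A rough sketch, assuming a single-word keyword and hypothetical paths (multi-word phrases need `grep -o` on the full phrase instead):

```bash
# Rough keyword-density estimate (assumption: single-word keyword)
ARTICLE="articles/nodejs-tracing-draft.md"
KEYWORD="tracing"

total=$(wc -w < "$ARTICLE")
hits=$(grep -oiw "$KEYWORD" "$ARTICLE" | wc -l)

# The brief sets the exact target range; 1-2% is a common rule of thumb
awk -v h="$hits" -v t="$total" \
  'BEGIN { printf "Density: %.2f%% (%d/%d words)\n", 100 * h / t, h, t }'
```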
## Expected Output
After completion, verify that `.specify/seo/[topic]-seo-brief.md` exists and contains:
- ✅ Target keywords (primary, secondary, LSI)
- ✅ Search intent classification
- ✅ 5-7 headline options with recommendation
- ✅ Complete content structure (H2/H3 outline)
- ✅ Meta description (under 155 characters)
- ✅ SEO recommendations (length, density, images, schema)
- ✅ Internal linking opportunities
- ✅ Competitor insights summary
## Review Checklist
Before proceeding to content creation, review:
1. **Keywords**: Are they appropriate for your goals?
2. **Headlines**: Do they resonate with your audience?
3. **Structure**: Does the H2/H3 outline make sense?
4. **Intent**: Does it match what you want to target?
5. **Length**: Is the target word count realistic?
## Next Steps
After SEO brief is approved:
1. **Proceed to writing**: Run `/blog-marketing` to create final article
2. **Or continue full workflow**: If this was part of `/blog-generate`, the orchestrator will proceed automatically
## When to Use This Command
Use `/blog-seo` when you need to:
- Regenerate SEO brief with different angle
- Update keywords for different target
- Adjust content structure
- Create brief only (without writing article)
**For full workflow**: Use `/blog-generate` instead.
## Tips
1. **Review headlines carefully**: They drive CTR and engagement
2. **Check structure depth**: Too shallow? Too deep?
3. **Validate intent**: Wrong intent = wrong audience
4. **Consider competition**: Can you realistically rank?
## Requesting Changes
If SEO brief needs adjustments, you can:
- Specify different primary keyword
- Request alternative headline approaches
- Adjust content structure (more/fewer sections)
- Change target word count
Just provide feedback and re-run the command with clarifications.
## Error Handling
If SEO analysis fails:
- Verify research report exists
- Check file path is correct
- Ensure research contains sufficient content
- Try providing more specific guidance
---
**Ready to start?** Provide the topic name (from research filename) and execute this command.
commands/blog-setup.md Normal file
@@ -0,0 +1,548 @@
# Blog Setup
Interactive setup wizard to create blog constitution (`.spec/blog.spec.json`).
## Usage
```bash
/blog-setup
```
This command creates a bash script in `/tmp/` and executes it interactively to gather your blog configuration.
## What It Does
1. Generates interactive setup script in `/tmp/blog-kit-setup-[timestamp].sh`
2. Prompts for blog configuration (name, context, tone, voice rules)
3. Creates `.spec/blog.spec.json` with your configuration
4. Validates JSON structure
5. Creates `CLAUDE.md` in content directory (documents constitution as source of truth)
6. Cleans up temporary script
## Instructions
Generate and execute the following bash script:
```bash
# Generate unique script name
SCRIPT="/tmp/blog-kit-setup-$(date +%s).sh"
# Create interactive setup script
cat > "$SCRIPT" <<'SCRIPT_EOF'
#!/bin/bash
# Blog Kit Setup Wizard
# ======================
clear
echo "╔════════════════════════════════════════╗"
echo "║ Blog Kit - Setup Wizard ║"
echo "╚════════════════════════════════════════╝"
echo ""
echo "This wizard will create .spec/blog.spec.json"
echo "with your blog configuration."
echo ""
# Prompt: Blog Name
echo "📝 Blog Configuration"
echo "─────────────────────"
read -p "Blog name: " blog_name
# Validate non-empty
while [ -z "$blog_name" ]; do
echo "❌ Blog name cannot be empty"
read -p "Blog name: " blog_name
done
# Prompt: Context
echo ""
read -p "Context (e.g., 'Tech blog for developers'): " context
while [ -z "$context" ]; do
echo "❌ Context cannot be empty"
read -p "Context: " context
done
# Prompt: Objective
echo ""
read -p "Objective (e.g., 'Generate qualified leads'): " objective
while [ -z "$objective" ]; do
echo "❌ Objective cannot be empty"
read -p "Objective: " objective
done
# Prompt: Tone
echo ""
echo "🎨 Select tone:"
echo " 1) Expert (technical, authoritative)"
echo " 2) Pédagogique (educational, patient)"
echo " 3) Convivial (friendly, casual)"
echo " 4) Corporate (professional, formal)"
read -p "Choice (1-4): " tone_choice
case $tone_choice in
1) tone="expert" ;;
2) tone="pédagogique" ;;
3) tone="convivial" ;;
4) tone="corporate" ;;
*)
echo "⚠️ Invalid choice, defaulting to 'pédagogique'"
tone="pédagogique"
;;
esac
# Prompt: Languages
echo ""
read -p "Languages (comma-separated, e.g., 'fr,en'): " languages
languages=${languages:-"fr"} # Default to fr if empty
# Prompt: Content Directory
echo ""
read -p "Content directory (default: articles): " content_dir
content_dir=${content_dir:-"articles"} # Default to articles if empty
# Prompt: Voice DO
echo ""
echo "✅ Voice guidelines - DO"
echo "What should your content be?"
echo "Examples: Clear, Actionable, Engaging, Technical, Data-driven"
read -p "DO (comma-separated): " voice_do
while [ -z "$voice_do" ]; do
echo "❌ Please provide at least one DO guideline"
read -p "DO (comma-separated): " voice_do
done
# Prompt: Voice DON'T
echo ""
echo "❌ Voice guidelines - DON'T"
echo "What should your content avoid?"
echo "Examples: Jargon, Vague claims, Salesy language, Passive voice"
read -p "DON'T (comma-separated): " voice_dont
while [ -z "$voice_dont" ]; do
echo "❌ Please provide at least one DON'T guideline"
read -p "DON'T (comma-separated): " voice_dont
done
# Generate JSON
echo ""
echo "📄 Generating configuration..."
# Create .spec directory
mkdir -p .spec
# Convert comma-separated strings to JSON arrays
# ([[:space:]] instead of \s: BSD/macOS sed does not support \s)
voice_do_json=$(echo "$voice_do" | sed 's/,[[:space:]]*/","/g' | sed 's/^/"/' | sed 's/$/"/')
voice_dont_json=$(echo "$voice_dont" | sed 's/,[[:space:]]*/","/g' | sed 's/^/"/' | sed 's/$/"/')
languages_json=$(echo "$languages" | sed 's/,[[:space:]]*/","/g' | sed 's/^/"/' | sed 's/$/"/')
# Generate timestamp
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
# Create JSON file
cat > .spec/blog.spec.json <<JSON_EOF
{
"version": "1.0.0",
"blog": {
"name": "$blog_name",
"context": "$context",
"objective": "$objective",
"tone": "$tone",
"languages": [$languages_json],
"content_directory": "$content_dir",
"brand_rules": {
"voice_do": [$voice_do_json],
"voice_dont": [$voice_dont_json]
}
},
"workflow": {
"review_rules": {
"must_have": [
"Executive summary",
"Source citations",
"Actionable insights"
],
"must_avoid": [
"Unsourced claims",
"Keyword stuffing",
"Vague recommendations"
]
}
},
"generated_at": "$timestamp"
}
JSON_EOF
# Validate JSON
echo ""
if command -v python3 >/dev/null 2>&1; then
if python3 -m json.tool .spec/blog.spec.json > /dev/null 2>&1; then
echo "✅ JSON validation passed"
else
echo "❌ JSON validation failed"
echo "Please check .spec/blog.spec.json manually"
exit 1
fi
else
echo "⚠️ python3 not found, skipping JSON validation"
echo " (Validation will happen when agents run)"
fi
# Generate CLAUDE.md in content directory
echo ""
echo "📄 Generating CLAUDE.md in content directory..."
# Create content directory if it doesn't exist
mkdir -p "$content_dir"
# Determine tone behavior based on selected tone
case $tone in
"expert")
tone_behavior="Technical depth, assumes reader knowledge, industry terminology"
;;
"pédagogique")
tone_behavior="Educational approach, step-by-step explanations, learning-focused"
;;
"convivial")
tone_behavior="Friendly and approachable, conversational style, personal touch"
;;
"corporate")
tone_behavior="Professional and formal, business-oriented, ROI-focused"
;;
esac
# Generate CLAUDE.md with constitution as source of truth
cat > "$content_dir/CLAUDE.md" <<CLAUDE_EOF
# Blog Content Directory
**Blog Name**: $blog_name
**Tone**: $tone
## Source of Truth: blog.spec.json
**IMPORTANT**: All content in this directory MUST follow \`.spec/blog.spec.json\` guidelines.
This file is your blog constitution - it defines:
- Voice and tone
- Brand rules (DO/DON'T)
- Content structure requirements
- Review and validation criteria
### Always Check Constitution First
Before creating or editing any article:
1. **Load Constitution**: \`cat .spec/blog.spec.json\`
2. **Verify tone matches**: $tone ($tone_behavior)
3. **Follow voice guidelines** (see below)
4. **Run validation**: \`/blog-optimize "lang/article-slug"\`
## Voice Guidelines (from Constitution)
### ✅ DO
$(echo "$voice_do" | sed 's/,\s*/\n- ✅ /g' | sed 's/^/- ✅ /')
### ❌ DON'T
$(echo "$voice_dont" | sed 's/,\s*/\n- ❌ /g' | sed 's/^/- ❌ /')
## Tone: $tone
**What this means**:
$tone_behavior
**How to apply**:
- Every article must reflect this tone consistently
- Use vocabulary and phrasing appropriate to this tone
- Maintain tone across all languages ($(echo "$languages" | sed 's/,/, /g'))
## Article Structure
Every article must include:
1. **Frontmatter** (YAML):
- title
- description
- date
- language
- tags/categories
2. **Executive Summary**:
- Key takeaways upfront
- Clear value proposition
3. **Main Content**:
- H2/H3 structured headings
- Code examples (for technical topics)
- Source citations (3-5 credible sources)
4. **Actionable Insights**:
- 3-5 specific recommendations
- Next steps for readers
5. **Images**:
- Descriptive alt text (SEO + accessibility)
- Optimized format (WebP recommended)
## Validation Workflow
**Before Publishing**:
\`\`\`bash
# Validate single article
/blog-optimize "lang/article-slug"
# Check translation coverage (if i18n)
/blog-translate "lang/article-slug" "target-lang"
# Optimize images
/blog-optimize-images "lang/article-slug"
\`\`\`
**Commands that Use Constitution**:
- \`/blog-generate\` - Generates content following constitution
- \`/blog-copywrite\` - Writes article using spec-kit + constitution
- \`/blog-optimize\` - Validates against constitution rules
- \`/blog-marketing\` - Creates marketing content with brand voice
## Updating Constitution
To update blog guidelines:
1. Edit \`.spec/blog.spec.json\` manually
2. Or run \`/blog-setup\` again (overwrites file)
3. Or run \`/blog-analyse\` to regenerate from existing content
**After updating constitution**:
- This CLAUDE.md file should be regenerated
- Validate existing articles: \`/blog-optimize\`
- Update voice guidelines as needed
## Important Notes
⚠️ **Never Deviate from Constitution**
All agents (research-intelligence, seo-specialist, marketing-specialist, etc.) are instructed to:
- Load \`.spec/blog.spec.json\` before generating content
- Apply voice_do/voice_dont guidelines strictly
- Match the specified tone: $tone
- Follow review_rules for validation
If constitution conflicts with a specific request, **constitution always wins**.
If you need different guidelines for a specific article, update the constitution first.
---
**Context**: $context
**Objective**: $objective
**Languages**: $(echo "$languages" | sed 's/,/, /g')
**Content Directory**: $content_dir
Generated by: \`/blog-setup\` command
Constitution: \`.spec/blog.spec.json\`
CLAUDE_EOF
echo "✅ CLAUDE.md created in $content_dir/"
# Success message
echo ""
echo "╔════════════════════════════════════════╗"
echo "║ ✅ Setup Complete! ║"
echo "╚════════════════════════════════════════╝"
echo ""
echo "Files created:"
echo " ✅ .spec/blog.spec.json (constitution)"
echo " ✅ $content_dir/CLAUDE.md (content guidelines)"
echo ""
echo "Your blog: $blog_name"
echo "Tone: $tone"
echo "Content directory: $content_dir"
echo "Voice DO: $voice_do"
echo "Voice DON'T: $voice_dont"
echo ""
echo "Next steps:"
echo " 1. Review .spec/blog.spec.json"
echo " 2. Check $content_dir/CLAUDE.md for content guidelines"
echo " 3. Generate your first article: /blog-generate \"Your topic\""
echo ""
echo "All agents will use blog.spec.json as source of truth! 🎨"
echo ""
SCRIPT_EOF
# Make script executable
chmod +x "$SCRIPT"
# Execute script
bash "$SCRIPT"
# Capture exit code
EXIT_CODE=$?
# Clean up
rm "$SCRIPT"
# Report result
if [ $EXIT_CODE -eq 0 ]; then
echo "✅ Blog constitution and content guidelines created successfully!"
echo ""
echo "View your configuration:"
echo " cat .spec/blog.spec.json # Constitution"
echo " cat [content-dir]/CLAUDE.md # Content guidelines"
else
echo "❌ Setup failed with exit code $EXIT_CODE"
exit $EXIT_CODE
fi
```
## Expected Output
After running `/blog-setup`, you'll have:
**File 1**: `.spec/blog.spec.json` (Constitution)
**Example content**:
```json
{
"version": "1.0.0",
"blog": {
"name": "Tech Insights",
"context": "Technical blog for software developers",
"objective": "Generate qualified leads and establish thought leadership",
"tone": "pédagogique",
"languages": ["fr", "en"],
"content_directory": "articles",
"brand_rules": {
"voice_do": [
"Clear",
"Actionable",
"Technical",
"Data-driven"
],
"voice_dont": [
"Jargon without explanation",
"Vague claims",
"Salesy language"
]
}
},
"workflow": {
"review_rules": {
"must_have": [
"Executive summary",
"Source citations",
"Actionable insights"
],
"must_avoid": [
"Unsourced claims",
"Keyword stuffing",
"Vague recommendations"
]
}
},
"generated_at": "2025-10-12T10:30:00Z"
}
```
**File 2**: `articles/CLAUDE.md` (Content Guidelines)
**Example content**:
````markdown
# Blog Content Directory
**Blog Name**: Tech Insights
**Tone**: pédagogique
## Source of Truth: blog.spec.json
**IMPORTANT**: All content in this directory MUST follow `.spec/blog.spec.json` guidelines.
### Always Check Constitution First
Before creating or editing any article:
1. **Load Constitution**: `cat .spec/blog.spec.json`
2. **Verify tone matches**: pédagogique (Educational approach, step-by-step explanations, learning-focused)
3. **Follow voice guidelines** (see below)
4. **Run validation**: `/blog-optimize "lang/article-slug"`
## Voice Guidelines (from Constitution)
### ✅ DO
- ✅ Clear
- ✅ Actionable
- ✅ Technical
- ✅ Data-driven
### ❌ DON'T
- ❌ Jargon without explanation
- ❌ Vague claims
- ❌ Salesy language
## Tone: pédagogique
**What this means**: Educational approach, step-by-step explanations, learning-focused
**How to apply**:
- Every article must reflect this tone consistently
- Use vocabulary and phrasing appropriate to this tone
- Maintain tone across all languages (fr, en)
## Article Structure
Every article must include:
1. **Frontmatter** (YAML): title, description, date, language, tags
2. **Executive Summary**: Key takeaways upfront
3. **Main Content**: H2/H3 structured headings, code examples, source citations
4. **Actionable Insights**: 3-5 specific recommendations
5. **Images**: Descriptive alt text (SEO + accessibility)
## Validation Workflow
**Before Publishing**:
```bash
/blog-optimize "lang/article-slug"
/blog-translate "lang/article-slug" "target-lang"
/blog-optimize-images "lang/article-slug"
```
**Commands that Use Constitution**:
- `/blog-generate` - Generates content following constitution
- `/blog-copywrite` - Writes article using spec-kit + constitution
- `/blog-optimize` - Validates against constitution rules
- `/blog-marketing` - Creates marketing content with brand voice
⚠️ **Never Deviate from Constitution**
All agents are instructed to load `.spec/blog.spec.json` and follow it strictly.
If constitution conflicts with a request, **constitution always wins**.
````
## What Happens Next
When you run `/blog-generate`, agents will automatically:
1. Check if `.spec/blog.spec.json` exists
2. Load brand rules (voice do/don't)
3. Apply your tone preference
4. Follow review rules
5. Generate content consistent with your brand
**No manual configuration needed!** ✨
## Updating Configuration
To update your configuration:
1. Edit `.spec/blog.spec.json` manually, or
2. Run `/blog-setup` again (overwrites existing file)
## Validation
The script validates JSON automatically if `python3` is available. If `python3` is missing, validation is skipped and any errors will surface when agents load the constitution.
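If you have `jq` installed, the same check works without python3:

```bash
# Exit code 0 means the constitution is valid JSON
jq empty .spec/blog.spec.json && echo "✅ JSON valid" || echo "❌ JSON invalid"
```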
## Tips
1. **Be specific with voice guidelines**: "Avoid jargon" → "Avoid jargon without explanation"
2. **Balance DO/DON'T**: Provide both positive and negative guidelines
3. **Test tone**: Generate a test article after setup to verify tone matches expectations
4. **Iterate**: Don't worry about perfection - you can edit `.spec/blog.spec.json` anytime
---
**Ready to set up your blog?** Run `/blog-setup` now!
commands/blog-translate.md Normal file
@@ -0,0 +1,608 @@
# Blog Translation & i18n Validation
Validate i18n structure consistency and translate articles across languages.
## Usage
### Validation Only (Structure Check)
```bash
/blog-translate
```
**What it does**:
- Scans `articles/` directory structure
- Validates against `.spec/blog.spec.json` languages
- Generates coverage report
- Identifies missing translations
**Output**: `/tmp/translation-report.md`
### Translate Specific Article
```bash
/blog-translate "source-lang/article-slug" "target-lang"
```
**Examples**:
```bash
# Translate English article to French
/blog-translate "en/nodejs-logging" "fr"
# Translate English to Spanish
/blog-translate "en/microservices-patterns" "es"
# Translate French to German
/blog-translate "fr/docker-basics" "de"
```
### Auto-Detect Source Language
```bash
/blog-translate "article-slug" "target-lang"
```
**Example**:
```bash
# Finds first available language for this slug
/blog-translate "nodejs-logging" "es"
```
## Prerequisites
**Required**:
- `.spec/blog.spec.json` with languages configured
- Source article exists in source language
- Target language configured in constitution
**Language Configuration**:
```json
{
"blog": {
"languages": ["en", "fr", "es", "de"]
}
}
```
## What This Command Does
### Mode 1: Structure Validation (No Arguments)
Delegates to **translator** agent (Phase 1 only):
1. **Load Constitution**: Extract configured languages
2. **Scan Structure**: Analyze `articles/` directory
3. **Generate Script**: Create validation script in `/tmp/`
4. **Execute Validation**: Run structure check
5. **Generate Report**: Coverage statistics + missing translations
**Time**: 2-5 minutes
**Output**: Detailed report showing language coverage
### Mode 2: Article Translation (With Arguments)
Delegates to **translator** agent (All Phases):
1. **Phase 1**: Validate structure + identify source
2. **Phase 2**: Load source article + extract context
3. **Phase 3**: Translate content preserving technical accuracy
4. **Phase 4**: Synchronize images
5. **Phase 5**: Validate + save translated article
**Time**: 10-20 minutes (depending on article length)
**Output**: Translated article in `articles/$TARGET_LANG/$SLUG/article.md`
## Instructions
Create a new subagent conversation with the `translator` agent.
### For Validation Only
**Provide the following prompt**:
```
You are validating the i18n structure for a multi-language blog.
**Task**: Structure validation only (Phase 1)
**Constitution**: .spec/blog.spec.json
Execute ONLY Phase 1 (Structure Analysis) from your instructions:
1. Load language configuration from .spec/blog.spec.json
2. Scan articles/ directory structure
3. Generate validation script in /tmp/validate-translations-$$.sh
4. Execute the validation script
5. Read and display /tmp/translation-report.md
**Important**:
- Generate ALL scripts in /tmp/ (non-destructive)
- Do NOT modify any article files
- Report coverage percentage
- List all missing translations
Display the complete translation report when finished.
```
### For Article Translation
**Provide the following prompt**:
```
You are translating a blog article from one language to another.
**Source Article**: articles/$SOURCE_LANG/$SLUG/article.md
**Target Language**: $TARGET_LANG
**Constitution**: .spec/blog.spec.json
Execute ALL phases (1-5) from your instructions:
**Phase 1**: Structure validation
- Verify source article exists
- Verify target language is configured
- Check if target already exists (backup if needed)
**Phase 2**: Translation preparation
- Load source article
- Extract frontmatter
- Identify technical terms to preserve
- Build translation context
**Phase 3**: Content translation
- Translate frontmatter (title, description, keywords)
- Translate headings (maintain H2/H3 structure)
- Translate body content
- Preserve code blocks unchanged
- Translate image alt text
- Add cross-language navigation links
**Phase 4**: Image synchronization
- Copy optimized images from source
- Copy .backup/ originals
- Verify all image references exist
**Phase 5**: Validation & output
- Save translated article to articles/$TARGET_LANG/$SLUG/article.md
- Generate translation summary
- Suggest next steps
**Translation Guidelines**:
- Preserve technical precision
- Keep code blocks identical
- Translate naturally (not literally)
- Maintain brand voice from constitution
- Adapt idioms culturally
- Update meta description (150-160 chars in target language)
**Important**:
- Create backup if target file exists
- Generate validation scripts in /tmp/
- Keep image filenames identical (don't translate)
- Translate image alt text for accessibility
- Add language navigation links
Display translation summary when complete.
```
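For Phase 2, the frontmatter has to be handled separately from the body. A minimal awk sketch of that split, assuming standard `---`-delimited YAML frontmatter and a hypothetical source path:

```bash
# Split an article into frontmatter and body for separate translation passes
SRC="articles/en/nodejs-logging/article.md"

awk '/^---$/ { n++; next } n == 1' "$SRC" > /tmp/frontmatter.yml
awk 'n == 2 { print } /^---$/ { n++ }' "$SRC" > /tmp/body.md
```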
## Expected Output
### Validation Report
After structure validation:
```markdown
# Translation Coverage Report
Generated: 2025-01-12 15:30:00
✅ Language directory exists: en
✅ Language directory exists: fr
❌ Missing language directory: es
## Article Coverage
### nodejs-logging
- **en**: 2,450 words
- **fr**: 2,380 words
- **es**: MISSING
### microservices-patterns
- **en**: 3,200 words
- **fr**: MISSING
- **es**: MISSING
## Summary
- **Total unique articles**: 2
- **Languages configured**: 3
- **Expected articles**: 6
- **Existing articles**: 3
- **Coverage**: 50%
## Missing Translations
- Translate **nodejs-logging** from `en` → `es`
- Translate **microservices-patterns** from `en` → `fr`
- Translate **microservices-patterns** from `en` → `es`
```
### Translation Summary
After article translation:
```markdown
# Translation Summary
**Article**: nodejs-logging
**Source**: en
**Target**: fr
**Date**: 2025-01-12 15:45:00
## Statistics
- **Source word count**: 2,450
- **Target word count**: 2,380
- **Images copied**: 3
- **Code blocks**: 12
- **Headings**: 15 (5 H2, 10 H3)
## Files Created
- articles/fr/nodejs-logging/article.md
- articles/fr/nodejs-logging/images/ (3 WebP files)
## Next Steps
1. Review translation for accuracy
2. Run quality optimization: `/blog-optimize "fr/nodejs-logging"`
3. Optimize images if needed: `/blog-optimize-images "fr/nodejs-logging"`
4. Add cross-language links to source article
## Cross-Language Navigation
Add to source article (en):
[Lire en français](/fr/nodejs-logging)
```
## Validation Script Example
The agent generates `/tmp/validate-translations-$$.sh`:
```bash
#!/bin/bash
SPEC_FILE=".spec/blog.spec.json"
ARTICLES_DIR="articles"
# Extract supported languages from spec
LANGUAGES=$(jq -r '.blog.languages[]' "$SPEC_FILE")
# Initialize report
echo "# Translation Coverage Report" > /tmp/translation-report.md
echo "Generated: $(date)" >> /tmp/translation-report.md
# Check each language exists
for lang in $LANGUAGES; do
if [ ! -d "$ARTICLES_DIR/$lang" ]; then
echo " Missing language directory: $lang"
mkdir -p "$ARTICLES_DIR/$lang"
else
echo " Language directory exists: $lang"
fi
done
# Build article slug list (union of all languages)
ALL_SLUGS=()
for lang in $LANGUAGES; do
if [ -d "$ARTICLES_DIR/$lang" ]; then
for article_dir in "$ARTICLES_DIR/$lang"/*; do
if [ -d "$article_dir" ]; then
slug=$(basename "$article_dir")
if [[ ! " ${ALL_SLUGS[@]} " =~ " ${slug} " ]]; then
ALL_SLUGS+=("$slug")
fi
fi
done
fi
done
# Check coverage for each slug
for slug in "${ALL_SLUGS[@]}"; do
echo "### $slug"
for lang in $LANGUAGES; do
article_path="$ARTICLES_DIR/$lang/$slug/article.md"
if [ -f "$article_path" ]; then
word_count=$(wc -w < "$article_path")
echo "- **$lang**: $word_count words"
else
echo "- **$lang**: MISSING"
fi
done
done
# Summary statistics
TOTAL_SLUGS=${#ALL_SLUGS[@]}
LANG_COUNT=$(echo "$LANGUAGES" | wc -w)
EXPECTED_TOTAL=$((TOTAL_SLUGS * LANG_COUNT))
# Calculate coverage
# ... (see full script in agent)
```
```
## Multi-Language Workflow
### 1. Initial Setup
```bash
# Create constitution with languages
cat > .spec/blog.spec.json <<'EOF'
{
"blog": {
"languages": ["en", "fr", "es"]
}
}
EOF
```
### 2. Create Original Article
```bash
# Write English article
/blog-copywrite "en/nodejs-logging"
```
### 3. Check Coverage
```bash
# Validate structure
/blog-translate
# Output shows:
# - nodejs-logging: en ✅, fr ❌, es ❌
```
### 4. Translate to Other Languages
```bash
# Translate to French
/blog-translate "en/nodejs-logging" "fr"
# Translate to Spanish
/blog-translate "en/nodejs-logging" "es"
```
### 5. Verify Complete Coverage
```bash
# Re-check structure
/blog-translate
# Output shows:
# - nodejs-logging: en ✅, fr ✅, es ✅
# - Coverage: 100%
```
### 6. Update Cross-Links
Manually add language navigation to each article:
```markdown
---
[Read in English](/en/nodejs-logging)
[Lire en français](/fr/nodejs-logging)
[Leer en español](/es/nodejs-logging)
---
```
## Translation Quality Tips
### Before Translation
1. **Finalize source**: Complete and review source article first
2. **Optimize images**: Run `/blog-optimize-images` on source
3. **SEO validation**: Run `/blog-optimize` on source
4. **Cross-references**: Ensure internal links work
### During Translation
1. **Technical accuracy**: Verify technical terms are correct
2. **Cultural adaptation**: Adapt examples and idioms
3. **SEO keywords**: Research target language keywords
4. **Natural flow**: Read translated text aloud
### After Translation
1. **Native review**: Have native speaker review
2. **Quality check**: Run `/blog-optimize` on translation
3. **Link verification**: Test all internal/external links
4. **Image check**: Verify all images load correctly
## Troubleshooting
### "Language not configured"
```bash
# Add language to constitution
# Edit .spec/blog.spec.json
{
"blog": {
"languages": ["en", "fr", "es", "de"] // Add your language
}
}
```
### "Source article not found"
```bash
# Verify source exists
ls articles/en/nodejs-logging/article.md
# If missing, create it first
/blog-copywrite "en/nodejs-logging"
```
### "Target already exists"
```bash
# Agent will offer options:
# 1. Overwrite (backup created in /tmp/)
# 2. Skip translation
# 3. Compare versions
# Manual backup if needed
cp articles/fr/nodejs-logging/article.md /tmp/backup-$(date +%s).md
```
### "jq: command not found"
```bash
# Install jq (JSON parser)
brew install jq # macOS
sudo apt-get install jq # Linux
choco install jq # Windows (Chocolatey)
```
### "Images not synchronized"
```bash
# Manually copy images
cp -r articles/en/nodejs-logging/images/* articles/fr/nodejs-logging/images/
# Or re-run optimization
/blog-optimize-images "fr/nodejs-logging"
```
## Performance Notes
### Validation Only
- **Time**: 2-5 minutes
- **Complexity**: O(n × m) where n = articles, m = languages
- **Token usage**: ~500 tokens
### Single Translation
- **Time**: 10-20 minutes
- **Complexity**: O(article_length)
- **Token usage**: ~5k-8k tokens (source + translation context)
### Batch Translation
- **Recommended**: Translate similar articles in sequence
- **Parallel**: Translations are independent (can run multiple agents)
## Integration with Other Commands
### Complete Workflow
```bash
# 1. Research (language-agnostic)
/blog-research "Node.js Logging Best Practices"
# 2. SEO (language-agnostic)
/blog-seo "Node.js Logging Best Practices"
# 3. Create English article
/blog-copywrite "en/nodejs-logging"
# 4. Optimize English
/blog-optimize "en/nodejs-logging"
/blog-optimize-images "en/nodejs-logging"
# 5. Check translation coverage
/blog-translate
# 6. Translate to French
/blog-translate "en/nodejs-logging" "fr"
# 7. Translate to Spanish
/blog-translate "en/nodejs-logging" "es"
# 8. Optimize translations
/blog-optimize "fr/nodejs-logging"
/blog-optimize "es/nodejs-logging"
# 9. Final coverage check
/blog-translate # Should show 100%
```
## Advanced Usage
### Selective Translation
Translate only high-priority articles:
```bash
# Check what needs translation
/blog-translate | grep "MISSING"
# Translate priority articles only
/blog-translate "en/top-article" "fr"
/blog-translate "en/top-article" "es"
```
### Update Existing Translations
When source article changes:
```bash
# 1. Update source
vim articles/en/nodejs-logging/article.md
# 2. Re-translate
/blog-translate "en/nodejs-logging" "fr" # Overwrites with backup
# 3. Review changes
diff /tmp/backup-*.md articles/fr/nodejs-logging/article.md
```
### Batch Translation Script
For translating all missing articles:
```bash
# Generate list of missing translations
/blog-translate > /tmp/coverage.txt
# Extract missing translations
grep "Translate.*→" /tmp/translation-report.md | while read line; do
# Extract slug and languages
# Run translations
# ...
done
```
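A fuller sketch of that loop, assuming the report lines keep the format shown earlier (Translate **slug** from `en` → `fr`); the slash command is invoked the same way the snippet above does:

```bash
#!/bin/bash
# Sketch: parse the coverage report and translate every missing article
/blog-translate > /dev/null          # refreshes /tmp/translation-report.md

grep 'Translate \*\*' /tmp/translation-report.md | while read -r line; do
  slug=$(echo "$line" | sed 's/.*Translate \*\*\([^*]*\)\*\*.*/\1/')
  src=$(echo "$line"  | sed 's/.*from `\([a-z-]*\)`.*/\1/')
  tgt=$(echo "$line"  | sed 's/.*→ `\([a-z-]*\)`.*/\1/')

  echo "Translating $slug: $src → $tgt"
  /blog-translate "$src/$slug" "$tgt"   # runs are independent; can be parallelized
done
```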
## Storage Considerations
### Translated Articles
```
articles/
├── en/nodejs-logging/article.md (2.5k words, source)
├── fr/nodejs-logging/article.md (2.4k words, translation)
└── es/nodejs-logging/article.md (2.6k words, translation)
```
### Shared Images
Images can be shared across languages (recommended):
```
articles/
└── en/nodejs-logging/images/
├── diagram.webp
└── screenshot.webp
articles/fr/nodejs-logging/article.md # References: ../en/nodejs-logging/images/diagram.webp
```
Or duplicated per language (isolated):
```
articles/
├── en/nodejs-logging/images/diagram.webp
├── fr/nodejs-logging/images/diagram.webp
└── es/nodejs-logging/images/diagram.webp
```
---
**Ready to translate?** Maintain a consistent multi-language blog with automated structure validation and quality-preserving translations.