# Template: Concept Explanation
**Purpose**: Structured template for explaining individual technical concepts clearly
**Based on**: Example-driven explanation pattern from BAIME guide
**Validated**: Multiple concepts in BAIME guide, ready for reuse
---
## When to Use This Template
**Use for**:
- Abstract technical concepts that need clarification
- Framework components or subsystems
- Design patterns or architectural concepts
- Any concept where "what" and "why" both matter
**Don't use for**:
- Simple definitions (use glossary format)
- Step-by-step instructions (use procedure template)
- API reference (use API docs format)
---
## Template Structure
```markdown
### [Concept Name]
**Definition**: [1-2 sentence explanation in plain language]
**Why it matters**: [Practical reason or benefit]
**Key characteristics**:
- [Characteristic 1]
- [Characteristic 2]
- [Characteristic 3]
**Example**:
```[language]
[Concrete example showing concept in action]
```
**Explanation**: [How example demonstrates concept]
**Related concepts**:
- [Related concept 1]: [How they relate]
- [Related concept 2]: [How they relate]
**Common misconceptions**:
- ❌ [Misconception] → ✅ [Correct understanding]
- ❌ [Misconception] → ✅ [Correct understanding]
**Further reading**: [Link to detailed reference]
```
---
## Section Guidelines
### Definition
- **Length**: 1-2 sentences maximum
- **Language**: Plain language, avoid jargon
- **Focus**: What it is, not what it does (that comes in "Why it matters")
- **Test**: Could a beginner understand this?
**Good example**:
> **Definition**: Progressive disclosure is a content structuring pattern that reveals complexity incrementally, starting simple and building to advanced topics.
**Bad example** (too technical):
> **Definition**: Progressive disclosure implements a hierarchical information architecture with lazy evaluation of cognitive load distribution across discretized complexity strata.
### Why It Matters
- **Length**: 1-2 sentences
- **Focus**: Practical benefit or problem solved
- **Avoid**: Vague statements like "improves quality"
- **Include**: Specific outcome or metric if possible
**Good example**:
> **Why it matters**: Prevents overwhelming new users while still providing depth for experts, increasing completion rates from 20% to 80%.
**Bad example** (vague):
> **Why it matters**: Makes documentation better and easier to use.
### Key Characteristics
- **Count**: 3-5 bullet points
- **Format**: Observable properties or behaviors
- **Purpose**: Help reader recognize the concept in the wild
- **Avoid**: Repeating definition
**Good example**:
> - Each layer is independently useful
> - Complexity increases gradually
> - Reader can stop at any layer and have learned something valuable
> - Clear boundaries between layers (headings, whitespace)
### Example
- **Type**: Concrete code, diagram, or scenario
- **Size**: Small enough to understand quickly (< 10 lines of code)
- **Relevance**: Directly demonstrates the concept
- **Completeness**: Should be runnable/usable if possible
**Good example**:
```markdown
# Quick Start (Layer 1)
Install and run:
```bash
npm install tool
tool --quick-start
```
# Advanced Configuration (Layer 2)
All options:
```bash
tool --config-file custom.yml --verbose --parallel 4
```
```
### Explanation
- **Length**: 1-3 sentences
- **Purpose**: Connect example back to concept definition
- **Format**: "Notice how [aspect of example] demonstrates [concept characteristic]"
**Good example**:
> **Explanation**: Notice how the Quick Start shows a single command with no options (Layer 1), while Advanced Configuration shows all available options (Layer 2). This demonstrates progressive disclosure—simple first, complexity later.
### Related Concepts
- **Count**: 2-4 related concepts
- **Format**: Concept name + relationship type
- **Purpose**: Help reader build mental model
- **Types**: "complements", "contrasts with", "builds on", "prerequisite for"
**Good example**:
> - Example-driven explanation: Complements progressive disclosure (each layer needs examples)
> - Reference documentation: Contrasts with progressive disclosure (optimized for lookup, not learning)
### Common Misconceptions
- **Count**: 2-3 most common misconceptions
- **Format**: ❌ [Wrong belief] → ✅ [Correct understanding]
- **Purpose**: Preemptively address confusion
- **Source**: User feedback or anticipated confusion
**Good example**:
> - ❌ "Progressive disclosure means hiding information" → ✅ All information is accessible, just organized by complexity level
> - ❌ "Quick start must include all features" → ✅ Quick start shows minimal viable path; features come later
---
## Variations
### Variation 1: Abstract Concept (No Code)
For concepts without code examples (design principles, methodologies):
```markdown
### [Concept Name]
**Definition**: [Plain language explanation]
**Why it matters**: [Practical benefit]
**In practice**:
- **Scenario**: [Describe situation]
- **Without concept**: [What happens without it]
- **With concept**: [What changes with it]
- **Outcome**: [Measurable result]
**Example**: [Story or scenario demonstrating concept]
**Related concepts**: [As above]
```
### Variation 2: Component/System
For explaining system components:
```markdown
### [Component Name]
**Purpose**: [What role it plays in system]
**Responsibilities**:
- [Responsibility 1]
- [Responsibility 2]
- [Responsibility 3]
**Interfaces**:
- **Inputs**: [What it receives]
- **Outputs**: [What it produces]
- **Dependencies**: [What it requires]
**Example usage**:
```[language]
[Code showing component in action]
```
**Related components**: [How it connects to other parts]
```
### Variation 3: Pattern
For design patterns:
```markdown
### [Pattern Name]
**Problem**: [What problem pattern solves]
**Solution**: [How pattern solves it]
**Structure**:
```
[Diagram or code structure]
```
**When to use**:
- ✅ [Use case 1]
- ✅ [Use case 2]
**When NOT to use**:
- ❌ [Anti-pattern 1]
- ❌ [Anti-pattern 2]
**Example**:
```[language]
[Pattern implementation]
```
**Trade-offs**:
- **Pros**: [Benefits]
- **Cons**: [Costs]
```
---
## Quality Checklist
Before publishing concept explanation:
**Clarity**:
- [ ] Definition is 1-2 sentences in plain language
- [ ] Example is concrete and understandable
- [ ] No undefined jargon
- [ ] Tested on someone unfamiliar with concept
**Completeness**:
- [ ] "Why it matters" answers "so what?"
- [ ] Key characteristics help identify concept
- [ ] Example directly demonstrates concept
- [ ] Related concepts provide context
**Accuracy**:
- [ ] Example is correct and runnable (if code)
- [ ] Characteristics are observable
- [ ] Misconceptions are actually common
- [ ] Links work and are relevant
**Usability**:
- [ ] Can be understood in < 2 minutes
- [ ] Example is copy-paste ready (if code)
- [ ] Structure follows template
- [ ] Visual hierarchy clear (headings, bold, lists)
---
## Example Usage
### Input: Need to explain "Dual-layer value functions"
**Step 1**: Apply template
```markdown
### Dual-Layer Value Functions
**Definition**: A quantitative evaluation framework that measures both domain-specific task quality (V_instance) and methodology transferability quality (V_meta) independently.
**Why it matters**: Enables objective assessment of both "did we solve the problem?" and "can others reuse our solution?", preventing local optimization where methodology works once but isn't reusable.
**Key characteristics**:
- Two independent value functions calculated each iteration
- Each function has 4 weighted components
- Both must reach ≥ 0.80 threshold for convergence
- Prevents premature convergence on either dimension alone
**Example**:
```
Iteration 0:
V_instance = 0.66 (documentation quality)
- Accuracy: 0.70
- Completeness: 0.60
- Usability: 0.65
- Maintainability: 0.70
V_meta = 0.36 (methodology quality)
- Completeness: 0.25 (no templates yet)
- Effectiveness: 0.35 (modest speedup)
- Reusability: 0.40 (patterns identified)
- Validation: 0.45 (metrics defined)
```
**Explanation**: Notice how V_instance (task quality) can be high while V_meta (methodology quality) is low. This prevents declaring "success" when documentation is good but methodology isn't reusable.
**Related concepts**:
- Convergence criteria: Uses dual-layer values to determine when iteration complete
- Value optimization: Mathematical framework underlying value functions
- Component scoring: Each value function breaks into 4 components
**Common misconceptions**:
- ❌ "Higher V_instance means methodology is good" → ✅ Need high V_meta for reusable methodology
- ❌ "V_meta is subjective" → ✅ Each component has concrete metrics (coverage %, transferability %)
```
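The arithmetic behind this example is just an equally weighted mean per layer, checked against the shared 0.80 threshold. A minimal Go sketch of that calculation (the `layerScore` helper is hypothetical and equal component weights are assumed):
```go
// Minimal sketch of the dual-layer calculation; layerScore is a hypothetical
// helper and equal component weights are assumed.
package main

import "fmt"

// layerScore averages equally weighted components of one layer.
func layerScore(components ...float64) float64 {
	sum := 0.0
	for _, c := range components {
		sum += c
	}
	return sum / float64(len(components))
}

func main() {
	// Iteration 0 components from the example above.
	vInstance := layerScore(0.70, 0.60, 0.65, 0.70) // accuracy, completeness, usability, maintainability
	vMeta := layerScore(0.25, 0.35, 0.40, 0.45)     // completeness, effectiveness, reusability, validation

	fmt.Printf("V_instance = %.2f, V_meta = %.2f\n", vInstance, vMeta)
	fmt.Println("converged:", vInstance >= 0.80 && vMeta >= 0.80) // both layers must pass the threshold
}
```
With the Iteration 0 components above this prints `V_instance = 0.66, V_meta = 0.36`, matching the example, and `converged: false`.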
**Step 2**: Review with checklist
**Step 3**: Test on unfamiliar reader
**Step 4**: Refine based on feedback
---
## Real Examples from BAIME Guide
### Example 1: OCA Cycle
```markdown
### OCA Cycle
**Definition**: Observe-Codify-Automate is an iterative framework for extracting empirical patterns from practice and converting them into automated checks.
**Why it matters**: Converts implicit knowledge into explicit, testable, automatable form—enabling methodology improvement at the same pace as software development.
**Key phases**:
- **Observe**: Collect empirical data about current practices
- **Codify**: Extract patterns and document methodologies
- **Automate**: Convert methodologies to automated checks
- **Evolve**: Apply methodology to itself
**Example**:
Observe: Analyze git history → Notice 80% of commits fix test failures
Codify: Pattern: "Run tests before committing"
Automate: Pre-commit hook that runs tests
Evolve: Apply OCA to improving the OCA process itself
```
✅ Follows template structure
✅ Clear definition + practical example
✅ Demonstrates concept through phases
### Example 2: Convergence Criteria
```markdown
### Convergence Criteria
**Definition**: Mathematical conditions that determine when methodology development iteration should stop, preventing both premature convergence and infinite iteration.
**Why it matters**: Provides objective "done" criteria instead of subjective judgment, typically converging in 3-7 iterations.
**Four criteria** (all must be met):
- System stable: No agent changes for 2+ iterations
- Dual threshold: V_instance ≥ 0.80 AND V_meta ≥ 0.80
- Objectives complete: All planned work finished
- Diminishing returns: ΔV < 0.02 for 2+ iterations
**Example**:
Iteration 5: V_i=0.81, V_m=0.82, no agent changes, ΔV=0.01
Iteration 6: V_i=0.82, V_m=0.83, no agent changes, ΔV=0.01
→ Converged ✅ (all criteria met)
```
✅ Clear multi-part concept
✅ Concrete example with thresholds
✅ Demonstrates decision logic
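Because each criterion is a simple predicate over the iteration history, the whole check fits in a short function. A minimal Go sketch (struct and function names are assumptions, not BAIME tooling; the iteration 4 values are inferred from the ΔV = 0.01 figures above):
```go
// Illustrative encoding of the four convergence criteria; the struct and
// function names are assumptions, not part of any BAIME tooling.
package main

import "fmt"

type iteration struct {
	vInstance, vMeta float64
	agentsChanged    bool
	objectivesDone   bool
}

// maxDelta returns the larger per-layer improvement between two iterations.
func maxDelta(prev, cur iteration) float64 {
	dI := cur.vInstance - prev.vInstance
	dM := cur.vMeta - prev.vMeta
	if dI > dM {
		return dI
	}
	return dM
}

// converged applies all four criteria to the last three recorded iterations.
func converged(h []iteration) bool {
	if len(h) < 3 {
		return false
	}
	a, b, c := h[len(h)-3], h[len(h)-2], h[len(h)-1]
	stable := !b.agentsChanged && !c.agentsChanged                // no agent changes for 2+ iterations
	threshold := c.vInstance >= 0.80 && c.vMeta >= 0.80           // dual threshold
	diminishing := maxDelta(a, b) < 0.02 && maxDelta(b, c) < 0.02 // ΔV < 0.02 for 2+ iterations
	return stable && threshold && c.objectivesDone && diminishing
}

func main() {
	// Iterations 4-6; iteration 4 values are inferred from the ΔV = 0.01 figures above.
	history := []iteration{
		{vInstance: 0.80, vMeta: 0.81, agentsChanged: false, objectivesDone: true},
		{vInstance: 0.81, vMeta: 0.82, agentsChanged: false, objectivesDone: true},
		{vInstance: 0.82, vMeta: 0.83, agentsChanged: false, objectivesDone: true},
	}
	fmt.Println("converged:", converged(history)) // true: all four criteria met
}
```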
---
## Validation
**Usage in BAIME guide**: 6 core concepts explained
- OCA Cycle
- Dual-layer value functions
- Convergence criteria
- Meta-agent
- Capabilities
- Agent specialization
**Pattern effectiveness**:
- ✅ Each concept has definition + example
- ✅ Clear "why it matters" for each
- ✅ Examples concrete and understandable
**Transferability**: High (applies to any concept explanation)
**Confidence**: Validated through multiple uses in same document
**Next validation**: Apply to concepts in different domain
---
## Related Templates
- [tutorial-structure.md](tutorial-structure.md) - Overall tutorial organization (uses concept explanations)
- [example-walkthrough.md](example-walkthrough.md) - Detailed examples (complements concept explanations)
---
**Status**: ✅ Ready for use | Validated in 1 context (6 concepts) | High confidence
**Maintenance**: Update based on user comprehension feedback

# Template: Example Walkthrough
**Purpose**: Structured template for creating end-to-end practical examples in documentation
**Based on**: Testing methodology example from BAIME guide
**Validated**: 1 use, ready for reuse
---
## When to Use This Template
**Use for**:
- End-to-end workflow demonstrations
- Real-world use case examples
- Tutorial practical sections
- "How do I accomplish X?" documentation
**Don't use for**:
- Code snippets (use inline examples)
- API reference examples (use API docs format)
- Concept explanations (use concept template)
- Quick tips (use list format)
---
## Template Structure
```markdown
## Practical Example: [Use Case Name]
**Scenario**: [1-2 sentence description of what we're accomplishing]
**Domain**: [Problem domain - testing, CI/CD, etc.]
**Time to complete**: [Estimate]
---
### Context
**Problem**: [What problem are we solving?]
**Goal**: [What we want to achieve]
**Starting state**:
- [Condition 1]
- [Condition 2]
- [Condition 3]
**Success criteria**:
- [Measurable outcome 1]
- [Measurable outcome 2]
---
### Prerequisites
**Required**:
- [Tool/knowledge 1]
- [Tool/knowledge 2]
**Files needed**:
- `[path/to/file]` - [Purpose]
**Setup**:
```bash
[Setup commands if needed]
```
---
### Workflow
#### Phase 1: [Phase Name]
**Objective**: [What this phase accomplishes]
**Step 1**: [Action]
[Explanation of what we're doing]
```[language]
[Code or command]
```
**Output**:
```
[Expected output]
```
**Why this matters**: [Reasoning]
**Step 2**: [Continue pattern]
**Phase 1 Result**: [What we have now]
---
#### Phase 2: [Phase Name]
[Repeat structure for 2-4 phases]
---
#### Phase 3: [Phase Name]
---
### Results
**Outcomes achieved**:
- ✅ [Outcome 1 with metric]
- ✅ [Outcome 2 with metric]
- ✅ [Outcome 3 with metric]
**Before and after comparison**:
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| [Metric 1] | [Value] | [Value] | [%/x] |
| [Metric 2] | [Value] | [Value] | [%/x] |
**Artifacts created**:
- `[file]` - [Description]
- `[file]` - [Description]
---
### Takeaways
**What we learned**:
1. [Insight 1]
2. [Insight 2]
3. [Insight 3]
**Key patterns observed**:
- [Pattern 1]
- [Pattern 2]
**Next steps**:
- [What to do next]
- [How to extend this example]
---
### Variations
**For different scenarios**:
**Scenario A**: [Variation description]
- Change: [What's different]
- Impact: [How it affects workflow]
**Scenario B**: [Another variation]
- Change: [What's different]
- Impact: [How it affects workflow]
---
### Troubleshooting
**Common issues in this example**:
**Issue 1**: [Problem]
- **Symptoms**: [How to recognize]
- **Cause**: [Why it happens]
- **Solution**: [How to fix]
**Issue 2**: [Continue pattern]
```
---
## Section Guidelines
### Scenario
- **Length**: 1-2 sentences
- **Specificity**: Concrete, not abstract ("Create testing strategy for Go project", not "Use BAIME for testing")
- **Appeal**: Should sound relevant to target audience
### Context
- **Problem statement**: Clear pain point
- **Starting state**: Observable conditions (can be verified)
- **Success criteria**: Measurable (coverage %, time, error rate, etc.)
### Workflow
- **Organization**: By logical phases (2-4 phases)
- **Detail level**: Sufficient to reproduce
- **Code blocks**: Runnable, copy-paste ready
- **Explanations**: "Why" not just "what"
### Results
- **Metrics**: Quantitative when possible
- **Comparison**: Before/after table
- **Artifacts**: List all files created
### Takeaways
- **Insights**: What was learned
- **Patterns**: What emerged from practice
- **Generalization**: How to apply elsewhere
---
## Quality Checklist
**Completeness**:
- [ ] All prerequisites listed
- [ ] Starting state clearly defined
- [ ] Success criteria measurable
- [ ] All phases documented
- [ ] Results quantified
- [ ] Artifacts listed
**Reproducibility**:
- [ ] Commands are copy-paste ready
- [ ] File paths are clear
- [ ] Setup instructions complete
- [ ] Expected outputs shown
- [ ] Tested on clean environment
**Clarity**:
- [ ] Each step has explanation
- [ ] "Why" provided for key decisions
- [ ] Phases logically organized
- [ ] Progression clear (what we have after each phase)
**Realism**:
- [ ] Based on real use case (not toy example)
- [ ] Complexity matches the real world (not oversimplified)
- [ ] Metrics are actual measurements (not estimates)
- [ ] Problems/challenges acknowledged
---
## Example: Testing Methodology Walkthrough
**Actual example from BAIME guide** (simplified):
```markdown
## Practical Example: Testing Methodology
**Scenario**: Developing systematic testing strategy for Go project using BAIME
**Domain**: Software testing
**Time to complete**: 6-8 hours across 3-5 iterations
---
### Context
**Problem**: Ad-hoc testing approach, coverage at 60%, no systematic strategy
**Goal**: Reach 80%+ coverage with reusable testing patterns
**Starting state**:
- Go project with 10K lines
- 60% test coverage
- Mix of unit and integration tests
- No testing standards
**Success criteria**:
- Test coverage ≥ 80%
- Testing patterns documented
- Methodology transferable to other Go projects (≥70%)
---
### Workflow
#### Phase 1: Baseline (Iteration 0)
**Objective**: Measure current state and identify gaps
**Step 1**: Measure coverage
```bash
go test -cover ./...
# Output: coverage: 60.2% of statements
```
**Step 2**: Analyze test quality
- Found 15 untested edge cases
- Identified 3 patterns: table-driven, golden file, integration
**Phase 1 Result**: Baseline established (V_instance=0.40, V_meta=0.20)
---
#### Phase 2: Pattern Codification (Iterations 1-2)
**Objective**: Extract and document testing patterns
**Step 1**: Extract table-driven pattern
```go
// Pattern: Table-driven tests
func TestFunction(t *testing.T) {
tests := []struct {
name string
input int
want int
}{
{"zero", 0, 0},
{"positive", 5, 25},
{"negative", -3, 9},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := Function(tt.input)
if got != tt.want {
t.Errorf("got %v, want %v", got, tt.want)
}
})
}
}
```
**Step 2**: Document 8 testing patterns
**Step 3**: Create test templates
**Phase 2 Result**: Patterns documented, coverage at 72%
---
#### Phase 3: Automation (Iteration 3)
**Objective**: Automate pattern detection and enforcement
**Step 1**: Create coverage analyzer script
**Step 2**: Create test generator tool
**Step 3**: Add pre-commit hooks
**Phase 3 Result**: Coverage at 86%, automated quality gates
---
### Results
**Outcomes achieved**:
- ✅ Coverage: 60% → 86% (+26 percentage points)
- ✅ Methodology: 8 patterns, 3 tools, comprehensive guide
- ✅ Transferability: 89% to other Go projects
**Before and after comparison**:
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Coverage | 60% | 86% | +26 pp |
| Test generation time | 30 min | 2 min | 15x |
| Pattern consistency | Ad-hoc | Enforced | 100% |
**Artifacts created**:
- `docs/testing-strategy.md` - Complete methodology
- `scripts/coverage-analyzer.sh` - Coverage analysis tool
- `scripts/test-generator.sh` - Test template generator
- `patterns/*.md` - 8 testing patterns
---
### Takeaways
**What we learned**:
1. Table-driven tests are the most common pattern (60% of tests)
2. Coverage gaps are mostly in error-handling paths
3. Automation provides a 15x speedup over manual test writing
**Key patterns observed**:
- Progressive coverage improvement (60→72→86)
- Value convergence in 3 iterations (faster than expected)
- Patterns emerged from practice, not designed upfront
**Next steps**:
- Apply to other Go projects to validate 89% transferability claim
- Add mutation testing for quality validation
- Expand pattern library based on new use cases
```
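Phase 3 mentions a coverage analyzer and pre-commit hooks without showing them. As a hedged sketch of what such a quality gate might look like (the program and its parsing are illustrative, assuming the 80% target from the success criteria; it is not an artifact from the experiment):
```go
// Hypothetical coverage gate for the Phase 3 automation step: reads the
// output of `go tool cover -func=coverage.out` on stdin and fails when total
// coverage is below the target from the success criteria.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

const threshold = 80.0 // success criterion: coverage ≥ 80%

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// The summary line looks like: "total:  (statements)  86.0%"
		if len(fields) >= 3 && fields[0] == "total:" {
			pct, err := strconv.ParseFloat(strings.TrimSuffix(fields[len(fields)-1], "%"), 64)
			if err != nil {
				fmt.Fprintln(os.Stderr, "could not parse total coverage:", err)
				os.Exit(2)
			}
			if pct < threshold {
				fmt.Printf("FAIL: coverage %.1f%% is below %.1f%%\n", pct, threshold)
				os.Exit(1)
			}
			fmt.Printf("OK: coverage %.1f%% meets %.1f%%\n", pct, threshold)
			return
		}
	}
	fmt.Fprintln(os.Stderr, "no total coverage line found")
	os.Exit(2)
}
```
A pre-commit hook could then run something like `go test -coverprofile=coverage.out ./... && go tool cover -func=coverage.out | go run ./tools/covergate` (the path is hypothetical).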
---
## Variations
### Variation 1: Quick Example (< 5 min)
For simple, focused examples:
```markdown
## Example: [Task]
**Task**: [What we're doing]
**Steps**:
1. [Action]
```
[Code]
```
2. [Action]
```
[Code]
```
3. [Action]
```
[Code]
```
**Result**: [What we achieved]
```
### Variation 2: Comparison Example
When showing before/after or comparing approaches:
```markdown
## Example: [Comparison]
**Scenario**: [Context]
### Approach A: [Name]
[Implementation]
**Pros**: [Benefits]
**Cons**: [Drawbacks]
### Approach B: [Name]
[Implementation]
**Pros**: [Benefits]
**Cons**: [Drawbacks]
### Recommendation
[Which to use when]
```
### Variation 3: Error Recovery Example
For troubleshooting documentation:
```markdown
## Example: Recovering from [Error]
**Symptom**: [What user sees]
**Diagnosis**:
1. Check [aspect]
```
[Diagnostic command]
```
2. Verify [aspect]
**Solution**:
1. [Fix step]
```
[Fix command]
```
2. [Verification step]
**Prevention**: [How to avoid in future]
```
---
## Validation
**Usage**: 1 complete walkthrough (Testing Methodology in BAIME guide)
**Effectiveness**:
- ✅ Clear phases and progression
- ✅ Realistic (based on actual experiment)
- ✅ Quantified results (metrics, before/after)
- ✅ Reproducible (though conceptual, not literal)
**Gaps identified in Iteration 0**:
- ⚠️ Example was conceptual, not literally tested
- ⚠️ Should be more specific (actual commands, actual output)
**Improvements for next use**:
- Make example literally reproducible (test every command)
- Add troubleshooting section specific to example
- Include timing for each phase
---
## Related Templates
- [tutorial-structure.md](tutorial-structure.md) - Practical Example section uses this template
- [concept-explanation.md](concept-explanation.md) - Uses brief examples; walkthrough provides depth
---
**Status**: ✅ Ready for use | Validated in 1 context | Refinement needed for reproducibility
**Maintenance**: Update based on example effectiveness feedback

# Quick Reference Template
**Purpose**: Template for creating concise, scannable reference documentation (cheat sheets, command references, API quick guides)
**Version**: 1.0
**Status**: Ready for use
**Validation**: Applied to BAIME quick reference outline
---
## When to Use This Template
### Use For
- **Command-line tool references** (CLI commands, options, examples)
- **API quick guides** (endpoints, parameters, responses)
- **Configuration cheat sheets** (settings, values, defaults)
- **Keyboard shortcut guides** (shortcuts, actions, contexts)
- **Syntax references** (language syntax, operators, constructs)
- **Workflow checklists** (steps, validation, common patterns)
### Don't Use For
- **Comprehensive tutorials** (use tutorial-structure.md instead)
- **Conceptual explanations** (use concept-explanation.md instead)
- **Detailed troubleshooting** (use troubleshooting guide template)
- **Narrative documentation** (use example-walkthrough.md)
---
## Template Structure
### 1. Title and Scope
**Purpose**: Immediately communicate what this reference covers
**Structure**:
```markdown
# [Tool/API/Feature] Quick Reference
**Purpose**: [One sentence describing what this reference covers]
**Scope**: [What's included and what's not]
**Last Updated**: [Date]
```
**Example**:
```markdown
# BAIME Quick Reference
**Purpose**: Essential commands, patterns, and workflows for BAIME methodology development
**Scope**: Covers common operations, subagent invocations, value functions. See full tutorial for conceptual explanations.
**Last Updated**: 2025-10-19
```
---
### 2. At-A-Glance Summary
**Purpose**: Provide 10-second overview for users who already know basics
**Structure**:
```markdown
## At a Glance
**Core Workflow**:
1. [Step 1] - [What it does]
2. [Step 2] - [What it does]
3. [Step 3] - [What it does]
**Most Common Commands**:
- `[command]` - [Description]
- `[command]` - [Description]
**Key Concepts**:
- **[Concept]**: [One-sentence definition]
- **[Concept]**: [One-sentence definition]
```
**Example**:
```markdown
## At a Glance
**Core BAIME Workflow**:
1. Design iteration prompts - Define experiment structure
2. Execute Iteration 0 - Establish baseline
3. Iterate until convergence - Improve both layers
**Most Common Subagents**:
- `iteration-prompt-designer` - Create ITERATION-PROMPTS.md
- `iteration-executor` - Run OCA cycle iteration
- `knowledge-extractor` - Extract final methodology
**Key Metrics**:
- **V_instance ≥ 0.80**: Domain work quality
- **V_meta ≥ 0.80**: Methodology quality
```
---
### 3. Command Reference (for CLI/API tools)
**Purpose**: Provide exhaustive, scannable command list
**Structure**:
#### For CLI Tools
```markdown
## Command Reference
### [Command Category]
#### `[command] [options] [args]`
**Description**: [What this command does]
**Options**:
- `-a, --option-a` - [Description]
- `-b, --option-b VALUE` - [Description] (default: VALUE)
**Examples**:
```bash
# [Use case 1]
[command] [example]
# [Use case 2]
[command] [example]
```
**Common Patterns**:
- [Pattern description]: `[command pattern]`
```
#### For APIs
```markdown
## API Reference
### [Endpoint Category]
#### `[METHOD] /path/to/endpoint`
**Description**: [What this endpoint does]
**Parameters**:
| Name | Type | Required | Description |
|------|------|----------|-------------|
| param1 | string | Yes | [Description] |
| param2 | number | No | [Description] (default: value) |
**Request Example**:
```json
{
"param1": "value",
"param2": 42
}
```
**Response Example**:
```json
{
"status": "success",
"data": { ... }
}
```
**Error Codes**:
- `400` - [Error description]
- `404` - [Error description]
```
---
### 4. Pattern Reference
**Purpose**: Document common patterns and their usage
**Structure**:
```markdown
## Common Patterns
### Pattern: [Pattern Name]
**When to use**: [Situation where this pattern applies]
**Structure**:
```
[Pattern template or pseudocode]
```
**Example**:
```[language]
[Concrete example]
```
**Variations**:
- [Variation 1]: [When to use]
- [Variation 2]: [When to use]
```
**Example**:
```markdown
## Common Patterns
### Pattern: Value Function Calculation
**When to use**: End of each iteration, during evaluation phase
**Structure**:
```
V_component = (Metric1 + Metric2 + ... + MetricN) / N
V_layer = (Component1 + Component2 + ... + ComponentN) / N
```
**Example**:
```
V_instance = (Accuracy + Completeness + Usability + Maintainability) / 4
V_instance = (0.75 + 0.60 + 0.65 + 0.80) / 4 = 0.70
```
**Variations**:
- **Weighted average**: When components have different importance
- **Minimum threshold**: When any component below threshold fails entire layer
```
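A minimal Go sketch of the weighted-average variation (component order and weights are illustrative; equal weights reproduce the 0.70 example above):
```go
// Weighted-average variation of the value function pattern; component order
// and weights are illustrative.
package main

import "fmt"

// weightedLayer combines component scores with weights that sum to 1.0.
func weightedLayer(scores, weights []float64) float64 {
	v := 0.0
	for i, s := range scores {
		v += s * weights[i]
	}
	return v
}

func main() {
	scores := []float64{0.75, 0.60, 0.65, 0.80}  // accuracy, completeness, usability, maintainability
	weights := []float64{0.25, 0.25, 0.25, 0.25} // equal weights reproduce the 0.70 example above
	fmt.Printf("V_instance = %.2f\n", weightedLayer(scores, weights))
}
```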
---
### 5. Decision Trees / Flowcharts (Text-Based)
**Purpose**: Help users navigate choices
**Structure**:
```markdown
## Decision Guide: [What Decision]
**Question**: [Decision question]
**If [condition]**:
- Do: [Action]
- Why: [Rationale]
- Example: [Example]
**Else if [condition]**:
- Do: [Action]
- Why: [Rationale]
**Otherwise**:
- Do: [Action]
```
**Example**:
```markdown
## Decision Guide: When to Create Specialized Agent
**Question**: Should I create a specialized agent for this task?
**If ALL of these are true**:
- Task performed 3+ times with similar structure
- Generic approach struggled or was inefficient
- Can articulate specific agent improvements
- **Do**: Create specialized agent
- **Why**: Evidence shows insufficiency, pattern clear
- **Example**: test-generator after manual test writing 3x
**Else if task done 1-2 times only**:
- **Do**: Wait for more evidence
- **Why**: Insufficient pattern recurrence
**Otherwise (no clear benefit)**:
- **Do**: Continue with generic approach
- **Why**: Evolution requires evidence, not speculation
```
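In the spirit of the Automate phase, such a decision guide can also be encoded as a predicate. A minimal Go sketch (field and function names are assumptions, not BAIME tooling):
```go
// Illustrative encoding of the decision guide as a predicate; field and
// function names are assumptions, not BAIME tooling.
package main

import "fmt"

type taskEvidence struct {
	timesPerformed           int
	genericApproachStruggled bool
	improvementsArticulated  bool
}

func recommendation(e taskEvidence) string {
	switch {
	case e.timesPerformed >= 3 && e.genericApproachStruggled && e.improvementsArticulated:
		return "create specialized agent"
	case e.timesPerformed <= 2:
		return "wait for more evidence"
	default:
		return "continue with generic approach"
	}
}

func main() {
	e := taskEvidence{timesPerformed: 3, genericApproachStruggled: true, improvementsArticulated: true}
	fmt.Println(recommendation(e)) // "create specialized agent"
}
```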
---
### 6. Troubleshooting Quick Reference
**Purpose**: One-line solutions to common issues
**Structure**:
```markdown
## Quick Troubleshooting
| Problem | Quick Fix | Full Details |
|---------|-----------|--------------|
| [Symptom] | [Quick solution] | [Link to detailed guide] |
| [Symptom] | [Quick solution] | [Link to detailed guide] |
```
**Example**:
```markdown
## Quick Troubleshooting
| Problem | Quick Fix | Full Details |
|---------|-----------|--------------|
| Value scores not improving | Check if solving symptoms vs root causes | [Full troubleshooting](#troubleshooting) |
| Low V_meta Reusability | Parameterize patterns, add adaptation guides | [Full troubleshooting](#troubleshooting) |
| Iterations taking too long | Use specialized subagents, time-box templates | [Full troubleshooting](#troubleshooting) |
| Can't reach 0.80 threshold | Re-evaluate value function definitions | [Full troubleshooting](#troubleshooting) |
```
---
### 7. Configuration/Settings Reference
**Purpose**: Document all configurable options
**Structure**:
```markdown
## Configuration Reference
### [Configuration Category]
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `setting_name` | type | default | [What it does] |
| `setting_name` | type | default | [What it does] |
**Example Configuration**:
```[format]
[example config file]
```
```
**Example**:
```markdown
## Value Function Configuration
### Instance Layer Components
| Component | Weight | Range | Description |
|-----------|--------|-------|-------------|
| Accuracy | 0.25 | 0.0-1.0 | Technical correctness, factual accuracy |
| Completeness | 0.25 | 0.0-1.0 | Coverage of user needs, edge cases |
| Usability | 0.25 | 0.0-1.0 | Clarity, accessibility, examples |
| Maintainability | 0.25 | 0.0-1.0 | Modularity, consistency, automation |
**Example Calculation**:
```
V_instance = (0.75 + 0.60 + 0.65 + 0.80) / 4 = 0.70
```
```
---
### 8. Related Resources
**Purpose**: Point to related documentation
**Structure**:
```markdown
## Related Resources
**Deeper Learning**:
- [Tutorial Name](link) - [When to read]
- [Guide Name](link) - [When to read]
**Related References**:
- [Reference Name](link) - [What it covers]
**External Resources**:
- [Resource Name](link) - [Description]
```
---
## Quality Checklist
Before publishing, verify:
### Content Quality
- [ ] **Scannability**: Can user find information in <30 seconds?
- [ ] **Completeness**: All common commands/operations covered?
- [ ] **Examples**: Every command/pattern has concrete example?
- [ ] **Accuracy**: All commands/code tested and working?
- [ ] **Currency**: Information up-to-date with latest version?
### Structure Quality
- [ ] **At-a-glance section**: Provides 10-second overview?
- [ ] **Consistent formatting**: Tables, code blocks, headings uniform?
- [ ] **Cross-references**: Links to detailed docs where needed?
- [ ] **Navigation**: Easy to jump to specific section?
### User Experience
- [ ] **Target audience**: Assumes user knows basics, needs quick lookup?
- [ ] **No redundancy**: Information not duplicated from full docs?
- [ ] **Print-friendly**: Could be printed as 1-2 page reference?
- [ ] **Progressive disclosure**: Most common info first, advanced later?
### Maintainability
- [ ] **Version tracking**: Last updated date present?
- [ ] **Change tracking**: Version history documented?
- [ ] **Linked to source**: References to source of truth (API spec, etc)?
- [ ] **Update frequency**: Plan for keeping current?
---
## Adaptation Guide
### For Different Domains
**CLI Tools** (git, docker, etc):
- Focus on command syntax, options, examples
- Include common workflows (init → add → commit → push)
- Add troubleshooting for common errors
**APIs** (REST, GraphQL):
- Focus on endpoints, parameters, responses
- Include authentication examples
- Add rate limits, error codes
**Configuration** (yaml, json, env):
- Focus on settings, defaults, validation
- Include complete example config
- Add common configuration patterns
**Syntax** (programming languages):
- Focus on operators, keywords, constructs
- Include code examples for each construct
- Add "coming from X language" sections
### Length Guidelines
**Ideal length**: 1-3 printed pages (500-1500 words)
- Too short (<500 words): Probably missing common use cases
- Too long (>2000 words): Should be split or moved to full tutorial
**Balance**: 70% reference tables/lists, 30% explanatory text
---
## Examples of Good Quick References
### Example 1: Git Cheat Sheet
**Why it works**:
- Commands organized by workflow (init, stage, commit, branch)
- Each command has one-line description
- Common patterns shown (fork → clone → branch → PR)
- Fits on one page
### Example 2: Docker Quick Reference
**Why it works**:
- Separates basic commands from advanced
- Shows command anatomy (docker [options] command [args])
- Includes real-world examples
- Links to full documentation
### Example 3: Python String Methods Reference
**Why it works**:
- Alphabetical table of methods
- Each method shows signature and one example
- Indicates Python version compatibility
- Quick search via browser Ctrl+F
---
## Common Mistakes to Avoid
### ❌ Mistake 1: Too Much Explanation
**Problem**: Quick reference becomes mini-tutorial
**Bad**:
```markdown
## git commit
Git commit is an important command that saves your changes to the local repository.
Before committing, you should stage your changes with git add. Commits create a
snapshot of your work that you can return to later...
[3 more paragraphs]
```
**Good**:
```markdown
## git commit
`git commit -m "message"` - Save staged changes with message
Examples:
- `git commit -m "Add login feature"` - Basic commit
- `git commit -a -m "Fix bug"` - Stage and commit all tracked changes
- `git commit --amend` - Modify last commit
See: [Full Git Guide](link) for commit best practices
```
### ❌ Mistake 2: Missing Examples
**Problem**: Syntax shown but no concrete usage
**Bad**:
```markdown
## API Endpoint
`POST /api/users`
Parameters: name (string), email (string), age (number)
```
**Good**:
```markdown
## API Endpoint
`POST /api/users` - Create new user
Example Request:
```bash
curl -X POST https://api.example.com/api/users \
-H "Content-Type: application/json" \
-d '{"name": "Alice", "email": "alice@example.com", "age": 30}'
```
Example Response:
```json
{"id": 123, "name": "Alice", "email": "alice@example.com"}
```
```
### ❌ Mistake 3: Poor Organization
**Problem**: Commands in random order, no grouping
**Bad**:
- `docker ps`
- `docker build`
- `docker stop`
- `docker run`
- `docker images`
[Random order, hard to find]
**Good**:
**Image Commands**:
- `docker build` - Build image
- `docker images` - List images
**Container Commands**:
- `docker run` - Start container
- `docker ps` - List containers
- `docker stop` - Stop container
### ❌ Mistake 4: No Progressive Disclosure
**Problem**: Advanced features mixed with basics
**Bad**:
```markdown
## Commands
- ls - List files
- docker buildx create --use --platform=linux/arm64,linux/amd64
- cd directory - Change directory
- git rebase -i --autosquash --fork-point main
```
**Good**:
```markdown
## Basic Commands
- `ls` - List files
- `cd directory` - Change directory
## Advanced Commands
- `docker buildx create --use --platform=...` - Multi-platform builds
- `git rebase -i --autosquash` - Interactive rebase
```
---
## Template Variables
When creating quick reference, customize:
- `[Tool/API/Feature]` - Name of what's being referenced
- `[Command Category]` - Logical grouping of commands
- `[Method]` - HTTP method or operation type
- `[Parameter]` - Input parameter name
- `[Example]` - Concrete, runnable example
---
## Validation Checklist
Test your quick reference:
1. **Speed test**: Can experienced user find command in <30 seconds?
2. **Completeness test**: Are 80%+ of common operations covered?
3. **Example test**: Can user copy/paste examples and run successfully?
4. **Print test**: Is it useful when printed?
5. **Search test**: Can user Ctrl+F to find what they need?
**If any test fails, revise before publishing.**
---
## Version History
- **1.0** (2025-10-19): Initial template created from documentation methodology iteration 2
---
**Ready to use**: Apply this template to create scannable, efficient quick reference guides for any tool, API, or feature.

# Troubleshooting Guide Template
**Purpose**: Template for creating systematic troubleshooting documentation using Problem-Cause-Solution pattern
**Version**: 1.0
**Status**: Ready for use
**Validation**: Applied to BAIME troubleshooting section
---
## When to Use This Template
### Use For
- **Error diagnosis guides** (common errors, root causes, fixes)
- **Performance troubleshooting** (slow operations, bottlenecks, optimizations)
- **Configuration issues** (setup problems, misconfigurations, validation)
- **Integration problems** (API failures, connection issues, compatibility)
- **User workflow issues** (stuck states, unexpected behavior, workarounds)
- **Debug guides** (systematic debugging, diagnostic tools, log analysis)
### Don't Use For
- **FAQ** (use FAQ format for common questions)
- **Feature documentation** (use tutorial or reference)
- **Conceptual explanations** (use concept-explanation.md)
- **Step-by-step tutorials** (use tutorial-structure.md)
---
## Template Structure
### 1. Title and Scope
**Purpose**: Set expectations for what troubleshooting is covered
**Structure**:
```markdown
# Troubleshooting [System/Feature/Tool]
**Purpose**: Diagnose and resolve common issues with [system/feature]
**Scope**: Covers [what's included], see [other guide] for [what's excluded]
**Last Updated**: [Date]
## How to Use This Guide
1. Find your symptom in the issue list
2. Verify symptoms match your situation
3. Follow diagnosis steps to identify root cause
4. Apply recommended solution
5. If unresolved, see [escalation path]
```
**Example**:
```markdown
# Troubleshooting BAIME Methodology Development
**Purpose**: Diagnose and resolve common issues during BAIME experiments
**Scope**: Covers iteration execution, value scoring, convergence issues. See [BAIME Usage Guide] for workflow questions.
**Last Updated**: 2025-10-19
## How to Use This Guide
1. Find your symptom in the issue list below
2. Read the diagnosis section to identify root cause
3. Follow step-by-step solution
4. Verify fix worked by checking "Success Indicators"
5. If still stuck, see [Getting Help](#getting-help) section
```
---
### 2. Issue Index
**Purpose**: Help users quickly navigate to their problem
**Structure**:
```markdown
## Common Issues
**[Category 1]**:
- [Issue 1: Symptom summary](#issue-1-details)
- [Issue 2: Symptom summary](#issue-2-details)
**[Category 2]**:
- [Issue 3: Symptom summary](#issue-3-details)
- [Issue 4: Symptom summary](#issue-4-details)
**Quick Diagnosis**:
| If you see... | Likely issue | Jump to |
|---------------|--------------|---------|
| [Symptom] | [Issue name] | [Link] |
| [Symptom] | [Issue name] | [Link] |
```
**Example**:
```markdown
## Common Issues
**Iteration Execution Problems**:
- [Value scores not improving](#value-scores-not-improving)
- [Iterations taking too long](#iterations-taking-too-long)
- [Can't reach convergence](#cant-reach-convergence)
**Methodology Quality Issues**:
- [Low V_meta Reusability](#low-reusability)
- [Patterns not transferring](#patterns-not-transferring)
**Quick Diagnosis**:
| If you see... | Likely issue | Jump to |
|---------------|--------------|---------|
| V_instance/V_meta stuck or decreasing | Value scores not improving | [Link](#value-scores-not-improving) |
| V_meta Reusability < 0.60 | Patterns too project-specific | [Link](#low-reusability) |
| >7 iterations without convergence | Unrealistic targets or missing patterns | [Link](#cant-reach-convergence) |
```
---
### 3. Issue Template (Repeat for Each Issue)
**Purpose**: Systematic problem-diagnosis-solution structure
**Structure**:
```markdown
### Issue: [Issue Name]
#### Symptoms
**What you observe**:
- [Observable symptom 1]
- [Observable symptom 2]
- [Observable symptom 3]
**Example**:
```[format]
[Concrete example showing the problem]
```
**Not this issue if**:
- [Condition that rules out this issue]
- [Alternative explanation]
---
#### Diagnosis
**Root Causes** (one or more):
**Cause 1: [Root cause name]**
**How to verify**:
1. [Check step 1]
2. [Check step 2]
3. [Expected finding if this is the cause]
**Evidence**:
```[format]
[What evidence looks like for this cause]
```
**Cause 2: [Root cause name]**
[Same structure]
**Diagnostic Decision Tree**:
→ If [condition]: Likely Cause 1
→ Else if [condition]: Likely Cause 2
→ Otherwise: See [related issue]
---
#### Solutions
**Solution for Cause 1**:
**Step-by-step fix**:
1. [Action step 1]
```[language]
[Code or command if applicable]
```
2. [Action step 2]
3. [Action step 3]
**Why this works**: [Explanation of solution mechanism]
**Time estimate**: [How long solution takes]
**Success indicators**:
- ✅ [How to verify fix worked]
- ✅ [Expected outcome]
**If solution doesn't work**:
- Check [alternative cause]
- See [related issue]
---
**Solution for Cause 2**:
[Same structure]
---
#### Prevention
**How to avoid this issue**:
- [Preventive measure 1]
- [Preventive measure 2]
**Early warning signs**:
- [Sign that issue is developing]
- [Metric to monitor]
**Best practices**:
- [Practice that prevents this issue]
---
#### Related Issues
- [Related issue 1] - [When to check]
- [Related issue 2] - [When to check]
**See also**:
- [Related documentation]
```
---
### 4. Full Example
```markdown
### Issue: Value Scores Not Improving
#### Symptoms
**What you observe**:
- V_instance or V_meta stuck across iterations (ΔV < 0.05)
- Value scores decreasing instead of increasing
- Multiple iterations (3+) without meaningful progress
**Example**:
```
Iteration 0: V_instance = 0.35, V_meta = 0.25
Iteration 1: V_instance = 0.37, V_meta = 0.28 (minimal Δ)
Iteration 2: V_instance = 0.34, V_meta = 0.30 (instance decreased!)
Iteration 3: V_instance = 0.36, V_meta = 0.31 (still stuck)
```
**Not this issue if**:
- Only 1-2 iterations completed (need more data)
- Scores are improving but slowly (ΔV = 0.05-0.10 is normal)
- Just hit temporary plateau (common at 0.60-0.70)
---
#### Diagnosis
**Root Causes**:
**Cause 1: Solving symptoms, not root problems**
**How to verify**:
1. Review problem identification from iteration-N.md "Problems" section
2. Check if problems describe symptoms (e.g., "low coverage") vs root causes (e.g., "no testing strategy")
3. Review solutions attempted - do they address why problem exists?
**Evidence**:
```markdown
❌ Symptom-based problem: "Test coverage is only 65%"
❌ Symptom-based solution: "Write more tests"
❌ Result: Coverage increased but tests brittle, V_instance stagnant
✅ Root-cause problem: "No systematic testing strategy"
✅ Root-cause solution: "Create TDD workflow, extract test patterns"
✅ Result: Better tests, sustainable coverage, V_instance improved
```
**Cause 2: Strategy not evidence-based**
**How to verify**:
1. Check if iteration-N-strategy.md references data artifacts
2. Look for phrases like "seems like", "probably", "might" (speculation)
3. Verify each planned improvement has supporting evidence
**Evidence**:
```markdown
❌ Speculative strategy: "Let's add integration tests because they seem useful"
❌ No supporting data
✅ Evidence-based strategy: "Data shows 80% of bugs in API layer (see data/bug-analysis.md), prioritize API tests"
✅ Clear data reference
```
**Cause 3: Scope too broad**
**How to verify**:
1. Count problems being addressed in current iteration
2. Check if all problems fully solved vs partially addressed
3. Review time spent per problem
**Evidence**:
```markdown
❌ Iteration 2 plan: Fix 7 problems (coverage, CI/CD, docs, errors, deps, perf, security)
❌ Result: All partially done, none complete, scores barely moved
✅ Iteration 2 plan: Fix top 2 problems (test strategy + coverage analysis)
✅ Result: Both fully solved, V_instance +0.15
```
**Diagnostic Decision Tree**:
→ If problem statements describe symptoms: Cause 1 (symptoms not root causes)
→ Else if strategy lacks data references: Cause 2 (not evidence-based)
→ Else if >4 problems in iteration plan: Cause 3 (scope too broad)
→ Otherwise: Check value function definitions (may be miscalibrated)
---
#### Solutions
**Solution for Cause 1: Root Cause Analysis**
**Step-by-step fix**:
1. **For each identified problem, ask "Why?" 3 times**:
```
Problem: "Test coverage is low"
Why? → "We don't have enough tests"
Why? → "Writing tests is slow and unclear"
Why? → "No systematic testing strategy or patterns"
✅ Root cause: "No testing strategy"
```
2. **Reframe problems as root causes**:
- Before: "Coverage is 65%" (symptom)
- After: "No systematic testing strategy prevents sustainable coverage" (root cause)
3. **Design solutions that address root causes**:
```markdown
Root cause: No testing strategy
Solution: Create TDD workflow, extract test patterns
Outcome: Strategy enables sustainable testing
```
4. **Update iteration-N.md "Problems" section with reframed problems**
**Why this works**: Addressing root causes creates sustainable improvement. Symptom fixes are temporary.
**Time estimate**: 30-60 minutes to reframe problems and redesign strategy
**Success indicators**:
- ✅ Problems describe "why" things aren't working, not just "what" is broken
- ✅ Solutions create systems/patterns that prevent problem recurrence
- ✅ Next iteration shows measurable V_instance/V_meta improvement (ΔV ≥ 0.10)
**If solution doesn't work**:
- Check if root cause analysis went deep enough (may need 5 "why"s instead of 3)
- Verify solutions actually address identified root cause
- See [Can't reach convergence](#cant-reach-convergence) if problem persists
---
**Solution for Cause 2: Evidence-Based Strategy**
**Step-by-step fix**:
1. **For each planned improvement, identify supporting evidence**:
```markdown
Planned: "Improve test coverage"
Evidence needed: "Which areas lack coverage? Why? What's the impact?"
```
2. **Collect data to support or refute each improvement**:
```bash
# Example: Collect coverage data
go test -coverprofile=coverage.out ./...
go tool cover -func=coverage.out | sort -k3 -n
# Document findings
echo "Analysis: 80% of uncovered code is in pkg/api/" > data/coverage-analysis.md
```
3. **Reference data artifacts in strategy**:
```markdown
Improvement: Prioritize API test coverage
Evidence: coverage-analysis.md shows 80% of gaps in pkg/api/
Expected impact: Coverage +15%, V_instance +0.10
```
4. **Review strategy.md - should have ≥2 data references per improvement**
**Why this works**: Evidence-based decisions have higher success rate than speculation.
**Time estimate**: 1-2 hours for data collection and analysis
**Success indicators**:
- ✅ iteration-N-strategy.md references data artifacts (≥2 per improvement)
- ✅ Can show "before" data that motivated improvement
- ✅ Improvements address measured gaps, not hypothetical issues
---
**Solution for Cause 3: Narrow Scope**
**Step-by-step fix**:
1. **List all identified problems with estimated impact**:
```markdown
Problems:
1. No testing strategy - Impact: +0.20 V_instance
2. Low coverage - Impact: +0.10 V_instance
3. No CI/CD - Impact: +0.05 V_instance
4. Docs incomplete - Impact: +0.03 V_instance
[7 more...]
```
2. **Sort by impact, select top 2-3**:
```markdown
Iteration N priorities:
1. Create testing strategy (+0.20 impact) ✅
2. Improve coverage (+0.10 impact) ✅
3. [Defer remaining 9 problems]
```
3. **Allocate time: 80% to top 2, 20% to #3**:
```
Testing strategy: 3 hours
Coverage improvement: 2 hours
Other: 1 hour
```
4. **Update iteration-N.md "Priorities" section with focused list**
**Why this works**: Better to solve 2 problems completely than 5 problems partially. Depth > breadth.
**Time estimate**: 15-30 minutes to prioritize and revise plan
**Success indicators**:
- ✅ Iteration plan addresses 2-3 problems maximum
- ✅ Each problem has 1+ hours allocated
- ✅ Problems are fully resolved (not partially addressed)
---
#### Prevention
**How to avoid this issue**:
- **Honest baseline assessment** (Iteration 0): Low scores are expected; they're measurement, not failure
- **Problem root cause analysis**: Always ask "why" 3-5 times
- **Evidence-driven planning**: Collect data before deciding what to fix
- **Narrow focus per iteration**: 2-3 high-impact problems, fully solved
**Early warning signs**:
- ΔV < 0.05 for first time (investigate immediately)
- Problem list growing instead of shrinking (scope creep)
- Strategy document lacks data references (speculation)
**Best practices**:
- Spend 20% of iteration time on data collection
- Document evidence in data/ artifacts
- Review previous iteration to understand what worked
- Prioritize ruthlessly (defer ≥50% of identified problems)
---
#### Related Issues
- [Can't reach convergence](#cant-reach-convergence) - If stuck after 7+ iterations
- [Iterations taking too long](#iterations-taking-too-long) - If time is constraint
- [Low V_meta Reusability](#low-reusability) - If methodology not transferring
**See also**:
- [BAIME Usage Guide: When value scores don't improve](../baime-usage.md#faq)
- [Evidence collection patterns](../patterns/evidence-collection.md)
```
---
## Quality Checklist
Before publishing, verify:
### Content Quality
- [ ] **Completeness**: All common issues covered?
- [ ] **Accuracy**: Solutions tested and verified working?
- [ ] **Diagnosis depth**: Root causes identified, not just symptoms?
- [ ] **Evidence**: Concrete examples for each symptom/cause/solution?
### Structure Quality
- [ ] **Issue index**: Easy to find relevant issue?
- [ ] **Consistent format**: All issues follow same structure?
- [ ] **Progressive detail**: Symptoms → Diagnosis → Solutions flow?
- [ ] **Cross-references**: Links to related issues and docs?
### Solution Quality
- [ ] **Actionable**: Step-by-step instructions clear?
- [ ] **Verifiable**: Success indicators defined?
- [ ] **Complete**: Handles "if doesn't work" scenarios?
- [ ] **Realistic**: Time estimates provided?
### User Experience
- [ ] **Quick navigation**: Can find issue in <1 minute?
- [ ] **Self-service**: Can solve without external help?
- [ ] **Escalation path**: Clear what to do if stuck?
- [ ] **Prevention guidance**: Helps avoid issue in future?
---
## Adaptation Guide
### For Different Domains
**Error Troubleshooting** (HTTP errors, exceptions):
- Focus on error codes, stack traces, log analysis
- Include common error messages verbatim
- Add debugging tool usage (debuggers, profilers)
**Performance Issues** (slow queries, memory leaks):
- Focus on metrics, profiling, bottleneck identification
- Include before/after performance data
- Add monitoring and alerting guidance
**Configuration Problems** (startup failures, invalid config):
- Focus on configuration validation, common misconfigurations
- Include example correct configs
- Add validation tools and commands
**Integration Issues** (API failures, auth problems):
- Focus on request/response analysis, credential validation
- Include curl/Postman examples
- Add network debugging tools
### Depth Guidelines
**Issue coverage**:
- **Essential**: Top 10 most common issues (80% of user problems)
- **Important**: Next 20 issues (15% of problems)
- **Reference**: Remaining issues (5% of problems)
**Solution depth**:
- **Common issues**: Full diagnosis + multiple solutions + examples
- **Rare issues**: Brief description + link to external resources
- **Edge cases**: Acknowledge existence + escalation path
---
## Common Mistakes to Avoid
### ❌ Mistake 1: Vague Symptoms
**Bad**:
```markdown
### Issue: Things aren't working
**Symptoms**: Tool doesn't work correctly
```
**Good**:
```markdown
### Issue: Build Fails with "Module not found" Error
**Symptoms**:
- Build command exits with error code 1
- Error message: "Error: Cannot find module './config'"
- Occurs after npm install, before npm start
```
### ❌ Mistake 2: Solutions Without Diagnosis
**Bad**:
```markdown
### Issue: Slow performance
**Solution**: Try turning it off and on again
```
**Good**:
```markdown
### Issue: Slow API Responses (>2s)
#### Diagnosis
**Cause: Database query N+1 problem**
- Check: Log shows 100+ queries per request
- Check: Each query takes <10ms but total >2s
- Evidence: ORM lazy loading on collection
#### Solution
1. Add eager loading: .include('relations')
2. Verify with query count (should be 2-3 queries)
```
### ❌ Mistake 3: Missing Success Indicators
**Bad**:
```markdown
### Solution
1. Run this command
2. Restart the server
3. Hope it works
```
**Good**:
```markdown
### Solution
1. Run: `npm cache clean --force`
2. Restart server: `npm start`
**Success indicators**:
- ✅ Server starts without errors
- ✅ Module found in node_modules/
- ✅ App loads at http://localhost:3000
```
---
## Template Variables
Customize these for your domain:
- `[System/Feature/Tool]` - What's being troubleshot
- `[Issue Name]` - Descriptive issue title
- `[Category]` - Logical grouping of issues
- `[Symptom]` - Observable problem
- `[Root Cause]` - Underlying reason
- `[Solution]` - Fix steps
- `[Time Estimate]` - How long fix takes
---
## Validation Checklist
Test your troubleshooting guide:
1. **Coverage test**: Are 80%+ of common issues documented?
2. **Navigation test**: Can user find their issue in <1 minute?
3. **Solution test**: Can user apply solution successfully?
4. **Completeness test**: Are all 4 sections (symptoms, diagnosis, solution, prevention) present for each issue?
5. **Accuracy test**: Have solutions been tested and verified?
**If any test fails, revise before publishing.**
---
## Version History
- **1.0** (2025-10-19): Initial template created from documentation methodology iteration 2
---
**Ready to use**: Apply this template to create systematic, effective troubleshooting documentation for any system or tool.

# Template: Tutorial Structure
**Purpose**: Structured template for creating comprehensive technical tutorials
**Based on**: Progressive disclosure pattern + BAIME usage guide
**Validated**: 1 use (BAIME guide), ready for reuse
---
## When to Use This Template
**Use for**:
- Complex frameworks or systems
- Topics requiring multiple levels of understanding
- Audiences with mixed expertise (beginners to experts)
- Topics where quick start is possible (< 10 min example)
**Don't use for**:
- Simple how-to guides (< 5 steps)
- API reference documentation
- Quick tips or cheat sheets
---
## Template Structure
```markdown
# [Topic Name]
**[One-sentence description]** - [Core value proposition]
---
## Table of Contents
- [What is [Topic]?](#what-is-topic)
- [When to Use [Topic]](#when-to-use-topic)
- [Prerequisites](#prerequisites)
- [Core Concepts](#core-concepts)
- [Quick Start](#quick-start)
- [Step-by-Step Workflow](#step-by-step-workflow)
- [Advanced Topics](#advanced-topics) (if applicable)
- [Practical Example](#practical-example)
- [Troubleshooting](#troubleshooting)
- [Next Steps](#next-steps)
---
## What is [Topic]?
[2-3 paragraphs explaining the topic]
**Paragraph 1**: Integration/components
- What methodologies/tools does it integrate?
- How do they work together?
**Paragraph 2**: Key innovation
- What problem does it solve?
- How is it different from alternatives?
**Paragraph 3** (optional): Proof points
- Results from real usage
- Examples of applications
### Why [Topic]?
**Problem**: [Describe the pain point]
**Solution**: [Topic] provides systematic approach with:
- ✅ [Benefit 1 with metric]
- ✅ [Benefit 2 with metric]
- ✅ [Benefit 3 with metric]
- ✅ [Benefit 4 with metric]
### [Topic] in Action
**Example Results**:
- **[Domain 1]**: [Metric], [Transferability]
- **[Domain 2]**: [Metric], [Transferability]
- **[Domain 3]**: [Metric], [Transferability]
---
## When to Use [Topic]
### Use [Topic] For
**[Category 1]** for:
- [Use case 1]
- [Use case 2]
- [Use case 3]
**When you need**:
- [Need 1]
- [Need 2]
- [Need 3]
### Don't Use [Topic] For
❌ [Anti-pattern 1]
❌ [Anti-pattern 2]
❌ [Anti-pattern 3]
---
## Prerequisites
### Required
1. **[Tool/knowledge 1]**
- [Installation/setup link]
- Verify: [How to check it's working]
2. **[Tool/knowledge 2]**
- [Setup instructions or reference]
3. **[Context requirement]**
- [What the reader needs to have]
- [How to measure current state]
### Recommended
- **[Optional tool/knowledge 1]**
- [Why it helps]
- [How to get it]
- **[Optional tool/knowledge 2]**
- [Why it helps]
- [Link to documentation]
---
## Core Concepts
**[Number] key concepts you need to understand**:
### 1. [Concept Name]
**Definition**: [1-2 sentence explanation]
**Why it matters**: [Practical reason]
**Example**:
```
[Code or conceptual example]
```
### 2. [Concept Name]
[Repeat structure]
### [3-6 total concepts]
---
## Quick Start
**Goal**: [What reader will accomplish] in 10 minutes
### Step 1: [Action]
[Brief instruction]
```bash
[Code block if applicable]
```
**Expected result**: [What should happen]
### Step 2: [Action]
[Continue for 3-5 steps maximum]
### Step 3: [Action]
### Step 4: [Action]
---
## Step-by-Step Workflow
**Complete guide** organized by phases or stages:
### Phase 1: [Phase Name]
**Purpose**: [What this phase accomplishes]
**Steps**:
1. **[Step name]**
- [Detailed instructions]
- **Why**: [Rationale]
- **Example**: [If applicable]
2. **[Step name]**
- [Continue pattern]
**Output**: [What you have after this phase]
### Phase 2: [Phase Name]
[Repeat structure for 2-4 phases]
### Phase 3: [Phase Name]
---
## [Advanced Topics] (Optional)
**For experienced users** who want to customize or extend:
### [Advanced Topic 1]
[Explanation]
### [Advanced Topic 2]
[Explanation]
---
## Practical Example
**Real-world walkthrough**: [Domain/use case]
### Context
[What problem we're solving]
### Setup
[Starting state]
### Execution
**Step 1**: [Action]
```
[Code/example]
```
**Result**: [Outcome]
**Step 2**: [Continue pattern]
### Outcome
[What we achieved]
[Metrics or concrete results]
---
## Troubleshooting
**Common issues and solutions**:
### Issue 1: [Problem description]
**Symptoms**:
- [Symptom 1]
- [Symptom 2]
**Cause**: [Root cause]
**Solution**:
```
[Fix or workaround]
```
### Issue 2: [Repeat structure for 5-7 common issues]
---
## Next Steps
**After mastering the basics**:
1. **[Next learning path]**
- [Link to advanced guide]
- [What you'll learn]
2. **[Complementary topic]**
- [Link to related documentation]
- [How it connects]
3. **[Community/support]**
- [Where to ask questions]
- [How to contribute]
**Further reading**:
- [Link 1]: [Description]
- [Link 2]: [Description]
- [Link 3]: [Description]
---
**Status**: [Version] | [Date] | [Maintenance status]
```
---
## Content Guidelines
### What is [Topic]? Section
- **Length**: 3-5 paragraphs
- **Tone**: Accessible, not overly technical
- **Include**: Problem statement, solution overview, proof points
- **Avoid**: Implementation details (save for later sections)
### Core Concepts Section
- **Count**: 3-6 concepts (7+ is too many)
- **Each concept**: Definition + why it matters + example
- **Order**: Most fundamental to most advanced
- **Examples**: Concrete, not abstract
### Quick Start Section
- **Time limit**: Must be completable in < 10 minutes
- **Steps**: 3-5 maximum
- **Complexity**: One happy path, no branching
- **Outcome**: Working example, not full understanding
### Step-by-Step Workflow Section
- **Organization**: By phases or logical groupings
- **Detail level**: Complete (all options, all decisions)
- **Examples**: Throughout, not just at end
- **Cross-references**: Link to concepts and troubleshooting
### Practical Example Section
- **Realism**: Based on actual use case, not toy example
- **Completeness**: End-to-end, showing all steps
- **Metrics**: Quantify outcomes when possible
- **Context**: Explain why this example matters
### Troubleshooting Section
- **Coverage**: 5-7 common issues
- **Structure**: Symptoms → Cause → Solution
- **Evidence**: Based on real problems (user feedback or anticipated)
- **Links**: Cross-reference to relevant sections
---
## Adaptation Guide
### For Simple Topics (< 5 concepts)
- **Omit**: Advanced Topics section
- **Combine**: Core Concepts + Quick Start
- **Simplify**: Step-by-Step Workflow (single section, not phases)
### For API Documentation
- **Omit**: Practical Example (use code examples instead)
- **Expand**: Core Concepts (one per major API concept)
- **Add**: API Reference section after Step-by-Step
### For Process Documentation
- **Omit**: Quick Start (processes don't always have quick paths)
- **Expand**: Step-by-Step Workflow (detailed process maps)
- **Add**: Decision trees for complex choices
---
## Quality Checklist
Before publishing, verify:
**Structure**:
- [ ] Table of contents present with working links
- [ ] All required sections present (What is, When to Use, Prerequisites, Core Concepts, Quick Start, Workflow, Example, Troubleshooting, Next Steps)
- [ ] Progressive disclosure (simple → complex)
- [ ] Clear section boundaries (headings, whitespace)
**Content**:
- [ ] Core concepts have examples (100%)
- [ ] Quick start is < 10 minutes
- [ ] Step-by-step workflow is complete (no "TBD" placeholders)
- [ ] Practical example is realistic and complete
- [ ] Troubleshooting covers 5+ issues
**Usability**:
- [ ] Links work (use validation tool)
- [ ] Code blocks have syntax highlighting
- [ ] Examples are copy-paste ready
- [ ] No broken forward references
**Accuracy**:
- [ ] Technical details verified (test examples)
- [ ] Metrics are current and accurate
- [ ] Links point to correct resources
- [ ] Prerequisites are complete and correct
---
## Example Usage
**Input**: Need to create tutorial for "API Design Methodology"
**Step 1**: Copy template
**Step 2**: Fill in topic-specific content
- What is API Design? → Explain methodology
- When to Use → API design scenarios
- Core Concepts → 5-6 API design principles
- Quick Start → Design first API in 10 min
- Workflow → Full design process
- Example → Real API design walkthrough
- Troubleshooting → Common API design problems
**Step 3**: Verify with checklist
**Step 4**: Validate links and examples
**Step 5**: Publish
---
## Validation
**First Use**: BAIME Usage Guide
- **Structure match**: 95% (omitted some optional sections)
- **Effectiveness**: Created comprehensive guide (V_instance = 0.66)
- **Learning**: Pattern worked well, validated structure
**Transferability**: Expected 90%+ (universal tutorial structure)
**Next Validation**: Apply to different domain (API docs, troubleshooting guide, etc.)
---
## Related Templates
- [concept-explanation.md](concept-explanation.md) - Template for explaining individual concepts
- [example-walkthrough.md](example-walkthrough.md) - Template for practical examples
- [progressive-disclosure pattern](../patterns/progressive-disclosure.md) - Underlying pattern
---
**Status**: ✅ Ready for use | Validated in 1 context | High confidence
**Maintenance**: Update based on usage feedback