gh-yaleh-meta-cc-claude/skills/methodology-bootstrapping/examples/iteration-structure-template.md
2025-11-30 09:07:22 +08:00


Iteration N: [Iteration Title]

Date: YYYY-MM-DD | Duration: ~X hours | Status: [In Progress / Completed] | Framework: BAIME (Bootstrapped AI Methodology Engineering)


1. Executive Summary

[2-3 paragraphs summarizing:]

  • Iteration focus and primary objectives
  • Key achievements and deliverables
  • Key learnings and insights
  • Value scores with gaps to target

Value Scores:

  • V_instance(s_N) = [X.XX] (Target: 0.80, Gap: [±X.XX])
  • V_meta(s_N) = [X.XX] (Target: 0.80, Gap: [±X.XX])

2. Pre-Execution Context

Previous State (s_{N-1}): From Iteration N-1

  • V_instance(s_{N-1}) = [X.XX] (Target: 0.80, Gap: [±X.XX])
    • [Component 1] = [X.XX]
    • [Component 2] = [X.XX]
    • [Component 3] = [X.XX]
    • [Component 4] = [X.XX]
  • V_meta(s_{N-1}) = [X.XX] (Target: 0.80, Gap: [±X.XX])
    • V_completeness = [X.XX]
    • V_effectiveness = [X.XX]
    • V_reusability = [X.XX]

Meta-Agent: M_{N-1} ([describe stability status, e.g., "M₀ stable, 5 capabilities"])

Agent Set: A_{N-1} = {[list agent names]} ([describe type, e.g., "generic agents" or "2 specialized"])

Primary Objectives:

  1. [Objective 1 with success indicator: ✅/⚠️/❌]
  2. [Objective 2 with success indicator: ✅/⚠️/❌]
  3. [Objective 3 with success indicator: ✅/⚠️/❌]
  4. [Objective 4 with success indicator: ✅/⚠️/❌]

3. Work Executed

Phase 1: OBSERVE - [Description] (~X min/hours)

Data Collection:

Analysis:

  • [Finding 1 Title]: [Detailed finding with data]
  • [Finding 2 Title]: [Detailed finding with data]
  • [Finding 3 Title]: [Detailed finding with data]

Gaps Identified:

  • [Gap area 1]: [Current state] → [Target state]
  • [Gap area 2]: [Current state] → [Target state]
  • [Gap area 3]: [Current state] → [Target state]

Phase 2: CODIFY - [Description] (~X min/hours)

Deliverable: [path/to/knowledge-file.md] ([X lines])

Content Structure:

Patterns Extracted:

  • [Pattern 1 Name]: [Description, applicability, benefits]
  • [Pattern 2 Name]: [Description, applicability, benefits]

Decision Made: [Key decision with rationale]

Rationale:

  • [Reason 1]
  • [Reason 2]
  • [Reason 3]

Phase 3: AUTOMATE - [Description] (~X min/hours)

Approach: [High-level approach description]

Changes Made:

  1. [Change Category 1]:

    • [Specific change 1a]
    • [Specific change 1b]
  2. [Change Category 2]:

    • [Specific change 2a]
    • [Specific change 2b]
  3. [Change Category 3]:

    // Example code changes
    // Before:
    [old code]
    
    // After:
    [new code]
    

Code Changes:

  • Modified: [file path] ([X lines changed], [description])
  • Created: [file path] ([X lines], [description])

Results:

Before: [metric]
After:  [metric]

Benefits:

  • [Benefit 1 with evidence]
  • [Benefit 2 with evidence]
  • [Benefit 3 with evidence]

Phase 4: EVALUATE - Calculate V(s_N) (~X min/hours)

Measurements:

  • [Metric 1]: [baseline value] → [final value] (change: [±X%])
  • [Metric 2]: [baseline value] → [final value] (change: [±X%])
  • [Metric 3]: [baseline value] → [final value] (change: [±X%])
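The "change: [±X%]" figures above are relative to the baseline; a minimal sketch of the arithmetic, with purely illustrative numbers in place of the template's placeholders:

```python
# Percent change for one before/after metric.
# The values below are illustrative; substitute the measured baseline/final.
baseline = 40.0   # e.g. baseline test coverage (%)
final = 52.0      # e.g. final test coverage (%)

change_pct = (final - baseline) / baseline * 100
print(f"{baseline} -> {final} (change: {change_pct:+.0f}%)")
```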

Why [Metric Changed/Didn't Change]:

  • [Reason 1]
  • [Reason 2]

4. Value Calculations

V_instance(s_N) Calculation

Formula:

V_instance(s) = [weight1]·[Component1] + [weight2]·[Component2] + [weight3]·[Component3] + [weight4]·[Component4]

Component 1: [Component Name]

Measurement:

Score: [X.XX] ([±X.XX from previous iteration])

Evidence:

  • [Concrete evidence 1 with data]
  • [Concrete evidence 2 with data]
  • [Concrete evidence 3 with data]

Component 2: [Component Name]

Measurement:

Score: [X.XX] ([±X.XX from previous iteration])

Evidence:

  • [Concrete evidence 1]
  • [Concrete evidence 2]

Component 3: [Component Name]

Measurement:

Score: [X.XX] ([±X.XX from previous iteration])

Evidence:

  • [Concrete evidence 1]

Component 4: [Component Name]

Measurement: [Description]

Score: [X.XX] ([±X.XX from previous iteration])

Evidence: [Concrete evidence]

V_instance(s_N) Final Calculation

V_instance(s_N) = [weight1]·([score1]) + [weight2]·([score2]) + [weight3]·([score3]) + [weight4]·([score4])
               = [term1] + [term2] + [term3] + [term4]
               = [sum]
               ≈ [X.XX]

V_instance(s_N) = [X.XX] (Target: 0.80, Gap: [±X.XX] or [±X]%)

Change from s_{N-1}: [±X.XX] ([±X]% improvement/decline)
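The weighted-sum calculation above can be sketched in a few lines; the weights and component scores here are illustrative placeholders, to be replaced by the iteration's actual values:

```python
# V_instance as a weighted sum of component scores.
# Both lists are hypothetical examples; weights must sum to 1.0.
weights = [0.30, 0.30, 0.20, 0.20]
scores = [0.85, 0.70, 0.60, 0.75]   # per-component scores in [0, 1]

assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1.0"

v_instance = sum(w * s for w, s in zip(weights, scores))
gap = 0.80 - v_instance             # positive gap means below target
print(f"V_instance = {v_instance:.2f} (target 0.80, gap {gap:+.2f})")
```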


V_meta(s_N) Calculation

Formula:

V_meta(s) = 0.40·V_completeness + 0.30·V_effectiveness + 0.30·V_reusability

Component 1: V_completeness (Methodology Documentation)

Checklist Progress ([X]/15 items):

  • Process steps documented
  • Decision criteria defined
  • Examples provided
  • Edge cases covered
  • Failure modes documented
  • Rationale explained
  • [Additional item 7]
  • [Additional item 8]
  • [Additional item 9]
  • [Additional item 10]
  • [Additional item 11]
  • [Additional item 12]
  • [Additional item 13]
  • [Additional item 14]
  • [Additional item 15]

Score: [X.XX] ([±X.XX from previous iteration])

Evidence:

  • [Evidence 1: document created, X lines]
  • [Evidence 2: patterns added]
  • [Evidence 3: examples provided]

Gap to 1.0: Still missing [X]/15 items

  • [Missing item 1]
  • [Missing item 2]
  • [Missing item 3]

Component 2: V_effectiveness (Practical Impact)

Measurement:

  • Time savings: [X hours for task] (vs [Y hours ad-hoc] → [Z]x speedup)
  • Pattern usage: [Describe how patterns were applied]
  • Quality improvement: [Metric] improved from [X] to [Y]
  • Speedup estimate: [Z]x faster than ad-hoc approach
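The speedup estimate above is the ratio of ad-hoc time to time with the methodology; a minimal sketch with hypothetical timings:

```python
# Speedup from using the documented methodology versus working ad hoc.
# Both durations are illustrative; substitute the measured hours.
ad_hoc_hours = 6.0
with_method_hours = 2.0

speedup = ad_hoc_hours / with_method_hours   # the "Zx speedup" figure
time_saved = ad_hoc_hours - with_method_hours
print(f"{speedup:.1f}x speedup, {time_saved:.1f} hours saved")
```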

Score: [X.XX] ([±X.XX from previous iteration])

Evidence:

  • [Evidence 1: time measurement]
  • [Evidence 2: quality improvement]
  • [Evidence 3: pattern effectiveness]

Gap to 0.80: [What's needed]

  • [Gap item 1]
  • [Gap item 2]

Component 3: V_reusability (Transferability)

Assessment: [Overall transferability assessment]

Score: [X.XX] ([±X.XX from previous iteration])

Evidence:

  • [Evidence 1: universal patterns identified]
  • [Evidence 2: language-agnostic concepts]
  • [Evidence 3: cross-domain applicability]

Transferability Estimate:

  • Same language ([language]): ~[X]% modification ([reason])
  • Similar language ([language] → [language]): ~[X]% modification ([reason])
  • Different paradigm ([language] → [language]): ~[X]% modification ([reason])

Gap to 0.80: [What's needed]

  • [Gap item 1]
  • [Gap item 2]

V_meta(s_N) Final Calculation

V_meta(s_N) = 0.40·([completeness]) + 0.30·([effectiveness]) + 0.30·([reusability])
           = [term1] + [term2] + [term3]
           = [sum]
           ≈ [X.XX]

V_meta(s_N) = [X.XX] (Target: 0.80, Gap: [±X.XX] or [±X]%)

Change from s_{N-1}: [±X.XX] ([±X]% improvement/decline)
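The V_meta calculation combines the fixed weights from the formula (0.40/0.30/0.30) with a completeness score derived from the 15-item checklist. A minimal sketch, with illustrative numbers for the checklist count and component scores:

```python
# V_meta = 0.40*V_completeness + 0.30*V_effectiveness + 0.30*V_reusability
CHECKLIST_TOTAL = 15
items_done = 9                       # hypothetical: 9 of 15 checklist items
v_completeness = items_done / CHECKLIST_TOTAL
v_effectiveness = 0.55               # hypothetical component score
v_reusability = 0.60                 # hypothetical component score

v_meta = 0.40 * v_completeness + 0.30 * v_effectiveness + 0.30 * v_reusability
print(f"V_meta = {v_meta:.2f} (target 0.80, gap {0.80 - v_meta:+.2f})")
```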


5. Gap Analysis

Instance Layer Gaps (ΔV = [±X.XX] to target)

Status: [Assessment, e.g., "🔄 MODERATE PROGRESS (X% of target)"]

Priority 1: [Gap Area] ([Component] = [X.XX], need [±X.XX])

  • [Action item 1]: [Details, expected impact]
  • [Action item 2]: [Details, expected impact]
  • [Action item 3]: [Details, expected impact]

Priority 2: [Gap Area] ([Component] = [X.XX], need [±X.XX])

  • [Action item 1]
  • [Action item 2]

Priority 3: [Gap Area] ([Component] = [X.XX], status)

  • [Action item 1]

Priority 4: [Gap Area] ([Component] = [X.XX], status)

  • [Assessment]

Estimated Work: [X] more iteration(s) to reach V_instance ≥ 0.80

Meta Layer Gaps (ΔV = [±X.XX] to target)

Status: [Assessment]

Priority 1: Completeness (V_completeness = [X.XX], need [±X.XX])

  • [Action item 1]
  • [Action item 2]
  • [Action item 3]

Priority 2: Effectiveness (V_effectiveness = [X.XX], need [±X.XX])

  • [Action item 1]
  • [Action item 2]
  • [Action item 3]

Priority 3: Reusability (V_reusability = [X.XX], need [±X.XX])

  • [Action item 1]
  • [Action item 2]
  • [Action item 3]

Estimated Work: [X] more iteration(s) to reach V_meta ≥ 0.80


6. Convergence Check

Criteria Assessment

Dual Threshold:

  • V_instance(s_N) ≥ 0.80: [✅ YES / ❌ NO] ([X.XX], gap: [±X.XX], [X]% of target)
  • V_meta(s_N) ≥ 0.80: [✅ YES / ❌ NO] ([X.XX], gap: [±X.XX], [X]% of target)

System Stability:

  • M_N == M_{N-1}: [✅ YES / ❌ NO] ([rationale, e.g., "M₀ stable, no evolution needed"])
  • A_N == A_{N-1}: [✅ YES / ❌ NO] ([rationale, e.g., "generic agents sufficient"])

Objectives Complete:

  • [Objective 1]: [✅ YES / ❌ NO] ([status])
  • [Objective 2]: [✅ YES / ❌ NO] ([status])
  • [Objective 3]: [✅ YES / ❌ NO] ([status])
  • [Objective 4]: [✅ YES / ❌ NO] ([status])

Diminishing Returns:

  • ΔV_instance = [±X.XX] ([assessment, e.g., "small but positive", "diminishing"])
  • ΔV_meta = [±X.XX] ([assessment])
  • [Overall assessment]
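The criteria above combine into a single status: convergence requires both value functions at or above the 0.80 threshold and both the meta-agent and agent set unchanged. A minimal sketch (function and argument names are illustrative, not part of the framework's API):

```python
def convergence_status(v_instance, v_meta, meta_agent_stable, agent_set_stable,
                       threshold=0.80):
    """Dual-threshold convergence check: both value functions must meet the
    target AND both the meta-agent and the agent set must be unchanged."""
    converged = (v_instance >= threshold and v_meta >= threshold
                 and meta_agent_stable and agent_set_stable)
    return "CONVERGED" if converged else "NOT CONVERGED"

# Example with hypothetical mid-run values: thresholds unmet, so not converged.
status = convergence_status(v_instance=0.72, v_meta=0.59,
                            meta_agent_stable=True, agent_set_stable=True)
print(status)
```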

Status: [✅ CONVERGED / ❌ NOT CONVERGED]

Reason:

  • [Detailed rationale for convergence decision]
  • [Supporting evidence 1]
  • [Supporting evidence 2]

Progress Trajectory:

  • Instance layer: [s0] → [s1] → [s2] → ... → [sN]
  • Meta layer: [s0] → [s1] → [s2] → ... → [sN]

Estimated Iterations to Convergence: [X] more iteration(s)

  • Iteration N+1: [Expected progress]
  • Iteration N+2: [Expected progress]
  • Iteration N+3: [Expected progress]

7. Evolution Decisions

Agent Evolution

Current Agent Set: A_N = [list agents, e.g., "A_{N-1}" if unchanged]

Sufficiency Analysis:

  • [✅/⚠️] [Agent 1 name]: [Performance assessment]
  • [✅/⚠️] [Agent 2 name]: [Performance assessment]
  • [✅/⚠️] [Agent 3 name]: [Performance assessment]

Decision: [✅ NO EVOLUTION NEEDED / ⚠️ EVOLUTION NEEDED]

Rationale:

  • [Reason 1]
  • [Reason 2]
  • [Reason 3]

If Evolution: [Describe new agent, rationale, expected improvement]

Re-evaluate: [When to reassess, e.g., "After Iteration N+1 if [condition]"]

Meta-Agent Evolution

Current Meta-Agent: M_N = [describe, e.g., "M_{N-1} (5 capabilities)"]

Sufficiency Analysis:

  • [✅/⚠️] [Capability 1]: [Effectiveness assessment]
  • [✅/⚠️] [Capability 2]: [Effectiveness assessment]
  • [✅/⚠️] [Capability 3]: [Effectiveness assessment]
  • [✅/⚠️] [Capability 4]: [Effectiveness assessment]
  • [✅/⚠️] [Capability 5]: [Effectiveness assessment]

Decision: [✅ NO EVOLUTION NEEDED / ⚠️ EVOLUTION NEEDED]

Rationale: [Detailed reasoning]

If Evolution: [Describe new capability, rationale, expected improvement]


8. Artifacts Created

Data Files

  • [path/to/data-file-1] - [Description, e.g., "Test coverage report (X%)"]
  • [path/to/data-file-2] - [Description]
  • [path/to/data-file-3] - [Description]

Knowledge Files

  • [path/to/knowledge-file-1] - [Description, e.g., "X lines, Pattern Y documented"]
  • [path/to/knowledge-file-2] - [Description]

Code Changes

  • Modified: [file path] ([X lines, description])
  • Created: [file path] ([X lines, description])
  • Deleted: [file path] ([reason])

Other Artifacts


9. Reflections

What Worked

  1. [Success 1 Title]: [Detailed description with evidence]
  2. [Success 2 Title]: [Detailed description with evidence]
  3. [Success 3 Title]: [Detailed description with evidence]
  4. [Success 4 Title]: [Detailed description with evidence]

What Didn't Work

  1. [Challenge 1 Title]: [Detailed description with root cause]
  2. [Challenge 2 Title]: [Detailed description with root cause]
  3. [Challenge 3 Title]: [Detailed description with root cause]

Learnings

  1. [Learning 1 Title]: [Insight gained, applicability]
  2. [Learning 2 Title]: [Insight gained, applicability]
  3. [Learning 3 Title]: [Insight gained, applicability]
  4. [Learning 4 Title]: [Insight gained, applicability]

Insights for Methodology

  1. [Insight 1 Title]: [Meta-level insight for methodology development]
  2. [Insight 2 Title]: [Meta-level insight for methodology development]
  3. [Insight 3 Title]: [Meta-level insight for methodology development]
  4. [Insight 4 Title]: [Meta-level insight for methodology development]

10. Conclusion

[Comprehensive summary paragraph covering:]

  • Overall iteration assessment
  • Key metrics and their changes
  • Critical decisions made and their rationale
  • Methodology development progress

Key Metrics:

  • [Metric 1]: [value] ([change], target: [target])
  • [Metric 2]: [value] ([change], target: [target])
  • [Metric 3]: [value] ([change], target: [target])

Value Functions:

  • V_instance(s_N) = [X.XX] ([X]% of target, [±X.XX] improvement)
  • V_meta(s_N) = [X.XX] ([X]% of target, [±X.XX] improvement - [±X]% growth)

Key Insight: [Main takeaway from this iteration in 1-2 sentences]

Critical Decision: [Most important decision made and its impact]

Next Steps: [What Iteration N+1 will focus on, expected outcomes]

Confidence: [Assessment of confidence in achieving next iteration goals, e.g., "High / Medium / Low" with reasoning]


Status: [Status indicator, e.g., "✅ [Achievement]" or "🔄 [In Progress]"]
Next: Iteration N+1 - [Focus Area]
Expected Duration: [X] hours