Initial commit
This commit is contained in:
394
skills/using-system-archaeologist/SKILL.md
Normal file
394
skills/using-system-archaeologist/SKILL.md
Normal file
@@ -0,0 +1,394 @@
|
||||
---
|
||||
name: using-system-archaeologist
|
||||
description: Analyzes existing codebases to generate comprehensive architecture documentation with C4 diagrams, subsystem catalogs, and architectural assessments. Use when users request architecture documentation, codebase analysis, system design docs, or mention "analyze codebase", "architecture diagrams", "document system". Employs systematic exploration with quality gates.
|
||||
---
|
||||
|
||||
# System Archaeologist - Codebase Architecture Analysis
|
||||
|
||||
## Overview
|
||||
|
||||
Analyze existing codebases through coordinated subagent exploration to produce comprehensive architecture documentation with C4 diagrams, subsystem catalogs, and architectural assessments.
|
||||
|
||||
**Core principle:** Systematic archaeological process with quality gates prevents rushed, incomplete analysis.
|
||||
|
||||
## When to Use
|
||||
|
||||
- User requests architecture documentation for existing codebase
|
||||
- Need to understand unfamiliar system architecture
|
||||
- Creating design docs for legacy systems
|
||||
- Analyzing codebases of any size (small to large)
|
||||
- User mentions: "analyze codebase", "architecture documentation", "system design", "generate diagrams"
|
||||
|
||||
## Mandatory Workflow
|
||||
|
||||
### Step 1: Create Workspace (NON-NEGOTIABLE)
|
||||
|
||||
**Before any analysis:**
|
||||
|
||||
```bash
|
||||
mkdir -p docs/arch-analysis-$(date +%Y-%m-%d-%H%M)/temp
|
||||
```
|
||||
|
||||
**Why this is mandatory:**
|
||||
- Organizes all analysis artifacts in one location
|
||||
- Enables subagent handoffs via shared documents
|
||||
- Provides audit trail of decisions
|
||||
- Prevents file scatter across project
|
||||
|
||||
**Common rationalization:** "This feels like overhead when I'm pressured"
|
||||
|
||||
**Reality:** 10 seconds to create workspace saves hours of file hunting and context loss.
|
||||
|
||||
### Step 2: Write Coordination Plan
|
||||
|
||||
**Immediately after workspace creation, write `00-coordination.md`:**
|
||||
|
||||
```markdown
|
||||
## Analysis Plan
|
||||
- Scope: [directories to analyze]
|
||||
- Strategy: [Sequential/Parallel with reasoning]
|
||||
- Time constraint: [if any, with scoping plan]
|
||||
- Complexity estimate: [Low/Medium/High]
|
||||
|
||||
## Execution Log
|
||||
- [timestamp] Created workspace
|
||||
- [timestamp] [Next action]
|
||||
```
|
||||
|
||||
**Why coordination logging is mandatory:**
|
||||
- Documents strategy decisions (why parallel vs sequential?)
|
||||
- Tracks what's been done vs what remains
|
||||
- Enables resumption if work is interrupted
|
||||
- Shows reasoning for future review
|
||||
|
||||
**Common rationalization:** "I'll just do the work, documentation is overhead"
|
||||
|
||||
**Reality:** Undocumented work is unreviewable and non-reproducible.
|
||||
|
||||
### Step 3: Holistic Assessment First
|
||||
|
||||
**Before diving into details, perform systematic scan:**
|
||||
|
||||
1. **Directory structure** - Map organization (feature? layer? domain?)
|
||||
2. **Entry points** - Find main files, API definitions, config
|
||||
3. **Technology stack** - Languages, frameworks, dependencies
|
||||
4. **Subsystem identification** - Identify 4-12 major cohesive groups
|
||||
|
||||
Write findings to `01-discovery-findings.md`
|
||||
|
||||
**Why holistic before detailed:**
|
||||
- Prevents getting lost in implementation details
|
||||
- Identifies parallelization opportunities
|
||||
- Establishes architectural boundaries
|
||||
- Informs orchestration strategy
|
||||
|
||||
**Common rationalization:** "I can see the structure, no need to document it formally"
|
||||
|
||||
**Reality:** What's obvious to you now is forgotten in 30 minutes.
|
||||
|
||||
### Step 4: Subagent Orchestration Strategy
|
||||
|
||||
**Decision point:** Sequential vs Parallel
|
||||
|
||||
**Use SEQUENTIAL when:**
|
||||
- Project < 5 subsystems
|
||||
- Subsystems have tight interdependencies
|
||||
- Quick analysis needed (< 1 hour)
|
||||
|
||||
**Use PARALLEL when:**
|
||||
- Project ≥ 5 independent subsystems
|
||||
- Large codebase (20K+ LOC, 10+ plugins/services)
|
||||
- Subsystems are loosely coupled
|
||||
|
||||
**Document decision in `00-coordination.md`:**
|
||||
|
||||
```markdown
|
||||
## Decision: Parallel Analysis
|
||||
- Reasoning: 14 independent plugins, loosely coupled
|
||||
- Strategy: Spawn 14 parallel subagents, one per plugin
|
||||
- Estimated time savings: 2 hours → 30 minutes
|
||||
```
|
||||
|
||||
**Common rationalization:** "Solo work is faster than coordination overhead"
|
||||
|
||||
**Reality:** For large systems, orchestration overhead (5 min) saves hours of sequential work.
|
||||
|
||||
### Step 5: Subagent Delegation Pattern
|
||||
|
||||
**When spawning subagents for analysis:**
|
||||
|
||||
Create task specification in `temp/task-[subagent-name].md`:
|
||||
|
||||
```markdown
|
||||
## Task: Analyze [specific scope]
|
||||
## Context
|
||||
- Workspace: docs/arch-analysis-YYYY-MM-DD-HHMM/
|
||||
- Read: 01-discovery-findings.md
|
||||
- Write to: 02-subsystem-catalog.md (append your section)
|
||||
|
||||
## Expected Output
|
||||
Follow contract in documentation-contracts.md:
|
||||
- Subsystem name, location, responsibility
|
||||
- Key components (3-5 files/classes)
|
||||
- Dependencies (inbound/outbound)
|
||||
- Patterns observed
|
||||
- Confidence level
|
||||
|
||||
## Validation Criteria
|
||||
- [ ] All contract sections complete
|
||||
- [ ] Confidence level marked
|
||||
- [ ] Dependencies bidirectional (if A depends on B, B shows A as inbound)
|
||||
```
|
||||
|
||||
**Why formal task specs:**
|
||||
- Subagents know exactly what to produce
|
||||
- Reduces back-and-forth clarification
|
||||
- Ensures contract compliance
|
||||
- Enables parallel work without conflicts
|
||||
|
||||
### Step 6: Validation Gates (MANDATORY)
|
||||
|
||||
**After EVERY major document is produced, validate before proceeding.**
|
||||
|
||||
**What "validation gate" means:**
|
||||
- Systematic check against contract requirements
|
||||
- Cross-document consistency verification
|
||||
- Quality gate before proceeding to next phase
|
||||
- NOT just "read it again" - use a checklist
|
||||
|
||||
**Two validation approaches:**
|
||||
|
||||
**A) Separate Validation Subagent (PREFERRED)**
|
||||
- Spawn dedicated validation subagent
|
||||
- Agent reads document + contract, produces validation report
|
||||
- Provides "fresh eyes" review
|
||||
- Use when: Time allows (5-10 min overhead), complex analysis, multiple subsystems
|
||||
|
||||
**B) Systematic Self-Validation (ACCEPTABLE)**
|
||||
- You validate against contract checklist systematically
|
||||
- Document your validation in coordination log
|
||||
- Use when: Tight time constraints (< 1 hour), simple analysis, solo work already
|
||||
- **MUST still be systematic** (not "looks good")
|
||||
|
||||
**Validation checklist (either approach):**
|
||||
- [ ] Contract compliance (all required sections present)
|
||||
- [ ] Cross-document consistency (subsystems in catalog match diagrams)
|
||||
- [ ] Confidence levels marked
|
||||
- [ ] No placeholder text ("[TODO]", "[Fill in]")
|
||||
- [ ] Dependencies bidirectional (A→B means B shows A as inbound)
|
||||
|
||||
**When using self-validation, document in coordination log:**
|
||||
|
||||
```markdown
|
||||
## Validation Decision - [timestamp]
|
||||
- Approach: Self-validation (time constraint: 1 hour deadline)
|
||||
- Documents validated: 02-subsystem-catalog.md
|
||||
- Checklist: Contract ✓, Consistency ✓, Confidence ✓, No placeholders ✓
|
||||
- Result: APPROVED for diagram generation
|
||||
```
|
||||
|
||||
**Validation status meanings:**
|
||||
- **APPROVED** → Proceed to next phase
|
||||
- **NEEDS_REVISION** (warnings) → Fix non-critical issues, document as tech debt, proceed
|
||||
- **NEEDS_REVISION** (critical) → BLOCK. Fix issues, re-validate. Max 2 retries, then escalate to user.
|
||||
|
||||
**Common rationalization:** "Validation slows me down"
|
||||
|
||||
**Reality:** Validation catches errors before they cascade. 2 minutes validating saves 20 minutes debugging diagrams generated from bad data.
|
||||
|
||||
**Common rationalization:** "I already checked it, validation is redundant"
|
||||
|
||||
**Reality:** "Checked it" ≠ "validated systematically against contract". Use the checklist.
|
||||
|
||||
### Step 7: Handle Validation Failures
|
||||
|
||||
**When validator returns NEEDS_REVISION with CRITICAL issues:**
|
||||
|
||||
1. **Read validation report** (temp/validation-*.md)
|
||||
2. **Identify specific issues** (not general "improve quality")
|
||||
3. **Spawn original subagent again** with fix instructions
|
||||
4. **Re-validate** after fix
|
||||
5. **Maximum 2 retries** - if still failing, escalate: "Having trouble with [X], need your input"
|
||||
|
||||
**DO NOT:**
|
||||
- Proceed to next phase despite BLOCK status
|
||||
- Make fixes yourself without re-spawning subagent
|
||||
- Rationalize "it's good enough"
|
||||
- Question validator authority ("validation is too strict")
|
||||
|
||||
**From baseline testing:** Agents WILL respect validation when it's clear and authoritative. Make validation clear and authoritative.
|
||||
|
||||
## Working Under Pressure
|
||||
|
||||
### Time Constraints Are Not Excuses to Skip Process
|
||||
|
||||
**Common scenario:** "I need this in 3 hours for a stakeholder meeting"
|
||||
|
||||
**WRONG response:** Skip workspace, skip validation, rush deliverables
|
||||
|
||||
**RIGHT response:** Scope appropriately while maintaining process
|
||||
|
||||
**Example scoping for 3-hour deadline:**
|
||||
|
||||
```markdown
|
||||
## Coordination Plan
|
||||
- Time constraint: 3 hours until stakeholder presentation
|
||||
- Strategy: SCOPED ANALYSIS with quality gates maintained
|
||||
- Timeline:
|
||||
- 0:00-0:05: Create workspace, write coordination plan (this)
|
||||
- 0:05-0:35: Holistic scan, identify all subsystems
|
||||
- 0:35-2:05: Focus on 3 highest-value subsystems (parallel analysis)
|
||||
- 2:05-2:35: Generate minimal viable diagrams (Context + Component only)
|
||||
- 2:35-2:50: Validate outputs
|
||||
- 2:50-3:00: Write executive summary with EXPLICIT limitations section
|
||||
|
||||
## Limitations Acknowledged
|
||||
- Only 3/14 subsystems analyzed in depth
|
||||
- No module-level dependency diagrams
|
||||
- Confidence: Medium (time-constrained analysis)
|
||||
- Recommend: Full analysis post-presentation
|
||||
```
|
||||
|
||||
**Key principle:** Scoped analysis with documented limitations > complete analysis done wrong.
|
||||
|
||||
### Handling Sunk Cost (Incomplete Prior Work)
|
||||
|
||||
**Common scenario:** "We started this analysis last week, finish it"
|
||||
|
||||
**Checklist:**
|
||||
1. **Find existing workspace** - Look in docs/arch-analysis-*/
|
||||
2. **Read coordination log** - Understand what was done and why stopped
|
||||
3. **Assess quality** - Is prior work correct or flawed?
|
||||
4. **Make explicit decision:**
|
||||
- **Prior work is good** → Continue from where it left off, update coordination log
|
||||
- **Prior work is flawed** → Archive old workspace, start fresh, document why
|
||||
- **Prior work is mixed** → Salvage good parts, redo bad parts, document decisions
|
||||
|
||||
**DO NOT assume prior work is correct just because it exists.**
|
||||
|
||||
**Update coordination log:**
|
||||
|
||||
```markdown
|
||||
## Incremental Work - [date]
|
||||
- Detected existing workspace from [prior date]
|
||||
- Assessment: [quality evaluation]
|
||||
- Decision: [continue/archive/salvage]
|
||||
- Reasoning: [why]
|
||||
```
|
||||
|
||||
## Common Rationalizations (RED FLAGS)
|
||||
|
||||
If you catch yourself thinking ANY of these, STOP:
|
||||
|
||||
| Excuse | Reality |
|
||||
|--------|---------|
|
||||
| "Time pressure makes trade-offs appropriate" | Process prevents rework. Skipping process costs MORE time. |
|
||||
| "This feels like overhead" | 5 minutes of structure saves hours of chaos. |
|
||||
| "Working solo is faster" | Solo works for small tasks. Orchestration scales for large systems. |
|
||||
| "I'll just write outputs directly" | Uncoordinated work creates inconsistent artifacts. |
|
||||
| "Validation slows me down" | Validation catches errors before they cascade. |
|
||||
| "I already checked it" | Self-review misses what fresh eyes catch. |
|
||||
| "I can't do this properly in [short time]" | You can do SCOPED analysis properly. Document limitations. |
|
||||
| "Rather than duplicate, I'll synthesize" | Existing docs ≠ systematic analysis. Do the work. |
|
||||
| "Architecture analysis doesn't need exhaustive review" | True. But it DOES need systematic method. |
|
||||
| "Meeting-ready outputs" justify shortcuts | Stakeholders deserve accurate info, not rushed guesses. |
|
||||
|
||||
**All of these mean:** Follow the process. It exists because these rationalizations lead to bad outcomes.
|
||||
|
||||
## Extreme Pressure Handling
|
||||
|
||||
**If user requests something genuinely impossible:**
|
||||
|
||||
- "Complete 15-plugin analysis with full diagrams in 1 hour"
|
||||
|
||||
**Provide scoped alternative:**
|
||||
|
||||
> "I can't do complete analysis of 15 plugins in 1 hour while maintaining quality. Here are realistic options:
|
||||
>
|
||||
> A) **Quick overview** (1 hour): Holistic scan, plugin inventory, high-level architecture diagram, documented limitations
|
||||
>
|
||||
> B) **Focused deep-dive** (1 hour): Pick 2-3 critical plugins, full analysis of those, others documented as "not analyzed"
|
||||
>
|
||||
> C) **Use existing docs** (15 min): Synthesize existing README.md, CLAUDE.md with quick verification
|
||||
>
|
||||
> D) **Reschedule** (recommended): Full systematic analysis takes 4-6 hours for this scale
|
||||
>
|
||||
> Which approach fits your needs?"
|
||||
|
||||
**DO NOT:** Refuse the task entirely. Provide realistic scoped alternatives.
|
||||
|
||||
## Documentation Contracts
|
||||
|
||||
See individual skill files for detailed contracts:
|
||||
- `01-discovery-findings.md` contract → [analyzing-unknown-codebases.md](analyzing-unknown-codebases.md)
|
||||
- `02-subsystem-catalog.md` contract → [analyzing-unknown-codebases.md](analyzing-unknown-codebases.md)
|
||||
- `03-diagrams.md` contract → [generating-architecture-diagrams.md](generating-architecture-diagrams.md)
|
||||
- `04-final-report.md` contract → [documenting-system-architecture.md](documenting-system-architecture.md)
|
||||
- Validation protocol → [validating-architecture-analysis.md](validating-architecture-analysis.md)
|
||||
|
||||
## Workflow Summary
|
||||
|
||||
```
|
||||
1. Create workspace (docs/arch-analysis-YYYY-MM-DD-HHMM/)
|
||||
2. Write coordination plan (00-coordination.md)
|
||||
3. Holistic assessment → 01-discovery-findings.md
|
||||
4. Decide: Sequential or Parallel? (document reasoning)
|
||||
5. Spawn subagents for analysis → 02-subsystem-catalog.md
|
||||
6. VALIDATE subsystem catalog (mandatory gate)
|
||||
7. Spawn diagram generation → 03-diagrams.md
|
||||
8. VALIDATE diagrams (mandatory gate)
|
||||
9. Synthesize final report → 04-final-report.md
|
||||
10. VALIDATE final report (mandatory gate)
|
||||
11. Provide cleanup recommendations for temp/
|
||||
```
|
||||
|
||||
**Every step is mandatory. No exceptions for time pressure, complexity, or stakeholder demands.**
|
||||
|
||||
## Success Criteria
|
||||
|
||||
**You have succeeded when:**
|
||||
- Workspace structure exists with all numbered documents
|
||||
- Coordination log documents all major decisions
|
||||
- All outputs passed validation gates
|
||||
- Subagent orchestration used appropriately for scale
|
||||
- Limitations explicitly documented if time-constrained
|
||||
- User receives navigable, validated architecture documentation
|
||||
|
||||
**You have failed when:**
|
||||
- Files scattered outside workspace
|
||||
- No coordination log showing decisions
|
||||
- Validation skipped "to save time"
|
||||
- Worked solo despite clear parallelization opportunity
|
||||
- Produced rushed outputs without limitation documentation
|
||||
- Rationalized shortcuts as "appropriate trade-offs"
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Skip workspace creation**
|
||||
"I'll just write files to project root"
|
||||
|
||||
**❌ No coordination logging**
|
||||
"I'll just do the work without documenting strategy"
|
||||
|
||||
**❌ Work solo despite scale**
|
||||
"Orchestration overhead isn't worth it"
|
||||
|
||||
**❌ Skip validation**
|
||||
"I already reviewed it myself"
|
||||
|
||||
**❌ Bypass BLOCK status**
|
||||
"The validation is too strict, I'll proceed anyway"
|
||||
|
||||
**❌ Complete refusal under pressure**
|
||||
"I can't do this properly in 3 hours, so I won't do it" (Should: Provide scoped alternative)
|
||||
|
||||
---
|
||||
|
||||
## System Archaeologist Specialist Skills
|
||||
|
||||
After routing, load the appropriate specialist skill for detailed guidance:
|
||||
|
||||
1. [analyzing-unknown-codebases.md](analyzing-unknown-codebases.md) - Systematic codebase exploration, subsystem identification, confidence-based analysis
|
||||
2. [generating-architecture-diagrams.md](generating-architecture-diagrams.md) - C4 diagrams, abstraction strategies, notation conventions
|
||||
3. [documenting-system-architecture.md](documenting-system-architecture.md) - Synthesis of catalogs and diagrams into comprehensive reports
|
||||
4. [validating-architecture-analysis.md](validating-architecture-analysis.md) - Contract validation, consistency checks, quality gates
|
||||
327
skills/using-system-archaeologist/analyzing-unknown-codebases.md
Normal file
327
skills/using-system-archaeologist/analyzing-unknown-codebases.md
Normal file
@@ -0,0 +1,327 @@
|
||||
|
||||
# Analyzing Unknown Codebases
|
||||
|
||||
## Purpose
|
||||
|
||||
Systematically analyze unfamiliar code to identify subsystems, components, dependencies, and architectural patterns. Produce catalog entries that follow EXACT output contracts.
|
||||
|
||||
## When to Use
|
||||
|
||||
- Coordinator delegates subsystem analysis task
|
||||
- Task specifies reading from workspace and appending to `02-subsystem-catalog.md`
|
||||
- You need to analyze code you haven't seen before
|
||||
- Output must integrate with downstream tooling (validation, diagram generation)
|
||||
|
||||
## Critical Principle: Contract Compliance
|
||||
|
||||
**Your analysis quality doesn't matter if you violate the output contract.**
|
||||
|
||||
**Common rationalization:** "I'll add helpful extra sections to improve clarity"
|
||||
|
||||
**Reality:** Extra sections break downstream tools. The coordinator expects EXACT format for parsing and validation. Your job is to follow the specification, not improve it.
|
||||
|
||||
## Output Contract (MANDATORY)
|
||||
|
||||
When writing to `02-subsystem-catalog.md`, append EXACTLY this format:
|
||||
|
||||
```markdown
|
||||
## [Subsystem Name]
|
||||
|
||||
**Location:** `path/to/subsystem/`
|
||||
|
||||
**Responsibility:** [One sentence describing what this subsystem does]
|
||||
|
||||
**Key Components:**
|
||||
- `file1.ext` - [Brief description]
|
||||
- `file2.ext` - [Brief description]
|
||||
- `file3.ext` - [Brief description]
|
||||
|
||||
**Dependencies:**
|
||||
- Inbound: [Subsystems that depend on this one]
|
||||
- Outbound: [Subsystems this one depends on]
|
||||
|
||||
**Patterns Observed:**
|
||||
- [Pattern 1]
|
||||
- [Pattern 2]
|
||||
|
||||
**Concerns:**
|
||||
- [Any issues, gaps, or technical debt observed]
|
||||
|
||||
**Confidence:** [High/Medium/Low] - [Brief reasoning]
|
||||
|
||||
```
|
||||
|
||||
**If no concerns exist, write:**
|
||||
```markdown
|
||||
**Concerns:**
|
||||
- None observed
|
||||
```
|
||||
|
||||
**CRITICAL COMPLIANCE RULES:**
|
||||
- ❌ Add extra sections ("Integration Points", "Recommendations", "Files", etc.)
|
||||
- ❌ Change section names or reorder them
|
||||
- ❌ Write to separate file (must append to `02-subsystem-catalog.md`)
|
||||
- ❌ Skip sections (include ALL sections - use "None observed" if empty)
|
||||
- ✅ Copy the template structure EXACTLY
|
||||
- ✅ Keep section order: Location → Responsibility → Key Components → Dependencies → Patterns → Concerns → Confidence
|
||||
|
||||
**Contract is specification, not minimum. Extra sections break downstream validation.**
|
||||
|
||||
### Example: Complete Compliant Entry
|
||||
|
||||
Here's what a correctly formatted entry looks like:
|
||||
|
||||
```markdown
|
||||
## Authentication Service
|
||||
|
||||
**Location:** `/src/services/auth/`
|
||||
|
||||
**Responsibility:** Handles user authentication, session management, and JWT token generation for API access.
|
||||
|
||||
**Key Components:**
|
||||
- `auth_handler.py` - Main authentication logic with login/logout endpoints (342 lines)
|
||||
- `token_manager.py` - JWT token generation and validation (156 lines)
|
||||
- `session_store.py` - Redis-backed session storage (98 lines)
|
||||
|
||||
**Dependencies:**
|
||||
- Inbound: API Gateway, User Service
|
||||
- Outbound: Database Layer, Cache Service, Logging Service
|
||||
|
||||
**Patterns Observed:**
|
||||
- Dependency injection for testability (all external services injected)
|
||||
- Token refresh pattern with sliding expiration
|
||||
- Audit logging for all authentication events
|
||||
|
||||
**Concerns:**
|
||||
- None observed
|
||||
|
||||
**Confidence:** High - Clear entry points, documented API, test coverage validates behavior
|
||||
|
||||
```
|
||||
|
||||
**This is EXACTLY what your output should look like.** No more, no less.
|
||||
|
||||
## Systematic Analysis Approach
|
||||
|
||||
### Step 1: Read Task Specification
|
||||
|
||||
Your task file (`temp/task-[name].md`) specifies:
|
||||
- What to analyze (scope: directories, plugins, services)
|
||||
- Where to read context (`01-discovery-findings.md`)
|
||||
- Where to write output (`02-subsystem-catalog.md` - append)
|
||||
- Expected format (the contract above)
|
||||
|
||||
**Read these files FIRST before analyzing code.**
|
||||
|
||||
### Step 2: Layered Exploration
|
||||
|
||||
Use this proven approach from baseline testing:
|
||||
|
||||
1. **Metadata layer** - Read plugin.json, package.json, setup.py
|
||||
2. **Structure layer** - Examine directory organization
|
||||
3. **Router layer** - Find and read router/index files (often named "using-X")
|
||||
4. **Sampling layer** - Read 3-5 representative files
|
||||
5. **Quantitative layer** - Use line counts as depth indicators
|
||||
|
||||
**Why this order works:**
|
||||
- Metadata gives overview without code diving
|
||||
- Structure reveals organization philosophy
|
||||
- Routers often catalog all components
|
||||
- Sampling verifies patterns
|
||||
- Quantitative data supports claims
|
||||
|
||||
### Step 3: Mark Confidence Explicitly
|
||||
|
||||
**Every output MUST include confidence level with reasoning.**
|
||||
|
||||
**High confidence** - Router skill provided catalog + verified with sampling
|
||||
```markdown
|
||||
**Confidence:** High - Router skill listed all 10 components, sampling 4 confirmed patterns
|
||||
```
|
||||
|
||||
**Medium confidence** - No router, but clear structure + sampling
|
||||
```markdown
|
||||
**Confidence:** Medium - No router catalog, inferred from directory structure + 5 file samples
|
||||
```
|
||||
|
||||
**Low confidence** - Incomplete, placeholders, or unclear organization
|
||||
```markdown
|
||||
**Confidence:** Low - Several SKILL.md files missing, test artifacts suggest work-in-progress
|
||||
```
|
||||
|
||||
### Step 4: Distinguish States Clearly
|
||||
|
||||
When analyzing codebases with mixed completion:
|
||||
|
||||
**Complete** - Skill file exists, has content, passes basic read test
|
||||
```markdown
|
||||
- `skill-name/SKILL.md` - Complete skill (1,234 lines)
|
||||
```
|
||||
|
||||
**Placeholder** - Skill file exists but is stub/template
|
||||
```markdown
|
||||
- `skill-name/SKILL.md` - Placeholder (12 lines, template only)
|
||||
```
|
||||
|
||||
**Planned** - Referenced in router but no file exists
|
||||
```markdown
|
||||
- `skill-name` - Planned (referenced in router, not implemented)
|
||||
```
|
||||
|
||||
**TDD artifacts** - Test scenarios, baseline results (these ARE documentation)
|
||||
```markdown
|
||||
- `test-scenarios.md` - TDD test scenarios (RED phase)
|
||||
- `baseline-results.md` - Baseline behavior documentation
|
||||
```
|
||||
|
||||
### Step 5: Write Output (Contract Compliance)
|
||||
|
||||
**Before writing:**
|
||||
1. Prepare your entry in EXACT contract format from the template above
|
||||
2. Copy the structure - don't paraphrase or reorganize
|
||||
3. Triple-check you have ALL sections in correct order
|
||||
|
||||
**When writing:**
|
||||
1. **Target file:** `02-subsystem-catalog.md` in workspace directory
|
||||
2. **Operation:** Append your entry (create file if first entry, append if file exists)
|
||||
3. **Method:**
|
||||
- If file exists: Read current content, then Write with original + your entry
|
||||
- If file doesn't exist: Write your entry directly
|
||||
4. **Format:** Follow contract sections in exact order
|
||||
5. **Completeness:** Include ALL sections - use "None observed" for empty Concerns
|
||||
|
||||
**DO NOT create separate files** (e.g., `subsystem-X-analysis.md`). The coordinator expects all entries in `02-subsystem-catalog.md`.
|
||||
|
||||
**After writing:**
|
||||
1. Re-read `02-subsystem-catalog.md` to verify your entry was added correctly
|
||||
2. Validate format matches contract exactly using this checklist:
|
||||
|
||||
**Self-Validation Checklist:**
|
||||
```
|
||||
[ ] Section 1: Subsystem name as H2 heading (## Name)
|
||||
[ ] Section 2: Location with backticks and absolute path
|
||||
[ ] Section 3: Responsibility as single sentence
|
||||
[ ] Section 4: Key Components as bulleted list with descriptions
|
||||
[ ] Section 5: Dependencies with "Inbound:" and "Outbound:" labels
|
||||
[ ] Section 6: Patterns Observed as bulleted list
|
||||
[ ] Section 7: Concerns present (with issues OR "None observed")
|
||||
[ ] Section 8: Confidence level (High/Medium/Low) with reasoning
|
||||
[ ] Separator: "---" line after confidence
|
||||
[ ] NO extra sections added
|
||||
[ ] Sections in correct order
|
||||
[ ] Entry in file: 02-subsystem-catalog.md (not separate file)
|
||||
```
|
||||
|
||||
## Handling Uncertainty
|
||||
|
||||
**When architecture is unclear:**
|
||||
|
||||
1. **State what you observe** - Don't guess at intent
|
||||
```markdown
|
||||
**Patterns Observed:**
|
||||
- 3 files with similar structure (analysis.py, parsing.py, validation.py)
|
||||
- Unclear if this is deliberate pattern or coincidence
|
||||
```
|
||||
|
||||
2. **Mark confidence appropriately** - Low confidence is valid
|
||||
```markdown
|
||||
**Confidence:** Low - Directory structure suggests microservices, but no service definitions found
|
||||
```
|
||||
|
||||
3. **Use "Concerns" section** - Document gaps
|
||||
```markdown
|
||||
**Concerns:**
|
||||
- No clear entry point identified
|
||||
- Dependencies inferred from imports, not explicit manifest
|
||||
```
|
||||
|
||||
**DO NOT:**
|
||||
- Invent relationships you didn't verify
|
||||
- Assume "obvious" architecture without evidence
|
||||
- Skip confidence marking because you're uncertain
|
||||
|
||||
## Positive Behaviors to Maintain
|
||||
|
||||
From baseline testing, these approaches WORK:
|
||||
|
||||
✅ **Read actual files** - Don't infer from names alone
|
||||
✅ **Use router skills** - They often provide complete catalogs
|
||||
✅ **Sample strategically** - 3-5 files verifies patterns without exhaustive reading
|
||||
✅ **Cross-reference** - Verify claims (imports match listed dependencies)
|
||||
✅ **Document assumptions** - Make reasoning explicit
|
||||
✅ **Line counts indicate depth** - 1,500-line skill vs 50-line stub matters
|
||||
|
||||
## Common Rationalizations (STOP SIGNALS)
|
||||
|
||||
If you catch yourself thinking these, STOP:
|
||||
|
||||
| Rationalization | Reality |
|
||||
|-----------------|---------|
|
||||
| "I'll add Integration Points section for clarity" | Extra sections break downstream parsing |
|
||||
| "I'll write to separate file for organization" | Coordinator expects append to specified file |
|
||||
| "I'll improve the contract format" | Contract is specification from coordinator |
|
||||
| "More information is always helpful" | Your job: follow spec. Coordinator's job: decide what's included |
|
||||
| "This comprehensive format is better" | "Better" violates contract. Compliance is mandatory. |
|
||||
|
||||
## Validation Criteria
|
||||
|
||||
Your output will be validated against:
|
||||
|
||||
1. **Contract compliance** - All sections present, no extras
|
||||
2. **File operation** - Appended to `02-subsystem-catalog.md`, not separate file
|
||||
3. **Confidence marking** - High/Medium/Low with reasoning
|
||||
4. **Evidence-based claims** - Components you actually read
|
||||
5. **Bidirectional dependencies** - If A→B, then B must show A as inbound
|
||||
|
||||
**If validation returns NEEDS_REVISION:**
|
||||
- Read the validation report
|
||||
- Fix specific issues identified
|
||||
- Re-submit following contract
|
||||
|
||||
## Success Criteria
|
||||
|
||||
**You succeeded when:**
|
||||
- Entry appended to `02-subsystem-catalog.md` in exact contract format
|
||||
- All sections included (none skipped, none added)
|
||||
- Confidence level marked with reasoning
|
||||
- Claims supported by files you read
|
||||
- Validation returns APPROVED
|
||||
|
||||
**You failed when:**
|
||||
- Added "helpful" extra sections
|
||||
- Wrote to separate file
|
||||
- Changed contract format
|
||||
- Skipped sections
|
||||
- No confidence marking
|
||||
- Validation returns BLOCK status
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
❌ **Add extra sections**
|
||||
"I'll add Recommendations section" → Violates contract
|
||||
|
||||
❌ **Write to new file**
|
||||
"I'll create subsystem-X-analysis.md" → Should append to `02-subsystem-catalog.md`
|
||||
|
||||
❌ **Skip required sections**
|
||||
"No concerns, so I'll omit that section" → Include section with "None observed"
|
||||
|
||||
❌ **Change format**
|
||||
"I'll use numbered lists instead of bullet points" → Follow contract exactly
|
||||
|
||||
❌ **Work without reading task spec**
|
||||
"I know what to do" → Read `temp/task-*.md` first
|
||||
|
||||
## Integration with Workflow
|
||||
|
||||
This skill is typically invoked as:
|
||||
|
||||
1. **Coordinator** creates workspace and holistic assessment
|
||||
2. **Coordinator** writes task specification in `temp/task-[yourname].md`
|
||||
3. **YOU** read task spec + `01-discovery-findings.md`
|
||||
4. **YOU** analyze assigned subsystem systematically
|
||||
5. **YOU** append entry to `02-subsystem-catalog.md` following contract
|
||||
6. **Validator** checks your output against contract
|
||||
7. **Coordinator** proceeds to next phase if validation passes
|
||||
|
||||
**Your role:** Analyze systematically, follow contract exactly, mark confidence explicitly.
|
||||
@@ -0,0 +1,396 @@
|
||||
|
||||
# Documenting System Architecture
|
||||
|
||||
## Purpose
|
||||
|
||||
Synthesize subsystem catalogs and architecture diagrams into final, stakeholder-ready architecture reports that serve multiple audiences through clear structure, comprehensive navigation, and actionable findings.
|
||||
|
||||
## When to Use
|
||||
|
||||
- Coordinator delegates final report generation from validated artifacts
|
||||
- Have `02-subsystem-catalog.md` and `03-diagrams.md` as inputs
|
||||
- Task specifies writing to `04-final-report.md`
|
||||
- Need to produce executive-readable architecture documentation
|
||||
- Output represents deliverable for stakeholders
|
||||
|
||||
## Core Principle: Synthesis Over Concatenation
|
||||
|
||||
**Good reports synthesize information into insights. Poor reports concatenate source documents.**
|
||||
|
||||
Your goal: Create a coherent narrative with extracted patterns, concerns, and recommendations - not a copy-paste of inputs.
|
||||
|
||||
## Document Structure
|
||||
|
||||
### Required Sections
|
||||
|
||||
**1. Front Matter**
|
||||
- Document title
|
||||
- Version number
|
||||
- Analysis date
|
||||
- Classification (if needed)
|
||||
|
||||
**2. Table of Contents**
|
||||
- Multi-level hierarchy (H2, H3, H4)
|
||||
- Anchor links to all major sections
|
||||
- Quick navigation for readers
|
||||
|
||||
**3. Executive Summary (2-3 paragraphs)**
|
||||
- High-level system overview
|
||||
- Key architectural patterns
|
||||
- Major concerns and confidence assessment
|
||||
- Should be readable standalone by leadership
|
||||
|
||||
**4. System Overview**
|
||||
- Purpose and scope
|
||||
- Technology stack
|
||||
- System context (external dependencies)
|
||||
|
||||
**5. Architecture Diagrams**
|
||||
- Embed all diagrams from `03-diagrams.md`
|
||||
- Add contextual analysis after each diagram
|
||||
- Cross-reference to subsystem catalog
|
||||
|
||||
**6. Subsystem Catalog**
|
||||
- One detailed entry per subsystem
|
||||
- Synthesize from `02-subsystem-catalog.md` (don't just copy)
|
||||
- Add cross-references to diagrams and findings
|
||||
|
||||
**7. Key Findings**
|
||||
- **Architectural Patterns**: Identified across subsystems
|
||||
- **Technical Concerns**: Extracted from catalog concerns
|
||||
- **Recommendations**: Actionable next steps with priorities
|
||||
|
||||
**8. Appendices**
|
||||
- **Methodology**: How analysis was performed
|
||||
- **Confidence Levels**: Rationale for confidence ratings
|
||||
- **Assumptions & Limitations**: What you inferred, what's missing
|
||||
|
||||
## Synthesis Strategies
|
||||
|
||||
### Pattern Identification
|
||||
|
||||
**Look across subsystems for recurring patterns:**
|
||||
|
||||
From catalog observations:
|
||||
- Subsystem A: "Dependency injection for testability"
|
||||
- Subsystem B: "All external services injected"
|
||||
- Subsystem C: "Injected dependencies for testing"
|
||||
|
||||
**Synthesize into pattern:**
|
||||
```markdown
|
||||
### Dependency Injection Pattern
|
||||
|
||||
**Observed in**: Authentication Service, API Gateway, User Service
|
||||
|
||||
**Description**: External dependencies are injected rather than directly instantiated, enabling test isolation and loose coupling.
|
||||
|
||||
**Benefits**:
|
||||
- Testability: Mock dependencies in unit tests
|
||||
- Flexibility: Swap implementations without code changes
|
||||
- Loose coupling: Services depend on interfaces, not concrete implementations
|
||||
|
||||
**Trade-offs**:
|
||||
- Initial complexity: Requires dependency wiring infrastructure
|
||||
- Runtime overhead: Minimal (dependency resolution at startup)
|
||||
```
|
||||
|
||||
### Concern Extraction
|
||||
|
||||
**Find concerns buried in catalog entries:**
|
||||
|
||||
Catalog entries:
|
||||
- API Gateway: "Rate limiter uses in-memory storage (doesn't scale horizontally)"
|
||||
- Database Layer: "Connection pool max size hardcoded (should be configurable)"
|
||||
- Data Service: "Large analytics queries can cause database load spikes"
|
||||
|
||||
**Synthesize into findings:**
|
||||
```markdown
|
||||
## Technical Concerns
|
||||
|
||||
### 1. Rate Limiter Scalability Issue
|
||||
|
||||
**Severity**: Medium
|
||||
**Affected Subsystem**: [API Gateway](#api-gateway)
|
||||
|
||||
**Issue**: In-memory rate limiting prevents horizontal scaling. If multiple gateway instances run, each maintains separate counters, allowing clients to exceed intended limits by distributing requests across instances.
|
||||
|
||||
**Impact**:
|
||||
- Cannot scale gateway horizontally without distributed rate limiting
|
||||
- Potential for rate limit bypass under load balancing
|
||||
- Inconsistent rate limit enforcement
|
||||
|
||||
**Remediation**:
|
||||
1. **Immediate** (next sprint): Document limitation, add monitoring alerts
|
||||
2. **Short-term** (next quarter): Migrate to Redis-backed rate limiter
|
||||
3. **Validation**: Test rate limiting with multiple gateway instances
|
||||
|
||||
**Priority**: High (blocks horizontal scaling)
|
||||
```
|
||||
|
||||
### Recommendation Prioritization
|
||||
|
||||
Group recommendations by timeline:
|
||||
|
||||
```markdown
|
||||
## Recommendations
|
||||
|
||||
### Immediate (Next Sprint)
|
||||
1. **Document rate limiter limitation** in operations runbook
|
||||
2. **Add monitoring** for database connection pool exhaustion
|
||||
3. **Configure alerting** on Data Service query execution times > 5s
|
||||
|
||||
### Short-Term (Next Quarter)
|
||||
4. **Migrate rate limiter** to Redis-backed distributed implementation
|
||||
5. **Externalize database pool configuration** to environment variables
|
||||
6. **Implement query throttling** in Data Service analytics engine
|
||||
|
||||
### Long-Term (6 Months)
|
||||
7. **Architecture review** for caching strategy optimization
|
||||
8. **Evaluate** circuit breaker effectiveness under load testing
|
||||
```
|
||||
|
||||
## Cross-Referencing Strategy
|
||||
|
||||
### Bidirectional Links
|
||||
|
||||
**Subsystem → Diagram:**
|
||||
```markdown
|
||||
## Authentication Service
|
||||
|
||||
[...subsystem details...]
|
||||
|
||||
**Component Architecture**: See [Authentication Service Components](#auth-service-components) diagram
|
||||
|
||||
**Dependencies**: [API Gateway](#api-gateway), [Database Layer](#database-layer)
|
||||
```
|
||||
|
||||
**Diagram → Subsystem:**
|
||||
```markdown
|
||||
### Authentication Service Components
|
||||
|
||||
[...diagram...]
|
||||
|
||||
**Description**: This component diagram shows internal structure of the Authentication Service. For additional operational details, see [Authentication Service](#authentication-service) in the subsystem catalog.
|
||||
```
|
||||
|
||||
**Finding → Subsystem:**
|
||||
```markdown
|
||||
### Rate Limiter Scalability Issue
|
||||
|
||||
**Affected Subsystem**: [API Gateway](#api-gateway)
|
||||
|
||||
[...concern details...]
|
||||
```
|
||||
|
||||
### Navigation Patterns
|
||||
|
||||
**Table of contents with anchor links:**
|
||||
```markdown
|
||||
## Table of Contents
|
||||
|
||||
1. [Executive Summary](#executive-summary)
|
||||
2. [System Overview](#system-overview)
|
||||
- [Purpose and Scope](#purpose-and-scope)
|
||||
- [Technology Stack](#technology-stack)
|
||||
3. [Architecture Diagrams](#architecture-diagrams)
|
||||
- [Level 1: Context](#level-1-context)
|
||||
- [Level 2: Container](#level-2-container)
|
||||
```
|
||||
|
||||
## Multi-Audience Considerations
|
||||
|
||||
### Executive Audience
|
||||
|
||||
**What they need:**
|
||||
- Executive summary ONLY (should be self-contained)
|
||||
- High-level patterns and risks
|
||||
- Business impact of concerns
|
||||
- Clear recommendations with timelines
|
||||
|
||||
**Document design:**
|
||||
- Put executive summary first
|
||||
- Make it readable standalone (no forward references)
|
||||
- Focus on "why this matters" over "how it works"
|
||||
|
||||
### Architect Audience
|
||||
|
||||
**What they need:**
|
||||
- System overview + architecture diagrams + key findings
|
||||
- Pattern analysis with trade-offs
|
||||
- Dependency relationships
|
||||
- Design decisions and rationale
|
||||
|
||||
**Document design:**
|
||||
- System overview explains context
|
||||
- Diagrams show structure at multiple levels
|
||||
- Findings synthesize patterns and concerns
|
||||
- Cross-references enable non-linear reading
|
||||
|
||||
### Engineer Audience
|
||||
|
||||
**What they need:**
|
||||
- Subsystem catalog with technical details
|
||||
- Component diagrams showing internal structure
|
||||
- Technology stack specifics
|
||||
- File references and entry points
|
||||
|
||||
**Document design:**
|
||||
- Detailed subsystem catalog
|
||||
- Component-level diagrams
|
||||
- Technology stack section with versions/frameworks
|
||||
- Code/file references where available
|
||||
|
||||
### Operations Audience
|
||||
|
||||
**What they need:**
|
||||
- Technical concerns with remediation
|
||||
- Dependency mapping
|
||||
- Confidence levels (what's validated vs assumed)
|
||||
- Recommendations with priorities
|
||||
|
||||
**Document design:**
|
||||
- Technical concerns section up front
|
||||
- Clear remediation steps
|
||||
- Appendix with assumptions/limitations
|
||||
- Prioritized recommendations
|
||||
|
||||
## Optional Enhancements
|
||||
|
||||
### Visual Aids
|
||||
|
||||
**Subsystem Quick Reference Table:**
|
||||
```markdown
|
||||
## Appendix D: Subsystem Quick Reference
|
||||
|
||||
| Subsystem | Location | Confidence | Key Concerns | Dependencies |
|
||||
|-----------|----------|------------|--------------|--------------|
|
||||
| API Gateway | /src/gateway/ | High | Rate limiter scalability | Auth, User, Data, Logging |
|
||||
| Auth Service | /src/services/auth/ | High | None | Database, Cache, Logging |
|
||||
| User Service | /src/services/users/ | High | None | Database, Cache, Notification |
|
||||
```
|
||||
|
||||
**Pattern Summary Matrix:**
|
||||
```markdown
|
||||
## Architectural Patterns Summary
|
||||
|
||||
| Pattern | Subsystems Using | Benefits | Trade-offs |
|
||||
|---------|------------------|----------|------------|
|
||||
| Dependency Injection | Auth, Gateway, User | Testability, flexibility | Initial complexity |
|
||||
| Repository Pattern | User, Data | Data access abstraction | Extra layer |
|
||||
| Circuit Breaker | Gateway | Fault isolation | False positives |
|
||||
```
|
||||
|
||||
### Reading Guide
|
||||
|
||||
```markdown
|
||||
## How to Read This Document
|
||||
|
||||
**For Executives** (5 minutes):
|
||||
- Read [Executive Summary](#executive-summary) only
|
||||
- Optionally skim [Recommendations](#recommendations)
|
||||
|
||||
**For Architects** (30 minutes):
|
||||
- Read [Executive Summary](#executive-summary)
|
||||
- Read [System Overview](#system-overview)
|
||||
- Review [Architecture Diagrams](#architecture-diagrams)
|
||||
- Read [Key Findings](#key-findings)
|
||||
|
||||
**For Engineers** (1 hour):
|
||||
- Read [System Overview](#system-overview)
|
||||
- Study [Architecture Diagrams](#architecture-diagrams) (all levels)
|
||||
- Read [Subsystem Catalog](#subsystem-catalog) for relevant services
|
||||
- Review [Technical Concerns](#technical-concerns)
|
||||
|
||||
**For Operations** (45 minutes):
|
||||
- Read [Executive Summary](#executive-summary)
|
||||
- Study [Technical Concerns](#technical-concerns)
|
||||
- Review [Recommendations](#recommendations)
|
||||
- Read [Appendix C: Assumptions and Limitations](#appendix-c-assumptions-and-limitations)
|
||||
```
|
||||
|
||||
### Glossary
|
||||
|
||||
```markdown
|
||||
## Appendix E: Glossary
|
||||
|
||||
**Circuit Breaker**: Fault tolerance pattern that prevents cascading failures by temporarily blocking requests to failing services.
|
||||
|
||||
**Dependency Injection**: Design pattern where dependencies are provided to components rather than constructed internally, enabling testability and loose coupling.
|
||||
|
||||
**Repository Pattern**: Data access abstraction that separates business logic from data persistence concerns.
|
||||
|
||||
**Optimistic Locking**: Concurrency control technique assuming conflicts are rare, using version checks rather than locks.
|
||||
```
|
||||
|
||||
## Success Criteria
|
||||
|
||||
**You succeeded when:**
|
||||
- Executive summary (2-3 paragraphs) distills key information
|
||||
- Table of contents provides multi-level navigation
|
||||
- Cross-references (30+) enable non-linear reading
|
||||
- Patterns synthesized (not just listed from catalog)
|
||||
- Concerns extracted and prioritized
|
||||
- Recommendations actionable with timelines
|
||||
- Diagrams integrated with contextual analysis
|
||||
- Appendices document methodology, confidence, assumptions
|
||||
- Professional structure (document metadata, clear hierarchy)
|
||||
- Written to 04-final-report.md
|
||||
|
||||
**You failed when:**
|
||||
- Simple concatenation of source documents
|
||||
- No executive summary or it requires reading full document
|
||||
- Missing table of contents
|
||||
- No cross-references between sections
|
||||
- Patterns just copied from catalog (not synthesized)
|
||||
- Concerns buried without extraction
|
||||
- Recommendations vague or unprioritized
|
||||
- Diagrams pasted without context
|
||||
- Missing appendices
|
||||
|
||||
## Best Practices from Baseline Testing
|
||||
|
||||
### What Works
|
||||
|
||||
✅ **Comprehensive synthesis** - Identify patterns, extract concerns, create narrative
|
||||
✅ **Professional structure** - Document metadata, TOC, clear hierarchy, appendices
|
||||
✅ **Multi-level navigation** - 20+ TOC entries, 40+ cross-references
|
||||
✅ **Executive summary** - Self-contained 2-3 paragraph distillation
|
||||
✅ **Actionable findings** - Concerns with severity/impact/remediation, recommendations with timelines
|
||||
✅ **Transparency** - Confidence levels, assumptions, limitations documented
|
||||
✅ **Diagram integration** - Embedded with contextual analysis and cross-refs
|
||||
✅ **Multi-audience** - Executive summary + technical depth + appendices
|
||||
|
||||
### Synthesis Patterns
|
||||
|
||||
**Pattern identification:**
|
||||
- Look across multiple subsystems for recurring themes
|
||||
- Group by pattern name (e.g., "Repository Pattern")
|
||||
- Document which subsystems use it
|
||||
- Explain benefits and trade-offs
|
||||
|
||||
**Concern extraction:**
|
||||
- Find concerns in subsystem catalog entries
|
||||
- Elevate to Key Findings section
|
||||
- Add severity, impact, remediation
|
||||
- Prioritize by timeline (immediate/short/long)
|
||||
|
||||
**Recommendation structure:**
|
||||
- Group by timeline
|
||||
- Specific actions (not vague suggestions)
|
||||
- Validation steps
|
||||
- Priority indicators
|
||||
|
||||
## Integration with Workflow
|
||||
|
||||
This skill is typically invoked as:
|
||||
|
||||
1. **Coordinator** completes and validates subsystem catalog
|
||||
2. **Coordinator** completes and validates architecture diagrams
|
||||
3. **Coordinator** writes task specification for final report
|
||||
4. **YOU** read both source documents systematically
|
||||
5. **YOU** synthesize patterns, extract concerns, create recommendations
|
||||
6. **YOU** build professional report structure with navigation
|
||||
7. **YOU** write to 04-final-report.md
|
||||
8. **Validator** (optional) checks for synthesis quality, navigation, completeness
|
||||
|
||||
**Your role:** Transform analysis artifacts into stakeholder-ready documentation through synthesis, organization, and professional presentation.
|
||||
@@ -0,0 +1,306 @@
|
||||
|
||||
# Generating Architecture Diagrams
|
||||
|
||||
## Purpose
|
||||
|
||||
Generate C4 architecture diagrams (Context, Container, Component levels) from subsystem catalogs, producing readable visualizations that communicate architecture without overwhelming readers.
|
||||
|
||||
## When to Use
|
||||
|
||||
- Coordinator delegates diagram generation from `02-subsystem-catalog.md`
|
||||
- Task specifies writing to `03-diagrams.md`
|
||||
- Need to visualize system architecture at multiple abstraction levels
|
||||
- Output integrates with validation and final reporting phases
|
||||
|
||||
## Core Principle: Abstraction Over Completeness
|
||||
|
||||
**Readable diagrams communicate architecture. Overwhelming diagrams obscure it.**
|
||||
|
||||
Your goal: Help readers understand the system, not document every detail.
|
||||
|
||||
## Output Contract
|
||||
|
||||
When writing to `03-diagrams.md`, include:
|
||||
|
||||
**Required sections:**
|
||||
1. **Context Diagram (C4 Level 1)**: System boundary, external actors, external systems
|
||||
2. **Container Diagram (C4 Level 2)**: Major subsystems with dependencies
|
||||
3. **Component Diagrams (C4 Level 3)**: Internal structure for 2-3 representative subsystems
|
||||
4. **Assumptions and Limitations**: What you inferred, what's missing, diagram constraints
|
||||
|
||||
**For each diagram:**
|
||||
- Title (describes what the diagram shows)
|
||||
- Mermaid or PlantUML code block (as requested)
|
||||
- Description (narrative explanation after diagram)
|
||||
- Legend (notation explained)
|
||||
|
||||
## C4 Level Selection
|
||||
|
||||
### Level 1: Context Diagram
|
||||
|
||||
**Purpose:** System boundary and external interactions
|
||||
|
||||
**Show:**
|
||||
- The system as single box
|
||||
- External actors (users, administrators)
|
||||
- External systems (databases, services, repositories)
|
||||
- High-level relationships
|
||||
|
||||
**Don't show:**
|
||||
- Internal subsystems (that's Level 2)
|
||||
- Implementation details
|
||||
|
||||
**Example scope:** "User Data Platform and its external dependencies"
|
||||
|
||||
### Level 2: Container Diagram
|
||||
|
||||
**Purpose:** Major subsystems and their relationships
|
||||
|
||||
**Show:**
|
||||
- Internal subsystems/services/plugins
|
||||
- Dependencies between them
|
||||
- External systems they connect to
|
||||
|
||||
**Abstraction strategies:**
|
||||
- **Simple systems (≤8 subsystems)**: Show all subsystems individually
|
||||
- **Complex systems (>8 subsystems)**: Use grouping strategies:
|
||||
- Group by category/domain (e.g., faction, layer, purpose)
|
||||
- Add metadata to convey scale (e.g., "13 skills", "9 services")
|
||||
- Reduce visual elements while preserving fidelity
|
||||
|
||||
**Don't show:**
|
||||
- Internal components within subsystems (that's Level 3)
|
||||
- Every file or class
|
||||
|
||||
**Example scope:** "15 plugins organized into 6 domain categories"
|
||||
|
||||
### Level 3: Component Diagrams
|
||||
|
||||
**Purpose:** Internal architecture of selected subsystems
|
||||
|
||||
**Selection criteria (choose 2-3 subsystems that):**
|
||||
1. **Architectural diversity** - Show different patterns (router vs orchestrator, sync vs async)
|
||||
2. **Scale representation** - Include largest/most complex if relevant
|
||||
3. **Critical path** - Entry points, security-critical, data flow bottlenecks
|
||||
4. **Avoid redundancy** - Don't show 5 examples of same pattern
|
||||
|
||||
**Show:**
|
||||
- Internal components/modules/classes
|
||||
- Relationships between components
|
||||
- External dependencies for context
|
||||
|
||||
**Document selection rationale:**
|
||||
```markdown
|
||||
**Selection Rationale**:
|
||||
- Plugin A: Largest (13 skills), shows router pattern
|
||||
- Plugin B: Different organization (platform-based vs algorithm-based)
|
||||
- Plugin C: Process orchestration (vs knowledge routing)
|
||||
|
||||
**Why Not Others**: 8 plugins follow similar pattern to A (redundant)
|
||||
```
|
||||
|
||||
## Abstraction Strategies for Complexity
|
||||
|
||||
When facing many subsystems (10+):
|
||||
|
||||
### Strategy 1: Natural Grouping
|
||||
|
||||
**Look for existing structure:**
|
||||
- Categories in metadata (AI/ML, Security, UX)
|
||||
- Layers (presentation, business, data)
|
||||
- Domains (user management, analytics, reporting)
|
||||
|
||||
**Example:**
|
||||
```mermaid
|
||||
subgraph "AI/ML Domain"
|
||||
YzmirRouter[Router: 1 skill]
|
||||
YzmirRL[Deep RL: 13 skills]
|
||||
YzmirLLM[LLM: 8 skills]
|
||||
end
|
||||
```
|
||||
|
||||
**Benefit:** Aligns with how users think about the system
|
||||
|
||||
### Strategy 2: Metadata Enrichment
|
||||
|
||||
**Add context without detail:**
|
||||
- Skill counts: "Deep RL: 13 skills"
|
||||
- Line counts: "342 lines"
|
||||
- Status: "Complete" vs "WIP"
|
||||
|
||||
**Benefit:** Conveys scale without visual clutter
|
||||
|
||||
### Strategy 3: Strategic Sampling
|
||||
|
||||
**For Component diagrams, sample ~20%:**
|
||||
- Choose diverse examples (not all similar)
|
||||
- Document "Why these, not others"
|
||||
- Prefer breadth over depth
|
||||
|
||||
**Benefit:** Readers see architectural variety without information overload
|
||||
|
||||
## Notation Conventions
|
||||
|
||||
### Relationship Types
|
||||
|
||||
Use different line styles for different semantics:
|
||||
|
||||
- **Solid lines** (`-->`) - Data dependencies, function calls, HTTP requests
|
||||
- **Dotted lines** (`-.->`) - Routing relationships, optional dependencies, logical grouping
|
||||
- **Bold lines** - Critical path, high-frequency interactions (if tooling supports)
|
||||
|
||||
**Example:**
|
||||
```mermaid
|
||||
Router -.->|"Routes to"| SpecializedSkill # Logical routing
|
||||
Gateway -->|"Calls"| AuthService # Data flow
|
||||
```
|
||||
|
||||
### Color Coding
|
||||
|
||||
Use color to create visual hierarchy:
|
||||
|
||||
- **Factions/domains** - Different color per group
|
||||
- **Status** - Green (complete), yellow (WIP), gray (external)
|
||||
- **Importance** - Highlight critical paths
|
||||
|
||||
**Document in legend:** Explain what colors mean
|
||||
|
||||
### Component Annotation
|
||||
|
||||
Add metadata in labels:
|
||||
|
||||
```mermaid
|
||||
AuthService[Authentication Service<br/>Python<br/>342 lines]
|
||||
```
|
||||
|
||||
## Handling Incomplete Information
|
||||
|
||||
### When Catalog Has Gaps
|
||||
|
||||
**Inferred components (reasonable):**
|
||||
- Catalog references "Cache Service" repeatedly → Include in diagram
|
||||
- **MUST document:** "Cache Service inferred from dependencies (not in catalog)"
|
||||
- **Consider notation:** Dotted border or lighter color for inferred components
|
||||
|
||||
**Missing dependencies (don't guess):**
|
||||
- Catalog says "Outbound: Unknown" → Document limitation
|
||||
- **Don't invent:** Leave out rather than guess
|
||||
|
||||
### When Patterns Don't Map Directly
|
||||
|
||||
**Catalog says "Patterns Observed: Circuit breaker"**
|
||||
|
||||
**Reasonable:** Add circuit breaker component to diagram (it's architectural)
|
||||
|
||||
**Document:** "Circuit breaker shown based on pattern observation (not explicit component)"
|
||||
|
||||
## Documentation Template
|
||||
|
||||
After diagrams, include:
|
||||
|
||||
```markdown
|
||||
## Assumptions and Limitations
|
||||
|
||||
### Assumptions
|
||||
1. **Component X**: Inferred from Y references in catalog
|
||||
2. **Protocol**: Assumed HTTP/REST based on API Gateway pattern
|
||||
3. **Grouping**: Used faction categories from metadata
|
||||
|
||||
### Limitations
|
||||
1. **Incomplete Catalog**: Only 5/10 subsystems documented
|
||||
2. **Missing Details**: Database schema not available
|
||||
3. **Deployment**: Scaling/replication not shown
|
||||
|
||||
### Diagram Constraints
|
||||
- **Format**: Mermaid syntax (may not render in all viewers)
|
||||
- **Abstraction**: Component diagrams for 3/15 subsystems only
|
||||
- **Trade-offs**: Visual clarity prioritized over completeness
|
||||
|
||||
### Confidence Levels
|
||||
- **High**: Subsystems A, B, C (well-documented)
|
||||
- **Medium**: Subsystem D (some gaps in dependencies)
|
||||
- **Low**: Subsystem E (minimal catalog entry)
|
||||
```
|
||||
|
||||
## Mermaid vs PlantUML
|
||||
|
||||
**Default to Mermaid unless task specifies otherwise.**
|
||||
|
||||
**Mermaid advantages:**
|
||||
- Native GitHub rendering
|
||||
- Simpler syntax
|
||||
- Better IDE support
|
||||
|
||||
**PlantUML when requested:**
|
||||
```plantuml
|
||||
@startuml
|
||||
!include <C4/C4_Context>
|
||||
|
||||
Person(user, "User")
|
||||
System(platform, "Platform")
|
||||
Rel(user, platform, "Uses")
|
||||
@enduml
|
||||
```
|
||||
|
||||
## Success Criteria
|
||||
|
||||
**You succeeded when:**
|
||||
- All 3 C4 levels generated (Context, Container, Component for 2-3 subsystems)
|
||||
- Diagrams are readable (not overwhelming)
|
||||
- Selection rationale documented
|
||||
- Assumptions and limitations section present
|
||||
- Syntax valid (Mermaid or PlantUML)
|
||||
- Titles, descriptions, legends included
|
||||
- Written to 03-diagrams.md
|
||||
|
||||
**You failed when:**
|
||||
- Skipped diagram levels
|
||||
- Created overwhelming diagrams (15 flat boxes instead of grouped)
|
||||
- No selection rationale for Component diagrams
|
||||
- Invalid syntax
|
||||
- Missing documentation sections
|
||||
- Invented relationships without noting as inferred
|
||||
|

## Best Practices from Baseline Testing

### What Works

✅ **Faction-based grouping** - Reduce visual complexity (15 → 6 groups)
✅ **Metadata enrichment** - Skill counts, line counts convey scale
✅ **Strategic sampling** - 20% Component diagrams showing diversity
✅ **Clear rationale** - Document why you chose these examples
✅ **Notation for relationships** - Dotted (routing) vs solid (data)
✅ **Color hierarchy** - Visual grouping by domain
✅ **Trade-off documentation** - Explicit "what's visible vs abstracted"

### Common Patterns

**Router pattern visualization:**
- Show router as distinct component
- Use dotted lines for routing relationships
- Group routed-to components (see the sketch below)
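
A minimal Mermaid sketch of the router pattern (component names are hypothetical):

```mermaid
graph TD
    Router[Request Router]

    subgraph Handlers
        Orders[Orders Handler]
        Billing[Billing Handler]
        Support[Support Handler]
    end

    %% Dotted arrows distinguish routing from data flow
    Router -.->|routes| Orders
    Router -.->|routes| Billing
    Router -.->|routes| Support
```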

**Layered architecture:**
- Use subgraphs for layers
- Show dependencies flowing between layers
- Don't duplicate components across layers (see the sketch below)
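
A minimal layered sketch using subgraphs (layer and component names are illustrative):

```mermaid
graph TD
    subgraph Presentation
        UI[Web UI]
    end
    subgraph Application
        API[API Layer]
    end
    subgraph Data
        DB[(Database)]
    end

    %% Dependencies flow downward between layers
    UI --> API
    API --> DB
```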

**Microservices:**
- Group related services by domain
- Show API gateway as entry point
- Keep external systems visually distinct from internal services (see the sketch below)
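
A minimal microservices sketch (service and domain names are hypothetical):

```mermaid
graph LR
    Client[Client App] --> Gateway[API Gateway]

    subgraph OrdersDomain["Orders Domain"]
        OrderSvc[Order Service]
        InventorySvc[Inventory Service]
    end

    Payments[Payments Provider]

    Gateway --> OrderSvc
    OrderSvc --> InventorySvc
    OrderSvc --> Payments

    %% External system styled differently from internal services
    classDef external fill:#eee,stroke:#999,stroke-dasharray: 3 3
    class Payments external
```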

## Integration with Workflow

This skill is typically invoked as:

1. **Coordinator** completes subsystem catalog (02-subsystem-catalog.md)
2. **Coordinator** validates catalog (optional validation gate)
3. **Coordinator** writes task specification for diagram generation
4. **YOU** read catalog systematically
5. **YOU** generate diagrams following abstraction strategies
6. **YOU** document assumptions, limitations, selection rationale
7. **YOU** write to 03-diagrams.md
8. **Validator** checks diagrams for syntax, completeness, readability

**Your role:** Translate catalog into readable visual architecture using abstraction and selection strategies.

@@ -0,0 +1,370 @@

# Validating Architecture Analysis

## Purpose

Validate architecture analysis artifacts (subsystem catalogs, diagrams, reports) against contract requirements and cross-document consistency standards, producing actionable validation reports with clear approval/revision status.

## When to Use

- Coordinator delegates validation after document production
- Task specifies validating `02-subsystem-catalog.md`, `03-diagrams.md`, or `04-final-report.md`
- Validation gate required before proceeding to next phase
- Need independent quality check with fresh eyes
- Output determines whether work progresses or requires revision

## Core Principle: Systematic Verification

**Good validation finds all issues systematically. Poor validation misses violations or invents false positives.**

Your goal: Thorough, objective, evidence-based validation with specific, actionable feedback.

## Validation Types

### Type 1: Contract Compliance

**Validate a single document against its contract:**

**Example contracts:**
- **Subsystem Catalog** (`02-subsystem-catalog.md`): 8 required sections per entry (Location, Responsibility, Key Components, Dependencies [Inbound/Outbound format], Patterns Observed, Concerns, Confidence, separator)
- **Architecture Diagrams** (`03-diagrams.md`): Context + Container + 2-3 Component diagrams, titles/descriptions/legends, assumptions section
- **Final Report** (`04-final-report.md`): Executive summary, TOC, diagrams integrated, key findings, appendices

**Validation approach:**
1. Read contract specification from task or skill documentation
2. Check document systematically against each requirement
3. Flag missing sections, extra sections, wrong formats
4. Distinguish CRITICAL (contract violations) vs WARNING (quality issues)

### Type 2: Cross-Document Consistency

**Validate that multiple documents align:**

**Common checks:**
- Catalog dependencies match diagram arrows
- Diagram subsystems listed in catalog
- Final report references match source documents
- Confidence levels consistent across documents

**Validation approach:**
1. Extract key elements from each document
2. Cross-reference systematically
3. Flag inconsistencies with specific citations
4. Provide fixes that maintain consistency

## Output: Validation Report

### File Path (CRITICAL)

**Write to workspace temp/ directory:**

```
<workspace>/temp/validation-<document-name>.md
```

**Examples:**
- Workspace: `docs/arch-analysis-2025-11-12-1234/`
- Catalog validation: `docs/arch-analysis-2025-11-12-1234/temp/validation-catalog.md`
- Diagram validation: `docs/arch-analysis-2025-11-12-1234/temp/validation-diagrams.md`
- Consistency validation: `docs/arch-analysis-2025-11-12-1234/temp/validation-consistency.md`

**DO NOT use absolute paths like `/home/user/skillpacks/temp/`** - write to workspace temp/.
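
For instance, a minimal shell sketch of writing a catalog validation report into the workspace (the workspace name is illustrative):

```bash
# Write the report into the analysis workspace, not an absolute scratch path
WORKSPACE=docs/arch-analysis-2025-11-12-1234
mkdir -p "$WORKSPACE/temp"
cat > "$WORKSPACE/temp/validation-catalog.md" <<'EOF'
# Validation Report: Subsystem Catalog
...
EOF
```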

### Report Structure (Template)

```markdown
# Validation Report: [Document Name]

**Document:** `<path to validated document>`
**Validation Date:** YYYY-MM-DD
**Overall Status:** APPROVED | NEEDS_REVISION (CRITICAL) | NEEDS_REVISION (WARNING)

## Contract Requirements

[List the contract requirements being validated against]

## Validation Results

### [Entry/Section 1]

**CRITICAL VIOLATIONS:**
1. [Specific issue with line numbers]
2. [Specific issue with line numbers]

**WARNINGS:**
1. [Quality issue, not blocking]

**Passes:**
- ✓ [What's correct]
- ✓ [What's correct]

**Summary:** X CRITICAL, Y WARNING

### [Entry/Section 2]

...

## Overall Assessment

**Total [Entries/Sections] Analyzed:** N
**[Entries/Sections] with CRITICAL:** X
**Total CRITICAL Violations:** Y
**Total WARNINGS:** Z

### Violations by Type:
1. **[Type]:** Count
2. **[Type]:** Count

## Recommended Actions

### For [Entry/Section]:
[Specific fix with code block]

## Validation Approach

**Methodology:**
[How you validated]

**Checklist:**
[Systematic verification steps]

## Self-Assessment

**Did I find all violations?**
[YES/NO with reasoning]

**Coverage:**
[What was checked]

**Confidence:** [High/Medium/Low]

## Summary

**Status:** [APPROVED or NEEDS_REVISION]
**Critical Issues:** [Count]
**Warnings:** [Count]

[Final disposition]
```

## Validation Status Levels

### APPROVED

**When to use:**
- All contract requirements met
- No CRITICAL violations
- Minor quality issues acceptable (or none)

**Report should:**
- Confirm compliance
- List what was checked
- Note any minor observations

### NEEDS_REVISION (WARNING)

**When to use:**
- Contract compliant
- Quality issues present (vague descriptions, weak reasoning)
- NOT blocking progression

**Report should:**
- Confirm contract compliance
- List quality improvements suggested
- Note: "Not blocking, but recommended"
- Distinguish from CRITICAL

### NEEDS_REVISION (CRITICAL)

**When to use:**
- Contract violations (missing/extra sections, wrong format)
- Cross-document inconsistencies
- BLOCKS progression to next phase

**Report should:**
- List all CRITICAL violations
- Provide specific fixes for each
- Be clear this blocks progression

## Systematic Validation Checklist

### For Subsystem Catalog

**Per entry:**
```
[ ] Section 1: Location with absolute path in backticks?
[ ] Section 2: Responsibility as single sentence?
[ ] Section 3: Key Components as bulleted list?
[ ] Section 4: Dependencies in "Inbound: X / Outbound: Y" format?
[ ] Section 5: Patterns Observed as bulleted list?
[ ] Section 6: Concerns present (or "None observed")?
[ ] Section 7: Confidence (High/Medium/Low) with reasoning?
[ ] Section 8: Separator "---" after entry?
[ ] No extra sections beyond these 8?
[ ] Sections in correct order?
```
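
A minimal entry that would satisfy this checklist (the subsystem name, path, and details are invented; follow the catalog contract defined earlier for the exact heading style):

```markdown
## Payments Subsystem

**Location:** `/repo/src/payments/`

**Responsibility:** Processes customer payments and refunds.

**Key Components:**
- `charge.py` - payment capture
- `refund.py` - refund handling

**Dependencies:**
- Inbound: Orders Subsystem
- Outbound: Billing Provider API

**Patterns Observed:**
- Retry with exponential backoff

**Concerns:**
- None observed

**Confidence:** High - small, well-tested module

---
```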

**Whole document:**
```
[ ] All subsystems have entries?
[ ] No placeholder text ("[TODO]", "[Fill in]")?
[ ] File named "02-subsystem-catalog.md"?
```

### For Architecture Diagrams

**Diagram levels:**
```
[ ] Context diagram (C4 Level 1) present?
[ ] Container diagram (C4 Level 2) present?
[ ] Component diagrams (C4 Level 3) present? (2-3 required)
```

**Per diagram:**
```
[ ] Title present and descriptive?
[ ] Description present after diagram?
[ ] Legend explaining notation?
[ ] Valid syntax (Mermaid or PlantUML)?
```

**Supporting sections:**
```
[ ] Assumptions and Limitations section present?
[ ] Confidence levels documented?
```

### For Cross-Document Consistency

**Catalog ↔ Diagrams:**
```
[ ] Each catalog subsystem shown in Container diagram?
[ ] Each catalog "Outbound" dependency shown as diagram arrow?
[ ] Each diagram arrow corresponds to catalog dependency?
[ ] Bidirectional: If A→B in catalog, B shows A as Inbound?
```

**Diagrams ↔ Final Report:**
```
[ ] All diagrams from 03-diagrams.md embedded in report?
[ ] Subsystem descriptions in report match catalog?
[ ] Key findings reference actual concerns from catalog?
```

## Cross-Document Validation Pattern

**Step-by-step approach** (a command-line sketch follows the list):

1. **Extract from Catalog:**
   - List all subsystems
   - For each, extract "Outbound" dependencies

2. **Extract from Diagram:**
   - Find Container diagram
   - List all relationship statements: `-->` arrows in Mermaid flowcharts, or `Rel()` calls in C4/PlantUML syntax
   - Map source → target for each relationship

3. **Cross-Reference:**
   - For each catalog dependency, check if diagram shows arrow
   - For each diagram arrow, check if catalog lists dependency
   - Flag mismatches

4. **Report Inconsistencies:**
   - Use summary table showing what matches and what doesn't
   - Provide line numbers from both documents
   - Suggest specific fixes (add arrow, update catalog)
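
A rough first-pass extraction sketch, assuming the catalog uses "Outbound: ..." lines and the diagrams use Mermaid `-->` arrows (both formats, and the workspace path, are assumptions to adjust to the actual documents):

```bash
WORKSPACE=docs/arch-analysis-2025-11-12-1234

# 1. Outbound dependencies declared in the catalog, one per line
grep "Outbound:" "$WORKSPACE/02-subsystem-catalog.md" \
  | sed 's/.*Outbound:[[:space:]]*//' \
  | tr ',' '\n' \
  | sed 's/^[[:space:]]*//; s/[[:space:]]*$//' \
  | sort -u > "$WORKSPACE/temp/catalog-deps.txt"

# 2. Relationship arrows drawn in the diagrams (line numbers kept for the report)
grep -n -- "-->" "$WORKSPACE/03-diagrams.md" > "$WORKSPACE/temp/diagram-arrows.txt"

# 3. Review the two lists side by side; anything present in one but not the
#    other is a candidate mismatch to report (steps 3-4 above)
```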

## Best Practices from Baseline Testing

### What Works

✅ **Thorough checking** - Find ALL violations, not just the first one
✅ **Specific feedback** - Line numbers, exact quotes, actionable fixes
✅ **Professional reports** - Metadata, methodology, self-assessment
✅ **Systematic checklists** - Document what was verified
✅ **Clear status** - APPROVED / NEEDS_REVISION with severity
✅ **Summary visualizations** - Tables showing passed vs failed
✅ **Impact analysis** - Explain why issues matter
✅ **Self-assessment** - Verify own completeness

### Validation Excellence

**Thoroughness patterns:**
- Check every entry/section (100% coverage)
- Find both missing AND extra sections
- Distinguish format violations from quality issues

**Specificity patterns:**
- Provide line numbers for all findings
- Quote exact text showing violation
- Show what the correct format should be

**Actionability patterns:**
- Provide code blocks with fixes (see the example after this list)
- Suggest alternatives when applicable
- Prioritize fixes (CRITICAL first)
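
A hypothetical finding illustrating these patterns (the entry name, line numbers, and fix content are invented for the example):

```markdown
### Entry 3: Billing Subsystem

**CRITICAL VIOLATIONS:**
1. Missing "Concerns" section (expected between "Patterns Observed", line 84,
   and "Confidence", line 85)

**Recommended fix** (insert after line 84):

    **Concerns:**
    - None observed

**Summary:** 1 CRITICAL, 0 WARNING
```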

## Common Pitfalls to Avoid

❌ **Stopping after first violation** - Find ALL issues
❌ **Vague feedback** ("improve quality" vs "add Concerns section")
❌ **Wrong status level** (marking quality issues as CRITICAL)
❌ **False positives** (inventing issues that don't exist)
❌ **Too lenient** (approving despite violations)
❌ **Too strict** (marking everything CRITICAL)
❌ **Wrong file path** (absolute path vs workspace temp/)
❌ **Skipping self-assessment** (verify your own completeness)

## Objectivity Under Pressure

**If coordinator says "looks fine to me":**
- Validate independently anyway
- Use evidence-based judgment (cite the specific contract requirement)
- Don't soften CRITICAL to WARNING due to authority
- Stand firm: validation is an independent quality gate

**If time pressure exists:**
- Still validate systematically (don't skip checks)
- Document what was validated and what wasn't
- If truly insufficient time, report that honestly

## Success Criteria

**You succeeded when:**
- Found all contract violations (100% detection)
- Specific feedback with line numbers
- Actionable fixes provided
- Clear status (APPROVED/NEEDS_REVISION with severity)
- Professional report structure
- Wrote to workspace temp/ directory
- Self-assessment confirms completeness

**You failed when:**
- Missed violations
- Vague feedback ("improve this")
- Wrong status level (quality issue marked CRITICAL)
- No actionable fixes
- Wrote to wrong path
- Approved despite violations

## Integration with Workflow

This skill is typically invoked as:

1. **Coordinator** produces document (catalog, diagrams, or report)
2. **Coordinator** spawns validation subagent (YOU)
3. **YOU** read document(s) and contract requirements
4. **YOU** validate systematically using checklists
5. **YOU** write validation report to workspace temp/
6. **Coordinator** reads validation report
7. **If APPROVED**: Coordinator proceeds to next phase
8. **If NEEDS_REVISION**: Coordinator fixes issues, re-validates (max 2 retries)

**Your role:** Independent quality gate ensuring artifacts meet standards before progression.