Initial commit

Zhongwei Li
2025-11-30 08:59:22 +08:00
commit b2731247f4
13 changed files with 3454 additions and 0 deletions


@@ -0,0 +1,40 @@
# Archive of Untested Briefings
This directory contains briefings that were created without following the proper RED-GREEN-REFACTOR TDD cycle.
## Untested Briefings (v1)
Created: 2025-11-19
Created by: Maintenance workflow without behavioral testing
Violation: Writing-skills Iron Law - "NO SKILL WITHOUT A FAILING TEST FIRST"
### Files
1. **assessing-code-quality-v1-untested.md** (~400 lines)
- Content coverage: Complexity, duplication, code smells, maintainability, dependencies
- Problem: No baseline testing to verify agents follow guidance
- Use: Reference for content areas to cover in tested version
2. **creating-architect-handover-v1-untested.md** (~400 lines)
- Content coverage: Handover report generation, consultation patterns, architect integration
- Problem: No baseline testing to verify agents follow guidance
- Use: Reference for content areas to cover in tested version
## Purpose
These files are archived (not deleted) to:
- Track what content areas should be covered
- Compare tested vs. untested versions
- Document the improvement from proper TDD methodology
- Serve as reference when designing pressure scenarios
## Tested Versions
Properly tested versions (RED-GREEN-REFACTOR) will be created in the parent directory following:
1. **RED:** Baseline scenarios WITHOUT skill - document exact failures
2. **GREEN:** Write minimal skill addressing observed rationalizations
3. **REFACTOR:** Find loopholes, plug them, re-test until bulletproof
## Do Not Use
These untested files should NOT be used in production. They have not been validated through behavioral testing with subagents and may contain gaps, rationalizations, or ineffective guidance.


@@ -0,0 +1,411 @@
# Assessing Code Quality
## Purpose
Analyze code quality indicators beyond architecture to identify maintainability issues, code smells, and technical debt, producing a quality scorecard with actionable improvement recommendations.
## When to Use
- Coordinator delegates quality assessment after subsystem catalog completion
- Task specifies analyzing code quality in addition to architecture
- Need to identify refactoring priorities beyond structural concerns
- Output feeds into architect handover reports or improvement planning
## Core Principle: Evidence-Based Quality Assessment
**Good quality analysis identifies specific, actionable issues. Poor quality analysis makes vague claims about "bad code."**
Your goal: Provide evidence-based quality metrics with concrete examples and remediation guidance.
## Quality Analysis Dimensions
### 1. Code Complexity
**What to assess:**
- Function/method length (lines of code)
- Cyclomatic complexity (decision points)
- Nesting depth (indentation levels)
- Parameter count
**Evidence to collect:**
- Longest functions with line counts
- Functions with highest decision complexity
- Deeply nested structures (> 4 levels)
- Functions with >5 parameters
**Thresholds (guidelines, not rules):**
- Functions > 50 lines: Flag for review
- Cyclomatic complexity > 10: Consider refactoring
- Nesting > 4 levels: Simplification candidate
- Parameters > 5: Consider parameter object
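**Optional automation sketch:** When the codebase is Python and scripting is available, the stdlib `ast` module can pre-screen candidates against the guidelines above. A minimal sketch (the complexity count is a rough approximation of cyclomatic complexity, not a substitute for a dedicated tool like radon):
```python
import ast

# Node types treated as decision points for a rough complexity count
# (baseline complexity is 1; each and/or chain counts once).
DECISION_NODES = (ast.If, ast.For, ast.AsyncFor, ast.While,
                  ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def flag_complex_functions(source: str, max_lines: int = 50,
                           max_complexity: int = 10, max_params: int = 5):
    """Yield (name, lines, complexity, params) for functions exceeding thresholds."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            lines = (node.end_lineno or node.lineno) - node.lineno + 1
            complexity = 1 + sum(isinstance(n, DECISION_NODES)
                                 for n in ast.walk(node))
            params = len(node.args.args) + len(node.args.kwonlyargs)
            if lines > max_lines or complexity > max_complexity or params > max_params:
                yield node.name, lines, complexity, params
```
Treat the numbers as triage hints for deciding which functions to read, then document findings as shown below.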
**Example documentation:**
```markdown
### Complexity Concerns
**High complexity functions:**
- `src/api/order_processing.py:process_order()` - 127 lines, complexity ~15
- 8 nested if statements
- Handles validation, pricing, inventory, shipping in single function
- **Recommendation:** Extract validation, pricing, inventory, shipping into separate functions
- `src/utils/data_transform.py:transform_dataset()` - 89 lines, 7 parameters
- **Recommendation:** Create DatasetConfig object to replace parameter list
```
### 2. Code Duplication
**What to assess:**
- Repeated code blocks (copy-paste patterns)
- Similar functions with slight variations
- Duplicated logic across subsystems
**Evidence to collect:**
- Quote repeated code blocks (5+ lines)
- List functions with similar structure
- Note duplication percentage (if tool available)
**Analysis approach:**
1. Read representative files from each subsystem
2. Look for similar patterns, function structures
3. Note copy-paste indicators (similar variable names, comment duplication)
4. Assess if duplication is deliberate or accidental
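**Optional automation sketch:** Steps 2-3 can be partially automated by hashing normalized sliding windows of lines across sampled files. A crude sketch, assuming Python sources and the 5-line evidence bar above (near-duplicates with renamed variables still require manual reading):
```python
import re
from collections import defaultdict
from pathlib import Path

def normalize(line: str) -> str:
    """Drop comments and collapse whitespace so trivially edited copies match."""
    return re.sub(r"\s+", " ", line.split("#")[0]).strip()

def find_repeated_blocks(paths, window: int = 5):
    """Map each repeated window of lines to every (file, line) where it appears."""
    seen = defaultdict(list)
    for path in paths:
        lines = [normalize(l) for l in Path(path).read_text().splitlines()]
        for i in range(len(lines) - window + 1):
            block = tuple(lines[i:i + window])
            if all(block):  # skip windows containing blank or comment-only lines
                seen[block].append((str(path), i + 1))
    return {b: locs for b, locs in seen.items() if len(locs) > 1}
```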
**Example documentation:**
```markdown
### Duplication Concerns
**Copy-paste pattern in validation:**
- `src/api/users.py:validate_user()` (lines 45-67)
- `src/api/orders.py:validate_order()` (lines 89-111)
- `src/api/products.py:validate_product()` (lines 23-45)
All three functions follow the same four-step structure:
1. Check required fields
2. Validate format with regex
3. Check database constraints
4. Return validation result
**Recommendation:** Extract common validation framework to `src/utils/validation.py`
```
### 3. Code Smells
**Common smells to identify:**
**Long parameter lists:**
- Functions with >5 parameters
- Recommendation: Parameter object or builder pattern (see the sketch after this list)
**God objects/functions:**
- Classes with >10 methods
- Functions doing multiple unrelated things
- Recommendation: Single Responsibility Principle refactoring
**Magic numbers:**
- Hardcoded values without named constants
- Recommendation: Extract to configuration or named constants
**Dead code:**
- Commented-out code blocks
- Unused functions/classes (no references found)
- Recommendation: Remove or document why kept
**Shotgun surgery indicators:**
- Single feature change requires edits in 5+ files
- Indicates high coupling
- Recommendation: Improve encapsulation
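**Illustration of the parameter-object remediation:** A minimal before/after sketch for the long-parameter-list smell, reusing the hypothetical `transform_dataset` example from the complexity section (the field names are assumptions):
```python
from dataclasses import dataclass

# Before (hypothetical seven-parameter signature):
# def transform_dataset(path, fmt, delimiter, encoding, skip_rows, dedupe, strict): ...

@dataclass
class DatasetConfig:
    """Groups the loose parameters into one named, defaulted object."""
    path: str
    fmt: str = "csv"
    delimiter: str = ","
    encoding: str = "utf-8"
    skip_rows: int = 0
    dedupe: bool = False
    strict: bool = True

def transform_dataset(config: DatasetConfig):
    ...

# Call sites become self-documenting:
# transform_dataset(DatasetConfig(path="data.csv", strict=False))
```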
**Example documentation:**
```markdown
### Code Smell Observations
**Magic numbers:**
- `src/services/cache.py`: Hardcoded timeout values (300, 3600, 86400)
- **Recommendation:** Extract to CacheConfig with named durations
**Dead code:**
- `src/legacy/` directory contains 15 files, no imports found in active code
- Last modified: 2023-06-15
- **Recommendation:** Archive or remove if truly unused
**Shotgun surgery:**
- Adding new payment method requires changes in:
- `src/api/payment.py`
- `src/models/transaction.py`
- `src/utils/validators.py`
- `src/services/notification.py`
- `config/payment_providers.json`
- **Recommendation:** Introduce payment provider abstraction layer
```
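**Illustration of the magic-number remediation:** A minimal sketch of the `CacheConfig` extraction recommended above (the duration names are assumptions; only the 300/3600/86400 literals come from the example):
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CacheConfig:
    """Named cache durations (seconds) replacing bare 300/3600/86400 literals."""
    short_ttl: int = 300      # 5 minutes
    medium_ttl: int = 3600    # 1 hour
    long_ttl: int = 86400     # 1 day

CACHE = CacheConfig()
# Before: cache.set(key, value, 3600)
# After:  cache.set(key, value, CACHE.medium_ttl)
```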
### 4. Maintainability Indicators
**What to assess:**
- Documentation coverage (docstrings, comments)
- Test coverage (if test files visible)
- Error handling patterns
- Logging consistency
**Evidence to collect:**
- Percentage of functions with docstrings
- Test file presence per module
- Error handling approaches (try/except, error codes, etc.)
- Logging statements (presence, consistency)
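**Optional automation sketch:** Docstring coverage is easy to count mechanically for Python code; a minimal sketch using the stdlib `ast` module:
```python
import ast

def docstring_coverage(source: str) -> tuple[int, int]:
    """Return (documented, total) for functions and classes in one module."""
    total = documented = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            total += 1
            if ast.get_docstring(node) is not None:
                documented += 1
    return documented, total
```
Aggregated across sampled files, this yields evidence in the "12/45 functions (27%)" form shown below.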
**Example documentation:**
```markdown
### Maintainability Assessment
**Documentation:**
- 12/45 functions (27%) have docstrings
- Public API modules better documented than internal utilities
- **Recommendation:** Add docstrings to all public functions, focus on "why" not "what"
**Error handling inconsistency:**
- `src/api/` uses exception raising
- `src/services/` uses error code returns
- `src/utils/` mixes both approaches
- **Recommendation:** Standardize on exceptions with custom exception hierarchy
**Logging:**
- Inconsistent log levels (some files use DEBUG for errors)
- No structured logging (difficult to parse)
- **Recommendation:** Adopt structured logging library, establish level conventions
```
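**Illustration of the structured-logging recommendation:** In practice a dedicated library (e.g. structlog) is the likelier fix, but a stdlib-only sketch shows what "parseable" means here:
```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per record so logs can be parsed mechanically."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)
logging.getLogger("orders").warning("inventory low")
# {"time": "...", "level": "WARNING", "logger": "orders", "message": "inventory low"}
```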
### 5. Dependency Quality
**What to assess:**
- Coupling between subsystems
- Circular dependencies
- External dependency management
**Evidence from subsystem catalog:**
- Review "Dependencies - Outbound" sections
- Count dependencies per subsystem
- Identify bidirectional dependencies (A→B and B→A)
**Analysis approach:**
1. Use subsystem catalog dependency data
2. Count inbound/outbound dependencies per subsystem
3. Identify highly coupled subsystems (>5 dependencies)
4. Note circular dependency patterns
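**Optional automation sketch:** With the catalog's dependency data loaded as an adjacency map, a short depth-first search confirms circular dependencies. A minimal sketch (the graph shape is an assumption; each cycle is reported once per entry point, so deduplicate if needed):
```python
def find_cycles(deps: dict[str, set[str]]):
    """Yield dependency cycles found by DFS over the subsystem graph."""
    def visit(node, path):
        if node in path:
            yield path[path.index(node):] + [node]  # the cycle itself
            return
        for neighbor in deps.get(node, ()):
            yield from visit(neighbor, path + [node])
    for start in deps:
        yield from visit(start, [])

deps = {"User Service": {"Notification Service"},
        "Notification Service": {"User Service"}}
print(list(find_cycles(deps)))
# [['User Service', 'Notification Service', 'User Service'], ...]
```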
**Example documentation:**
```markdown
### Dependency Concerns
**High coupling:**
- `API Gateway` subsystem: 8 outbound dependencies (most in system)
- Depends on: Auth, User, Product, Order, Payment, Notification, Logging, Cache
- **Observation:** Acts as orchestrator, coupling may be appropriate
- **Recommendation:** Monitor for API Gateway becoming bloated
**Circular dependencies:**
- `User Service` ↔ `Notification Service`
- User triggers notifications, Notification updates user preferences
- **Recommendation:** Introduce event bus to break circular dependency
**Dependency concentration:**
- 6/10 subsystems depend on `Database Layer`
- Database Layer has no abstraction (direct SQL queries)
- **Recommendation:** Consider repository pattern to isolate database logic
```
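**Illustration of the event-bus remediation:** A minimal in-process publish/subscribe sketch showing how the `User Service` ↔ `Notification Service` cycle breaks once neither side calls the other directly (event names are hypothetical):
```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal pub/sub: publishers and subscribers know the bus, not each other."""
    def __init__(self):
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable) -> None:
        self._handlers[event].append(handler)

    def publish(self, event: str, payload: dict) -> None:
        for handler in self._handlers[event]:
            handler(payload)

bus = EventBus()
bus.subscribe("user.updated", lambda p: print("notify:", p))  # Notification side
bus.publish("user.updated", {"user_id": 42})                  # User side
```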
## Output Contract
Write findings to workspace as `05-quality-assessment.md`:
```markdown
# Code Quality Assessment
**Analysis Date:** YYYY-MM-DD
**Scope:** [Subsystems analyzed]
**Methodology:** Static code review, pattern analysis
## Quality Scorecard
| Dimension | Rating | Severity Breakdown | Evidence Count |
|-----------|--------|----------|----------------|
| Complexity | Medium | 3 High, 5 Medium | 8 functions flagged |
| Duplication | High | 2 Critical, 4 Medium | 6 patterns identified |
| Code Smells | Medium | 0 Critical, 7 Medium | 7 smells documented |
| Maintainability | Medium-Low | 1 Critical, 3 Medium | 4 concerns noted |
| Dependencies | Low | 1 Medium | 2 concerns noted |
**Overall Rating:** Medium - Several actionable improvements identified, no critical blockers
## Detailed Findings
### 1. Complexity Concerns
[List from analysis above]
### 2. Duplication Concerns
[List from analysis above]
### 3. Code Smell Observations
[List from analysis above]
### 4. Maintainability Assessment
[List from analysis above]
### 5. Dependency Concerns
[List from analysis above]
## Prioritized Recommendations
### Critical (Address Immediately)
1. [Issue with highest impact]
### High (Next Sprint)
2. [Important issues]
3. [Important issues]
### Medium (Next Quarter)
4. [Moderate issues]
5. [Moderate issues]
### Low (Backlog)
6. [Nice-to-have improvements]
## Methodology Notes
**Analysis approach:**
- Sampled [N] representative files across [M] subsystems
- Focused on [specific areas of concern]
- Did NOT use automated tools (manual review only)
**Limitations:**
- Sample-based (not exhaustive)
- No runtime analysis (static review only)
- Test coverage estimates based on file presence
- No quantitative complexity metrics (manual assessment)
**For comprehensive analysis, consider:**
- Running static analysis tools (ruff, pylint, mypy for Python)
- Measuring actual test coverage
- Profiling runtime behavior
- Security-focused code review
```
## Severity Rating Guidelines
**Critical:**
- Blocks core functionality or deployment
- Security vulnerability present
- Data corruption risk
- Examples: SQL injection, hardcoded credentials, unhandled exceptions in critical path
**High:**
- Significant maintainability impact
- High effort to modify or extend
- Frequent source of bugs
- Examples: God objects, extreme duplication, shotgun surgery patterns
**Medium:**
- Moderate maintainability concern
- Refactoring beneficial but not urgent
- Examples: Long functions, missing documentation, inconsistent error handling
**Low:**
- Minor quality improvement
- Cosmetic or style issues
- Examples: Magic numbers, verbose naming, minor duplication
## Integration with Architect Handover
Quality assessment feeds directly into `creating-architect-handover.md`:
1. Quality scorecard provides severity ratings
2. Prioritized recommendations become architect's action items
3. Code smells inform refactoring strategy
4. Dependency concerns guide architectural improvements
The architect handover briefing will synthesize architecture + quality into comprehensive improvement plan.
## When to Skip Quality Assessment
**Optional scenarios:**
- User requested architecture-only analysis
- Extremely tight time constraints (< 2 hours total)
- Codebase is very small (< 1000 lines)
- Quality issues not relevant to stakeholder needs
**Document if skipped:**
```markdown
## Quality Assessment: SKIPPED
**Reason:** [Time constraints / Not requested / etc.]
**Recommendation:** Run focused quality review post-stakeholder presentation
```
## Systematic Analysis Checklist
```
[ ] Read subsystem catalog to understand structure
[ ] Sample 3-5 representative files per subsystem
[ ] Document complexity concerns (functions >50 lines, high nesting)
[ ] Identify duplication patterns (repeated code blocks)
[ ] Note code smells (god objects, magic numbers, dead code)
[ ] Assess maintainability (docs, tests, error handling)
[ ] Review dependencies from catalog (coupling, circular deps)
[ ] Rate severity for each finding (Critical/High/Medium/Low)
[ ] Prioritize recommendations by impact
[ ] Write to 05-quality-assessment.md following contract
[ ] Document methodology and limitations
```
## Success Criteria
**You succeeded when:**
- Quality assessment covers all 5 dimensions
- Each finding has concrete evidence (file paths, line numbers, examples)
- Severity ratings are justified
- Recommendations are specific and actionable
- Methodology and limitations documented
- Output written to 05-quality-assessment.md
**You failed when:**
- Vague claims without evidence ("code is messy")
- No severity ratings or priorities
- Recommendations are generic ("improve code quality")
- Missing methodology notes
- Skipped dimensions without documentation
## Common Mistakes
**❌ Analysis paralysis**
"Need to read every file" → Sample strategically, 20% coverage reveals patterns
**❌ Vague findings**
"Functions are too complex" → "process_order() is 127 lines with complexity ~15"
**❌ No prioritization**
Flat list of 50 issues → Prioritize by severity/impact, focus on Critical/High
**❌ Tool-dependent**
"Can't assess without linting tools" → Manual review reveals patterns, note as limitation
**❌ Perfectionism**
"Everything needs fixing" → Focus on high-impact issues, accept some technical debt
## Integration with Workflow
This briefing is typically invoked as:
1. **Coordinator** completes subsystem catalog (02-subsystem-catalog.md)
2. **Coordinator** (optionally) validates catalog
3. **Coordinator** writes task specification for quality assessment
4. **YOU** read subsystem catalog to understand structure
5. **YOU** perform systematic quality analysis (5 dimensions)
6. **YOU** write to 05-quality-assessment.md following contract
7. **Coordinator** proceeds to diagram generation or architect handover
**Your role:** Complement architectural analysis with code quality insights, providing evidence-based improvement recommendations.


@@ -0,0 +1,385 @@
# Creating Architect Handover
## Purpose
Generate handover reports for the axiom-system-architect plugin, enabling a seamless transition from analysis (archaeologist) to improvement planning (architect) by synthesizing architecture and quality findings into actionable assessment inputs.
## When to Use
- Coordinator completes architecture analysis and quality assessment
- User requests improvement recommendations or refactoring guidance
- Need to transition from "what exists" (archaeologist) to "what should change" (architect)
- Task specifies creating architect-ready outputs
## Core Principle: Analysis → Assessment Pipeline
**Archaeologist documents neutrally. Architect assesses critically. Handover bridges the two.**
```
Archaeologist   →   Handover   →   Architect   →   Improvements
(neutral docs)      (synthesis)    (critical)      (execution)
```
Your goal: Package archaeologist findings into architect-consumable format for assessment and prioritization.
## The Division of Labor
### Archaeologist (This Plugin)
**What archaeologist DOES:**
- Documents existing architecture (subsystems, diagrams, dependencies)
- Identifies quality concerns (complexity, duplication, smells)
- Marks confidence levels (High/Medium/Low)
- Stays neutral ("Here's what you have")
**What archaeologist does NOT do:**
- Critical assessment ("this is bad")
- Refactoring recommendations ("you should fix X first")
- Priority decisions ("security is more important than performance")
### Architect (axiom-system-architect Plugin)
**What architect DOES:**
- Critical quality assessment (direct, no diplomatic softening)
- Technical debt cataloging (structured, prioritized)
- Improvement roadmaps (risk-based, security-first)
- Refactoring strategy recommendations
**What architect does NOT do:**
- Neutral documentation (that's archaeologist's job)
- Implementation execution (future: project manager plugin)
### Handover (This Briefing)
**What handover DOES:**
- Synthesizes archaeologist outputs (architecture + quality)
- Formats findings for architect consumption
- Enables architect consultation (spawn as subagent)
- Bridges neutral documentation → critical assessment
## Output: Architect Handover Report
Create `06-architect-handover.md` in workspace:
```markdown
# Architect Handover Report
**Project:** [System name]
**Analysis Date:** YYYY-MM-DD
**Archaeologist Version:** [axiom-system-archaeologist version]
**Handover Purpose:** Enable architect assessment and improvement prioritization
---
## Executive Summary
**System scale:**
- [N] subsystems identified
- [M] subsystem dependencies mapped
- [X] architectural patterns observed
- [Y] quality concerns flagged
**Assessment readiness:**
- Architecture: [Fully documented / Partial coverage / etc.]
- Quality: [Comprehensive analysis / Sample-based / Not performed]
- Confidence: [Overall High/Medium/Low]
**Recommended architect workflow:**
1. Use `axiom-system-architect:assessing-architecture-quality` for critical assessment
2. Use `axiom-system-architect:identifying-technical-debt` to catalog debt items
3. Use `axiom-system-architect:prioritizing-improvements` for roadmap creation
---
## Archaeologist Deliverables
### Available Documents
- [x] `01-discovery-findings.md` - Holistic scan results
- [x] `02-subsystem-catalog.md` - Detailed subsystem documentation
- [x] `03-diagrams.md` - C4 diagrams (Context, Container, Component)
- [x] `04-final-report.md` - Multi-audience synthesis
- [x] `05-quality-assessment.md` - Code quality analysis (if performed)
- [ ] Additional views: [List if created]
### Key Findings Summary
**Architectural patterns identified:**
1. [Pattern 1] - Observed in: [Subsystems]
2. [Pattern 2] - Observed in: [Subsystems]
3. [Pattern 3] - Observed in: [Subsystems]
**Concerns flagged (from subsystem catalog):**
1. [Concern 1] - Subsystem: [Name], Severity: [Level]
2. [Concern 2] - Subsystem: [Name], Severity: [Level]
3. [Concern 3] - Subsystem: [Name], Severity: [Level]
**Quality issues identified (from quality assessment):**
1. [Issue 1] - Category: [Complexity/Duplication/etc.], Severity: [Critical/High/Medium/Low]
2. [Issue 2] - Category: [Category], Severity: [Level]
3. [Issue 3] - Category: [Category], Severity: [Level]
---
## Architect Input Package
### 1. Architecture Documentation
**Location:** `02-subsystem-catalog.md`, `03-diagrams.md`, `04-final-report.md`
**Usage:** Architect reads these to understand system structure before assessment
**Highlights for architect attention:**
- [Subsystem X]: [Why architect should review - complexity, coupling, etc.]
- [Subsystem Y]: [Specific concern flagged]
- [Dependency pattern]: [Circular dependencies, high coupling, etc.]
### 2. Quality Assessment
**Location:** `05-quality-assessment.md` (if performed)
**Usage:** Architect incorporates quality metrics into technical debt catalog
**Priority issues for architect:**
- **Critical:** [Issue requiring immediate attention]
- **High:** [Important issues from quality assessment]
- **Medium:** [Moderate concerns]
### 3. Confidence Levels
**Usage:** Architect knows which assessments are well-validated vs. tentative
| Subsystem | Confidence | Rationale |
|-----------|------------|-----------|
| [Subsystem A] | High | Well-documented, sampled 5 files |
| [Subsystem B] | Medium | Router provided list, sampled 2 files |
| [Subsystem C] | Low | Missing documentation, inferred structure |
**Guidance for architect:**
- High confidence areas: Proceed with detailed assessment
- Medium confidence areas: Consider deeper analysis before major recommendations
- Low confidence areas: Flag for additional investigation
### 4. Scope and Limitations
**What was analyzed:**
- [Scope description]
**What was NOT analyzed:**
- Runtime behavior (static analysis only)
- Security vulnerabilities (not performed)
- Performance profiling (not available)
- [Other limitations]
**Guidance for architect:**
- Recommendations should acknowledge analysis limitations
- Security assessment may require dedicated review
- Performance concerns should be validated with profiling
---
## Architect Consultation Pattern
### Option A: Handover Only (Document-Based)
**When to use:**
- User will engage architect separately
- Archaeologist completes, then user decides next steps
- Asynchronous workflow preferred
**What to do:**
1. Create this handover report (06-architect-handover.md)
2. Inform user: "Handover report ready for axiom-system-architect"
3. User decides when/how to engage architect
### Option B: Integrated Consultation (Subagent-Based)
**When to use:**
- User requests immediate improvement recommendations
- Integrated archaeologist + architect workflow
- User says: "What should we fix?" or "Assess the architecture"
**What to do:**
1. **Complete handover report first** (this document)
2. **Spawn architect as consultant subagent:**
```
I'll consult with the system architect to assess the architecture and provide improvement recommendations.
[Use Task tool with subagent_type='general-purpose']
Task: "Use the axiom-system-architect plugin to assess the architecture documented in [workspace-path].
Context:
- Architecture analysis is complete
- Handover report available at [workspace-path]/06-architect-handover.md
- Key deliverables: [list deliverables]
- Primary concerns: [top 3-5 concerns from analysis]
Your task:
1. Read the handover report and referenced documents
2. Use axiom-system-architect:assessing-architecture-quality for critical assessment
3. Use axiom-system-architect:identifying-technical-debt to catalog debt
4. Use axiom-system-architect:prioritizing-improvements for roadmap
Deliverables:
- Architecture quality assessment
- Technical debt catalog
- Prioritized improvement roadmap
IMPORTANT: Follow architect skills rigorously - maintain professional discipline, no diplomatic softening, security-first prioritization."
```
3. **Synthesize architect outputs** when subagent returns
4. **Present to user:**
- Architecture assessment (from architect)
- Technical debt catalog (from architect)
- Prioritized roadmap (from architect)
- Combined context (archaeologist + architect)
### Option C: Architect Recommendation (User Choice)
**When to use:**
- User didn't explicitly request architect engagement
- Archaeologist found concerns warranting architect review
- Offer as next step
**What to say:**
> "I've completed the architecture analysis and documented [N] concerns requiring attention.
>
> **Next step options:**
>
> A) **Immediate assessment** - I can consult the system architect (axiom-system-architect) right now to provide:
> - Critical architecture quality assessment
> - Technical debt catalog
> - Prioritized improvement roadmap
>
> B) **Handover for later** - I've created a handover report (`06-architect-handover.md`) that you can use to engage the architect when ready
>
> C) **Complete current analysis** - Finish with archaeologist deliverables only
>
> Which approach fits your needs?"
## Handover Report Synthesis Approach
### Extract from Subsystem Catalog
**Read `02-subsystem-catalog.md`:**
- Count subsystems (total documented)
- Collect all "Concerns" entries (aggregate findings)
- Note confidence levels (High/Medium/Low distribution)
- Identify dependency patterns (high coupling, circular deps)
**Synthesize into handover:**
- List total subsystems
- Aggregate concerns by category (complexity, coupling, technical debt, etc.)
- Highlight low-confidence areas needing deeper analysis
- Note architectural patterns observed
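**Optional automation sketch:** If the catalog follows a predictable layout, concern aggregation can be scripted. A sketch assuming `## <Subsystem>` headings with a `**Concerns:**` bullet list beneath (the real format is set by the catalog briefing, so adjust the patterns):
```python
import re
from pathlib import Path

def collect_concerns(catalog_path: str) -> dict[str, list[str]]:
    """Aggregate Concerns bullets per subsystem from the catalog markdown."""
    concerns: dict[str, list[str]] = {}
    subsystem, in_concerns = None, False
    for line in Path(catalog_path).read_text().splitlines():
        if line.startswith("## "):
            subsystem, in_concerns = line[3:].strip(), False
        elif re.match(r"\*\*Concerns", line):
            in_concerns = True
        elif in_concerns and subsystem and line.startswith("- "):
            concerns.setdefault(subsystem, []).append(line[2:].strip())
        elif in_concerns and not line.startswith("- "):
            in_concerns = False
    return concerns
```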
### Extract from Quality Assessment
**Read `05-quality-assessment.md` (if exists):**
- Extract quality scorecard ratings
- Collect Critical/High severity issues
- Note methodology limitations
**Synthesize into handover:**
- Summarize quality dimensions assessed
- List priority issues by severity
- Note analysis limitations for architect awareness
### Extract from Final Report
**Read `04-final-report.md`:**
- Executive summary (system overview)
- Key findings (synthesized patterns)
- Recommendations (if any)
**Synthesize into handover:**
- Use executive summary for handover summary
- Reference key findings for architect attention
- Note any recommendations already made
## Success Criteria
**You succeeded when:**
- Handover report comprehensively synthesizes archaeologist outputs
- Architect input package clearly structured with locations and usage guidance
- Confidence levels documented for architect awareness
- Scope and limitations explicitly stated
- Consultation pattern matches user's workflow needs
- Written to 06-architect-handover.md following format
**You failed when:**
- Handover is just a concatenation of source documents (no synthesis)
- No guidance on which documents architect should read
- Missing confidence level context
- Limitations not documented
- Spawned architect without completing handover first
- No option for user choice (forced integrated consultation)
## Integration with Workflow
This briefing is typically invoked as:
1. **Coordinator** completes final report (04-final-report.md)
2. **Coordinator** (optionally) completes quality assessment (05-quality-assessment.md)
3. **Coordinator** writes task specification for handover creation
4. **YOU** read all archaeologist deliverables
5. **YOU** synthesize into handover report
6. **YOU** write to 06-architect-handover.md
7. **YOU** offer consultation options to user (A/B/C)
8. **(Optional)** Spawn architect subagent if user chooses integrated workflow
9. **Coordinator** proceeds to cleanup or next steps
**Your role:** Bridge archaeologist's neutral documentation and architect's critical assessment through structured handover synthesis.
## Common Mistakes
**❌ Skipping handover report**
"Architect can just read subsystem catalog" → Architect needs synthesized input, not raw docs
**❌ Spawning architect without handover**
"I'll just task architect directly" → Architect works best with structured handover package
**❌ Making architect decisions**
"I'll do the assessment myself" → That's architect's job, not archaeologist's
**❌ Forcing integrated workflow**
"I'll spawn architect automatically" → Offer choice (A/B/C), let user decide
**❌ No synthesis**
"Handover is just copy-paste" → Synthesize, don't concatenate
**❌ Missing limitations**
"I'll hide what we didn't analyze" → Architect needs to know limitations for accurate assessment
## Anti-Patterns
**Overstepping into architect role:**
"This architecture is bad" → "This architecture has [N] concerns documented"
**Incomplete handover:**
Missing confidence levels, no limitations section → Architect can't calibrate recommendations
**Forced workflow:**
Always spawning architect subagent → Offer user choice
**Raw data dump:**
Handover is just file paths → Synthesize key findings for architect
**No consultation pattern:**
Just write report, no next-step guidance → Offer explicit A/B/C options
## The Bottom Line
**Archaeologist documents neutrally. Architect assesses critically. Handover bridges professionally.**
Synthesize findings. Package inputs. Offer consultation. Enable next phase.
The pipeline works when each role stays in its lane and handovers are clean.