Initial commit

Zhongwei Li
2025-11-30 08:37:27 +08:00
commit 37774aa937
131 changed files with 31137 additions and 0 deletions


@@ -0,0 +1,440 @@
---
name: ln-116-test-docs-creator
description: Creates test documentation (testing-strategy.md + tests/README.md). Establishes testing philosophy and Story-Level Test Task Pattern. Part of ln-110-documents-pipeline workflow.
---
# Test Documentation Creator
This skill creates and validates test documentation: testing-strategy.md (universal testing philosophy) + tests/README.md (test organization structure and Story-Level Test Task Pattern).
## When to Use This Skill
**This skill is a WORKER** invoked by **ln-110-documents-pipeline** orchestrator.
This skill should be used directly when:
- Creating only test documentation (testing-strategy.md + tests/README.md)
- Validating existing test documentation structure and content
- Setting up test philosophy and structure documentation for existing project
- NOT creating full documentation structure (use ln-110-documents-pipeline for complete setup)
## How It Works
The skill follows a 3-phase workflow: **CREATE** → **VALIDATE STRUCTURE** → **VALIDATE CONTENT**. Each phase builds on the previous, ensuring complete structural and semantic validation.
---
### Phase 1: Create Test Documentation
**Objective**: Establish test philosophy and documentation structure.
**Process**:
**1.1 Check & create directories**:
- Check if `docs/reference/guides/` exists → create if missing
- Check if `tests/` exists → create if missing
- Log for each: "✓ Created [directory]/" or "✓ [directory]/ already exists"
**1.2 Check & create documentation files**:
- Check if `docs/reference/guides/testing-strategy.md` exists
- If exists:
- Skip creation
- Log: "✓ testing-strategy.md already exists, proceeding to validation"
- If NOT exists:
- Copy template: `ln-116-test-docs-creator/references/testing_strategy_template.md` → `docs/reference/guides/testing-strategy.md`
- Replace placeholders:
- `[CURRENT_DATE]` — current date (YYYY-MM-DD)
- Log: "✓ Created testing-strategy.md from template"
- Check if `tests/README.md` exists
- If exists:
- Skip creation
- Log: "✓ tests/README.md already exists, proceeding to validation"
- If NOT exists:
- Copy template: `ln-116-test-docs-creator/references/tests_readme_template.md` → `tests/README.md`
- Replace placeholders:
- `{{DATE}}` — current date (YYYY-MM-DD)
- Log: "✓ Created tests/README.md from template"
**1.3 Output**:
```
docs/reference/guides/
└── testing-strategy.md # Created or existing
tests/
└── README.md # Created or existing
```
---
### Phase 2: Validate Structure
**Objective**: Ensure test documentation files comply with structural requirements and auto-fix violations.
**Process**:
**2.1 Check SCOPE tags**:
- Read both files (testing-strategy.md, tests/README.md) - first 5 lines only
- Check for `<!-- SCOPE: ... -->` tag in each
- Expected SCOPE tags:
- testing-strategy.md: `<!-- SCOPE: Universal testing philosophy (Risk-Based Testing, test pyramid, isolation patterns) -->`
- tests/README.md: `<!-- SCOPE: Test organization structure (directory layout, Story-Level Test Task Pattern) -->`
- If missing in either file:
- Use Edit tool to add SCOPE tag at line 1 (before first heading)
- Track violation: `scope_tags_added += 1`
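A minimal sketch of the check in 2.1, assuming plain file access in place of the Read and Edit tools:
```python
from pathlib import Path

EXPECTED_SCOPE = {
    "docs/reference/guides/testing-strategy.md":
        "<!-- SCOPE: Universal testing philosophy (Risk-Based Testing, "
        "test pyramid, isolation patterns) -->",
    "tests/README.md":
        "<!-- SCOPE: Test organization structure (directory layout, "
        "Story-Level Test Task Pattern) -->",
}

scope_tags_added = 0
for path, tag in EXPECTED_SCOPE.items():
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    if not any("<!-- SCOPE:" in line for line in lines[:5]):
        # Add the SCOPE tag at line 1, before the first heading
        Path(path).write_text("\n".join([tag, *lines]) + "\n", encoding="utf-8")
        scope_tags_added += 1
```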
**2.2 Check required sections**:
- Load expected sections from `references/questions.md`
- For **testing-strategy.md**, required sections:
- "Testing Philosophy"
- "Test Levels"
- For **tests/README.md**, required sections:
- "Test Organization"
- "Running Tests"
- For each file:
- Read file content
- Check if `## [Section Name]` header exists
- If missing:
- Use Edit tool to add section with placeholder content from template
- Track violation: `missing_sections += 1`
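A sketch of the check in 2.2; the TODO body stands in for the placeholder content the skill would copy from the template:
```python
import re
from pathlib import Path

REQUIRED_SECTIONS = {
    "docs/reference/guides/testing-strategy.md":
        ["Testing Philosophy", "Test Levels"],
    "tests/README.md":
        ["Test Organization", "Running Tests"],
}

missing_sections = 0
for path, sections in REQUIRED_SECTIONS.items():
    text = Path(path).read_text(encoding="utf-8")
    for name in sections:
        if not re.search(rf"^## {re.escape(name)}\s*$", text, flags=re.MULTILINE):
            # Placeholder body; the skill would copy the section from the template.
            text += f"\n## {name}\n\nTODO: populate from template.\n"
            missing_sections += 1
    Path(path).write_text(text, encoding="utf-8")
```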
**2.3 Check Maintenance section**:
- For each file (testing-strategy.md, tests/README.md):
- Search for `## Maintenance` header
- If missing:
- Use Edit tool to add at end of file:
```markdown
## Maintenance
**Last Updated:** [current date]
**Update Triggers:**
- Test framework changes
- Test organization changes
- New test patterns introduced
**Verification:**
- [ ] All test examples follow current framework syntax
- [ ] Directory structure matches actual tests/
- [ ] Test runner commands are current
```
- Track violation: `maintenance_added += 1`
**2.4 Check POSIX file endings**:
- For each file:
- Check if file ends with a single trailing newline (LF)
- If missing:
- Use Edit tool to add final newline
- Track fix: `posix_fixed += 1`
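The fix in 2.4 amounts to normalizing the file tail to exactly one LF, sketched here with plain byte I/O:
```python
from pathlib import Path

posix_fixed = 0
for path in ("docs/reference/guides/testing-strategy.md", "tests/README.md"):
    raw = Path(path).read_bytes()
    normalized = raw.rstrip(b"\n") + b"\n"  # exactly one trailing LF
    if normalized != raw:
        Path(path).write_bytes(normalized)
        posix_fixed += 1
```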
**2.5 Report validation**:
- Log summary:
```
✅ Structure validation complete:
- SCOPE tags: [added N / already present]
- Missing sections: [added N sections]
- Maintenance sections: [added N / already present]
- POSIX endings: [fixed N / compliant]
```
- If violations found: "⚠️ Auto-fixed [total] structural violations"
---
### Phase 3: Validate Content
**Objective**: Ensure each section answers its questions with meaningful content and populate test-specific sections from auto-discovery.
**Process**:
**3.1 Load validation spec**:
- Read `references/questions.md`
- Parse questions and validation heuristics for 4 sections (2 per file)
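Assuming questions.md keeps its `QUESTION_START`/`QUESTION_END` comment markers, parsing the spec can be sketched as:
```python
import re
from pathlib import Path

spec = Path("ln-116-test-docs-creator/references/questions.md").read_text(
    encoding="utf-8")
questions = {
    int(num): body.strip()
    for num, body in re.findall(
        r"<!-- QUESTION_START: (\d+) -->(.*?)<!-- QUESTION_END: \1 -->",
        spec,
        flags=re.DOTALL,
    )
}
# Expect 4 questions: 2 sections per file
assert sorted(questions) == [1, 2, 3, 4]
```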
**3.2 Validate testing-strategy.md sections**:
For this file, use **standard template content** (no auto-discovery needed):
1. **Testing Philosophy section**:
- Read section content
- Check validation heuristics from questions.md:
- ✅ Mentions "Risk-Based Testing"
- ✅ Has test pyramid description
- ✅ Mentions priority threshold (≥15)
- ✅ References Story-Level Test Task Pattern
- ✅ Length > 100 words
- If ANY heuristic passes → content valid
- If ALL fail → log warning: "⚠️ testing-strategy.md → Testing Philosophy section may need review"
2. **Test Levels section**:
- Read section content
- Check validation heuristics from questions.md:
- ✅ Lists 3 levels (E2E, Integration, Unit)
- ✅ Has numeric ranges (2-5, 3-8, 5-15)
- ✅ Explains rationale
- ✅ Length > 150 words
- If ANY heuristic passes → content valid
- If ALL fail → log warning: "⚠️ testing-strategy.md → Test Levels section may need review"
**Note**: testing-strategy.md is a **universal philosophy** document - no project-specific auto-discovery needed.
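The ANY-passes rule in 3.2 reduces to a boolean `any()` over the heuristics. A sketch for the Testing Philosophy section, where `section_text` is assumed to hold the extracted section body:
```python
section_text = "..."  # assumption: extracted "## Testing Philosophy" body

checks = [
    "risk-based" in section_text.lower(),
    all(level in section_text for level in ("E2E", "Integration", "Unit")),
    "≥15" in section_text or "Priority 15" in section_text,
    "Story-Level Test Task Pattern" in section_text,
    len(section_text.split()) > 100,
]
if not any(checks):  # content valid if ANY heuristic passes
    print("⚠️ testing-strategy.md → Testing Philosophy section may need review")
```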
**3.3 Validate tests/README.md sections with auto-discovery**:
**Section: Test Organization**
1. **Auto-discover test framework**:
- Check `package.json` → "devDependencies" and "dependencies":
- Node.js frameworks: jest, vitest, mocha, ava, tap, jasmine
- If found: Extract name and version
- Check `requirements.txt` (if Python project):
- Python frameworks: pytest, nose2, unittest2
- If found: Extract name and version
- Check `go.mod` (if Go project):
- Go uses built-in testing package
- If framework detected:
- Log: "✓ Test framework detected: [framework]@[version]"
- Track: `framework_detected = "[framework]"`
- If NOT detected:
- Log: "⚠️ No test framework detected. Will use generic test organization."
- Track: `framework_detected = None`
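A sketch of this detection step; the manifest parsing is deliberately simplified (for example, requirements.txt specifiers other than `==` are ignored):
```python
import json
from pathlib import Path

NODE_FRAMEWORKS = {"jest", "vitest", "mocha", "ava", "tap", "jasmine"}
PY_FRAMEWORKS = {"pytest", "nose2", "unittest2"}

framework_detected = None
if Path("package.json").exists():
    pkg = json.loads(Path("package.json").read_text(encoding="utf-8"))
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    for name in sorted(NODE_FRAMEWORKS & deps.keys()):
        framework_detected = f"{name}@{deps[name].lstrip('^~')}"
elif Path("requirements.txt").exists():
    for line in Path("requirements.txt").read_text(encoding="utf-8").splitlines():
        if line.split("==")[0].strip().lower() in PY_FRAMEWORKS:
            framework_detected = line.strip()
elif Path("go.mod").exists():
    framework_detected = "testing (Go built-in)"

if framework_detected:
    print(f"✓ Test framework detected: {framework_detected}")
else:
    print("⚠️ No test framework detected. Will use generic test organization.")
```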
2. **Auto-discover test directory structure**:
- Use Glob tool to scan tests/ directory:
- Pattern: `"tests/e2e/**/*.{js,ts,py,go}"`
- Pattern: `"tests/integration/**/*.{js,ts,py,go}"`
- Pattern: `"tests/unit/**/*.{js,ts,py,go}"`
- Count files in each directory:
- `e2e_count = len(e2e_files)`
- `integration_count = len(integration_files)`
- `unit_count = len(unit_files)`
- If directories exist:
- Log: "✓ Test structure: [e2e_count] E2E, [integration_count] Integration, [unit_count] Unit tests"
- If directories DON'T exist:
- Create placeholder structure:
```
tests/
e2e/ (empty, ready for E2E tests)
integration/ (empty, ready for Integration tests)
unit/ (empty, ready for Unit tests)
```
- Log: "✓ Created test directory structure (will be populated during Story test task execution)"
3. **Auto-discover naming conventions**:
- For each test file found (from step 2):
- Extract filename pattern:
- `*.test.js` → "*.test.js" convention
- `*.spec.ts` → "*.spec.ts" convention
- `test_*.py` → "test_*.py" convention
- `*_test.go` → "*_test.go" convention
- Count occurrences of each pattern
- Use most common pattern (majority rule)
- If pattern detected:
- Log: "✓ Naming convention: [pattern] (detected from [count] files)"
- If NO files exist:
- Use framework default:
- Jest/Vitest → *.test.js
- Mocha → *.spec.js
- Pytest → test_*.py
- Go → *_test.go
- Log: "✓ Naming convention: [default_pattern] (framework default)"
4. **Check Test Organization section content**:
- Read section from tests/README.md
- Check validation heuristics:
- ✅ Describes directory structure (e2e/integration/unit)
- ✅ Mentions naming conventions
- ✅ References Story-Level Test Task Pattern
- ✅ Has framework mention
- If ANY heuristic passes → content valid
- If ALL fail → log warning: "⚠️ tests/README.md → Test Organization section needs update"
**Section: Running Tests**
1. **Auto-discover test runner command**:
- Read `package.json` → "scripts" → "test"
- If found:
- Extract command value
- Examples:
- `"jest"` → Test runner: "npm test" (runs jest)
- `"vitest"` → Test runner: "npm test" (runs vitest)
- `"mocha"` → Test runner: "npm test" (runs mocha)
- Custom script → Test runner: "npm test" (runs [custom])
- Log: "✓ Test runner: npm test (runs [command])"
- If NOT found (no package.json or no test script):
- Use default based on detected framework (from step 3.3.1):
- Jest → "npm test"
- Vitest → "npm test"
- Pytest → "pytest"
- Go → "go test ./..."
- Log: "⚠️ No test script found in package.json. Using default '[command]'."
2. **Auto-discover coverage command** (optional):
- Check `package.json` → "scripts" for:
- "test:coverage"
- "coverage"
- "test:cov"
- If found:
- Extract command
- Log: "✓ Coverage command: npm run [script_name]"
- If NOT found:
- Use framework default:
- Jest → "npm test -- --coverage"
- Vitest → "npm test -- --coverage"
- Pytest → "pytest --cov=src"
- Go → "go test -cover ./..."
- Log: "✓ Coverage command: [default] (framework default)"
3. **Check Running Tests section content**:
- Read section from tests/README.md
- Check validation heuristics:
- ✅ Has test runner command
- ✅ Mentions coverage
- ✅ Shows how to run specific tests
- ✅ Has command examples
- If ANY heuristic passes → content valid
- If ALL fail → log warning: "⚠️ tests/README.md → Running Tests section needs update"
**3.4 Report content validation**:
- Log summary:
```
✅ Content validation complete:
- testing-strategy.md: [2 sections checked]
- tests/README.md: [2 sections checked]
- Test framework: [detected framework or "Not detected"]
- Test structure: [e2e/integration/unit counts or "Created placeholder"]
- Naming convention: [pattern or "Framework default"]
- Test runner: [command]
- Coverage command: [command]
```
---
## Complete Output Structure
```
docs/reference/guides/
└── testing-strategy.md # Universal testing philosophy (465 lines)
tests/
└── README.md # Test organization + Story-Level Pattern (112 lines)
```
**Note**: Actual test directories (e2e/, integration/, unit/) are created during Story test task execution, or in Phase 3 if missing.
---
## Reference Files Used
### Templates
**Testing Strategy Template**:
- `references/testing_strategy_template.md` - Universal testing philosophy with:
- SCOPE tags (testing philosophy, NOT framework-specific)
- Core Philosophy ("Test YOUR code, not frameworks")
- Risk-Based Testing Strategy (Priority Matrix, test caps)
- Story-Level Testing Pattern
- Test Organization (E2E/Integration/Unit definitions)
- Isolation Patterns (Data Deletion/Transaction Rollback/DB Recreation)
- What To Test vs NOT Test (universal examples)
- Testing Patterns (Arrange-Act-Assert, Mock at the Seam, Test Data Builders)
- Common Issues (Flaky Tests, Slow Tests, Test Coupling)
- Coverage Guidelines
- Verification Checklist
**Tests README Template**:
- `references/tests_readme_template.md` - Test organization with:
- SCOPE tags (test documentation ONLY)
- Overview (E2E/Integration/Unit test organization)
- Testing Philosophy (brief, link to testing-strategy.md)
- Test Structure (directory tree)
- Story-Level Test Task Pattern (tests in final Story task, NOT scattered)
- Test Execution (project-specific commands)
- Quick Navigation (links to testing-strategy.md, kanban_board, guidelines)
- Maintenance section (Update Triggers, Verification, Last Updated)
**Validation Specification**:
- `references/questions.md` (v1.0.0) - Question-driven validation:
- Questions each section must answer (4 sections total)
- Validation heuristics (ANY passes → valid)
- Auto-discovery hints (test frameworks, directory structure, naming conventions)
- MCP Ref hints (external research if needed)
---
## Best Practices
- **No premature validation**: Phase 1 creates structure, Phase 2 validates it (no duplicate checks)
- **Parametric validation**: Phase 3 validates 4 sections across 2 files (no code duplication)
- **Auto-discovery first**: Scan test frameworks and directory structure before using defaults
- **Idempotent**: ✅ Can run multiple times safely (checks existence before creation, re-validates on each run)
- **Separation of concerns**: CREATE → VALIDATE STRUCTURE → VALIDATE CONTENT
- **Story-Level Test Task Pattern**: Tests consolidated in final Story task (ln-350-story-test-planner creates task, ln-334-test-executor implements)
- **Value-Based Testing**: 2-5 E2E, 3-8 Integration, 5-15 Unit per Story (10-28 total max), Priority ≥15 MUST be tested
- **No test code**: This skill creates DOCUMENTATION only, NOT actual tests
---
## Prerequisites
**Invoked by**: ln-110-documents-pipeline orchestrator
**Requires**:
- `docs/reference/guides/` directory (created by ln-112-reference-docs-creator or Phase 1 if missing)
**Creates**:
- `docs/reference/guides/testing-strategy.md` (universal testing philosophy)
- `tests/README.md` (test organization structure)
- Validated structure and content (auto-discovery of test frameworks and directory structure)
---
## Definition of Done
Before completing work, verify ALL checkpoints:
**✅ Structure Created (Phase 1):**
- [ ] `docs/reference/guides/` directory exists
- [ ] `tests/` directory exists
- [ ] `testing-strategy.md` exists (created or existing)
- [ ] `tests/README.md` exists (created or existing)
**✅ Structure Validated (Phase 2):**
- [ ] SCOPE tags present in both files (first 5 lines)
- [ ] testing-strategy.md has "Testing Philosophy" and "Test Levels" sections
- [ ] tests/README.md has "Test Organization" and "Running Tests" sections
- [ ] Maintenance sections present in both files at end
- [ ] POSIX file endings compliant (file ends with a single trailing LF newline)
**✅ Content Validated (Phase 3):**
- [ ] testing-strategy.md → Testing Philosophy section checked (Risk-Based Testing mentioned)
- [ ] testing-strategy.md → Test Levels section checked (2-5 E2E, 3-8 Integration, 5-15 Unit)
- [ ] tests/README.md → Test Organization section checked or auto-discovered
- [ ] tests/README.md → Running Tests section checked or auto-discovered
- [ ] Test framework detected (if applicable) and logged
- [ ] Test directory structure scanned or created
- [ ] Naming conventions detected or defaults used
- [ ] Test runner command identified or defaults used
**✅ Reporting:**
- [ ] Phase 1 logged: creation summary
- [ ] Phase 2 logged: structural fixes (if any)
- [ ] Phase 3 logged: content validation summary with auto-discovery results
---
## Technical Details
**Standards**:
- Story-Level Test Task Pattern
- Value-Based Testing (2-5 E2E, 3-8 Integration, 5-15 Unit, 10-28 total max per Story)
- Risk-Based Testing (Priority ≥15)
**Language**: English only
**Auto-Discovery Support**:
- Node.js: jest, vitest, mocha, ava, tap, jasmine
- Python: pytest, nose2, unittest2
- Go: built-in testing package
---
**Version:** 7.0.0 (MAJOR: Merged validation into worker. Added Phase 2 structure validation + Phase 3 semantic content validation with test framework auto-discovery. Idempotent - can be invoked multiple times.)
**Last Updated:** 2025-11-18


@@ -0,0 +1,126 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ln-116-test-docs-creator - State Diagram</title>
<script src="https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.min.js"></script>
<link rel="stylesheet" href="../shared/css/diagram.css">
</head>
<body>
<div class="container">
<header>
<h1>🧪 ln-116-test-docs-creator</h1>
<p class="subtitle">Test Documentation Creator - State Diagram</p>
</header>
<div class="info-box">
<h3>📋 Workflow Overview</h3>
<ul>
<li><strong>Purpose:</strong> Create and validate test documentation (testing-strategy.md + tests/README.md)</li>
<li><strong>Worker for:</strong> ln-110-documents-pipeline orchestrator</li>
<li><strong>Phases:</strong> 3 phases (Phase 1 CREATE → Phase 2 Structure Validation → Phase 3 Content Validation)</li>
<li><strong>Auto-Discovery:</strong> Test framework detection, directory structure scanning, naming convention detection</li>
<li><strong>Risk-Based Testing:</strong> Priority ≥15 scenarios, test caps (2-5 E2E, 3-8 Integration, 5-15 Unit)</li>
</ul>
</div>
<div class="legend">
<div class="legend-item">
<div class="legend-color color-action"></div>
<span>Creation Action</span>
</div>
<div class="legend-item">
<div class="legend-color color-validation"></div>
<span>Validation</span>
</div>
<div class="legend-item">
<div class="legend-color color-decision"></div>
<span>Decision Point</span>
</div>
</div>
<div class="diagram-container">
<div class="mermaid">
graph TD
Start([Start: Test Docs Creation]) --> Phase1[Phase 1: CREATE<br/>Check & create test documentation]
Phase1 --> CheckDirs[Check directories:<br/>docs/reference/guides/, tests/]
CheckDirs --> CreateDirs{Directories<br/>exist?}
CreateDirs -->|No| MakeDirs[Create missing directories]
CreateDirs -->|Yes| CheckFiles
MakeDirs --> CheckFiles
CheckFiles[Check files:<br/>testing-strategy.md, tests/README.md]
CheckFiles --> FilesExist{Files<br/>exist?}
FilesExist -->|Yes| Preserved[Preserve existing files<br/>Skip creation]
FilesExist -->|No| CreateFiles[Create from templates<br/>Replace DATE placeholders]
Preserved --> Phase2
CreateFiles --> Phase2
Phase2[Phase 2: Validate Structure<br/>Auto-fix violations]
Phase2 --> AutoFix[Auto-fix:<br/>SCOPE tags, required sections,<br/>Maintenance sections, POSIX endings]
AutoFix --> Phase3[Phase 3: Validate Content<br/>Semantic validation + auto-discovery]
Phase3 --> ValidateStrategy[Validate testing-strategy.md:<br/>Testing Philosophy, Test Levels]
Phase3 --> AutoDiscovery[Auto-discovery for tests/README.md]
AutoDiscovery --> FrameworkDetect[Detect framework:<br/>package.json jest/vitest<br/>requirements.txt pytest<br/>go.mod built-in]
FrameworkDetect --> StructureScan[Scan test directory structure:<br/>tests/e2e/, tests/integration/, tests/unit/]
StructureScan --> NamingDetect[Detect naming conventions:<br/>*.test.js, *.spec.ts, test_*.py, *_test.go]
NamingDetect --> ValidateReadme[Validate tests/README.md sections:<br/>Test Organization, Running Tests]
ValidateStrategy --> Summary
ValidateReadme --> Summary
Summary[Display completion summary:<br/>Files created/preserved,<br/>Framework detected,<br/>Structure validated]
Summary --> End([End: ✓ Test docs created + validated])
%% Styling
classDef action fill:#C8E6C9,stroke:#388E3C,stroke-width:2px
classDef validation fill:#FFF9C4,stroke:#F57C00,stroke-width:2px
classDef decision fill:#FFE0B2,stroke:#E64A19,stroke-width:2px
class Phase1,CheckDirs,MakeDirs,CheckFiles,CreateFiles,Preserved,AutoFix,FrameworkDetect,StructureScan,NamingDetect,Summary action
class Phase2,Phase3,ValidateStrategy,AutoDiscovery,ValidateReadme validation
class CreateDirs,FilesExist decision
</div>
</div>
<div class="info-box">
<h3>🔑 Key Features</h3>
<ul>
<li><strong>Sixth Worker:</strong> Creates test documentation after presentation (ln-110 → ln-111 → ln-112 → ln-113 → ln-114 → ln-115 → ln-116)</li>
<li><strong>Two Files:</strong> testing-strategy.md (universal philosophy) + tests/README.md (organization with framework-specific details)</li>
<li><strong>Universal Philosophy:</strong> testing-strategy.md is framework-agnostic (Risk-Based Testing, test pyramid, isolation patterns)</li>
<li><strong>Story-Level Test Task Pattern:</strong> All tests consolidated in final Story task (NOT scattered across implementation tasks)</li>
<li><strong>Framework Detection:</strong> Auto-discovers test framework from package.json/requirements.txt/go.mod</li>
<li><strong>Structure Auto-Discovery:</strong> Scans tests/ directory for e2e/integration/unit, detects naming conventions</li>
<li><strong>Idempotent:</strong> Checks file existence, preserves existing files, re-validates on each run</li>
</ul>
</div>
<footer>
<p>Generated for ln-116-test-docs-creator skill | Version 7.0.0</p>
<p>Diagram format: Mermaid.js | Last updated: 2025-11-18</p>
</footer>
</div>
<script>
mermaid.initialize({
startOnLoad: true,
theme: 'default',
flowchart: {
useMaxWidth: true,
htmlLabels: true,
curve: 'basis'
}
});
</script>
</body>
</html>


@@ -0,0 +1,235 @@
# Test Documentation Questions
**Purpose:** Define what each section of test documentation should answer.
**Format:** Question → Expected Content → Validation Heuristics → Auto-Discovery Hints → MCP Ref Hints
---
## Table of Contents
| Document | Questions | Auto-Discovery | Priority | Line |
|----------|-----------|----------------|----------|------|
| [docs/reference/guides/testing-strategy.md](#docsreferenceguidestesting-strategymd) | 2 | None | High | L25 |
| [tests/README.md](#testsreadmemd) | 2 | High | High | L101 |
**Priority Legend:**
- **Critical:** Must answer all questions
- **High:** Strongly recommended
- **Medium:** Optional (can use template defaults)
**Auto-Discovery Legend:**
- **None:** No auto-discovery needed (use template as-is)
- **Low:** 1-2 questions need auto-discovery
- **High:** All questions need auto-discovery
---
<!-- DOCUMENT_START: docs/reference/guides/testing-strategy.md -->
## docs/reference/guides/testing-strategy.md
**File:** docs/reference/guides/testing-strategy.md (universal testing philosophy)
**Target Sections:** Testing Philosophy, Test Levels
**Rules for this document:**
- Universal testing philosophy (NOT framework-specific)
- Risk-Based Testing principle
- Story-Level Test Task Pattern
- No auto-discovery needed (standard best practices)
---
<!-- QUESTION_START: 1 -->
### Question 1: What is the overall testing approach?
**Expected Answer:** Risk-Based Testing principle, test pyramid levels (E2E/Integration/Unit), priority-driven approach (Priority ≥15)
**Target Section:** ## Testing Philosophy
**Validation Heuristics:**
- ✅ Mentions "Risk-Based Testing" or "risk-based"
- ✅ Has test pyramid description (E2E, Integration, Unit)
- ✅ Mentions priority threshold (≥15 or "Priority 15")
- ✅ References "Story-Level Test Task Pattern"
- ✅ Length > 100 words
**Auto-Discovery:**
- N/A (standard philosophy from template)
**MCP Ref Hints:**
- Research: "risk-based testing best practices" (if template needs enhancement)
- Research: "test pyramid Martin Fowler" (if need to explain pyramid rationale)
<!-- QUESTION_END: 1 -->
---
<!-- QUESTION_START: 2 -->
### Question 2: What are the test level targets?
**Expected Answer:** E2E tests 2-5 per Story, Integration tests 3-8 per Story, Unit tests 5-15 per Story, rationale for each level
**Target Section:** ## Test Levels
**Validation Heuristics:**
- ✅ Lists 3 levels: E2E, Integration, Unit
- ✅ Has numeric ranges: "2-5" for E2E, "3-8" for Integration, "5-15" for Unit
- ✅ Explains purpose/rationale for each level
- ✅ Mentions the total of "10-28 tests per Story"
- ✅ Length > 150 words
**Auto-Discovery:**
- N/A (standard targets from template)
**MCP Ref Hints:**
- Research: "testing pyramid ratios" (if need to justify ranges)
- Research: "value-based testing approach" (if need to explain priority-driven testing)
<!-- QUESTION_END: 2 -->
---
**Overall File Validation:**
- ✅ Has SCOPE tags in first 5 lines
- ✅ NO framework-specific code examples (FastAPI, pytest, Jest, etc.)
- ✅ Has Maintenance section at end
- ✅ Total length > 400 words
<!-- DOCUMENT_END: docs/reference/guides/testing-strategy.md -->
---
<!-- DOCUMENT_START: tests/README.md -->
## tests/README.md
**File:** tests/README.md (test organization structure)
**Target Sections:** Test Organization, Running Tests
**Rules for this document:**
- Project-specific test organization
- Auto-discovery of test frameworks (Jest/Vitest/Pytest/Mocha)
- Auto-discovery of directory structure (e2e/, integration/, unit/)
- Auto-discovery of naming conventions
- Auto-discovery of test runner commands
---
<!-- QUESTION_START: 3 -->
### Question 3: How are tests organized in this project?
**Expected Answer:** Directory structure (tests/e2e/, tests/integration/, tests/unit/), naming conventions (*.test.js or *.spec.js or test_*.py), Story-Level Test Task Pattern
**Target Section:** ## Test Organization
**Validation Heuristics:**
- ✅ Describes directory structure with 3 levels (e2e, integration, unit)
- ✅ Mentions naming conventions (*.test.*, *.spec.*, test_*.*)
- ✅ References Story-Level Test Task Pattern
- ✅ Has test framework mention (Jest, Vitest, Pytest, Mocha, etc.)
- ✅ Length > 80 words
**Auto-Discovery:**
1. **Scan tests/ directory:**
- Use Glob tool: `pattern: "tests/e2e/**/*.{js,ts,py,go}"`
- Use Glob tool: `pattern: "tests/integration/**/*.{js,ts,py,go}"`
- Use Glob tool: `pattern: "tests/unit/**/*.{js,ts,py,go}"`
- Count files in each directory
- Example output: "✓ Test structure: 12 E2E, 45 Integration, 78 Unit tests"
2. **Detect test framework:**
- Check package.json → "devDependencies" or "dependencies":
- Node.js: jest, vitest, mocha, ava, tap, jasmine
- Extract version
- Check requirements.txt (if exists):
- Python: pytest, nose2, unittest2
- Extract version
- Check go.mod (if exists):
- Go: testing (built-in)
- Example output: "✓ Test framework detected: jest@29.7.0"
3. **Extract naming conventions:**
- For each test file found:
- Extract filename pattern:
- *.test.js → "*.test.js" convention
- *.spec.ts → "*.spec.ts" convention
- test_*.py → "test_*.py" convention
- *_test.go → "*_test.go" convention
- Use most common pattern (majority rule)
- Example output: "✓ Naming convention: *.test.js (detected from 135 files)"
4. **If tests/ directory doesn't exist:**
- Create placeholder structure:
```
tests/
e2e/ (empty, ready for E2E tests)
integration/ (empty, ready for Integration tests)
unit/ (empty, ready for Unit tests)
```
- Log: "⚠️ Test directory structure created (will be populated during Story test task execution)"
**MCP Ref Hints:**
- Research: "[detected_framework] best practices" (e.g., "jest best practices 2024")
- Research: "[detected_framework] naming conventions" (if need to explain patterns)
<!-- QUESTION_END: 3 -->
---
<!-- QUESTION_START: 4 -->
### Question 4: How do I run tests locally?
**Expected Answer:** Test runner command (npm test, pytest, go test), run specific test files, run with coverage
**Target Section:** ## Running Tests
**Validation Heuristics:**
- ✅ Has test runner command (npm test, yarn test, pytest, go test, etc.)
- ✅ Mentions coverage (--coverage, coverage report, etc.)
- ✅ Shows how to run specific tests
- ✅ Has examples with actual commands
- ✅ Length > 50 words
**Auto-Discovery:**
1. **Extract test command from package.json:**
- Read package.json → "scripts" → "test"
- Extract command:
- "jest" → "npm test" (runs Jest)
- "vitest" → "npm test" (runs Vitest)
- "mocha" → "npm test" (runs Mocha)
- Custom script → use as-is
- Example output: "Test runner: npm test (runs jest)"
2. **Extract coverage command (if exists):**
- Check package.json → "scripts":
- "test:coverage": "jest --coverage"
- "coverage": "vitest run --coverage"
- Example output: "Coverage: npm run test:coverage"
3. **For Python projects:**
- Check for pytest.ini or pyproject.toml
- Default: "pytest" or "python -m pytest"
- Coverage: "pytest --cov=src"
4. **For Go projects:**
- Default: "go test ./..."
- Coverage: "go test -cover ./..."
5. **If no test script found:**
- Default based on detected framework:
- Jest → "npm test"
- Vitest → "npm test"
- Pytest → "pytest"
- Go → "go test ./..."
- Log: "⚠️ No test script found in package.json. Using default '[command]'."
**MCP Ref Hints:**
- N/A (framework-specific, use detected framework docs)
<!-- QUESTION_END: 4 -->
---
**Overall File Validation:**
- ✅ Has SCOPE tags in first 5 lines
- ✅ Has link to testing-strategy.md in Quick Navigation section
- ✅ Has Maintenance section at end
- ✅ Total length > 100 words
<!-- DOCUMENT_END: tests/README.md -->
---
**Version:** 1.0.0
**Last Updated:** 2025-11-18


@@ -0,0 +1,465 @@
# Testing Strategy
Universal testing philosophy and strategy for modern software projects: principles, organization, and best practices.
> **SCOPE:** Testing philosophy, risk-based strategy, test organization, isolation patterns, what to test. **NOT IN SCOPE:** Project structure, framework-specific patterns, CI/CD configuration, test tooling setup.
## Quick Navigation
- **Tests Organization:** [tests/README.md](../../../tests/README.md) - Directory structure, Story-Level Pattern, running tests
- **Test Inventory:** [tests/unit/REGISTRY.md](../../../tests/unit/REGISTRY.md), [tests/integration/REGISTRY.md](../../../tests/integration/REGISTRY.md), [tests/e2e/REGISTRY.md](../../../tests/e2e/REGISTRY.md)
---
## Core Philosophy
### Test YOUR Code, Not Frameworks
**Focus testing effort on YOUR business logic and integration usage.** Do not retest database constraints, ORM internals, framework validation, or third-party library mechanics.
**Rule of thumb:** If deleting your code wouldn't fail the test, you're testing someone else's code.
### Examples
| Verdict | Test Description | Rationale |
|---------|-----------------|-----------|
| ✅ **GOOD** | Custom validation logic raises exception for invalid input | Tests YOUR validation rules |
| ✅ **GOOD** | Repository query returns filtered results based on business criteria | Tests YOUR query construction |
| ✅ **GOOD** | API endpoint returns correct HTTP status for error scenarios | Tests YOUR error handling |
| ❌ **BAD** | Database enforces UNIQUE constraint on email column | Tests database, not your code |
| ❌ **BAD** | ORM model has correct column types and lengths | Tests ORM configuration, not logic |
| ❌ **BAD** | Framework validates request body matches schema | Tests framework validation |
---
## Risk-Based Testing Strategy
### Priority Matrix
**Automate only high-value scenarios** using Business Impact (1-5) × Probability (1-5).
| Priority Score | Action | Example Scenarios |
|----------------|--------|-------------------|
| **≥15** | MUST test | Payment processing, authentication, data loss scenarios |
| **10-14** | Consider testing | Edge cases with moderate impact |
| **<10** | Skip automated tests | Low-probability edge cases, framework behavior |
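For example, a payment double-charge defect with Business Impact 5 and Probability 3 scores 5 × 3 = 15, crossing the MUST-test threshold, while a cosmetic layout glitch at 2 × 3 = 6 is left to manual checks.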
### Test Caps (per Story)
**Enforce caps to prevent test bloat:**
- **E2E:** 2-5 tests
- **Integration:** 3-8 tests
- **Unit:** 5-15 tests
- **Total:** 10-28 tests per Story
**Key principles:**
- **No minimum limits** - Can be 0 tests if no Priority ≥15 scenarios exist
- **No test pyramids** - Test distribution based on risk, not arbitrary ratios
- **Every test must add value** - Each test should validate unique Priority ≥15 scenario
**Exception:** ML/GPU/Hardware-dependent workloads may favor more E2E (5-10), fewer Integration (2-5), minimal Unit (1-3) because behavior is hardware-dependent and mocks lack fidelity. Same 10-28 total cap applies.
---
## Story-Level Testing Pattern
### When to Write Tests
**Consolidate ALL tests in the Story's final test task** AFTER implementation + manual verification.
| Task Type | Contains Tests? | Rationale |
|-----------|----------------|-----------|
| **Implementation Tasks** | ❌ NO tests | Focus on implementation only |
| **Final Test Task** | ✅ ALL tests | Complete Story coverage after manual verification |
### Benefits
1. **Complete context** - Tests written when all code implemented
2. **No duplication** - E2E covers integration paths, no need to retest same code
3. **Better prioritization** - Manual testing identifies Priority ≥15 scenarios before automation
4. **Atomic delivery** - Story delivers working code + comprehensive tests together
### Anti-Pattern Example
| ❌ Wrong Approach | ✅ Correct Approach |
|-------------------|---------------------|
| Task 1: Implement feature X + write unit tests<br>Task 2: Update integration + write integration tests<br>Task 3: Add logging + write E2E tests | Task 1: Implement feature X<br>Task 2: Update integration points<br>Task 3: Add logging<br>**Task 4 (Final): Write ALL tests (2 E2E, 3 Integration, 8 Unit)** |
| **Result:** Tests scattered, duplication, incomplete coverage | **Result:** Tests consolidated, no duplication, complete coverage |
---
## Test Organization
### Directory Structure
```
tests/
├── e2e/ # End-to-end tests (full system, real services)
│ ├── test_user_journey.ext
│ └── REGISTRY.md # E2E test inventory
├── integration/ # Integration tests (multiple components, real dependencies)
│ ├── api/
│ ├── services/
│ ├── db/
│ └── REGISTRY.md # Integration test inventory
├── unit/ # Unit tests (single component, mocked dependencies)
│ ├── api/
│ ├── services/
│ ├── db/
│ └── REGISTRY.md # Unit test inventory
└── README.md # Test documentation
```
### Test Inventory (REGISTRY.md)
**Each test category has a REGISTRY.md** with detailed test descriptions:
**Purpose:**
- Document what each test validates
- Track test counts per Epic/Story
- Provide navigation for test maintenance
**Format example:**
```markdown
# E2E Test Registry
## Quality Estimation (Epic 6 - API-69)
**File:** tests/e2e/test_quality_estimation.ext
**Tests (4):**
1. **evaluate_endpoint_batch_splitting** - MetricX batch splitting (segments >128 split into batches)
2. **evaluate_endpoint_gpu_integration** - MetricX-24 GPU service integration
3. **evaluate_endpoint_error_handling** - Service timeout handling (503 status)
4. **evaluate_endpoint_response_format** - Response schema validation
**Total:** 4 E2E tests | **Coverage:** 100% Priority ≥15 scenarios
```
---
## Test Levels
### E2E (End-to-End) Tests
**Definition:** Full system tests with real external services and complete data flow.
**Characteristics:**
- Real external APIs/services
- Real database
- Full request-response cycle
- Validates complete user journeys
**When to write:**
- Critical user workflows (authentication, payments, core features)
- Integration with external services
- Priority ≥15 scenarios that span multiple systems
**Example:** User registration flow (E2E) vs individual validation function (Unit)
### Integration Tests
**Definition:** Tests multiple components together with real dependencies (database, cache, file system).
**Characteristics:**
- Real database/cache/file system
- Multiple components interact
- May mock external APIs
- Validates component integration
**When to write:**
- Database query behavior
- Service orchestration
- Component interaction
- API endpoint behavior (without external services)
**Example:** Repository query with real database vs service logic with mocked repository
### Unit Tests
**Definition:** Tests single component in isolation with mocked dependencies.
**Characteristics:**
- Fast execution (<1ms per test)
- No external dependencies
- Mocked collaborators
- Validates single responsibility
**When to write:**
- Business logic validation
- Complex calculations
- Error handling logic
- Custom transformations
**Example:** Validation function with mocked data vs endpoint with real database
---
## Isolation Patterns
### Pattern Comparison
| Pattern | Speed | Complexity | Best For |
|---------|-------|------------|----------|
| **Data Deletion** | ⚡⚡⚡ Fastest | Simple | Default choice (90% of projects) |
| **Transaction Rollback** | ⚡⚡ Fast | Moderate | Transaction semantics testing |
| **Database Recreation** | ⚡ Slow | Simple | Maximum isolation paranoia |
### Data Deletion (Default)
**How it works:**
1. Create schema once at test session start
2. Delete data after each test
3. Drop schema at test session end
**Benefits:**
- Fast (5-8s for 50 tests)
- Simple implementation
- Full isolation between tests
**When to use:** Default choice for most projects
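A framework-neutral sketch of this lifecycle, assuming an illustrative `db` handle with `create_schema`, `tables`, `execute`, and `drop_schema` (all names are assumptions, not tied to any test framework):
```python
# Illustrative only: hook names and the db interface are assumptions.
def session_setup(db):
    db.create_schema()  # 1. create schema once at test session start

def after_each_test(db):
    # 2. delete data after each test, children before parents (FK order)
    for table in reversed(db.tables):
        db.execute(f"DELETE FROM {table}")

def session_teardown(db):
    db.drop_schema()  # 3. drop schema at test session end
```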
### Transaction Rollback
**How it works:**
1. Start transaction before each test
2. Run test code
3. Rollback transaction after test
**Benefits:**
- Good for testing transaction semantics
- Faster than DB recreation
**When to use:** Testing transaction behavior, savepoints, isolation levels
### Database Recreation
**How it works:**
1. Drop and recreate database before each test
2. Apply migrations
3. Run test
**Benefits:**
- Maximum isolation
- Catches migration issues
**When to use:** Paranoia about shared state, testing migrations
---
## What To Test vs NOT Test
### ✅ Test (GOOD)
**Test YOUR code and integration usage:**
| Category | Examples |
|----------|----------|
| **Business logic** | Validation rules, orchestration, error handling, computed properties |
| **Query construction** | Filters, joins, aggregations, pagination |
| **API behavior** | Request validation, response shape, HTTP status codes |
| **Custom validators** | Complex validation logic, transformations |
| **Integration smoke** | Database connectivity, basic CRUD, configuration |
### ❌ Avoid (BAD)
**Don't test framework internals and third-party libraries:**
| Category | Examples |
|----------|----------|
| **Database constraints** | UNIQUE, FOREIGN KEY, NOT NULL, CHECK constraints |
| **ORM internals** | Column types, table creation, metadata, relationships |
| **Framework validation** | Request body validation, dependency injection, routing |
| **Third-party libraries** | HTTP client behavior, serialization libraries, cryptography |
---
## Testing Patterns
### Arrange-Act-Assert
**Structure tests clearly:**
```
def test_example():
    # ARRANGE: set up test data and dependencies
    setup_data()
    mock_dependencies()
    # ACT: execute the code under test
    result = execute_operation()
    # ASSERT: verify outcomes
    assert result == expected
    verify_side_effects()
```
**Benefits:**
- Clear test structure
- Easy to read and maintain
- Explicit test phases
### Mock at the Seam
**Mock at component boundaries, not internals:**
| Test Type | What to Mock | What to Use Real |
|-----------|--------------|------------------|
| **Unit tests** | External dependencies (repositories, APIs, file system) | Business logic |
| **Integration tests** | External APIs, slow services | Database, cache, your code |
| **E2E tests** | Nothing (or minimal external services) | Everything |
**Anti-pattern:** Over-mocking your own code defeats the purpose of integration tests.
### Test Data Builders
**Create readable test data:**
```
# Builder pattern for test data
user = build_user(
email="test@example.com",
role="admin",
active=True
)
# Easy to create edge cases
inactive_user = build_user(active=False)
guest_user = build_user(role="guest")
```
**Benefits:**
- Readable test setup
- Easy edge case creation
- Reusable across tests
---
## Common Issues
### Flaky Tests
**Symptom:** Tests pass/fail randomly without code changes
**Common causes:**
- Shared state between tests (global variables, cached data)
- Time-dependent logic (timestamps, delays)
- External service instability
- Improper cleanup between tests
**Solutions:**
- Isolate test data (per-test creation, cleanup)
- Mock time-dependent code
- Use test-specific configurations
- Implement proper teardown
### Slow Tests
**Symptom:** Test suite takes too long (>30s for 50 tests)
**Common causes:**
- Database recreation per test
- Running migrations per test
- No connection pooling
- Too many E2E tests
**Solutions:**
- Use Data Deletion pattern
- Run migrations once per session
- Optimize test data creation
- Balance test levels (more Unit, fewer E2E)
### Test Coupling
**Symptom:** Changing one component breaks many unrelated tests
**Common causes:**
- Tests depend on implementation details
- Shared test fixtures across unrelated tests
- Testing framework internals instead of behavior
**Solutions:**
- Test behavior, not implementation
- Use independent test data per test
- Focus on public APIs, not internal state
---
## Coverage Guidelines
### Targets
| Layer | Target | Priority |
|-------|--------|----------|
| **Critical business logic** | 100% branch coverage | HIGH |
| **Repositories/Data access** | 90%+ line coverage | HIGH |
| **API endpoints** | 80%+ line coverage | MEDIUM |
| **Utilities/Helpers** | 80%+ line coverage | MEDIUM |
| **Overall** | 80%+ line coverage | MEDIUM |
### What Coverage Means
**Coverage is a tool, not a goal:**
- ✅ High coverage + focused tests = good quality signal
- ❌ High coverage + meaningless tests = false confidence
- ❌ Low coverage = blind spots in testing
**Focus on:**
- Critical paths covered
- Edge cases tested
- Error handling validated
**Not on:**
- Arbitrary percentage targets
- Testing getters/setters
- Framework code
---
## Verification Checklist
### Strategy
- [ ] Risk-based selection (Priority ≥15)
- [ ] Test caps enforced (E2E 2-5, Integration 3-8, Unit 5-15)
- [ ] Total 10-28 tests per Story
- [ ] Tests target YOUR code, not framework internals
- [ ] E2E smoke tests for critical integrations
### Organization
- [ ] Story-Level Test Task Pattern followed
- [ ] Tests consolidated in final Story task
- [ ] REGISTRY.md files maintained for all test categories
- [ ] Test directory structure follows conventions
### Isolation
- [ ] Isolation pattern chosen (Data Deletion recommended)
- [ ] Each test creates own data
- [ ] Proper cleanup between tests
- [ ] No shared state between tests
### Quality
- [ ] Tests are order-independent
- [ ] Tests run fast (<10s for 50 integration tests)
- [ ] No flaky tests
- [ ] Coverage ≥80% overall, 100% for critical logic
- [ ] Meaningful test names and descriptions
---
## Maintenance
**Update Triggers:**
- New testing patterns discovered
- Framework version changes affecting tests
- Significant changes to test architecture
- New isolation issues identified
**Verification:** Review this strategy when starting new projects or experiencing test quality issues.
**Last Updated:** [CURRENT_DATE] - Initial universal testing strategy


@@ -0,0 +1,114 @@
# Test Documentation
**Last Updated:** {{DATE}}
<!-- SCOPE: Test organization structure and Story-Level Test Task Pattern ONLY. Contains test directories organization, test execution commands, quick navigation. -->
<!-- DO NOT add here: Test code → Test files, Story implementation → docs/tasks/kanban_board.md, Test strategy → Story test task descriptions -->
---
## Overview
This directory contains all tests for the project, following the **Story-Level Test Task Pattern** where tests are consolidated in the final Story test task (NOT scattered across implementation tasks).
**Test organization:**
- **E2E tests** (End-to-End) - 2-5 per Story - Priority ≥15 scenarios MUST be tested
- **Integration tests** - 3-8 per Story - Multi-component interactions
- **Unit tests** - 5-15 per Story - Individual component logic
- **Total**: 10-28 tests per Story (Value-Based Testing)
---
## Testing Philosophy
**Test YOUR code, not frameworks.** Focus on business logic and integration usage. Avoid testing database constraints, ORM internals, or framework validation.
**Risk-based testing:** Automate only Priority ≥15 scenarios (Business Impact × Probability). Test caps prevent bloat: 2-5 E2E, 3-8 Integration, 5-15 Unit (10-28 total per Story). No minimum limits - can be 0 if no high-priority scenarios exist.
**Rule of thumb:** If deleting your code wouldn't fail the test, you're testing someone else's code.
👉 **Full strategy:** See [docs/reference/guides/testing-strategy.md](../docs/reference/guides/testing-strategy.md)
---
## Test Structure
```
tests/
├── e2e/ # End-to-End tests (2-5 per Story)
│ ├── auth/
│ ├── user-flows/
│ └── critical-paths/
├── integration/ # Integration tests (3-8 per Story)
│ ├── api/
│ ├── database/
│ └── services/
└── unit/ # Unit tests (5-15 per Story)
├── components/
├── utils/
└── services/
```
---
## Story-Level Test Task Pattern
**Rule**: All tests (E2E/Integration/Unit) are written in the **final Story test task** (created by ln-350-story-test-planner after manual testing).
**Why**:
- **Single source of truth**: All Story tests in one place
- **Atomic completion**: Story Done when all tests pass
- **No scattered tests**: NOT in implementation tasks
- **Regression prevention**: Test suite runs before Story marked Done
**Workflow**:
1. Implementation tasks completed → Manual testing → Bugs fixed
2. ln-350-story-test-planner creates Story Finalizer test task
3. ln-334-test-executor implements all tests (E2E/Integration/Unit) in final task
4. All tests pass → Story marked Done
---
## Test Execution
**Run all tests:**
```bash
npm test
```
**Run specific test suites:**
```bash
npm run test:unit # Unit tests only
npm run test:integration # Integration tests only
npm run test:e2e # E2E tests only
```
**Watch mode (development):**
```bash
npm run test:watch
```
---
## Quick Navigation
- **Testing Strategy**: [docs/reference/guides/testing-strategy.md](../docs/reference/guides/testing-strategy.md) - Philosophy, risk-based strategy, what to test
- **Story test tasks**: [docs/tasks/kanban_board.md](../docs/tasks/kanban_board.md) - Story test task tracking
- **Story-Level Pattern**: [docs/tasks/README.md](../docs/tasks/README.md) - Full pattern explanation
- **Test guidelines**: [docs/reference/guides/](../docs/reference/guides/) - Additional testing best practices
---
## Maintenance
**Update Triggers**:
- When adding new test directories or test suites
- When changing test execution commands
- When modifying Story-Level Test Task Pattern workflow
**Verification**:
- All test directories exist (e2e/, integration/, unit/)
- Test execution commands work correctly
- SCOPE tags correctly define test documentation boundaries
**Last Updated**: {{DATE}}