Initial commit

Zhongwei Li
2025-11-30 08:37:27 +08:00
commit 37774aa937
131 changed files with 31137 additions and 0 deletions


@@ -0,0 +1,235 @@
# Test Documentation Questions
**Purpose:** Define what each section of test documentation should answer.
**Format:** Question → Expected Content → Validation Heuristics → Auto-Discovery Hints → MCP Ref Hints
---
## Table of Contents
| Document | Questions | Auto-Discovery | Priority | Line |
|----------|-----------|----------------|----------|------|
| [docs/reference/guides/testing-strategy.md](#docsreferenceguidestesting-strategymd) | 2 | None | High | L25 |
| [tests/README.md](#testsreadmemd) | 2 | High | High | L101 |
**Priority Legend:**
- **Critical:** Must answer all questions
- **High:** Strongly recommended
- **Medium:** Optional (can use template defaults)
**Auto-Discovery Legend:**
- **None:** No auto-discovery needed (use template as-is)
- **Low:** 1-2 questions need auto-discovery
- **High:** All questions need auto-discovery
---
<!-- DOCUMENT_START: docs/reference/guides/testing-strategy.md -->
## docs/reference/guides/testing-strategy.md
**File:** docs/reference/guides/testing-strategy.md (universal testing philosophy)
**Target Sections:** Testing Philosophy, Test Levels
**Rules for this document:**
- Universal testing philosophy (NOT framework-specific)
- Risk-Based Testing principle
- Story-Level Test Task Pattern
- No auto-discovery needed (standard best practices)
---
<!-- QUESTION_START: 1 -->
### Question 1: What is the overall testing approach?
**Expected Answer:** Risk-Based Testing principle, test levels (E2E/Integration/Unit), priority-driven approach (Priority ≥15)
**Target Section:** ## Testing Philosophy
**Validation Heuristics:**
- ✅ Mentions "Risk-Based Testing" or "risk-based"
- ✅ Describes the three test levels (E2E, Integration, Unit)
- ✅ Mentions priority threshold (≥15 or "Priority 15")
- ✅ References "Story-Level Test Task Pattern"
- ✅ Length > 100 words
**Auto-Discovery:**
- N/A (standard philosophy from template)
**MCP Ref Hints:**
- Research: "risk-based testing best practices" (if template needs enhancement)
- Research: "test pyramid Martin Fowler" (if need to explain pyramid rationale)
<!-- QUESTION_END: 1 -->
---
<!-- QUESTION_START: 2 -->
### Question 2: What are the test level targets?
**Expected Answer:** E2E tests 2-5 per Story, Integration tests 3-8 per Story, Unit tests 5-15 per Story, rationale for each level
**Target Section:** ## Test Levels
**Validation Heuristics:**
- ✅ Lists 3 levels: E2E, Integration, Unit
- ✅ Has numeric ranges: "2-5" for E2E, "3-8" for Integration, "5-15" for Unit
- ✅ Explains purpose/rationale for each level
- ✅ Mentions the total of "10-28 tests per Story"
- ✅ Length > 150 words
**Auto-Discovery:**
- N/A (standard targets from template)
**MCP Ref Hints:**
- Research: "testing pyramid ratios" (if need to justify ranges)
- Research: "value-based testing approach" (if need to explain priority-driven testing)
<!-- QUESTION_END: 2 -->
---
**Overall File Validation:**
- ✅ Has SCOPE tags in first 5 lines
- ✅ NO framework-specific code examples (FastAPI, pytest, Jest, etc.)
- ✅ Has Maintenance section at end
- ✅ Total length > 400 words
<!-- DOCUMENT_END: docs/reference/guides/testing-strategy.md -->
---
<!-- DOCUMENT_START: tests/README.md -->
## tests/README.md
**File:** tests/README.md (test organization structure)
**Target Sections:** Test Organization, Running Tests
**Rules for this document:**
- Project-specific test organization
- Auto-discovery of test frameworks (Jest/Vitest/Pytest/Mocha)
- Auto-discovery of directory structure (e2e/, integration/, unit/)
- Auto-discovery of naming conventions
- Auto-discovery of test runner commands
---
<!-- QUESTION_START: 3 -->
### Question 3: How are tests organized in this project?
**Expected Answer:** Directory structure (tests/e2e/, tests/integration/, tests/unit/), naming conventions (*.test.js or *.spec.js or test_*.py), Story-Level Test Task Pattern
**Target Section:** ## Test Organization
**Validation Heuristics:**
- ✅ Describes directory structure with 3 levels (e2e, integration, unit)
- ✅ Mentions naming conventions (*.test.*, *.spec.*, test_*.*)
- ✅ References Story-Level Test Task Pattern
- ✅ Has test framework mention (Jest, Vitest, Pytest, Mocha, etc.)
- ✅ Length > 80 words
**Auto-Discovery:**
1. **Scan tests/ directory:**
- Use Glob tool: `pattern: "tests/e2e/**/*.{js,ts,py,go}"`
- Use Glob tool: `pattern: "tests/integration/**/*.{js,ts,py,go}"`
- Use Glob tool: `pattern: "tests/unit/**/*.{js,ts,py,go}"`
- Count files in each directory
- Example output: "✓ Test structure: 12 E2E, 45 Integration, 78 Unit tests"
2. **Detect test framework:**
- Check package.json → "devDependencies" or "dependencies":
- Node.js: jest, vitest, mocha, ava, tap, jasmine
- Extract version
- Check requirements.txt (if exists):
- Python: pytest, nose2, unittest2
- Extract version
- Check go.mod (if exists):
- Go: testing (built-in)
- Example output: "✓ Test framework detected: jest@29.7.0"
3. **Extract naming conventions:**
- For each test file found:
- Extract filename pattern:
- *.test.js → "*.test.js" convention
- *.spec.ts → "*.spec.ts" convention
- test_*.py → "test_*.py" convention
- *_test.go → "*_test.go" convention
- Use the most common pattern (majority rule; see the sketch after this list)
- Example output: "✓ Naming convention: *.test.js (detected from 135 files)"
4. **If tests/ directory doesn't exist:**
- Create placeholder structure:
```
tests/
  e2e/           (empty, ready for E2E tests)
  integration/   (empty, ready for Integration tests)
  unit/          (empty, ready for Unit tests)
```
- Log: "⚠️ Test directory structure created (will be populated during Story test task execution)"
**MCP Ref Hints:**
- Research: "[detected_framework] best practices" (e.g., "jest best practices 2024")
- Research: "[detected_framework] naming conventions" (if need to explain patterns)
<!-- QUESTION_END: 3 -->
---
<!-- QUESTION_START: 4 -->
### Question 4: How do I run tests locally?
**Expected Answer:** Test runner command (npm test, pytest, go test), running specific test files, running with coverage
**Target Section:** ## Running Tests
**Validation Heuristics:**
- ✅ Has test runner command (npm test, yarn test, pytest, go test, etc.)
- ✅ Mentions coverage (--coverage, coverage report, etc.)
- ✅ Shows how to run specific tests
- ✅ Has examples with actual commands
- ✅ Length > 50 words
**Auto-Discovery:**
1. **Extract test command from package.json:**
- Read package.json → "scripts" → "test"
- Extract command:
- "jest" → "npm test" (runs Jest)
- "vitest" → "npm test" (runs Vitest)
- "mocha" → "npm test" (runs Mocha)
- Custom script → use as-is (see the sketch after this list)
- Example output: "Test runner: npm test (runs jest)"
2. **Extract coverage command (if exists):**
- Check package.json → "scripts":
- "test:coverage": "jest --coverage"
- "coverage": "vitest run --coverage"
- Example output: "Coverage: npm run test:coverage"
3. **For Python projects:**
- Check for pytest.ini or pyproject.toml
- Default: "pytest" or "python -m pytest"
- Coverage: "pytest --cov=src"
4. **For Go projects:**
- Default: "go test ./..."
- Coverage: "go test -cover ./..."
5. **If no test script found:**
- Default based on detected framework:
- Jest → "npm test"
- Vitest → "npm test"
- Pytest → "pytest"
- Go → "go test ./..."
- Log: "⚠️ No test script found in package.json. Using default '[command]'."
**MCP Ref Hints:**
- N/A (framework-specific, use detected framework docs)
<!-- QUESTION_END: 4 -->
---
**Overall File Validation:**
- ✅ Has SCOPE tags in first 5 lines
- ✅ Has link to testing-strategy.md in Quick Navigation section
- ✅ Has Maintenance section at end
- ✅ Total length > 100 words
<!-- DOCUMENT_END: tests/README.md -->
---
**Version:** 1.0.0
**Last Updated:** 2025-11-18


@@ -0,0 +1,465 @@
# Testing Strategy
Universal testing philosophy and strategy for modern software projects: principles, organization, and best practices.
> **SCOPE:** Testing philosophy, risk-based strategy, test organization, isolation patterns, what to test. **NOT IN SCOPE:** Project structure, framework-specific patterns, CI/CD configuration, test tooling setup.
## Quick Navigation
- **Tests Organization:** [tests/README.md](../../tests/README.md) - Directory structure, Story-Level Pattern, running tests
- **Test Inventory:** [tests/unit/REGISTRY.md](../../tests/unit/REGISTRY.md), [tests/integration/REGISTRY.md](../../tests/integration/REGISTRY.md), [tests/e2e/REGISTRY.md](../../tests/e2e/REGISTRY.md)
---
## Core Philosophy
### Test YOUR Code, Not Frameworks
**Focus testing effort on YOUR business logic and integration usage.** Do not retest database constraints, ORM internals, framework validation, or third-party library mechanics.
**Rule of thumb:** If deleting your code wouldn't fail the test, you're testing someone else's code.
### Examples
| Verdict | Test Description | Rationale |
|---------|-----------------|-----------|
| ✅ **GOOD** | Custom validation logic raises exception for invalid input | Tests YOUR validation rules |
| ✅ **GOOD** | Repository query returns filtered results based on business criteria | Tests YOUR query construction |
| ✅ **GOOD** | API endpoint returns correct HTTP status for error scenarios | Tests YOUR error handling |
| ❌ **BAD** | Database enforces UNIQUE constraint on email column | Tests database, not your code |
| ❌ **BAD** | ORM model has correct column types and lengths | Tests ORM configuration, not logic |
| ❌ **BAD** | Framework validates request body matches schema | Tests framework validation |
---
## Risk-Based Testing Strategy
### Priority Matrix
**Automate only high-value scenarios.** Priority Score = Business Impact (1-5) × Probability (1-5).
| Priority Score | Action | Example Scenarios |
|----------------|--------|-------------------|
| **≥15** | MUST test | Payment processing, authentication, data loss scenarios |
| **10-14** | Consider testing | Edge cases with moderate impact |
| **<10** | Skip automated tests | Low-probability edge cases, framework behavior |
### Test Caps (per Story)
**Enforce caps to prevent test bloat:**
- **E2E:** 2-5 tests
- **Integration:** 3-8 tests
- **Unit:** 5-15 tests
- **Total:** 10-28 tests per Story
**Key principles:**
- **No minimum limits** - Can be 0 tests if no Priority ≥15 scenarios exist
- **No test pyramids** - Test distribution based on risk, not arbitrary ratios
- **Every test must add value** - Each test should validate unique Priority ≥15 scenario
**Exception:** ML/GPU/Hardware-dependent workloads may favor more E2E (5-10), fewer Integration (2-5), minimal Unit (1-3) because behavior is hardware-dependent and mocks lack fidelity. Same 10-28 total cap applies.
---
## Story-Level Testing Pattern
### When to Write Tests
**Consolidate ALL tests in Story's final test task** AFTER implementation + manual verification.
| Task Type | Contains Tests? | Rationale |
|-----------|----------------|-----------|
| **Implementation Tasks** | ❌ NO tests | Focus on implementation only |
| **Final Test Task** | ✅ ALL tests | Complete Story coverage after manual verification |
### Benefits
1. **Complete context** - Tests written when all code implemented
2. **No duplication** - E2E covers integration paths, no need to retest same code
3. **Better prioritization** - Manual testing identifies Priority ≥15 scenarios before automation
4. **Atomic delivery** - Story delivers working code + comprehensive tests together
### Anti-Pattern Example
| ❌ Wrong Approach | ✅ Correct Approach |
|-------------------|---------------------|
| Task 1: Implement feature X + write unit tests<br>Task 2: Update integration + write integration tests<br>Task 3: Add logging + write E2E tests | Task 1: Implement feature X<br>Task 2: Update integration points<br>Task 3: Add logging<br>**Task 4 (Final): Write ALL tests (2 E2E, 3 Integration, 8 Unit)** |
| **Result:** Tests scattered, duplication, incomplete coverage | **Result:** Tests consolidated, no duplication, complete coverage |
---
## Test Organization
### Directory Structure
```
tests/
├── e2e/                       # End-to-end tests (full system, real services)
│   ├── test_user_journey.ext
│   └── REGISTRY.md            # E2E test inventory
├── integration/               # Integration tests (multiple components, real dependencies)
│   ├── api/
│   ├── services/
│   ├── db/
│   └── REGISTRY.md            # Integration test inventory
├── unit/                      # Unit tests (single component, mocked dependencies)
│   ├── api/
│   ├── services/
│   ├── db/
│   └── REGISTRY.md            # Unit test inventory
└── README.md                  # Test documentation
```
### Test Inventory (REGISTRY.md)
**Each test category has REGISTRY.md** with detailed test descriptions:
**Purpose:**
- Document what each test validates
- Track test counts per Epic/Story
- Provide navigation for test maintenance
**Format example:**
```markdown
# E2E Test Registry
## Quality Estimation (Epic 6 - API-69)
**File:** tests/e2e/test_quality_estimation.ext
**Tests (4):**
1. **evaluate_endpoint_batch_splitting** - MetricX batch splitting (segments >128 split into batches)
2. **evaluate_endpoint_gpu_integration** - MetricX-24 GPU service integration
3. **evaluate_endpoint_error_handling** - Service timeout handling (503 status)
4. **evaluate_endpoint_response_format** - Response schema validation
**Total:** 4 E2E tests | **Coverage:** 100% Priority ≥15 scenarios
```
---
## Test Levels
### E2E (End-to-End) Tests
**Definition:** Full system tests with real external services and complete data flow.
**Characteristics:**
- Real external APIs/services
- Real database
- Full request-response cycle
- Validates complete user journeys
**When to write:**
- Critical user workflows (authentication, payments, core features)
- Integration with external services
- Priority ≥15 scenarios that span multiple systems
**Example:** User registration flow (E2E) vs individual validation function (Unit)
### Integration Tests
**Definition:** Tests multiple components together with real dependencies (database, cache, file system).
**Characteristics:**
- Real database/cache/file system
- Multiple components interact
- May mock external APIs
- Validates component integration
**When to write:**
- Database query behavior
- Service orchestration
- Component interaction
- API endpoint behavior (without external services)
**Example:** Repository query with real database vs service logic with mocked repository
### Unit Tests
**Definition:** Tests single component in isolation with mocked dependencies.
**Characteristics:**
- Fast execution (<1ms per test)
- No external dependencies
- Mocked collaborators
- Validates single responsibility
**When to write:**
- Business logic validation
- Complex calculations
- Error handling logic
- Custom transformations
**Example:** Validation function with mocked data vs endpoint with real database
---
## Isolation Patterns
### Pattern Comparison
| Pattern | Speed | Complexity | Best For |
|---------|-------|------------|----------|
| **Data Deletion** | ⚡⚡⚡ Fastest | Simple | Default choice (90% of projects) |
| **Transaction Rollback** | ⚡⚡ Fast | Moderate | Transaction semantics testing |
| **Database Recreation** | ⚡ Slow | Simple | Maximum isolation paranoia |
### Data Deletion (Default)
**How it works:**
1. Create schema once at test session start
2. Delete data after each test
3. Drop schema at test session end
**Benefits:**
- Fast (5-8s for 50 tests)
- Simple implementation
- Full isolation between tests
**When to use:** Default choice for most projects
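A framework-free sketch of the pattern, using Python's stdlib `sqlite3` as a stand-in database (table name and helpers are illustrative):
```python
import sqlite3

# Session scope: create the schema once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.commit()

def run_isolated(test_fn):
    """Run one test, then delete all rows so the next test starts clean."""
    try:
        test_fn(conn)
    finally:
        conn.execute("DELETE FROM users")  # one DELETE per table
        conn.commit()

def test_insert(db):
    db.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

run_isolated(test_insert)
run_isolated(test_insert)  # passes again: prior test's rows were deleted
```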
### Transaction Rollback
**How it works:**
1. Start transaction before each test
2. Run test code
3. Rollback transaction after test
**Benefits:**
- Good for testing transaction semantics
- Faster than DB recreation
**When to use:** Testing transaction behavior, savepoints, isolation levels
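The same toy setup illustrates rollback-based isolation. With stdlib `sqlite3`, the first write implicitly opens a transaction, so a rollback in teardown discards everything the test changed (a sketch, not a prescribed fixture):
```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.commit()

def run_in_transaction(test_fn):
    """Run one test inside a transaction, then roll it back."""
    try:
        test_fn(conn)
    finally:
        conn.rollback()  # discard every write the test made

def test_insert(db):
    db.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

run_in_transaction(test_insert)
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0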
### Database Recreation
**How it works:**
1. Drop and recreate database before each test
2. Apply migrations
3. Run test
**Benefits:**
- Maximum isolation
- Catches migration issues
**When to use:** Paranoia about shared state, testing migrations
---
## What To Test vs NOT Test
### ✅ Test (GOOD)
**Test YOUR code and integration usage:**
| Category | Examples |
|----------|----------|
| **Business logic** | Validation rules, orchestration, error handling, computed properties |
| **Query construction** | Filters, joins, aggregations, pagination |
| **API behavior** | Request validation, response shape, HTTP status codes |
| **Custom validators** | Complex validation logic, transformations |
| **Integration smoke** | Database connectivity, basic CRUD, configuration |
### ❌ Avoid (BAD)
**Don't test framework internals and third-party libraries:**
| Category | Examples |
|----------|----------|
| **Database constraints** | UNIQUE, FOREIGN KEY, NOT NULL, CHECK constraints |
| **ORM internals** | Column types, table creation, metadata, relationships |
| **Framework validation** | Request body validation, dependency injection, routing |
| **Third-party libraries** | HTTP client behavior, serialization libraries, cryptography |
---
## Testing Patterns
### Arrange-Act-Assert
**Structure tests clearly:**
```
test_example:
    # ARRANGE: set up test data and dependencies
    setup_data()
    mock_dependencies()

    # ACT: execute code under test
    result = execute_operation()

    # ASSERT: verify outcomes
    assert result == expected
    verify_side_effects()
```
**Benefits:**
- Clear test structure
- Easy to read and maintain
- Explicit test phases
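The pseudocode maps directly onto a runnable test. A minimal Python rendering, with a hypothetical `apply_discount` as the code under test and no test framework required:
```python
def apply_discount(price: float, percent: float) -> float:
    """Code under test (hypothetical business logic)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # ARRANGE: set up inputs
    price, percent = 200.0, 25.0

    # ACT: execute the code under test
    result = apply_discount(price, percent)

    # ASSERT: verify the outcome
    assert result == 150.0

test_apply_discount()
```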
### Mock at the Seam
**Mock at component boundaries, not internals:**
| Test Type | What to Mock | What to Use Real |
|-----------|--------------|------------------|
| **Unit tests** | External dependencies (repositories, APIs, file system) | Business logic |
| **Integration tests** | External APIs, slow services | Database, cache, your code |
| **E2E tests** | Nothing (or minimal external services) | Everything |
**Anti-pattern:** Over-mocking your own code defeats the purpose of integration tests.
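A sketch of the seam idea: the unit test hands the business logic a hand-rolled fake repository instead of reaching into its internals (all names here are hypothetical):
```python
class FakeUserRepository:
    """Hand-rolled fake standing in at the repository seam."""
    def __init__(self, users: dict):
        self._users = users

    def find_by_email(self, email: str):
        return self._users.get(email)

def register_user(repo, email: str) -> dict:
    """Business logic under test: reject duplicate registrations."""
    if repo.find_by_email(email) is not None:
        raise ValueError("email already registered")
    return {"email": email, "active": True}

def test_register_rejects_duplicates():
    repo = FakeUserRepository({"taken@example.com": {"email": "taken@example.com"}})
    try:
        register_user(repo, "taken@example.com")
        raise AssertionError("expected ValueError")
    except ValueError:
        pass

test_register_rejects_duplicates()
```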
### Test Data Builders
**Create readable test data:**
```
# Builder pattern for test data
user = build_user(
    email="test@example.com",
    role="admin",
    active=True,
)

# Easy to create edge cases
inactive_user = build_user(active=False)
guest_user = build_user(role="guest")
```
**Benefits:**
- Readable test setup
- Easy edge case creation
- Reusable across tests
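A builder can be as small as a function with defaults; a minimal Python sketch matching the pseudocode above (field names are illustrative):
```python
def build_user(**overrides) -> dict:
    """Sensible defaults; override only the fields the test cares about."""
    user = {"email": "test@example.com", "role": "member", "active": True}
    user.update(overrides)
    return user

inactive_user = build_user(active=False)
guest_user = build_user(role="guest")
assert inactive_user["active"] is False
assert guest_user["role"] == "guest"
```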
---
## Common Issues
### Flaky Tests
**Symptom:** Tests pass/fail randomly without code changes
**Common causes:**
- Shared state between tests (global variables, cached data)
- Time-dependent logic (timestamps, delays)
- External service instability
- Improper cleanup between tests
**Solutions:**
- Isolate test data (per-test creation, cleanup)
- Mock time-dependent code
- Use test-specific configurations
- Implement proper teardown
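For the time-dependence case, the usual fix is to inject the clock so tests can freeze it; a minimal Python sketch (helper names are illustrative):
```python
from datetime import datetime, timedelta

def is_expired(token: dict, now_fn=datetime.now) -> bool:
    """Time-dependent logic takes the clock as a parameter (the seam)."""
    return now_fn() > token["expires_at"]

def test_is_expired_is_deterministic():
    fixed_now = datetime(2025, 1, 1, 12, 0, 0)
    token = {"expires_at": fixed_now - timedelta(seconds=1)}
    # Inject a frozen clock instead of reading real time.
    assert is_expired(token, now_fn=lambda: fixed_now)

test_is_expired_is_deterministic()
```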
### Slow Tests
**Symptom:** Test suite takes too long (>30s for 50 tests)
**Common causes:**
- Database recreation per test
- Running migrations per test
- No connection pooling
- Too many E2E tests
**Solutions:**
- Use Data Deletion pattern
- Run migrations once per session
- Optimize test data creation
- Balance test levels (more Unit, fewer E2E)
### Test Coupling
**Symptom:** Changing one component breaks many unrelated tests
**Common causes:**
- Tests depend on implementation details
- Shared test fixtures across unrelated tests
- Testing framework internals instead of behavior
**Solutions:**
- Test behavior, not implementation
- Use independent test data per test
- Focus on public APIs, not internal state
---
## Coverage Guidelines
### Targets
| Layer | Target | Priority |
|-------|--------|----------|
| **Critical business logic** | 100% branch coverage | HIGH |
| **Repositories/Data access** | 90%+ line coverage | HIGH |
| **API endpoints** | 80%+ line coverage | MEDIUM |
| **Utilities/Helpers** | 80%+ line coverage | MEDIUM |
| **Overall** | 80%+ line coverage | MEDIUM |
### What Coverage Means
**Coverage is a tool, not a goal:**
- ✅ High coverage + focused tests = good quality signal
- ❌ High coverage + meaningless tests = false confidence
- ❌ Low coverage = blind spots in testing
**Focus on:**
- Critical paths covered
- Edge cases tested
- Error handling validated
**Not on:**
- Arbitrary percentage targets
- Testing getters/setters
- Framework code
---
## Verification Checklist
### Strategy
- [ ] Risk-based selection (Priority ≥15)
- [ ] Test caps enforced (E2E 2-5, Integration 3-8, Unit 5-15)
- [ ] Total 10-28 tests per Story
- [ ] Tests target YOUR code, not framework internals
- [ ] E2E smoke tests for critical integrations
### Organization
- [ ] Story-Level Test Task Pattern followed
- [ ] Tests consolidated in final Story task
- [ ] REGISTRY.md files maintained for all test categories
- [ ] Test directory structure follows conventions
### Isolation
- [ ] Isolation pattern chosen (Data Deletion recommended)
- [ ] Each test creates own data
- [ ] Proper cleanup between tests
- [ ] No shared state between tests
### Quality
- [ ] Tests are order-independent
- [ ] Tests run fast (<10s for 50 integration tests)
- [ ] No flaky tests
- [ ] Coverage ≥80% overall, 100% for critical logic
- [ ] Meaningful test names and descriptions
---
## Maintenance
**Update Triggers:**
- New testing patterns discovered
- Framework version changes affecting tests
- Significant changes to test architecture
- New isolation issues identified
**Verification:** Review this strategy when starting new projects or experiencing test quality issues.
**Last Updated:** [CURRENT_DATE] - Initial universal testing strategy


@@ -0,0 +1,114 @@
# Test Documentation
**Last Updated:** {{DATE}}
<!-- SCOPE: Test organization structure and Story-Level Test Task Pattern ONLY. Contains test directories organization, test execution commands, quick navigation. -->
<!-- DO NOT add here: Test code → Test files, Story implementation → docs/tasks/kanban_board.md, Test strategy → Story test task descriptions -->
---
## Overview
This directory contains all tests for the project, following the **Story-Level Test Task Pattern** where tests are consolidated in the final Story test task (NOT scattered across implementation tasks).
**Test organization:**
- **E2E tests** (End-to-End) - 2-5 per Story - Priority ≥15 scenarios MUST be tested
- **Integration tests** - 3-8 per Story - Multi-component interactions
- **Unit tests** - 5-15 per Story - Individual component logic
- **Total**: 10-28 tests per Story (Value-Based Testing)
---
## Testing Philosophy
**Test YOUR code, not frameworks.** Focus on business logic and integration usage. Avoid testing database constraints, ORM internals, or framework validation.
**Risk-based testing:** Automate only Priority ≥15 scenarios (Business Impact × Probability). Test caps prevent bloat: 2-5 E2E, 3-8 Integration, 5-15 Unit (10-28 total per Story). No minimum limits - can be 0 if no high-priority scenarios exist.
**Rule of thumb:** If deleting your code wouldn't fail the test, you're testing someone else's code.
👉 **Full strategy:** See [docs/reference/guides/testing-strategy.md](../docs/reference/guides/testing-strategy.md)
---
## Test Structure
```
tests/
├── e2e/                # End-to-End tests (2-5 per Story)
│   ├── auth/
│   ├── user-flows/
│   └── critical-paths/
├── integration/        # Integration tests (3-8 per Story)
│   ├── api/
│   ├── database/
│   └── services/
└── unit/               # Unit tests (5-15 per Story)
    ├── components/
    ├── utils/
    └── services/
```
---
## Story-Level Test Task Pattern
**Rule**: All tests (E2E/Integration/Unit) are written in the **final Story test task** (created by ln-350-story-test-planner after manual testing).
**Why**:
- **Single source of truth**: All Story tests in one place
- **Atomic completion**: Story Done when all tests pass
- **No scattered tests**: NOT in implementation tasks
- **Regression prevention**: Test suite runs before Story marked Done
**Workflow**:
1. Implementation tasks completed → Manual testing → Bugs fixed
2. ln-350-story-test-planner creates Story Finalizer test task
3. ln-334-test-executor implements all tests (E2E/Integration/Unit) in final task
4. All tests pass → Story marked Done
---
## Test Execution
**Run all tests:**
```bash
npm test
```
**Run specific test suites:**
```bash
npm run test:unit # Unit tests only
npm run test:integration # Integration tests only
npm run test:e2e # E2E tests only
```
**Watch mode (development):**
```bash
npm run test:watch
```
---
## Quick Navigation
- **Testing Strategy**: [docs/reference/guides/testing-strategy.md](../docs/reference/guides/testing-strategy.md) - Philosophy, risk-based strategy, what to test
- **Story test tasks**: [docs/tasks/kanban_board.md](../docs/tasks/kanban_board.md) - Story test task tracking
- **Story-Level Pattern**: [docs/tasks/README.md](../docs/tasks/README.md) - Full pattern explanation
- **Test guidelines**: [docs/reference/guides/](../docs/reference/guides/) - Additional testing best practices
---
## Maintenance
**Update Triggers**:
- When adding new test directories or test suites
- When changing test execution commands
- When modifying Story-Level Test Task Pattern workflow
**Verification**:
- All test directories exist (e2e/, integration/, unit/)
- Test execution commands work correctly
- SCOPE tags correctly define test documentation boundaries
**Last Updated**: {{DATE}}