Initial commit

Zhongwei Li
2025-11-30 08:37:27 +08:00
commit 37774aa937
131 changed files with 31137 additions and 0 deletions


@@ -0,0 +1,53 @@
---
name: ln-340-story-quality-gate
description: Story-level quality orchestrator. Pass 1: code quality -> regression -> manual testing (fail fast). Pass 2: verify tests/coverage -> mark Story Done. Auto-discovers team/config.
---
# Story Quality Gate
Two-pass Story review that fails fast, creates needed fix/refactor/test tasks, and finalizes the Story only after tests are verified.
## Purpose & Scope
- Pass 1 (after impl tasks Done): run code-quality, lint, regression, and manual testing; if all pass, create/confirm test task; otherwise create targeted fix/refactor tasks and stop.
- Pass 2 (after test task Done): verify tests/coverage/priority limits and close Story to Done or create fix tasks.
- Delegates work to 341/342/343 workers and ln-350-story-test-planner; invoked by ln-330-story-executor.
## When to Use
- Pass 1: all implementation tasks Done; test task missing or not Done.
- Pass 2: test task exists and is Done.
- Explicit `pass` parameter can force 1 or 2; otherwise auto-detect by test task status.
## Workflow (concise)
- **Phase 1 Discovery:** Auto-discover team/config; select Story; load Story + task metadata (no descriptions), detect test task status.
- **Pass 1 flow (fail fast):**
1) Invoke ln-341-code-quality-checker. If issues -> create refactor task (Backlog), stop.
2) Run all linters from tech_stack.md. If fail -> create lint-fix task, stop.
3) Invoke ln-342-regression-checker. If fail -> create regression-fix task, stop.
4) Invoke ln-343-manual-tester. If fail -> create bug-fix task, stop.
5) If all passed: if no test task exists, auto-call ln-350-story-test-planner (autoApprove) to create test task; if test task exists and Done, jump to Pass 2; if exists but not Done, report status and stop.
- **Pass 2 flow (after test task Done):**
1) Load Story/test task; read test plan/results and manual testing comment from Pass 1.
2) Verify limits and priority: all Priority ≥15 scenarios tested; E2E 2-5, Integration 3-8, Unit 5-15, total 10-28; tests focus on business logic (no framework/DB/library tests).
3) Ensure Priority ≥15 scenarios and Story AC are covered by tests; infra/docs updates present.
4) If pass -> mark Story Done in Linear; minimal kanban cleanup if needed. If fail -> create fix tasks (Backlog) and stop; ln-330 will loop.
## Critical Rules
- Early-exit: any failure creates a specific task and stops Pass 1/2.
- Single source of truth: rely on Linear metadata for tasks; kanban is updated by workers/ln-330.
- Task creation via skills only (ln-350/ln-311); this skill never edits tasks directly.
- Pass 2 only runs when test task is Done; otherwise return error/status.
- Language preservation in comments (EN/RU).
## Definition of Done
- Pass 1: 341 pass OR refactor task created; linters pass OR lint-fix task created; 342 pass OR regression-fix task created; 343 pass OR bug-fix task created; if all pass, test task created (if missing) or status reported; exits.
- Pass 2: test task verified (priority/limits/coverage/infra/docs); Story set to Done in Linear (if pass) or fix tasks created (if fail); kanban minimally cleaned if needed.
- Summary/comment posted in Linear for actions taken.
## Reference Files
- Workers: `../ln-341-code-quality-checker/SKILL.md`, `../ln-342-regression-checker/SKILL.md`, `../ln-343-manual-tester/SKILL.md`
- Test planning: `../ln-350-story-test-planner/SKILL.md`
- Tech stack/linters: `docs/project/tech_stack.md`
---
Version: 8.0.0 (Condensed passes and fail-fast actions)
Last Updated: 2025-11-26


@@ -0,0 +1,97 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ln-340-story-quality-gate - State Diagram</title>
<script src="https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.min.js"></script>
<link rel="stylesheet" href="../shared/css/diagram.css">
</head>
<body>
<div class="container">
<header>
<h1>🔍 ln-340-story-quality-gate</h1>
<p class="subtitle">Story Reviewer - State Diagram</p>
</header>
<div class="info-box">
<h3>📋 Overview</h3>
<ul>
<li><strong>Purpose:</strong> Two-pass review - Pass 1 (Early Exit Pattern) + Pass 2 (verify tests + Story Done)</li>
<li><strong>Pass 1 (6 phases):</strong> Discovery → Preparation → <strong>Code Quality (FAIL FAST)</strong> → <strong>Regression Check (FAIL FAST)</strong> → <strong>Manual Testing (FAIL FAST)</strong> → Verdict</li>
<li><strong>Pass 2 (3 phases):</strong> Prerequisites check → Test verification (10-28 tests, Priority ≥15) → Verdict and Story closure</li>
<li><strong>Early Exit Pattern:</strong> Each Pass 1 phase can stop execution and create fix/refactor task</li>
</ul>
</div>
<div class="diagram-container">
<div class="mermaid">
graph TD
Start([Start: Review Story]) --> Phase1[Phase 1: Discovery<br/>Team ID auto-discovery]
Phase1 --> Phase2[Phase 2: Preparation<br/>Determine pass: Pass 1 or Pass 2<br/>based on test task existence]
Phase2 --> PassCheck{Which pass?}
%% Pass 1: Early Exit Pattern with Worker Delegation
PassCheck -->|Pass 1| P1_Phase3[Phase 3: Delegate to<br/>ln-341-code-quality-checker<br/>Worker analyzes DRY/KISS/YAGNI/Architecture<br/>FAIL FAST FIRST]
P1_Phase3 --> CodeQualityCheck{Worker verdict:<br/>PASS or<br/>ISSUES_FOUND?}
CodeQualityCheck -->|ISSUES_FOUND| CodeQualityFail[❌ Code quality failed<br/>Create refactoring task<br/>STOP Pass 1]
CodeQualityCheck -->|PASS| P1_Phase4
P1_Phase4[Phase 4: Delegate to<br/>ln-342-regression-checker<br/>Worker runs ALL existing tests<br/>FAIL FAST SECOND]
P1_Phase4 --> RegressionPass{Worker verdict:<br/>PASS or<br/>FAIL?}
RegressionPass -->|FAIL| RegressionFail[❌ Regression detected<br/>Create fix task<br/>STOP Pass 1]
RegressionPass -->|PASS| P1_Phase5
P1_Phase5["Phase 5: Delegate to<br/>ln-343-manual-tester<br/>Worker tests AC via curl/puppeteer<br/>Creates scripts/tmp_[story_id].sh<br/>Documents in Linear Format v1.0<br/>FAIL FAST THIRD"]
P1_Phase5 --> ManualTestCheck{Worker verdict:<br/>PASS or<br/>FAIL?}
ManualTestCheck -->|FAIL| ManualTestFail[❌ Manual testing failed<br/>Create fix task<br/>STOP Pass 1]
ManualTestCheck -->|PASS| P1_Phase6
P1_Phase6[Phase 6: Verdict and Next Steps<br/>All quality gates passed via delegation]
P1_Phase6 --> CheckTestTask{Test task<br/>exists?}
CheckTestTask -->|Yes & Done| Pass2Entry[Continue to Pass 2]
CheckTestTask -->|Yes & NOT Done| ReportStatus[Report test task status<br/>Exit]
CheckTestTask -->|NOT exists| InvokeFinalizer[✅ Delegate to ln-350-story-test-planner<br/>via Skill tool<br/>Worker creates test task]
InvokeFinalizer --> NextSteps[Next steps:<br/>ln-334-test-executor executes test task<br/>ln-332-task-reviewer reviews<br/>Auto-invoke Pass 2 when Done]
NextSteps --> End
%% Pass 2
PassCheck -->|Pass 2| Pass2Entry
Pass2Entry --> P2_Phase1[Pass 2 Phase 1: Prerequisites Check<br/>Load Story + tasks<br/>Verify test task status = Done<br/>Load test files]
P2_Phase1 --> P2_PrereqCheck{Test task<br/>Done?}
P2_PrereqCheck -->|No| P2_Error[Error: Test task missing or NOT Done<br/>Exit]
P2_PrereqCheck -->|Yes| P2_Phase2
P2_Phase2[Pass 2 Phase 2: Test Verification<br/>All tests pass<br/>2-5 E2E, 3-8 Integration, 5-15 Unit<br/>10-28 total, Priority ≥15 covered<br/>NO performance/load tests<br/>Infrastructure updated]
P2_Phase2 --> P2_Phase3[Pass 2 Phase 3: Verdict and Story Closure]
P2_Phase3 --> P2_Verdict{Verdict?}
P2_Verdict -->|Pass| StoryDone[Mark Story Done<br/>Update kanban_board.md<br/>Minimal cleanup]
P2_Verdict -->|Fail| CreateFixTasks[Create fix tasks<br/>Story remains current state<br/>Re-run Pass 2 after fixes]
CodeQualityFail --> End([End])
RegressionFail --> End
ManualTestFail --> End
ReportStatus --> End
StoryDone --> End
CreateFixTasks --> End
P2_Error --> End
classDef discovery fill:#E3F2FD,stroke:#1976D2,stroke-width:2px
classDef processing fill:#FFF9C4,stroke:#F57C00,stroke-width:2px
classDef decision fill:#FFE0B2,stroke:#E64A19,stroke-width:2px
classDef action fill:#C8E6C9,stroke:#388E3C,stroke-width:2px
classDef error fill:#FFCDD2,stroke:#C62828,stroke-width:2px
class Phase1,Phase2 discovery
class P1_Phase3,P1_Phase4,P1_Phase5,P1_Phase6,P2_Phase1,P2_Phase2,P2_Phase3 processing
class PassCheck,CodeQualityCheck,RegressionPass,ManualTestCheck,CheckTestTask,P2_PrereqCheck,P2_Verdict decision
class InvokeFinalizer,NextSteps,StoryDone,ReportStatus action
class CodeQualityFail,RegressionFail,ManualTestFail,CreateFixTasks,P2_Error error
</div>
</div>
<footer>
<p>ln-340-story-quality-gate v7.0.0 | L2 Orchestrator delegating to workers (ln-341-code-quality-checker, ln-342-regression-checker, ln-343-manual-tester) with Early Exit Pattern + Pass 2 (test verify + Done)</p>
<p>Last updated: 2025-11-14</p>
</footer>
</div>
<script>
mermaid.initialize({ startOnLoad: true, theme: 'default', flowchart: { useMaxWidth: true, htmlLabels: true, curve: 'basis' } });
</script>
</body>
</html>


@@ -0,0 +1,444 @@
# Manual Testing Results Comment Template
## Purpose
This template defines the standardized format for Linear comments created by ln-343-manual-tester (invoked by ln-340-story-quality-gate Pass 1). The structured format ensures reliable parsing by ln-350-story-test-planner for E2E-first test design.
## Format Version
**Current Version:** 1.0
**Last Updated:** 2025-10-31
## Template Structure
```markdown
## 🧪 Manual Testing Results
**Format Version:** 1.0
**Story ID:** [Story identifier, e.g., US042]
**Tested By:** ln-343-manual-tester
**Date:** [YYYY-MM-DD]
**Status:** [✅ PASSED (X/Y AC) | ❌ FAILED (X/Y AC)]
---
### Acceptance Criteria (from Story)
**AC1:** [AC title/description]
- **Given:** [Precondition]
- **When:** [Action]
- **Then:** [Expected outcome]
**AC2:** [AC title/description]
- **Given:** [Precondition]
- **When:** [Action]
- **Then:** [Expected outcome]
[Repeat for each AC in Story]
---
### Test Results by AC
**AC1: [AC title]**
- **Status:** [✅ PASS | ❌ FAIL]
- **Method:** [Full curl command OR puppeteer code]
- **Result:** [Actual HTTP status, response body, or UI state]
- **Notes:** [Any relevant observations]
**AC2: [AC title]**
- **Status:** [✅ PASS | ❌ FAIL]
- **Method:** [Full curl command OR puppeteer code]
- **Result:** [Actual response or behavior]
- **Notes:** [Any relevant observations]
[Repeat for each AC]
---
### Edge Cases Discovered
1. **[Edge case description]**
- **Input:** [Specific input that triggers edge case]
- **Expected:** [Expected behavior]
- **Actual:** [Actual behavior observed]
- **Status:** [✅ PASS | ❌ FAIL]
2. **[Edge case description]**
- **Input:** [Specific input]
- **Expected:** [Expected behavior]
- **Actual:** [Actual behavior]
- **Status:** [✅ PASS | ❌ FAIL]
[Continue numbering for all discovered edge cases]
---
### Error Handling Verified
| HTTP Code | Scenario | Error Message | Verified |
|-----------|----------|---------------|----------|
| [Code] | [What triggers this error] | [Exact error message returned] | [✅ \| ❌ \| ⚠️ Not tested] |
| [Code] | [Scenario] | [Error message] | [✅ \| ❌] |
[Add all HTTP error codes tested: 400, 401, 403, 404, 429, 500, etc.]
---
### Integration Testing
**[Component A] → [Component B] → [Component C] Flow:**
- [✅ | ❌] [Description of integration point 1]
- [✅ | ❌] [Description of integration point 2]
- [✅ | ❌] [Description of integration point 3]
**Transaction Handling:**
- [✅ | ❌] [Transaction behavior description]
- [✅ | ❌] [Rollback behavior if applicable]
**Performance/Concurrency (if applicable):**
- [✅ | ❌] [Any performance observations]
---
### Summary
**Overall Result:** [✅ ALL ACCEPTANCE CRITERIA PASSED | ❌ X/Y ACCEPTANCE CRITERIA FAILED]
**Coverage:**
- [X/Y] AC verified [✅ | ❌]
- [X] edge cases tested [✅]
- [X/Y] error scenarios verified [✅ | ⚠️]
- Integration flow validated [✅ | ❌]
**Recommendation:** [Proceed to test task creation via ln-350-story-test-planner | Create refactoring task for issues found]
---
### Risk Assessment for Test Planning
**Purpose:** Provide Priority scores for ln-350-story-test-planner to select tests based on business risk
| Scenario | Type | Business Impact (1-5) | Probability (1-5) | Priority | Reason |
|----------|------|----------------------|-------------------|----------|--------|
| [AC1: AC title] | AC | [1-5] | [1-5] | [Result] | [Why this impact/probability] |
| [AC2: AC title] | AC | [1-5] | [1-5] | [Result] | [Why this impact/probability] |
| [Edge Case 1: description] | Edge Case | [1-5] | [1-5] | [Result] | [Why this impact/probability] |
| [Edge Case 2: description] | Edge Case | [1-5] | [1-5] | [Result] | [Why this impact/probability] |
| [Error: HTTP 400 scenario] | Error Handling | [1-5] | [1-5] | [Result] | [Why this impact/probability] |
| [Error: HTTP 401 scenario] | Error Handling | [1-5] | [1-5] | [Result] | [Why this impact/probability] |
**Priority Calculation:** Priority = Business Impact (1-5) × Probability (1-5)
**Decision Criteria:**
- Priority ≥15 → MUST test (ln-350-story-test-planner will create automated tests)
- Priority 9-14 → SHOULD test if not already covered
- Priority ≤8 → SKIP (manual testing sufficient)
**Reference:** See `ln-350-story-test-planner/references/risk_based_testing_guide.md` for complete Business Impact/Probability scoring tables and methodology.
**Total Scenarios:** [X scenarios], **Priority ≥15:** [Y scenarios] (will be tested)
```
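The Priority formula and decision bands in the template can be sketched as follows (a minimal illustration; function names are not part of any skill API):

```javascript
// Priority = Business Impact (1-5) × Probability (1-5).
function priority(impact, probability) {
  if (impact < 1 || impact > 5 || probability < 1 || probability > 5) {
    throw new RangeError('impact and probability must be between 1 and 5');
  }
  return impact * probability;
}

// Decision bands per the criteria above.
function decision(p) {
  if (p >= 15) return 'MUST test';
  if (p >= 9) return 'SHOULD test';
  return 'SKIP';
}
```

Under these bands, a scenario scored 3 × 3 = 9 still lands in SHOULD, while 2 × 4 = 8 is skipped as manual-testing-sufficient.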
## Usage Instructions
### For ln-343-manual-tester (Phase 5 Step 1)
1. **Copy template structure** (do NOT include this instruction section)
2. **Fill required fields:**
- Story ID from Linear
- Current date in YYYY-MM-DD format
- Status calculated from AC pass/fail count
3. **Extract AC from Story description:**
- Copy Given-When-Then exactly as written in Story
- Maintain numbering (AC1, AC2, AC3...)
4. **Document test results for EACH AC:**
- Include full curl command or puppeteer code used
- Copy exact HTTP status codes and response bodies
- Note any deviations from expected behavior
5. **List ALL edge cases discovered** during testing:
- Enumerate sequentially (1, 2, 3...)
- Provide concrete input/expected/actual values
6. **Create error handling table:**
- Test all error codes mentioned in Story Technical Notes
- Include 400, 401, 404, 500 at minimum
- Mark ⚠️ for codes not testable without setup
7. **Verify integration flow:**
- Trace request through all architectural layers
- Note any transaction/rollback behavior
8. **Write summary:**
- Count passed AC vs total AC
- Recommend next action (ln-350-story-test-planner or refactoring task)
### For ln-350-story-test-planner (Phase 2 Step 1)
**Parsing strategy:**
1. **Find comment with marker:**
- Search for `## 🧪 Manual Testing Results`
- Verify `**Format Version:** 1.0` present
2. **Extract sections using regex:**
- `^### Acceptance Criteria` → parse AC with Given-When-Then
- `^### Test Results by AC` → extract status, method, results per AC
- `^### Edge Cases Discovered` → parse numbered list items
- `^### Error Handling Verified` → parse markdown table
- `^### Integration Testing` → extract component flows
3. **Map to test design:**
- Each PASSED AC → 1 E2E test (copy method from "Method:" field)
- Each edge case → Unit or Integration test
- Each verified error code → Error handling test
- Integration flow → Integration test suite
4. **Handle parsing errors:**
- Missing Format Version → warn user, try legacy parsing
- Missing required section → error with clear message
- Cannot parse AC → request Story description fix
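The parsing strategy above can be sketched roughly as follows (an illustrative sketch, not ln-350-story-test-planner's actual implementation; section names follow the template in this file):

```javascript
// Locate the results comment, check the format version, and split the body
// into its "### ..." sections for further per-section parsing.
function parseManualTestingComment(body) {
  if (!body.includes('## 🧪 Manual Testing Results')) return null;
  if (!/\*\*Format Version:\*\* 1\.0/.test(body)) {
    console.warn('Missing Format Version — falling back to legacy parsing');
  }
  const sections = {};
  const headers = [...body.matchAll(/^### (.+)$/gm)];
  headers.forEach((m, i) => {
    const start = m.index + m[0].length;
    const end = i + 1 < headers.length ? headers[i + 1].index : body.length;
    sections[m[1].trim()] = body.slice(start, end).trim();
  });
  return sections;
}
```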
## Examples
### Example 1: API Endpoint Testing
```markdown
## 🧪 Manual Testing Results
**Format Version:** 1.0
**Story ID:** US042
**Tested By:** ln-343-manual-tester
**Date:** 2025-10-31
**Status:** ✅ PASSED (3/3 AC)
---
### Acceptance Criteria (from Story)
**AC1:** User can login with valid credentials
- **Given:** Valid email and password
- **When:** User submits login form
- **Then:** Returns 200 OK with JWT token
**AC2:** Invalid credentials are rejected
- **Given:** Invalid email or password
- **When:** User submits login form
- **Then:** Returns 401 Unauthorized with error message
**AC3:** Rate limiting prevents brute force
- **Given:** More than 5 failed login attempts within 1 minute
- **When:** User submits 6th attempt
- **Then:** Returns 429 Too Many Requests
---
### Test Results by AC
**AC1: User can login with valid credentials**
- **Status:** PASS
- **Method:** `curl -X POST http://localhost:8000/api/auth/login -H "Content-Type: application/json" -d '{"email":"test@example.com","password":"SecurePass123"}'`
- **Result:** 200 OK, JWT token received: `eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...`
- **Notes:** Token validated successfully, expires in 1 hour
**AC2: Invalid credentials are rejected**
- **Status:** PASS
- **Method:** `curl -X POST http://localhost:8000/api/auth/login -H "Content-Type: application/json" -d '{"email":"test@example.com","password":"WrongPassword"}'`
- **Result:** 401 Unauthorized, `{"error":"Invalid credentials"}`
- **Notes:** Error message does not reveal if email or password is wrong (good security practice)
**AC3: Rate limiting prevents brute force**
- **Status:** PASS
- **Method:** Bash loop: `for i in {1..6}; do curl -X POST http://localhost:8000/api/auth/login -d '{"email":"test@example.com","password":"wrong"}'; done`
- **Result:** First 5 attempts → 401, 6th attempt → 429 with `{"error":"Too many requests, try again in 52 seconds"}`
- **Notes:** Rate limit counter resets correctly after 1 minute
---
### Edge Cases Discovered
1. **Empty email field**
- **Input:** `{"email":"","password":"test123"}`
- **Expected:** 400 Bad Request
- **Actual:** 400 Bad Request with `{"error":"Email is required"}`
- **Status:** PASS
2. **SQL injection attempt**
- **Input:** `{"email":"'; DROP TABLE users;--","password":"test"}`
- **Expected:** Properly escaped, 401 Invalid credentials
- **Actual:** 401 Invalid credentials, SQL not executed (verified in logs)
- **Status:** PASS
3. **Unicode characters in password**
- **Input:** Password: `Test🔒Pass123`
- **Expected:** Works correctly
- **Actual:** Login successful, password stored and validated with UTF-8 encoding
- **Status:** PASS
4. **Very long password (1000 chars)**
- **Input:** Password with 1000 'a' characters
- **Expected:** 400 Bad Request (max length validation)
- **Actual:** 400 Bad Request with `{"error":"Password too long (max 128 characters)"}`
- **Status:** PASS
---
### Error Handling Verified
| HTTP Code | Scenario | Error Message | Verified |
|-----------|----------|---------------|----------|
| 400 | Missing email field | "Email is required" | ✅ |
| 400 | Invalid email format | "Invalid email format" | ✅ |
| 400 | Missing password field | "Password is required" | ✅ |
| 400 | Password too long | "Password too long (max 128 characters)" | ✅ |
| 401 | Wrong email | "Invalid credentials" | ✅ |
| 401 | Wrong password | "Invalid credentials" | ✅ |
| 429 | Rate limit exceeded | "Too many requests, try again in X seconds" | ✅ |
| 500 | Database connection error | "Internal server error" | ⚠️ Not tested (requires DB failure simulation) |
---
### Integration Testing
**API → Service → Repository → Database Flow:**
- ✅ API endpoint receives request and validates JSON schema
- ✅ Service layer calls UserRepository.findByEmail()
- ✅ Repository queries PostgreSQL users table
- ✅ Password comparison using bcrypt.compare() works correctly
- ✅ JWT token generated and signed with SECRET_KEY
- ✅ Response formatted according to API spec
**Transaction Handling:**
- ✅ Failed login attempt logged in audit_log table (INSERT)
- ✅ Rate limit counter incremented in Redis
- ✅ No database locks observed during concurrent login attempts
---
### Summary
**Overall Result:** ✅ **ALL ACCEPTANCE CRITERIA PASSED**
**Coverage:**
- 3/3 AC verified ✅
- 4 edge cases tested ✅
- 7/8 error scenarios verified (1 requires failure injection) ✅
- Integration flow validated ✅
**Recommendation:** Proceed to test task creation via ln-350-story-test-planner
```
### Example 2: UI Testing with Puppeteer
```markdown
## 🧪 Manual Testing Results
**Format Version:** 1.0
**Story ID:** US045
**Tested By:** ln-343-manual-tester
**Date:** 2025-10-31
**Status:** ✅ PASSED (2/2 AC)
---
### Acceptance Criteria (from Story)
**AC1:** User can see product list on homepage
- **Given:** User navigates to homepage
- **When:** Page loads
- **Then:** Product grid displays with images, names, and prices
**AC2:** User can filter products by category
- **Given:** User is on homepage with products displayed
- **When:** User clicks category filter
- **Then:** Only products from selected category are shown
---
### Test Results by AC
**AC1: User can see product list on homepage**
- **Status:** PASS
- **Method:**

      const page = await browser.newPage();
      await page.goto('http://localhost:3000');
      await page.waitForSelector('.product-grid');
      const products = await page.$$('.product-card');
      console.log(`Found ${products.length} products`);
- **Result:** 12 products displayed, all with images, names, and prices visible
- **Notes:** Images load correctly, no broken thumbnails
**AC2: User can filter products by category**
- **Status:** PASS
- **Method:**

      await page.click('[data-category="electronics"]');
      await page.waitForTimeout(500); // Wait for filter animation
      const filteredProducts = await page.$$('.product-card[data-category="electronics"]');
      console.log(`Filtered to ${filteredProducts.length} electronics`);
- **Result:** Filter works, showing only 5 electronics products. Other categories hidden.
- **Notes:** Filter animation smooth, no flickering
---
### Edge Cases Discovered
1. **Empty category returns "No products" message**
- **Input:** Click category "Books" (which has 0 products)
- **Expected:** Show "No products found" message
- **Actual:** Message displayed correctly with suggestion to clear filters
- **Status:** PASS
2. **Multiple rapid filter clicks**
- **Input:** Click different category filters rapidly 5 times
- **Expected:** UI remains stable, shows final selection
- **Actual:** No race conditions, final filter applied correctly
- **Status:** PASS
---
### Error Handling Verified
| HTTP Code | Scenario | Error Message | Verified |
|-----------|----------|---------------|----------|
| 404 | Navigate to /products/invalid-id | "Product not found" page | ✅ |
| 500 | API returns error | "Failed to load products" toast | ⚠️ Not tested (requires API mock failure) |
---
### Integration Testing
**Frontend → API → Backend Flow:**
- ✅ React component fetches from /api/products on mount
- ✅ API returns JSON with product array
- ✅ Product images loaded from CDN correctly
- ✅ Category filter sends query param ?category=electronics
- ✅ React state updates trigger re-render without full page reload
---
### Summary
**Overall Result:** ✅ **ALL ACCEPTANCE CRITERIA PASSED**
**Coverage:**
- 2/2 AC verified ✅
- 2 edge cases tested ✅
- 1/2 error scenarios verified ✅
- Integration flow validated ✅
**Recommendation:** Proceed to test task creation via ln-350-story-test-planner
```
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 1.1 | 2025-10-31 | Added Risk Assessment section with Priority Matrix (Business Impact × Probability) for ln-350-story-test-planner |
| 1.0 | 2025-10-31 | Initial structured format with AC, Test Results, Edge Cases, Errors, Integration |
## References
- ln-340-story-quality-gate SKILL.md Phase 5 Step 3
- ln-350-story-test-planner SKILL.md Phase 2 Step 1
- Story Template (story_template_universal.md) for AC format


@@ -0,0 +1,394 @@
# Story Review Checklist (14 Points)
Complete checklist for validating User Story after all tasks are Done.
## A. Goal Achievement (CRITICAL)
### 1. Story Statement Fulfilled
**Check:**
- [ ] "As a [role]" - Does role get needed functionality?
- [ ] "I want [capability]" - Is capability fully implemented?
- [ ] "So that [value]" - Is business value delivered?
**Method:**
- Code review of implemented functionality
- Run E2E tests covering Story goal
- Validate user can achieve stated goal
**Red Flags:**
- Story goal partially implemented
- Business value not delivered
- Role cannot use functionality as intended
---
### 2. All Story AC Satisfied
**Check:**
- [ ] Main Scenarios: All Given-When-Then scenarios work
- [ ] Edge Cases: All boundary conditions handled
- [ ] Error Handling: All error scenarios handled correctly
**Method:**
- Compare Story AC (from Story description) with E2E tests
- Verify each Given-When-Then has corresponding E2E test
- Run all E2E tests - all must pass
**Red Flags:**
- Story AC missing E2E test coverage
- E2E tests fail
- Edge cases not handled
---
## B. Integration Quality (CRITICAL)
### 3. Tasks Integrated Correctly
**Check:**
- [ ] API → Service → Repository integration works
- [ ] Contracts between layers honored (types, parameters match)
- [ ] Data flow correct (no data loss, no corruption)
- [ ] External dependencies integrated (APIs, databases)
**Method:**
- Code review of integration points
- Run Integration tests - all must pass
- Check types match between layers
**Red Flags:**
- Type mismatches (API sends `string`, Service expects `number`)
- Missing integration (API doesn't call Service)
- Integration tests fail
**Example Problem:**
```
Task 1 Done: API endpoint accepts user_id: string ✅
Task 2 Done: Service expects userId: number ✅
Problem: API calls Service with string → Runtime error ❌
```
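A hypothetical sketch of the failure mode above — each task is correct in isolation, but the string/number mismatch surfaces only at the integration seam, as silent data corruption rather than a loud error:

```javascript
// Task 2 (Service): assumes userId is a number.
function nextUserId(userId) {
  return userId + 1; // arithmetic assumes a number
}

// Task 1 (API): path parameters arrive as strings.
const fromRequest = '41';          // e.g. a raw request parameter
nextUserId(fromRequest);           // '41' + 1 → '411' (string concat), not 42
```

This is why check 3 requires verifying contracts at integration points, not just per-task tests.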
---
### 4. No Gaps Between Tasks
**Check:**
- [ ] All components are used (no orphaned methods)
- [ ] All necessary calls present (no missing calls)
- [ ] Component dependencies satisfied
**Method:**
- Grep for unused functions/classes in Story files
- Code review for missing calls
- Check all Affected Components from tasks are integrated
**Red Flags:**
- Unused methods (created but never called)
- Missing calls (Service method exists but API doesn't call it)
- Dead code
**Example Problem:**
```
Task 1 Done: sendEmail() function created ✅
Task 2 Done: saveUser() function created ✅
Problem: API endpoint calls saveUser() but NOT sendEmail() → Email not sent ❌
```
---
## C. Overall Code Quality
### 5. Patterns Consistent
**Check:**
- [ ] All tasks use same approaches (async/await vs callbacks)
- [ ] No code duplication between tasks (DRY at Story level)
- [ ] Error handling consistent across tasks
- [ ] Architecture coherent
- [ ] Configuration management consistent (no hardcoded values, no magic numbers)
**Method:**
- Grep for similar code patterns across Story files
- Code review for duplication
- Check error handling patterns match
**Red Flags:**
- Task 1 uses async/await, Task 2 uses callbacks
- Same validation logic duplicated in 3 tasks
- Inconsistent error handling (Task 1 throws, Task 2 returns null)
- Magic numbers or hardcoded URLs in code (should be in config)
---
### 6. Following guides/
**Check:**
- [ ] Story implemented per project patterns (from guides/)
- [ ] Technical Notes from Story executed
- [ ] Architectural patterns followed
**Method:**
- Compare implementation with guides/ referenced in Story
- Check Technical Notes section of Story
- Verify patterns match guide examples
**Red Flags:**
- Story says "Use Repository Pattern" but implementation uses direct DB calls
- Technical Notes mention caching but no cache implemented
---
## D. Testing & Coverage (CRITICAL)
### 7. All Story Tests Pass
**Check:**
- [ ] Unit tests (70%) pass
- [ ] Integration tests (20%) pass
- [ ] E2E tests (10%) pass
- [ ] No flaky tests (all deterministic)
- [ ] Tests focus on business logic (not frameworks/libraries/getters)
- [ ] No test duplication - Each behavior tested once at pyramid level
**Method:**
- Run all tests for Story files: `npm test` / `pytest` / `go test`
- Check test output - 0 failures
**Red Flags:**
- Any test fails
- Flaky tests (pass sometimes, fail sometimes)
- Tests skipped
---
### 8. Coverage Story ≥80%
**Check:**
- [ ] Overall coverage across all Story files ≥80%
- [ ] Integration layer covered (not just unit level)
- [ ] Critical paths covered
**Method:**
- Run coverage report for Story files
- Check overall percentage
- Review uncovered lines (should be non-critical)
**Red Flags:**
- Coverage <80%
- Critical business logic uncovered
- Integration points uncovered
**Example:**
```
Task 1 coverage: 90% ✅
Task 2 coverage: 85% ✅
Task 3 coverage: 80% ✅
But: Story coverage: 65% ❌ (integration layer uncovered)
```
---
### 9. Test Limits and Priority Scenarios
**Check:**
- [ ] Test count within limits: 2-5 E2E, 3-8 Integration, 5-15 Unit (10-28 total)
- [ ] All Priority ≥15 scenarios from manual testing tested
- [ ] No duplicate test coverage (each test adds unique business value)
- [ ] Trivial code skipped (simple CRUD, framework code, getters/setters)
**Method:**
- Count tests by type for Story
- Verify total tests 10-28
- Check Risk Priority Matrix in test task - ensure all Priority ≥15 scenarios have tests
- Verify no tests for frameworks, libraries, or trivial logic
**Red Flags:**
- Total tests exceed 28 (maintenance burden)
- Priority ≥15 scenarios not tested (critical paths uncovered)
- Tests for framework code or trivial logic (waste of time)
- Duplicate coverage (same scenario tested at multiple levels)
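The limit check above can be sketched as follows (band values copied from this checklist; the function name is illustrative):

```javascript
// Validate test counts against the bands: 2-5 E2E, 3-8 Integration,
// 5-15 Unit, and 10-28 tests overall. Returns a list of violations.
function checkTestLimits({ e2e, integration, unit }) {
  const issues = [];
  if (e2e < 2 || e2e > 5) issues.push('E2E outside 2-5');
  if (integration < 3 || integration > 8) issues.push('Integration outside 3-8');
  if (unit < 5 || unit > 15) issues.push('Unit outside 5-15');
  const total = e2e + integration + unit;
  if (total < 10 || total > 28) issues.push('Total outside 10-28');
  return issues;
}
```

An empty result means the counts pass; any entry maps to a red flag above.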
---
### 10. E2E Cover All Story AC
**Check:**
- [ ] Each Main Scenario from Story AC has E2E test
- [ ] Edge Cases from Story AC covered by E2E
- [ ] Error Handling from Story AC covered by E2E
**Method:**
- List all Story AC (Given-When-Then scenarios)
- List all E2E tests
- Match AC to E2E tests (1:1 mapping for main scenarios)
**Red Flags:**
- Story AC without corresponding E2E test
- E2E test doesn't match Story AC wording
- Critical AC not covered
**Example Problem:**
```
Story AC: "Given user submits form, When submit, Then data saved AND email sent"
E2E test: "test_user_can_save_data" ✅ (covers save)
Missing: E2E test for email sent ❌
```
---
## E. Guides & Infrastructure (NEW)
### 11. Guides Correct
**Check:**
- [ ] All guides/ created in Story are correct
- [ ] Guide template followed (from guide-creator)
- [ ] Patterns documented accurately
- [ ] Anti-patterns documented
- [ ] Examples correct
**Method:**
- Identify guides/ created/modified in Story tasks
- Review each guide structure matches guide_template.md
- Verify patterns are actually used in codebase
- Check examples are correct and runnable
**Red Flags:**
- Guide template not followed
- Pattern documented but not used in code
- Incorrect examples
- Anti-patterns missing
**Skipped if:** No guides created in Story
---
### 12. Infrastructure Updated
**Check when packages added:**
**Package managers:**
- [ ] package.json updated (Node.js)
- [ ] requirements.txt updated (Python)
- [ ] go.mod updated (Go)
- [ ] Cargo.toml updated (Rust)
- [ ] Gemfile updated (Ruby)
**Docker:**
- [ ] Dockerfile updated if package needs system dependencies
- Example: `RUN apt-get install -y libpq-dev` for PostgreSQL Python driver
- [ ] docker-compose.yml updated if package needs services
- Example: Redis, Postgres, MongoDB services added
**Documentation:**
- [ ] README.md updated with setup instructions
- Installation steps
- New environment variables
- New service dependencies
**Method:**
- Check if packages were added in Story tasks (search for "npm install", "pip install", etc.)
- If yes, verify infrastructure files updated
- Run docker build to verify Dockerfile still works
- Run docker-compose up to verify services start
**Red Flags:**
- Package added but package.json not committed
- Package needs system library but Dockerfile not updated
- Package needs Redis but docker-compose.yml not updated
- README missing setup instructions for new package
**Example Problems:**
```
Problem 1:
- Task adds `pg` package (PostgreSQL driver)
- package.json updated ✅
- Dockerfile NOT updated ❌ (missing: RUN apt-get install -y libpq-dev)
- Result: Docker build fails
Problem 2:
- Task adds Redis caching
- package.json updated ✅
- docker-compose.yml NOT updated ❌ (missing Redis service)
- Result: App can't connect to Redis
Problem 3:
- Task adds complex package
- package.json updated ✅
- README NOT updated ❌ (missing setup instructions)
- Result: Other developers can't run project
```
**Skipped if:** No packages added in Story
---
## F. Completeness
### 13. All Tasks Done
**Check:**
- [ ] No tasks in Todo
- [ ] No tasks in In Progress
- [ ] No tasks in To Review
- [ ] No tasks in To Rework
- [ ] All tasks in Done status
**Method:**
- Linear query: `parentId=Story ID, status≠Done`
- Result should be empty
**Red Flags:**
- Any task not Done
- Tasks forgotten
---
### 14. Documentation Complete
**Check:**
- [ ] STRUCTURE.md updated (all new components documented)
- [ ] ARCHITECTURE.md updated (all architectural changes documented)
- [ ] guides/ current (if created in Story)
- [ ] tests/README.md current (if test approach changed)
- [ ] README.md updated (if setup changed)
**Method:**
- Check files modified in Story tasks
- Verify documentation sections updated
- Cross-reference with Affected Components from tasks
**Red Flags:**
- New component not in STRUCTURE.md
- Architectural change not in ARCHITECTURE.md
- Outdated documentation
---
## Summary Checklist
Quick checklist for ln-340-story-quality-gate Pass 1:
- [ ] 1. Story statement fulfilled
- [ ] 2. All Story AC satisfied
- [ ] 3. Tasks integrated correctly
- [ ] 4. No gaps between tasks
- [ ] 5. Patterns consistent
- [ ] 6. Following guides/
- [ ] 7. All tests pass
- [ ] 8. Coverage Story ≥80%
- [ ] 9. Test limits (10-28 total) and Priority ≥15 scenarios tested
- [ ] 10. E2E cover all AC
- [ ] 11. Guides correct
- [ ] 12. Infrastructure updated
- [ ] 13. All tasks Done
- [ ] 14. Documentation complete
**Pass:** All 14 checks ✓ → Story Done
**Fail:** Any check ✗ → Create fix tasks
---
**Version:** 1.3.0 (Configuration management check)
**Last Updated:** 2025-11-07