Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:37:27 +08:00
commit 37774aa937
131 changed files with 31137 additions and 0 deletions


@@ -0,0 +1,37 @@
---
name: ln-343-manual-tester
description: Performs manual testing of Story AC (API/UI) with curl/puppeteer, documents results in Linear (structured comment) and provides reusable scripts. Worker only.
---
# Manual Tester
Manually verifies Story AC on running code and reports structured results for the quality gate.
## Purpose & Scope
- Rebuild/run app, detect API/UI, and execute AC-driven checks via curl/puppeteer.
- Document results in Linear (Format v1.0) with pass/fail per AC, edge/error handling, and temp script path.
- No status changes or task creation.
## Workflow (concise)
1) **Env setup:** Rebuild containers (no cache), start, ensure healthy. Detect API vs UI; confirm app reachable.
2) **Load AC:** Fetch Story, parse AC into Given/When/Then list (3-5 expected).
3) **Execute:** For each AC + edge/error cases, run curl (API) or puppeteer (UI); capture responses/screens; note deviations.
4) **Report:** Produce verdict PASS/FAIL; add Linear comment (Format v1.0) with AC matrix, issues found, evidence, and temp script path; return JSON result.
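The per-AC check in step 3 reduces to comparing an observed HTTP status against the AC's expected status. A minimal shell sketch (the stubbed status codes are illustrative; a live run would obtain `actual` from curl):

```bash
#!/usr/bin/env bash
# Minimal sketch of the step-3 check loop: compare an observed HTTP
# status against the AC's expected status and print PASS/FAIL.
# In a live run `actual` would come from something like:
#   actual=$(curl -s -o /dev/null -w "%{http_code}" "$BASE_URL/api/login")
# It is stubbed here so the sketch runs without the app.

check_ac() {
  local ac_id="$1" expected="$2" actual="$3"
  if [ "$actual" = "$expected" ]; then
    echo "$ac_id: PASS (HTTP $actual)"
  else
    echo "$ac_id: FAIL (expected HTTP $expected, got $actual)"
  fi
}

check_ac "AC1" 200 200
check_ac "AC2" 200 401
```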
## Critical Rules
- Rebuild Docker before testing; fail if rebuild/run unhealthy.
- Write the Linear comment in the Story's language (EN/RU).
- No fixes or status changes; only evidence and verdict.
## Definition of Done
- App rebuilt and running; AC parsed.
- Tests executed across main + edge/error cases.
- Verdict and structured Linear comment posted with evidence and script path.
## Reference Files
- AC format and scripts: `../ln-350-story-test-planner/references/test_task_template.md` (for alignment)
- Risk-based context: `../ln-350-story-test-planner/references/risk_based_testing_guide.md`
---
Version: 3.0.0 (Condensed manual testing flow)
Last Updated: 2025-11-26


@@ -0,0 +1,215 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ln-343-manual-tester Workflow</title>
<script src="https://cdn.jsdelivr.net/npm/mermaid@11/dist/mermaid.min.js"></script>
<link rel="stylesheet" href="../shared/css/diagram.css">
</head>
<body>
<div class="container">
<header>
<h1>🎯 ln-343-manual-tester Workflow</h1>
<p class="subtitle">Linear Workflow - Worker v1.0.0</p>
</header>
<div class="info-box">
<h3>Overview</h3>
<p><strong>Purpose:</strong> Perform manual functional testing of Story Acceptance Criteria using curl (API) or puppeteer (UI).</p>
<p><strong>Type:</strong> Linear Workflow (7 sequential phases)</p>
<p><strong>Single Responsibility:</strong> ONLY performs manual testing and documents results - does NOT create tasks or change statuses.</p>
<p><strong>Output:</strong> JSON verdict + Linear comment (Format v1.0) + temporary testing script for re-running tests.</p>
</div>
<div class="mermaid">
graph TD
Start([START]) --> Phase1[Phase 1: Setup Environment<br/>Detect Story type API/UI<br/>Verify app running]
Phase1 --> Phase2[Phase 2: Load Story AC<br/>Parse Given-When-Then<br/>Extract 3-5 AC]
Phase2 --> Phase3[Phase 3: Test AC<br/>curl for API / puppeteer for UI<br/>Record PASS/FAIL results]
Phase3 --> Phase4[Phase 4: Test Edge Cases<br/>Invalid inputs, boundaries<br/>3-5 edge case scenarios]
Phase4 --> Phase5[Phase 5: Test Error Handling<br/>400s, 500s, validation<br/>Verify user-friendly messages]
Phase5 --> Phase6[Phase 6: Test Integration<br/>Database, APIs, auth<br/>2-3 integration points]
Phase6 --> Phase7[Phase 7: Document Results<br/>Linear comment Format v1.0<br/>Create temp script<br/>Return JSON verdict]
Phase7 --> End([END:<br/>JSON verdict + temp script])
classDef phase fill:#E3F2FD,stroke:#1976D2,stroke-width:2px
classDef endpoint fill:#C8E6C9,stroke:#388E3C,stroke-width:2px
class Phase1,Phase2,Phase3,Phase4,Phase5,Phase6,Phase7 phase
class Start,End endpoint
</div>
<h2>Phase Descriptions</h2>
<div class="phase-description">
<div class="phase-title">Phase 1: Setup Environment</div>
<ul>
<li>Detect Story type from description/labels (API vs UI)</li>
<li>Verify application is running (curl health endpoint or puppeteer navigate)</li>
<li>Determine base URL (from .env or default localhost)</li>
</ul>
</div>
<div class="phase-description">
<div class="phase-title">Phase 2: Load Story Acceptance Criteria</div>
<ul>
<li>Load Story from Linear via MCP</li>
<li>Parse AC section (Given-When-Then format)</li>
<li>Extract 3-5 AC with unique IDs (AC1, AC2, AC3, etc.)</li>
</ul>
</div>
<div class="phase-description">
<div class="phase-title">Phase 3: Test Acceptance Criteria</div>
<ul>
<li><strong>API:</strong> Use curl commands to test endpoints</li>
<li><strong>UI:</strong> Use puppeteer MCP to interact with page</li>
<li>For each AC: Execute test, capture result, compare with expected</li>
<li>Record verdict (PASS/FAIL) and details</li>
</ul>
</div>
<div class="phase-description">
<div class="phase-title">Phase 4: Test Edge Cases</div>
<ul>
<li>Parse Story edge cases section or infer from AC</li>
<li>Test 3-5 edge cases (empty inputs, boundaries, invalid types)</li>
<li>Record results (PASS/FAIL)</li>
</ul>
</div>
<div class="phase-description">
<div class="phase-title">Phase 5: Test Error Handling</div>
<ul>
<li>Test error scenarios (400s, 500s, validation errors)</li>
<li>Verify correct HTTP status codes (API)</li>
<li>Verify user-friendly error messages (no stack traces)</li>
<li>Record results</li>
</ul>
</div>
<div class="phase-description">
<div class="phase-title">Phase 6: Test Integration Points</div>
<ul>
<li>Identify 2-3 critical integrations from implementation tasks</li>
<li>Test database persistence, external APIs, auth, file storage</li>
<li>Verify data flows correctly</li>
<li>Record results</li>
</ul>
</div>
<div class="phase-description">
<div class="phase-title">Phase 7: Document Results</div>
<ul>
<li>Aggregate results from Phases 3-6</li>
<li>Determine overall verdict (PASS if all AC passed + no critical failures)</li>
<li>Format Linear comment (Format v1.0)</li>
<li>Add comment to Story</li>
<li>Create temporary testing script at scripts/tmp_[story_id].sh</li>
<li>Return JSON verdict</li>
</ul>
</div>
<h2>Output Format</h2>
<pre style="background: #F5F5F5; padding: 15px; border-radius: 4px; overflow-x: auto;">
{
"verdict": "PASS" | "FAIL",
"story_type": "API" | "UI",
"story_id": "US001",
"main_scenarios": [
{
"ac_id": "AC1",
"result": "PASS",
"details": "Response 200, token valid, expires in 3600s"
}
],
"edge_cases": [
{
"case": "Invalid credentials",
"result": "PASS",
"details": "Response 401, correct error message"
}
],
"error_handling": [
{
"scenario": "401 Unauthorized",
"result": "PASS",
"details": "Correct status code + user-friendly message"
}
],
"integration": [
{
"integration": "Database persistence",
"result": "PASS",
"details": "User record saved with correct fields"
}
],
"linear_comment_id": "abc123",
"temp_script_path": "scripts/tmp_US001.sh"
}
</pre>
<h2>Key Characteristics</h2>
<ul>
<li><strong>Atomic Worker:</strong> Single responsibility - manual testing only</li>
<li><strong>Dual Mode:</strong> Supports both API (curl) and UI (puppeteer) testing</li>
<li><strong>Comprehensive Coverage:</strong> Tests AC + edge cases + errors + integration</li>
<li><strong>Reusable Scripts:</strong> Creates temp bash script for re-running tests</li>
<li><strong>Structured Documentation:</strong> Linear comment follows Format v1.0 specification</li>
</ul>
<h2>Testing Patterns</h2>
<h3>API Testing (curl)</h3>
<pre style="background: #F5F5F5; padding: 15px; border-radius: 4px; overflow-x: auto;">
curl -X POST http://localhost:8000/api/login \
-H "Content-Type: application/json" \
-d '{"email": "user@example.com", "password": "test123"}' \
-w "\\nHTTP Status: %{http_code}\\n"
</pre>
<h3>UI Testing (puppeteer)</h3>
<pre style="background: #F5F5F5; padding: 15px; border-radius: 4px; overflow-x: auto;">
await page.goto('http://localhost:3000/login');
await page.type('[name="email"]', 'user@example.com');
await page.type('[name="password"]', 'test123');
await Promise.all([
  page.waitForNavigation(),
  page.click('button[type="submit"]'),
]);
</pre>
<h2>Temporary Testing Script</h2>
<p><strong>Purpose:</strong> Reusable bash script for re-running manual tests after refactoring/fixes.</p>
<p><strong>Location:</strong> <code>scripts/tmp_[story_id].sh</code></p>
<p><strong>Lifecycle:</strong></p>
<ul>
<li><strong>Created:</strong> ln-343-manual-tester Phase 7</li>
<li><strong>Used:</strong> Re-run after refactoring instead of typing commands again</li>
<li><strong>Deleted:</strong> ln-334-test-executor Step 6 (after E2E/Integration/Unit tests implemented)</li>
</ul>
<script>
mermaid.initialize({
startOnLoad: true,
theme: 'default',
flowchart: {
useMaxWidth: true,
htmlLabels: true,
curve: 'basis'
}
});
</script>
<footer>
<p>ln-343-manual-tester v1.0.0 | Worker Pattern | Mermaid.js</p>
</footer>
</div>
</body>
</html>


@@ -0,0 +1,357 @@
# Puppeteer Testing Patterns
This document provides reusable patterns for UI testing with puppeteer MCP.
## Overview
**puppeteer MCP** provides browser automation tools accessible via MCP (Model Context Protocol):
- `puppeteer_launch()` - Launch browser
- `puppeteer_navigate()` - Navigate to URL
- `puppeteer_click()` - Click element
- `puppeteer_type()` - Type text into input
- `puppeteer_get_text()` - Extract text from element
- `puppeteer_screenshot()` - Capture screenshot
- `puppeteer_evaluate()` - Execute JavaScript in page context
- `puppeteer_wait_for_selector()` - Wait for element to appear

The examples below are written against the underlying Puppeteer `page` API; when driving the browser through MCP, map each call onto the corresponding tool above.
## Common Patterns
### Pattern 1: Form Submission
**Use case:** Test login, registration, profile update forms
**Steps:**
1. Navigate to page
2. Fill form fields
3. Submit form
4. Verify redirect or success message
**Example (Login):**
```javascript
// Phase 1: Navigate
await page.goto('http://localhost:3000/login');
// Phase 2: Fill form
await page.type('[name="email"]', 'user@example.com');
await page.type('[name="password"]', 'test123');
// Phase 3: Submit and wait for the redirect
await Promise.all([
  page.waitForNavigation(),
  page.click('button[type="submit"]'),
]);
// Phase 4: Verify
expect(page.url()).toContain('/dashboard');
const heading = await page.$eval('h1', el => el.textContent);
expect(heading).toBe('Dashboard');
```
**Selectors:**
- Prefer `[name="fieldname"]` for inputs
- Prefer `[type="submit"]` for buttons
- Prefer `[data-testid="component"]` if available
---
### Pattern 2: Navigation and Assertion
**Use case:** Verify page renders correctly, content displayed
**Steps:**
1. Navigate to page
2. Wait for key element
3. Assert content
**Example (User List):**
```javascript
// Navigate
await page.goto('http://localhost:3000/users');
// Wait for content
await page.waitForSelector('table[data-testid="user-table"]');
// Assert
const heading = await page.$eval('h1', el => el.textContent);
expect(heading).toBe('User List');
const rowCount = await page.evaluate(() => {
return document.querySelectorAll('table tbody tr').length;
});
expect(rowCount).toBeGreaterThan(0);
```
---
### Pattern 3: Element Interaction
**Use case:** Click buttons, toggle switches, open modals
**Steps:**
1. Click trigger element
2. Wait for result element
3. Verify state change
**Example (Add User Modal):**
```javascript
// Click button
await page.click('button[data-testid="add-user"]');
// Wait for modal to be visible
await page.waitForSelector('form[data-testid="user-form"]', { visible: true });
// Verify modal rendered
const modal = await page.$('form[data-testid="user-form"]');
expect(modal).not.toBeNull();
```
---
### Pattern 4: Error Message Verification
**Use case:** Test validation errors, API error messages
**Steps:**
1. Perform action that triggers error
2. Wait for error message element
3. Verify error text
**Example (Invalid Login):**
```javascript
// Fill with invalid credentials
await page.type('[name="email"]', 'wrong@example.com');
await page.type('[name="password"]', 'wrong');
// Submit
await page.click('button[type="submit"]');
// Wait for error
await page.waitForSelector('[data-testid="error-message"]');
// Verify error text
const errorText = await page.$eval('[data-testid="error-message"]', el => el.textContent);
expect(errorText).toBe('Invalid email or password');
```
**Important:** Verify the message is user-friendly (no stack traces or technical jargon)
---
### Pattern 5: Screenshot on Failure
**Use case:** Capture visual evidence when test fails
**Steps:**
1. Wrap test in try-catch
2. On error: take screenshot before throwing
**Example:**
```javascript
try {
await page.click('button.non-existent');
} catch (error) {
// Capture screenshot
await page.screenshot({
path: `screenshots/failure_${Date.now()}.png`,
fullPage: true
});
// Re-throw error
throw new Error(`Test failed: ${error.message}. Screenshot saved.`);
}
```
---
### Pattern 6: Async Wait Patterns
**Use case:** Wait for elements, network requests, animations
**Wait for selector:**
```javascript
await page.waitForSelector('[data-testid="user-profile"]', {
timeout: 5000
});
```
**Wait for URL change:**
```javascript
await page.waitForFunction(
  () => location.pathname.endsWith('/dashboard'),
  { timeout: 5000 }
);
```
**Wait for network idle:**
```javascript
await page.waitForNetworkIdle();
```
**Wait for custom condition:**
```javascript
await page.waitForFunction(() => {
return document.querySelectorAll('table tbody tr').length > 0;
}, { timeout: 5000 });
```
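Under the hood, all of these helpers reduce to bounded polling. A standalone sketch of that mechanism (not part of the puppeteer API) can be useful for conditions that are awkward to express as a selector:

```javascript
// Minimal sketch of bounded polling, the mechanism behind the wait
// helpers above. Not part of the puppeteer API - a standalone utility.
async function pollUntil(predicate, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await predicate()) return true;                          // condition met
    await new Promise(resolve => setTimeout(resolve, interval)); // back off
  }
  throw new Error(`Condition not met within ${timeout}ms`);
}
```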
---
### Pattern 7: Extract Dynamic Data
**Use case:** Verify API data rendered correctly
**Example (User Profile):**
```javascript
// Navigate to profile
await page.goto('http://localhost:3000/users/123');
// Wait for profile data
await page.waitForSelector('[data-testid="user-name"]');
// Extract data
const name = await page.$eval('[data-testid="user-name"]', el => el.textContent);
const email = await page.$eval('[data-testid="user-email"]', el => el.textContent);
const role = await page.$eval('[data-testid="user-role"]', el => el.textContent);
// Verify
expect(name).toBe('John Doe');
expect(email).toBe('john@example.com');
expect(role).toBe('Admin');
```
---
### Pattern 8: Multi-Step Flow
**Use case:** Complex user journeys (e.g., checkout flow)
**Example (User Registration → Email Verification → Login):**
```javascript
// Step 1: Register
await page.goto('http://localhost:3000/register');
await page.type('[name="email"]', 'newuser@example.com');
await page.type('[name="password"]', 'password123');
await page.type('[name="name"]', 'New User');
await Promise.all([
  page.waitForNavigation(),
  page.click('button[type="submit"]'),
]);
// Step 2: Verify redirect to verification page
expect(page.url()).toContain('/verify-email');
const message = await page.$eval('[data-testid="message"]', el => el.textContent);
expect(message).toContain('Check your email');
// Step 3: (Simulate email verification - in a real test, check the email)
// For manual testing, verify the verification email was sent via integration test
// Step 4: Login with new account
await page.goto('http://localhost:3000/login');
await page.type('[name="email"]', 'newuser@example.com');
await page.type('[name="password"]', 'password123');
await Promise.all([
  page.waitForNavigation(),
  page.click('button[type="submit"]'),
]);
// Step 5: Verify logged in
expect(page.url()).toContain('/dashboard');
const welcomeText = await page.$eval('[data-testid="welcome"]', el => el.textContent);
expect(welcomeText).toContain('Welcome, New User');
```
---
## Selector Best Practices
**Priority order:**
1. **data-testid** attributes (best - stable, semantic)
2. **name** attributes (good for forms)
3. **type** attributes (good for buttons)
4. **role** attributes (accessibility, semantic)
5. **class/id** (avoid - brittle, implementation detail)
**Examples:**
```javascript
// ✅ GOOD: Semantic, stable
await page.click('[data-testid="submit-button"]');
await page.type('[name="email"]', 'test@example.com');
await page.click('button[type="submit"]');
await page.click('[role="button"]');
// ❌ BAD: Brittle, coupled to implementation
await page.click('.btn-primary.submit-btn');
await page.type('#emailInput', 'test@example.com');
```
---
## Error Handling
### Pattern: Graceful Degradation
If the application is unreachable (e.g., the server is not running and puppeteer cannot connect):
```javascript
try {
await page.goto('http://localhost:3000');
} catch (error) {
if (error.message.includes('net::ERR_CONNECTION_REFUSED')) {
return {
verdict: "ERROR",
message: "Application not running. Start server first."
};
}
throw error;
}
```
### Pattern: Timeout Handling
```javascript
try {
await page.waitForSelector('[data-testid="profile"]', {
timeout: 5000
});
} catch (error) {
if (error.name === 'TimeoutError') {
return {
verdict: "FAIL",
details: "Profile component did not render within 5 seconds"
};
}
throw error;
}
```
---
## Integration with ln-343-manual-tester
**When to use puppeteer (Phase 1 detection):**
- Story type: UI
- Story description contains: "UI", "frontend", "page", "component"
- Story labels: "ui", "frontend", "react", "vue"
**Workflow integration:**
- **Phase 3 (Test AC):** Use puppeteer patterns for each AC
- **Phase 4 (Edge Cases):** Test invalid UI interactions
- **Phase 5 (Error Handling):** Verify error messages displayed
- **Phase 7 (Document):** Include puppeteer commands in temp script
**Temp script format (UI tests):**
```bash
#!/bin/bash
# Temporary manual testing script for Story US001 (UI)
# Created: 2025-11-13
# Note: UI tests require manual verification
# Puppeteer commands below for reference
echo "AC1: User can login"
echo "Steps:"
echo "1. Navigate to http://localhost:3000/login"
echo "2. Fill email: user@example.com"
echo "3. Fill password: test123"
echo "4. Click Submit"
echo "5. Verify redirect to /dashboard"
echo ""
echo "Run automated UI test:"
echo "npm run test:ui -- --grep 'User login'"
```
---
**Version:** 1.0.0
**Last Updated:** 2025-11-13


@@ -0,0 +1,264 @@
# Test Result Format v1.0
This document specifies the structured format for manual testing results in Linear comments.
## Purpose
**Why standardized format:**
- **Consistency:** All manual testing results follow same structure
- **Parseability:** ln-350-story-test-planner can parse results to generate test task
- **Readability:** Clear structure for reviewers
- **Traceability:** Links testing results to AC
## Format Specification
### Header Section
```markdown
## 🎯 Manual Testing Results
**Verdict:** ✅ PASS | ❌ FAIL
**Story Type:** API | UI
**Tested:** 2025-11-13 14:30 UTC
**Tester:** ln-343-manual-tester v1.0.0
```
**Fields:**
- `Verdict`: Overall verdict (PASS if all AC passed + no critical failures)
- `Story Type`: Detected Story type (API or UI)
- `Tested`: ISO 8601 timestamp of test execution
- `Tester`: Skill name and version for traceability
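The Verdict rule can be made concrete with a small sketch. Field names mirror the tester's JSON result; treating WARN as non-blocking is an assumption consistent with the result icons defined in this document:

```javascript
// Hedged sketch of the Verdict rule: PASS only when every AC passed and
// no edge/error/integration check reports FAIL. WARN does not fail the
// gate (assumption). Field names mirror the tester's JSON result.
function overallVerdict(result) {
  const allAcPass = result.main_scenarios.every(s => s.result === 'PASS');
  const others = [
    ...result.edge_cases,
    ...result.error_handling,
    ...result.integration,
  ];
  const criticalFail = others.some(s => s.result === 'FAIL');
  return allAcPass && !criticalFail ? 'PASS' : 'FAIL';
}
```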
---
### Main Scenarios Section
```markdown
### Main Scenarios (Acceptance Criteria)
**AC1:** Given authenticated user, When POST /api/login, Then return 200 with token
- Result: ✅ PASS
- Details: Response 200, token valid, expires in 3600s
**AC2:** Given valid token, When GET /api/users/me, Then return user profile
- Result: ❌ FAIL
- Details: Expected 200 with user data, got 401 Unauthorized
**AC3:** Given admin role, When DELETE /api/users/123, Then user deleted and 204 returned
- Result: ✅ PASS
- Details: User deleted from database, response 204 No Content
```
**Structure:**
- Each AC has:
- **Title:** Full AC statement (Given-When-Then)
- **Result:** ✅ PASS or ❌ FAIL
- **Details:** Actual outcome vs expected
**Critical Rule:** ALL AC must be tested. If AC untested → Verdict must be ERROR.
---
### Edge Cases Section
```markdown
### Edge Cases
- **Invalid credentials:** ✅ PASS - Response 401, correct error message
- **Empty email field:** ✅ PASS - Response 422, validation error shown
- **SQL injection attempt:** ✅ PASS - Input sanitized, no SQL error
- **Concurrent login requests:** ❌ FAIL - Race condition: duplicate sessions created
- **Token expired:** ✅ PASS - Response 401, redirect to login
```
**Structure:**
- Bullet list of edge case scenarios
- Each item: `- **Case name:** Result - Details`
- Minimum 3 edge cases, maximum 5
**Result Icons:**
- ✅ PASS - Expected behavior
- ❌ FAIL - Unexpected behavior
- ⚠️ WARN - Works but suboptimal (e.g., slow response)
---
### Error Handling Section
```markdown
### Error Handling
- **400 Bad Request:** ✅ PASS - Correct status, validation errors in response
- **401 Unauthorized:** ✅ PASS - Correct status + user-friendly message "Please log in"
- **403 Forbidden:** ✅ PASS - Correct status + message "Insufficient permissions"
- **404 Not Found:** ✅ PASS - Correct status + message "User not found"
- **422 Unprocessable:** ✅ PASS - Correct status + field-specific errors
- **500 Server Error:** ❌ FAIL - Stack trace exposed to user (security issue)
```
**Structure:**
- Test standard HTTP error codes
- Verify:
1. **Correct status code** (API)
2. **User-friendly message** (no technical jargon, no stack traces)
3. **Proper handling** (error doesn't crash app)
**Security Critical:**
- ❌ FAIL if stack traces exposed
- ❌ FAIL if sensitive data in error messages
- ❌ FAIL if error crashes application
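The "no stack traces" check can be partially automated. A sketch with a few illustrative (not exhaustive) leak patterns:

```javascript
// Hedged sketch of the "no stack traces" check: flag an error body that
// looks like a leaked stack frame or traceback. Patterns are
// illustrative, not exhaustive.
function leaksInternals(body) {
  const patterns = [
    /\bat\s+\S+ \(.+:\d+:\d+\)/,           // Node.js stack frame
    /Traceback \(most recent call last\)/, // Python traceback header
    /File ".+", line \d+/,                 // Python stack frame
  ];
  return patterns.some(re => re.test(body));
}
```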
---
### Integration Points Section
```markdown
### Integration Points
- **Database persistence:** ✅ PASS - User record saved with correct fields (id, email, name, created_at)
- **Token generation:** ✅ PASS - JWT token valid, properly signed, expires in 3600s
- **Email service:** ⚠️ WARN - Welcome email sent but delayed 5 seconds (acceptable for v1)
- **External API:** ✅ PASS - Third-party service called correctly, response parsed
```
**Structure:**
- List critical integration points from implementation tasks
- Verify:
1. **Data flows correctly** (database CRUD, API calls)
2. **Error handling** (integration failures handled gracefully)
3. **Performance** (acceptable response times)
**Minimum:** 2 integration points tested
---
### Temporary Testing Script Section
````markdown
### Temporary Testing Script
Reusable testing script created at: `scripts/tmp_US001.sh`
**Run with:**
```bash
chmod +x scripts/tmp_US001.sh
./scripts/tmp_US001.sh
```
**Purpose:** Re-run manual tests after refactoring/fixes without typing commands again.
**Lifecycle:** Deleted by ln-334-test-executor Step 6 after E2E/Integration/Unit tests implemented.
````
**Structure:**
- Path to temp script
- Execution instructions
- Purpose explanation
- Lifecycle note
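For an API Story, the script body might look like the sketch below. The base URL, endpoint, and credentials are illustrative; the status falls back to `000` when the app is unreachable:

```bash
#!/bin/bash
# Hedged sketch of scripts/tmp_US001.sh for an API Story. Endpoint and
# credentials are illustrative only.
BASE_URL="${BASE_URL:-http://localhost:8000}"

echo "AC1: login returns 200 with token"
status=$(curl -s -o /dev/null -w "%{http_code}" -X POST "$BASE_URL/api/login" \
  -H "Content-Type: application/json" \
  -d '{"email":"user@example.com","password":"test123"}' 2>/dev/null) || status="000"
echo "AC1 status: $status (expected 200)"
```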
---
## Complete Example
````markdown
## 🎯 Manual Testing Results
**Verdict:** ❌ FAIL
**Story Type:** API
**Tested:** 2025-11-13 14:30 UTC
**Tester:** ln-343-manual-tester v1.0.0
### Main Scenarios (Acceptance Criteria)
**AC1:** Given authenticated user, When POST /api/login, Then return 200 with token
- Result: ✅ PASS
- Details: Response 200, token valid, expires in 3600s
**AC2:** Given valid token, When GET /api/users/me, Then return user profile
- Result: ❌ FAIL
- Details: Expected 200 with user data, got 401 Unauthorized
**AC3:** Given admin role, When DELETE /api/users/123, Then user deleted and 204 returned
- Result: ✅ PASS
- Details: User deleted from database, response 204 No Content
### Edge Cases
- **Invalid credentials:** ✅ PASS - Response 401, correct error message
- **Empty email field:** ✅ PASS - Response 422, validation error shown
- **SQL injection attempt:** ✅ PASS - Input sanitized, no SQL error
- **Token expired:** ✅ PASS - Response 401, redirect to login
### Error Handling
- **400 Bad Request:** ✅ PASS - Correct status, validation errors in response
- **401 Unauthorized:** ✅ PASS - Correct status + user-friendly message
- **404 Not Found:** ✅ PASS - Correct status + message "User not found"
- **500 Server Error:** ❌ FAIL - Stack trace exposed to user (security issue)
### Integration Points
- **Database persistence:** ✅ PASS - User record saved correctly
- **Token generation:** ✅ PASS - JWT token valid and properly signed
- **Email service:** ⚠️ WARN - Welcome email sent but delayed 5 seconds
### Temporary Testing Script
Reusable testing script created at: `scripts/tmp_US001.sh`
**Run with:**
```bash
chmod +x scripts/tmp_US001.sh
./scripts/tmp_US001.sh
```
**Purpose:** Re-run manual tests after refactoring/fixes without typing commands again.
**Lifecycle:** Deleted by ln-334-test-executor Step 6 after E2E/Integration/Unit tests implemented.
````
---
## Parsing Rules for ln-350-story-test-planner
When ln-350-story-test-planner reads this comment to generate test task:
**Extract AC results:**
- Parse each `**AC[N]:**` block
- Extract result (PASS/FAIL)
- If FAIL → Priority = HIGH for test coverage
**Extract edge cases:**
- Parse bullet list under "### Edge Cases"
- Extract case name and result
- If FAIL → Include in test task Edge Cases section
**Extract error handling:**
- Parse bullet list under "### Error Handling"
- Extract scenario and result
- If FAIL → Include in test task Error Handling section
**Extract integration:**
- Parse bullet list under "### Integration Points"
- Extract integration name and result
- If FAIL → Include in test task Integration section
**Risk-Based Testing:**
- Calculate Priority for each scenario: Business Impact × Probability
- Select scenarios with Priority ≥15 for automated testing
- Ensure all FAILED scenarios are covered in test task
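A sketch of the AC-extraction step described above. The regex assumes the exact Format v1.0 lines shown earlier; the HIGH/NORMAL priority labels are illustrative:

```javascript
// Hedged sketch of the parsing rules: extract each **AC[N]:** block from
// a Format v1.0 comment and mark failed AC as high priority for coverage.
function extractAcResults(comment) {
  const results = [];
  const re = /\*\*(AC\d+):\*\* (.+)\r?\n- Result: \S+ (PASS|FAIL)/g;
  let m;
  while ((m = re.exec(comment)) !== null) {
    results.push({
      id: m[1],
      statement: m[2].trim(),
      result: m[3],
      priority: m[3] === 'FAIL' ? 'HIGH' : 'NORMAL', // illustrative labels
    });
  }
  return results;
}
```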
---
## Version History
### v1.0.0 (2025-11-13)
- Initial format specification
- Structured sections: Main Scenarios, Edge Cases, Error Handling, Integration, Temp Script
- Icons for visual clarity (✅/❌/⚠️)
- Parsing rules for ln-350-story-test-planner
---
**Version:** 1.0.0
**Last Updated:** 2025-11-13