Initial commit

Author: Zhongwei Li
Date: 2025-11-30 08:37:27 +08:00
Commit: 37774aa937
131 changed files with 31137 additions and 0 deletions

@@ -0,0 +1,36 @@
---
name: ln-342-regression-checker
description: Worker that runs existing tests to catch regressions. Auto-detects framework, reports pass/fail. No status changes or task creation.
---
# Regression Checker
Runs the existing test suite to ensure no regressions after implementation changes.
## Purpose & Scope
- Detect test framework (pytest/jest/vitest/go test/etc.) and test dirs.
- Execute full suite; capture results for Story quality gate.
- Return PASS/FAIL with counts and log excerpts; never modify Linear or the kanban.
## Workflow (concise)
1) Auto-discover framework and test locations from repo config/files.
2) Build appropriate test command; run with timeout (~5m); capture stdout/stderr.
3) Parse results: passed/failed counts; key failing tests.
4) Output verdict JSON (PASS or FAIL + failures list) and add Linear comment.
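Step 4 can be sketched in Python (a hypothetical helper, not the worker's actual implementation; field names follow the JSON verdict this worker emits):

```python
import json

def build_verdict(framework: str, passed: int, failed: int,
                  failed_tests: list[str], execution_time: str) -> str:
    """Assemble the verdict JSON: PASS only when zero tests failed."""
    return json.dumps({
        "verdict": "PASS" if failed == 0 else "FAIL",
        "framework": framework,
        "total_tests": passed + failed,
        "passed": passed,
        "failed": failed,
        "failed_tests": failed_tests,
        "execution_time": execution_time,
    })
```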
## Critical Rules
- No selective test runs; run full suite.
- Do not fix tests or change status; only report.
- Preserve the Story's language (EN/RU) in the posted comment.
## Definition of Done
- Framework detected; command executed.
- Results parsed; verdict produced with failing tests (if any).
- Linear comment posted with summary.
## Reference Files
- Risk-based limits used downstream: `../ln-350-story-test-planner/references/risk_based_testing_guide.md`
---
Version: 3.0.0 (Condensed regression worker)
Last Updated: 2025-11-26

@@ -0,0 +1,127 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ln-342-regression-checker Workflow</title>
<script src="https://cdn.jsdelivr.net/npm/mermaid@11/dist/mermaid.min.js"></script>
<link rel="stylesheet" href="../shared/css/diagram.css">
</head>
<body>
<div class="container">
<header>
<h1>🧪 ln-342-regression-checker Workflow</h1>
<p class="subtitle">Linear Workflow - Worker v1.0.0</p>
</header>
<div class="info-box">
<h3>Overview</h3>
<p><strong>Purpose:</strong> Run existing test suite to verify no regressions introduced by implementation changes.</p>
<p><strong>Type:</strong> Linear Workflow (4 sequential phases)</p>
<p><strong>Single Responsibility:</strong> ONLY runs tests and reports results - does NOT create tasks or change statuses.</p>
</div>
<div class="mermaid">
graph TD
Start([START]) --> Phase1[Phase 1: Discovery<br/>Auto-detect framework<br/>Locate test directories]
Phase1 --> Phase2[Phase 2: Run Tests<br/>Execute test suite<br/>5-minute timeout]
Phase2 --> Phase3[Phase 3: Parse Results<br/>Extract statistics<br/>Identify failed tests]
Phase3 --> Phase4[Phase 4: Report Results<br/>Add Linear comment<br/>Return JSON verdict]
Phase4 --> End([END:<br/>JSON verdict returned])
classDef phase fill:#E3F2FD,stroke:#1976D2,stroke-width:2px
classDef endpoint fill:#C8E6C9,stroke:#388E3C,stroke-width:2px
class Phase1,Phase2,Phase3,Phase4 phase
class Start,End endpoint
</div>
<h2>Phase Descriptions</h2>
<div class="phase-description">
<div class="phase-title">Phase 1: Discovery</div>
<ul>
<li>Auto-detect test framework (pytest/jest/vitest/go test)</li>
<li>Locate test directories (tests/, test/, __tests__/)</li>
<li>Count test files</li>
</ul>
</div>
<div class="phase-description">
<div class="phase-title">Phase 2: Run Tests</div>
<ul>
<li>Construct framework-specific test command</li>
<li>Execute via Bash tool with 5-minute timeout</li>
<li>Capture stdout/stderr output</li>
</ul>
</div>
<div class="phase-description">
<div class="phase-title">Phase 3: Parse Results</div>
<ul>
<li>Parse output based on framework</li>
<li>Extract: total tests, passed, failed counts</li>
<li>Identify failed test names with file:line references</li>
<li>Calculate execution time</li>
</ul>
</div>
<div class="phase-description">
<div class="phase-title">Phase 4: Report Results</div>
<ul>
<li>Determine verdict (PASS if failed=0, else FAIL)</li>
<li>Format Linear comment with results</li>
<li>Add comment to Story in Linear</li>
<li>Return JSON verdict with all fields</li>
</ul>
</div>
<h2>Output Format</h2>
<pre style="background: #F5F5F5; padding: 15px; border-radius: 4px; overflow-x: auto;">
{
"verdict": "PASS" | "FAIL",
"framework": "pytest" | "jest" | "vitest" | "go test",
"total_tests": 127,
"passed": 125,
"failed": 2,
"failed_tests": [
"tests/auth/test_login.py::test_expired_token",
"tests/api/test_rate_limit.py::test_burst_limit"
],
"execution_time": "12.5s",
"linear_comment_id": "abc123"
}
</pre>
<h2>Key Characteristics</h2>
<ul>
<li><strong>Atomic Worker:</strong> Single responsibility - runs tests only</li>
<li><strong>Framework Agnostic:</strong> Auto-detects pytest/jest/vitest/go test</li>
<li><strong>Timeout Protection:</strong> 5-minute maximum execution time</li>
<li><strong>Structured Output:</strong> JSON verdict for programmatic consumption</li>
<li><strong>Linear Integration:</strong> Adds formatted comment with results</li>
</ul>
<script>
mermaid.initialize({
startOnLoad: true,
theme: 'default',
flowchart: {
useMaxWidth: true,
htmlLabels: true,
curve: 'basis'
}
});
</script>
<footer>
<p>ln-342-regression-checker v1.0.0 | Worker Pattern | Mermaid.js</p>
</footer>
</div>
</body>
</html>

@@ -0,0 +1,207 @@
# Test Framework Configuration Reference
This document provides configuration examples and detection patterns for supported test frameworks.
## Supported Frameworks
### 1. pytest (Python)
**Detection Patterns:**
- File: `pytest.ini` OR `pyproject.toml` (with `[tool.pytest.ini_options]`)
- Directory: `tests/` with `test_*.py` or `*_test.py` files
**Run Command:**
```bash
pytest tests/ -v --tb=short
```
**Configuration Example (pytest.ini):**
```ini
[pytest]
testpaths = tests
python_files = test_*.py *_test.py
python_classes = Test*
python_functions = test_*
addopts = -v --tb=short --strict-markers
```
**Output Parsing:**
```
============= test session starts ==============
collected 127 items
tests/test_auth.py::test_login PASSED [ 1%]
tests/test_auth.py::test_logout PASSED [ 2%]
tests/test_api.py::test_rate_limit FAILED [ 50%]
======= 125 passed, 2 failed in 12.5s =======
```
**Parse Pattern:**
- Total: Extract from `collected X items`
- Results: Extract from `X passed, Y failed in Z.Zs`
- Failed tests: Lines with `FAILED` status
---
### 2. jest (JavaScript/TypeScript)
**Detection Patterns:**
- File: `jest.config.js` OR `jest.config.ts` OR `package.json` (with `"jest"` key)
- Directory: `__tests__/` OR `test/` OR files matching `*.test.js` or `*.spec.js`
**Run Command:**
```bash
npm test -- --verbose
```
**Configuration Example (jest.config.js):**
```javascript
module.exports = {
testEnvironment: 'node',
testMatch: ['**/__tests__/**/*.js', '**/?(*.)+(spec|test).js'],
collectCoverageFrom: ['src/**/*.js'],
verbose: true,
};
```
**Output Parsing:**
```
PASS tests/auth.test.js
✓ should login successfully (123ms)
✓ should logout (45ms)
FAIL tests/api.test.js
✕ should enforce rate limit (234ms)
Tests: 2 failed, 125 passed, 127 total
Time: 12.5s
```
**Parse Pattern:**
- Total: Extract from `X total`
- Results: Extract from `X failed, Y passed`
- Failed tests: Lines starting with `✕` under FAIL suites
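The same pattern in Python for jest, assuming the summary format in the sample above:

```python
import re

def parse_jest(output: str) -> dict:
    """Extract counts and failed test names from jest's verbose output."""
    passed = failed = total = 0
    # Summary: `Tests: X failed, Y passed, Z total` (failed clause optional)
    m = re.search(r"Tests:\s+(?:(\d+) failed, )?(\d+) passed, (\d+) total",
                  output)
    if m:
        failed = int(m.group(1) or 0)
        passed = int(m.group(2))
        total = int(m.group(3))
    # Failed tests: lines starting with the cross mark; drop the duration.
    failed_tests = [re.sub(r" \(\d+ms\)$", "", line.strip()[2:])
                    for line in output.splitlines()
                    if line.strip().startswith("\u2715")]
    return {"total": total, "passed": passed, "failed": failed,
            "failed_tests": failed_tests}
```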
---
### 3. vitest (JavaScript/TypeScript)
**Detection Patterns:**
- File: `vitest.config.js` OR `vitest.config.ts` OR `vite.config.js` (with `test` key)
- Directory: `test/` OR files matching `*.test.js` or `*.spec.js`
**Run Command:**
```bash
npm run test
```
**Configuration Example (vitest.config.js):**
```javascript
import { defineConfig } from 'vitest/config';
export default defineConfig({
test: {
globals: true,
environment: 'node',
include: ['**/*.test.js', '**/*.spec.js'],
},
});
```
**Output Parsing:**
```
✓ tests/auth.test.js (2)
✗ tests/api.test.js (1)
✗ should enforce rate limit
Test Files 1 failed | 4 passed (5)
Tests 2 failed | 125 passed (127)
Time 12.5s
```
**Parse Pattern:**
- Total: Extract from `Tests X failed | Y passed (Z)`
- Failed tests: Lines starting with `✗` (cross mark)
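A sketch for vitest; note that suite lines (`✗ tests/api.test.js (1)`) also start with the cross mark, so the count suffix is used to skip them:

```python
import re

def parse_vitest(output: str) -> dict:
    """Extract counts and failed test names from vitest output."""
    passed = failed = total = 0
    # Summary: `Tests X failed | Y passed (Z)` (failed clause optional)
    m = re.search(r"Tests\s+(?:(\d+) failed \| )?(\d+) passed \((\d+)\)",
                  output)
    if m:
        failed = int(m.group(1) or 0)
        passed = int(m.group(2))
        total = int(m.group(3))
    # Failed tests: cross-mark lines without a trailing `(N)` file count.
    failed_tests = [line.strip()[2:] for line in output.splitlines()
                    if line.strip().startswith("\u2717")
                    and not re.search(r"\(\d+\)$", line)]
    return {"total": total, "passed": passed, "failed": failed,
            "failed_tests": failed_tests}
```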
---
### 4. go test (Go)
**Detection Patterns:**
- File: `go.mod`
- Files: `*_test.go` in any directory
**Run Command:**
```bash
go test ./... -v
```
**Configuration:** No config file needed (convention-based)
**Output Parsing:**
```
=== RUN TestLogin
--- PASS: TestLogin (0.12s)
=== RUN TestLogout
--- PASS: TestLogout (0.05s)
=== RUN TestRateLimit
--- FAIL: TestRateLimit (0.23s)
api_test.go:45: Rate limit not enforced
PASS
ok github.com/user/project/auth 0.17s
FAIL
FAIL github.com/user/project/api 0.28s
```
**Parse Pattern:**
- Results: Count lines with `--- PASS:` and `--- FAIL:`
- Failed tests: Extract test names from `--- FAIL: TestName`
- Execution time: Sum times from `ok` and `FAIL` package lines
---
## Framework Detection Algorithm
```
1. Check for Python tests:
   IF pytest.ini exists OR pyproject.toml contains [tool.pytest.ini_options]:
     RETURN "pytest"
2. Check for JavaScript/TypeScript tests:
   IF jest.config.js/.ts exists OR package.json contains "jest":
     RETURN "jest"
   ELSE IF vitest.config.js/.ts exists OR vite.config.js contains "test":
     RETURN "vitest"
3. Check for Go tests:
   IF go.mod exists AND *_test.go files found:
     RETURN "go test"
4. No framework detected:
   RETURN null (no tests found)
```
## Timeout Handling
All test commands run with **5-minute timeout**:
```bash
timeout 300 pytest tests/ -v --tb=short
```
**Rationale:**
- Prevents hanging tests from blocking pipeline
- Typical test suite runs in < 2 minutes
- 5 minutes allows for slow integration tests
**On Timeout:**
- Kill process
- Return verdict: "FAIL"
- Include timeout message in Linear comment
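The same behaviour can be achieved without the `timeout` shell wrapper by using `subprocess.run`'s own `timeout` parameter, which kills the child and raises `TimeoutExpired` (a sketch; the FAIL dict mirrors the worker's verdict fields):

```python
import subprocess

def run_with_timeout(cmd: list[str], seconds: int = 300) -> dict:
    """Run a test command; on timeout, kill it and report FAIL."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=seconds)
    except subprocess.TimeoutExpired:
        return {"verdict": "FAIL",
                "error": f"test suite exceeded {seconds}s timeout"}
    return {"verdict": "PASS" if proc.returncode == 0 else "FAIL",
            "output": proc.stdout}
```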
---
**Version:** 1.0.0
**Last Updated:** 2025-11-13