| allowed-tools | argument-hint | description |
|---|---|---|
| Bash, Edit, Grep, Read, WebSearch, WebFetch, Write(test/REPORT.md) | | Run Integration tests from the test plan via Playwright hitting the live UI |
# Run Integration Tests
Unless given more specific instructions, run all tests listed in ./test/integration/TEST-PLAN.md systematically, using its Comprehensive Tests section; if a more specific set of instructions is given, follow those instead.

Execute the plan using Playwright automation: work through every test in order, document pass/fail status for each, and generate a detailed report.
## Execution Process
### 1. Session Initialization
ALWAYS begin by:
- Navigate to http://ngrok.edspencer.net/demo
- Click the button to create a demo account
- Wait for successful authentication before proceeding
ALWAYS finish by:
- Logging out of the application (just navigate to /logout) - this clears leftover demo data from the database (a sketch of the full lifecycle follows below)
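For reference, the same lifecycle in plain Playwright looks roughly like this. The button label and the load-state wait are assumptions about the demo page; with the MCP tools the sequence is the same (browser_navigate, browser_click, browser_wait_for, then browser_navigate to /logout):

```typescript
import { chromium } from 'playwright';

const browser = await chromium.launch();
const page = await browser.newPage();

// Begin: open the demo page and create a demo account.
await page.goto('http://ngrok.edspencer.net/demo');
await page.getByRole('button', { name: /demo account/i }).click(); // hypothetical button label
await page.waitForLoadState('networkidle'); // wait for authentication to settle

// ... execute all tests from TEST-PLAN.md here ...

// Finish: log out, which clears the demo data.
await page.goto('http://ngrok.edspencer.net/logout');
await browser.close();
```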
### 2. Test Execution
Work through ALL test sections in ./test/integration/TEST-PLAN.md systematically. For each test:
- Execute the test steps using Playwright MCP tools
- Record PASS or FAIL status (a simple record shape is sketched after this list)
- Note any console errors or warnings
- Do NOT attempt to debug failures - just document them
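Keeping a uniform record per test makes the report mechanical to assemble. Something like the following works; the field names are illustrative, not mandated by TEST-PLAN.md:

```typescript
// Illustrative per-test record; field names are assumptions.
interface TestResult {
  section: string;                      // e.g. "1. Navigation - Sidebar"
  name: string;                         // test name as written in TEST-PLAN.md
  status: 'PASS' | 'FAIL' | 'SKIPPED';
  failureReason?: string;               // brief reason, recorded verbatim
  consoleErrors: string[];              // errors/warnings seen during the test
}

const results: TestResult[] = [];
```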
### 3. Testing Guidelines
DO:
- Navigate by clicking links and UI elements (not direct URLs, except /demo and the final /logout)
- Check the browser console regularly (see the listener sketch after this list)
- Test systematically through all items
- Record exact error messages when failures occur
- Note visual issues or unexpected behavior
DO NOT:
- Skip tests or categories
- Attempt to debug or fix issues found
- Make code changes
- Stop testing when failures are found - continue through all tests
- Navigate to URLs directly (except the initial /demo and the final /logout)
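One way to check the console "regularly" without pausing between steps is a persistent listener. A plain-Playwright sketch follows; with the MCP tools, the equivalent is calling browser_console_messages after each test:

```typescript
import type { Page } from 'playwright';

// Capture every console error/warning with its page context so the
// report's Console Errors section can be assembled at the end.
function watchConsole(page: Page, sink: string[]): void {
  page.on('console', (msg) => {
    if (msg.type() === 'error' || msg.type() === 'warning') {
      sink.push(`[${msg.type()}] ${page.url()}: ${msg.text()}`);
    }
  });
}
```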
### 4. Playwright MCP Usage
Use Playwright MCP tools extensively:
- `browser_navigate` - Navigate to pages
- `browser_snapshot` - Capture accessibility snapshots (preferred for testing)
- `browser_take_screenshot` - Take visual screenshots
- `browser_click` - Click elements
- `browser_type` - Fill forms
- `browser_console_messages` - Check for errors
- `browser_wait_for` - Wait for elements or text
### 5. Report Generation
After testing is complete, generate a comprehensive report at ./test/integration/runs/YYYY-MM-DD-N/REPORT.md (where N is an index for multiple runs on the same day).
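The run directory name can be derived mechanically. A sketch, assuming Node's fs is available and that N starts at 1 and increments for each additional run on the same day:

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Build ./test/integration/runs/YYYY-MM-DD-N, where N is the first
// index not yet used for today's date.
function runDir(base = 'test/integration/runs'): string {
  const date = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  let n = 1;
  while (fs.existsSync(path.join(base, `${date}-${n}`))) n++;
  const dir = path.join(base, `${date}-${n}`);
  fs.mkdirSync(dir, { recursive: true });
  return dir;
}
```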
The report should have the following structure:
# UI Test Execution Report
**Date**: [Current date]
**Tested By**: Claude Code (UI Test Runner)
**Environment**: http://ngrok.edspencer.net
**Browser**: Playwright Chromium
---
## Executive Summary
- **Total Tests**: [number]
- **Passed**: [number] ([percentage]%)
- **Failed**: [number] ([percentage]%)
- **Skipped**: [number] (if any)
- **Overall Status**: PASS | FAIL | PARTIAL
**Critical Issues Found**: [number]
**Major Issues Found**: [number]
**Minor Issues Found**: [number]
---
## Test Results by Category
### 1. Navigation - Sidebar
**Status**: PASS | FAIL | PARTIAL
**Tests Passed**: X/Y
#### 1.1 Sidebar Structure
- [x] Test name - PASS
- [ ] Test name - FAIL: [brief reason]
- [x] Test name - PASS
[Continue for each test...]
---
### 2. Navigation - Careers Section
[Same format as above]
---
### 3. Coming Soon Pages
[Same format as above]
---
[Continue for all categories...]
---
## Issues Found
### Critical Issues
[None found] OR:
1. **Issue**: [Brief description]
- **Location**: [Where it occurs]
- **Steps to Reproduce**: [Exact steps]
- **Expected**: [What should happen]
- **Actual**: [What actually happens]
- **Evidence**: [Screenshot references, console errors]
### Major Issues
[Format same as critical]
### Minor Issues
[Format same as critical]
---
## Console Errors
[List all console errors found during testing with page context]
---
## Test Coverage
**Categories Completed**: X/7
**Individual Tests Completed**: X/Y
**Not Tested** (if any):
- [List any tests that couldn't be executed with reasons]
---
## Recommendations
[High-level recommendations for addressing failures, but no specific debugging or code changes]
---
## Conclusion
[Summary paragraph of overall test execution]
## Important Constraints
- DO NOT debug issues - only document them
- DO NOT examine code unless needed to understand what to test
- DO NOT propose fixes - only report findings
- DO continue testing even after failures
- DO be thorough - test every checkbox in the test plan
- DO capture evidence - error messages and console logs
- ALWAYS create demo account at start of session
- SAVE report to ./test/integration/runs/YYYY-MM-DD-N/REPORT.md when complete
## Success Criteria
A successful test run means:
- All tests in TEST-PLAN.md were attempted
- Clear PASS/FAIL status recorded for each test
- Console errors documented
- Comprehensive report generated at ./test/integration/runs/YYYY-MM-DD-N/REPORT.md
The tests themselves may pass or fail - your job is to execute them all and report accurately, not to achieve 100% pass rate.