Initial commit

Zhongwei Li
2025-11-29 18:49:41 +08:00
commit 57a0ba8872
8 changed files with 2238 additions and 0 deletions

12
.claude-plugin/plugin.json Normal file

@@ -0,0 +1,12 @@
{
"name": "qa-test-planner",
"description": "Generate comprehensive test plans, manual test cases, regression test suites, and bug reports for QA engineers. Includes Figma MCP integration for design validation.",
"version": "0.0.0-2025.11.28",
"author": {
"name": "James Rochabrun",
"email": "jamesrochabrun@gmail.com"
},
"skills": [
"./skills/qa-test-planner"
]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# qa-test-planner
Generate comprehensive test plans, manual test cases, regression test suites, and bug reports for QA engineers. Includes Figma MCP integration for design validation.

60
plugin.lock.json Normal file

@@ -0,0 +1,60 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:jamesrochabrun/skills:qa-test-planner",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "99d1046b8a92145556f6428950ebaf09ec631100",
"treeHash": "773360a51798f7dbcf5a5242a7b51993504ff1919e8ee4bf1af88c94f94f111f",
"generatedAt": "2025-11-28T10:17:53.946433Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "qa-test-planner",
"description": "Generate comprehensive test plans, manual test cases, regression test suites, and bug reports for QA engineers. Includes Figma MCP integration for design validation."
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "81b6f62a7f08971a2c353e280049f7a80763d36262f34ef1d647dd01874b8960"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "1d4acc5823c56a21c86086509324ec1d678e9060cd9c3ab87135e7b65ff7c8d6"
},
{
"path": "skills/qa-test-planner/SKILL.md",
"sha256": "0a7e550124e668ca9c3e89b693bb3ef3b021b477e8b84f0d9dac8e26234289bf"
},
{
"path": "skills/qa-test-planner/references/regression_testing.md",
"sha256": "e239509c9046f79b5fca86a09abb6c7c2ffbcd24d0851bcab52cdad408737319"
},
{
"path": "skills/qa-test-planner/references/figma_validation.md",
"sha256": "e0253fb34cfa028659caec0f55122163c275ae78f5b0f1bce594c28f6fe9380a"
},
{
"path": "skills/qa-test-planner/scripts/create_bug_report.sh",
"sha256": "d77ceee80ba23cea4a5de7d3223110a45760bdd5c0677977fdd358117608d182"
},
{
"path": "skills/qa-test-planner/scripts/generate_test_cases.sh",
"sha256": "7f86370c27277573a994e338b234b538809da75077d1a88f3eb893b92dff5543"
}
],
"dirSha256": "773360a51798f7dbcf5a5242a7b51993504ff1919e8ee4bf1af88c94f94f111f"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

869
skills/qa-test-planner/SKILL.md Normal file

@@ -0,0 +1,869 @@
---
name: qa-test-planner
description: Generate comprehensive test plans, manual test cases, regression test suites, and bug reports for QA engineers. Includes Figma MCP integration for design validation.
---
# QA Test Planner
A comprehensive skill for QA engineers to create test plans, generate manual test cases, build regression test suites, validate designs against Figma, and document bugs effectively.
## What This Skill Does
Helps QA engineers with:
- **Test Plan Creation** - Comprehensive test strategy and planning
- **Manual Test Case Generation** - Detailed step-by-step test cases
- **Regression Test Suites** - Critical path and smoke test suites
- **Figma Design Validation** - Compare implementation against designs (requires Figma MCP)
- **Bug Report Templates** - Clear, reproducible bug documentation
- **Test Coverage Analysis** - Identify gaps in testing
- **Test Execution Tracking** - Monitor testing progress
## Why You Need This Skill
**Without structured testing:**
- Inconsistent test coverage
- Missed edge cases
- Poor bug documentation
- No regression safety net
- Design implementation gaps
- Unclear test strategy
**With this skill:**
- Comprehensive test coverage
- Repeatable test cases
- Systematic regression testing
- Design-implementation validation
- Professional bug reports
- Clear testing roadmap
## Core Components
### 1. Test Plan Generator
- Test scope and objectives
- Testing approach and strategy
- Test environment requirements
- Entry/exit criteria
- Risk assessment
- Resource allocation
- Timeline and milestones
### 2. Manual Test Case Generator
- Step-by-step instructions
- Expected vs actual results
- Preconditions and setup
- Test data requirements
- Priority and severity
- Edge case identification
### 3. Regression Test Suite Builder
- Smoke test cases
- Critical path testing
- Integration test scenarios
- Backward compatibility checks
- Performance regression tests
### 4. Figma Design Validation (with MCP)
- Compare UI implementation to designs
- Identify visual discrepancies
- Validate spacing, colors, typography
- Check component consistency
- Flag design-dev mismatches
### 5. Bug Report Generator
- Clear reproduction steps
- Environment details
- Expected vs actual behavior
- Screenshots and evidence
- Severity and priority
- Related test cases
## Test Case Structure
### Standard Test Case Format
```markdown
## TC-001: [Test Case Title]
**Priority:** High | Medium | Low
**Type:** Functional | UI | Integration | Regression
**Status:** Not Run | Pass | Fail | Blocked
### Objective
[What are we testing and why]
### Preconditions
- [Setup requirement 1]
- [Setup requirement 2]
- [Test data needed]
### Test Steps
1. [Action to perform]
**Expected:** [What should happen]
2. [Action to perform]
**Expected:** [What should happen]
3. [Action to perform]
**Expected:** [What should happen]
### Test Data
- Input: [Test data values]
- User: [Test account details]
- Configuration: [Environment settings]
### Post-conditions
- [System state after test]
- [Cleanup required]
### Notes
- [Edge cases to consider]
- [Related test cases]
- [Known issues]
```
## Test Plan Template
### Executive Summary
- Feature/product being tested
- Testing objectives
- Key risks
- Timeline overview
### Test Scope
**In Scope:**
- Features to be tested
- Test types (functional, UI, performance, etc.)
- Platforms and environments
- User flows and scenarios
**Out of Scope:**
- Features not being tested (deferred)
- Known limitations
- Third-party integrations (if applicable)
### Test Strategy
**Test Types:**
- Manual testing
- Exploratory testing
- Regression testing
- Integration testing
- User acceptance testing
- Performance testing (if applicable)
**Test Approach:**
- Black box testing
- Positive and negative testing
- Boundary value analysis
- Equivalence partitioning
### Test Environment
- Operating systems
- Browsers and versions
- Devices (mobile, tablet, desktop)
- Test data requirements
- Backend/API environments
### Entry Criteria
- [ ] Requirements documented
- [ ] Designs finalized
- [ ] Test environment ready
- [ ] Test data prepared
- [ ] Build deployed to test environment
### Exit Criteria
- [ ] All high-priority test cases executed
- [ ] 90%+ test case pass rate
- [ ] All critical bugs fixed
- [ ] No open high-severity bugs
- [ ] Regression suite passed
- [ ] Stakeholder sign-off
### Risk Assessment
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| [Risk 1] | High/Med/Low | High/Med/Low | [How to mitigate] |
| [Risk 2] | High/Med/Low | High/Med/Low | [How to mitigate] |
### Test Deliverables
- Test plan document
- Test cases
- Test execution reports
- Bug reports
- Test summary report
## Test Types and Approaches
### 1. Functional Testing
**What:** Verify features work as specified
**Test Cases:**
- Happy path scenarios
- Error handling
- Input validation
- Business logic
- Data integrity
**Example:**
```
TC: User Login with Valid Credentials
1. Navigate to login page
2. Enter valid email and password
3. Click "Login" button
Expected: User redirected to dashboard, welcome message shown
```
### 2. UI/Visual Testing
**What:** Verify visual appearance and layout
**Test Cases:**
- Layout and alignment
- Responsive design
- Color and typography
- Component states (hover, active, disabled)
- Cross-browser compatibility
**With Figma MCP:**
- Compare implementation to Figma designs
- Verify spacing (padding, margins)
- Check font sizes and weights
- Validate color values
- Ensure icon accuracy
**Example:**
```
TC: Homepage Hero Section Visual Validation
1. Open homepage in browser
2. Compare against Figma design [link]
3. Verify:
- Heading font: 48px, bold, #1A1A1A
- CTA button: 16px padding, #0066FF background
- Image aspect ratio: 16:9
- Spacing: 64px margin-bottom
Expected: All visual elements match Figma exactly
```
### 3. Regression Testing
**What:** Ensure existing functionality still works
**When to Run:**
- Before each release
- After bug fixes
- After new features
- Weekly smoke tests
**Suite Components:**
- Smoke tests (critical paths)
- Full regression (comprehensive)
- Targeted regression (affected areas)
**Example:**
```
Regression Suite: User Authentication
- Login with valid credentials
- Login with invalid credentials
- Password reset flow
- Session timeout handling
- Multi-device login
- Social login (Google, GitHub)
```
### 4. Integration Testing
**What:** Verify different components work together
**Test Cases:**
- API integration
- Database operations
- Third-party services
- Cross-module interactions
- Data flow between components
**Example:**
```
TC: Checkout Payment Integration
1. Add item to cart
2. Proceed to checkout
3. Enter payment details (Stripe)
4. Submit payment
Expected:
- Payment processed via Stripe API
- Order created in database
- Confirmation email sent
- Inventory updated
```
### 5. Exploratory Testing
**What:** Unscripted, creative testing
**Approach:**
- Charter-based exploration
- User persona simulation
- Edge case discovery
- Usability evaluation
**Session Template:**
```
Exploratory Testing Session
Charter: Explore [feature] as [user type]
Time: 60 minutes
Focus: [Area to explore]
Findings:
- [Bug/issue discovered]
- [UX concern]
- [Improvement suggestion]
Follow-up:
- [Test cases to create]
- [Bugs to file]
```
## Figma MCP Integration
### Design Validation Workflow
**Prerequisites:**
- Figma MCP server configured
- Design file access
- Figma URLs available
**Validation Process:**
1. **Get Design Specs from Figma**
```
"Get the button specifications from Figma file [URL]"
- Component: Primary Button
- Width: 120px
- Height: 40px
- Border-radius: 8px
- Background: #0066FF
- Font: 16px, Medium, #FFFFFF
```
2. **Compare Implementation**
```
TC: Primary Button Visual Validation
1. Inspect primary button in browser dev tools
2. Compare against Figma specs:
- Dimensions: 120x40px ✓ / ✗
- Border-radius: 8px ✓ / ✗
- Background color: #0066FF ✓ / ✗
- Font: 16px Medium #FFFFFF ✓ / ✗
3. Document discrepancies
```
3. **Create Bug if Mismatch**
```
BUG: Primary button color doesn't match design
Severity: Medium
Expected (Figma): #0066FF
Actual (Implementation): #0052CC
Screenshot: [attached]
Figma link: [specific component]
```
### Design-Dev Handoff Checklist
**Using Figma MCP:**
- [ ] Retrieve spacing values from design
- [ ] Verify color palette matches
- [ ] Check typography specifications
- [ ] Validate component states (hover, active, disabled)
- [ ] Confirm breakpoint behavior
- [ ] Review iconography and assets
- [ ] Check accessibility annotations
## Bug Reporting Best Practices
### Effective Bug Report Template
```markdown
# BUG-[ID]: [Clear, specific title]
**Severity:** Critical | High | Medium | Low
**Priority:** P0 | P1 | P2 | P3
**Type:** Functional | UI | Performance | Security
**Status:** Open | In Progress | Fixed | Closed
## Environment
- **OS:** [Windows 11, macOS 14, etc.]
- **Browser:** [Chrome 120, Firefox 121, etc.]
- **Device:** [Desktop, iPhone 15, etc.]
- **Build:** [Version/commit]
- **URL:** [Page where bug occurs]
## Description
[Clear, concise description of the issue]
## Steps to Reproduce
1. [Specific step]
2. [Specific step]
3. [Specific step]
## Expected Behavior
[What should happen]
## Actual Behavior
[What actually happens]
## Visual Evidence
- Screenshot: [attached]
- Video: [link if applicable]
- Console errors: [paste errors]
- Network logs: [if relevant]
## Impact
- **User Impact:** [How many users affected]
- **Frequency:** [Always, Sometimes, Rarely]
- **Workaround:** [If one exists]
## Additional Context
- Related to: [Feature/ticket]
- First noticed: [When]
- Regression: [Yes/No - if yes, since when]
- Figma design: [Link if UI bug]
## Test Cases Affected
- TC-001: [Test case that failed]
- TC-045: [Related test case]
```
### Bug Severity Definitions
**Critical (P0):**
- System crash or data loss
- Security vulnerability
- Complete feature breakdown
- Blocks release
**High (P1):**
- Major feature not working
- Significant user impact
- No workaround available
- Should fix before release
**Medium (P2):**
- Feature partially working
- Workaround available
- Minor user inconvenience
- Can ship with fix in next release
**Low (P3):**
- Cosmetic issues
- Rare edge cases
- Minimal impact
- Nice to have fixed
## Test Coverage Analysis
### Coverage Metrics
**Feature Coverage:**
```
Total Features: 25
Tested: 23
Not Tested: 2
Coverage: 92%
```
**Requirement Coverage:**
```
Total Requirements: 150
With Test Cases: 142
Without Test Cases: 8
Coverage: 95%
```
**Risk Coverage:**
```
High-Risk Areas: 12
Tested: 12
Medium-Risk: 35
Tested: 30
```
### Coverage Matrix
| Feature | Requirements | Test Cases | Status | Gaps |
|---------|--------------|------------|--------|------|
| Login | 8 | 12 | ✓ Complete | None |
| Checkout | 15 | 10 | ⚠ Partial | Payment errors |
| Dashboard | 12 | 15 | ✓ Complete | None |
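Coverage figures like these can be produced automatically when requirements are tracked by ID and test cases reference those IDs. A minimal sketch, assuming a hypothetical `requirements.txt` (one ID per line) and a `testcases/` directory of markdown test cases:
```bash
#!/bin/bash
# Requirement coverage: how many requirement IDs appear in at least one test case.
# File names, the one-ID-per-line format, and the testcases/ layout are assumptions.
REQS="${1:-requirements.txt}"
TC_DIR="${2:-testcases}"

total=0
covered=0
while IFS= read -r req; do
  [ -z "$req" ] && continue
  total=$((total + 1))
  if grep -rqs -- "$req" "$TC_DIR"; then
    covered=$((covered + 1))
  fi
done < "$REQS"

if [ "$total" -gt 0 ]; then
  echo "Requirements: $total  Covered: $covered  Coverage: $((100 * covered / total))%"
fi
```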
## Regression Test Suite Structure
### Smoke Test Suite (15-30 min)
**Run:** Before every test cycle, daily builds
**Critical Paths:**
- User login/logout
- Core user flow (e.g., create order)
- Navigation and routing
- API health checks
- Database connectivity
**Example:**
```
SMOKE-001: Critical User Flow
1. Log in as standard user
2. Navigate to main feature
3. Perform primary action
4. Verify success message
5. Log out
Expected: All steps complete without errors
```
### Full Regression Suite (2-4 hours)
**Run:** Weekly, before releases
**Coverage:**
- All functional test cases
- Integration scenarios
- UI validation
- Cross-browser checks
- Data integrity tests
### Targeted Regression (30-60 min)
**Run:** After bug fixes, feature updates
**Coverage:**
- Affected feature area
- Related components
- Integration points
- Previously failed tests
## Test Execution Tracking
### Test Run Template
```markdown
# Test Run: [Release Version]
**Date:** 2024-01-15
**Build:** v2.5.0-rc1
**Tester:** [Name]
**Environment:** Staging
## Summary
- Total Test Cases: 150
- Executed: 145
- Passed: 130
- Failed: 10
- Blocked: 5
- Not Run: 5
- Pass Rate: 90%
## Test Cases by Priority
| Priority | Total | Pass | Fail | Blocked | Not Run |
|----------|-------|------|------|---------|---------|
| P0 (Critical) | 25 | 23 | 2 | 0 | 0 |
| P1 (High) | 50 | 45 | 3 | 2 | 0 |
| P2 (Medium) | 50 | 45 | 3 | 2 | 0 |
| P3 (Low) | 25 | 17 | 2 | 1 | 5 |
## Failures
### Critical Failures
- TC-045: Payment processing fails
- Bug: BUG-234
- Status: Open
### High Priority Failures
- TC-089: Email notification not sent
- Bug: BUG-235
- Status: In Progress
## Blocked Tests
- TC-112: Dashboard widget (API endpoint down)
- TC-113: Export feature (dependency not deployed)
## Risks
- 2 critical bugs blocking release
- Payment integration needs attention
- Email service intermittent
## Next Steps
- Retest after BUG-234 fix
- Complete remaining 5 test cases
- Run full regression before sign-off
```
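When executed test cases record their outcome in the `**Status:**` field shown earlier, the summary counts above can be tallied with a short script rather than by hand. A minimal sketch, assuming a hypothetical `testcases/` directory of markdown test cases:
```bash
#!/bin/bash
# Tally test cases by their "**Status:**" field (Not Run | Pass | Fail | Blocked).
# The testcases/ directory is a placeholder for wherever test cases are stored.
DIR="${1:-testcases}"

count() { grep -rl "^\*\*Status:\*\* $1$" "$DIR" 2>/dev/null | wc -l | tr -d ' '; }

pass=$(count "Pass"); fail=$(count "Fail")
blocked=$(count "Blocked"); notrun=$(count "Not Run")
executed=$((pass + fail + blocked))
total=$((executed + notrun))

echo "Total: $total  Executed: $executed  Pass: $pass  Fail: $fail  Blocked: $blocked  Not Run: $notrun"
if [ "$executed" -gt 0 ]; then
  echo "Pass rate: $((100 * pass / executed))%"
fi
```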
## Using This Skill
### Generate Test Plan
```bash
./scripts/generate_test_plan.sh
```
Interactive workflow for creating comprehensive test plans.
### Generate Manual Test Cases
```bash
./scripts/generate_test_cases.sh
```
Create manual test cases for features with step-by-step instructions.
### Build Regression Suite
```bash
./scripts/build_regression_suite.sh
```
Create smoke and regression test suites.
### Validate Design with Figma
**With Figma MCP configured:**
```
"Compare the login page implementation against the Figma design at [URL] and generate test cases for visual validation"
```
### Create Bug Report
```bash
./scripts/create_bug_report.sh
```
Generate structured bug reports with all required details.
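Both generator scripts accept an optional output directory as their first argument and default to the current directory, so artifacts can be written straight into a tracking folder. The folder names below are just examples:
```bash
mkdir -p ./bugs ./testcases
./scripts/create_bug_report.sh ./bugs
./scripts/generate_test_cases.sh ./testcases
```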
### Access Templates
```
references/test_case_templates.md - Various test case formats
references/bug_report_templates.md - Bug documentation templates
references/regression_testing.md - Regression testing guide
references/figma_validation.md - Design validation with Figma MCP
```
## QA Process Workflow
### 1. Planning Phase
- [ ] Review requirements and designs
- [ ] Create test plan
- [ ] Identify test scenarios
- [ ] Estimate effort and timeline
- [ ] Set up test environment
### 2. Test Design Phase
- [ ] Write test cases
- [ ] Review test cases with team
- [ ] Prepare test data
- [ ] Build regression suite
- [ ] Get Figma design access
### 3. Test Execution Phase
- [ ] Execute test cases
- [ ] Log bugs with clear reproduction steps
- [ ] Validate against Figma designs (UI tests)
- [ ] Track test progress
- [ ] Communicate blockers
### 4. Reporting Phase
- [ ] Compile test results
- [ ] Analyze coverage
- [ ] Document risks
- [ ] Provide go/no-go recommendation
- [ ] Archive test artifacts
## Best Practices
### Test Case Writing
**DO:**
- ✅ Be specific and unambiguous
- ✅ Include expected results for each step
- ✅ Test one thing per test case
- ✅ Use consistent naming conventions
- ✅ Keep test cases maintainable
**DON'T:**
- ❌ Assume knowledge
- ❌ Make test cases too long
- ❌ Skip preconditions
- ❌ Forget edge cases
- ❌ Leave expected results vague
### Bug Reporting
**DO:**
- ✅ Provide clear reproduction steps
- ✅ Include screenshots/videos
- ✅ Specify exact environment details
- ✅ Describe impact on users
- ✅ Link to Figma for UI bugs
**DON'T:**
- ❌ Report without reproduction steps
- ❌ Use vague descriptions
- ❌ Skip environment details
- ❌ Forget to assign priority
- ❌ Duplicate existing bugs
### Regression Testing
**DO:**
- ✅ Automate repetitive tests when possible
- ✅ Maintain regression suite regularly
- ✅ Prioritize critical paths
- ✅ Run smoke tests frequently
- ✅ Update suite after each release
**DON'T:**
- ❌ Skip regression before releases
- ❌ Let suite become outdated
- ❌ Test everything every time
- ❌ Ignore failed regression tests
## Figma MCP Setup
### Configuration
**Install Figma MCP server:**
```bash
# Follow Figma MCP installation instructions
# Configure with your Figma API token
# Set file access permissions
```
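The exact setup depends on which Figma MCP server you use, so the block above stays generic. As a hedged example only, the community `figma-developer-mcp` server is commonly registered with Claude Code roughly like this (the package name and flags belong to that server and may change, so verify against its documentation):
```bash
# Assumption: community figma-developer-mcp server; check its docs for current flags
export FIGMA_API_KEY="figd_your_token_here"   # personal access token from Figma account settings
claude mcp add figma -- npx -y figma-developer-mcp --figma-api-key="$FIGMA_API_KEY" --stdio
```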
**Usage in test planning:**
```
"Analyze the Figma design file at [URL] and generate visual validation test cases for:
- Color scheme compliance
- Typography specifications
- Component spacing
- Responsive breakpoints
- Interactive states"
```
**Example queries:**
```
"Get button specifications from Figma design [URL]"
"Compare navigation menu implementation against Figma design"
"Extract spacing values for dashboard layout from Figma"
"List all color tokens used in Figma design system"
```
## Test Case Examples
### Example 1: Login Flow
```markdown
## TC-LOGIN-001: Valid User Login
**Priority:** P0 (Critical)
**Type:** Functional
**Estimated Time:** 2 minutes
### Objective
Verify users can successfully log in with valid credentials
### Preconditions
- User account exists (test@example.com / Test123!)
- User is not already logged in
- Browser cookies cleared
### Test Steps
1. Navigate to https://app.example.com/login
**Expected:** Login page displays with email and password fields
2. Enter email: test@example.com
**Expected:** Email field accepts input
3. Enter password: Test123!
**Expected:** Password field shows masked characters
4. Click "Login" button
**Expected:**
- Loading indicator appears
- User redirected to /dashboard
- Welcome message shown: "Welcome back, Test User"
- Avatar/profile image displayed in header
### Post-conditions
- User session created
- Auth token stored
- Analytics event logged
### Visual Validation (with Figma)
- Compare dashboard layout against Figma design [link]
- Verify welcome message typography: 24px, Medium, #1A1A1A
- Check avatar size: 40x40px, border-radius 50%
### Edge Cases to Consider
- TC-LOGIN-002: Invalid password
- TC-LOGIN-003: Non-existent email
- TC-LOGIN-004: SQL injection attempt
- TC-LOGIN-005: Very long password
```
### Example 2: Responsive Design Validation
```markdown
## TC-UI-045: Mobile Navigation Menu
**Priority:** P1 (High)
**Type:** UI/Responsive
**Devices:** Mobile (iPhone, Android)
### Objective
Verify navigation menu works correctly on mobile devices
### Preconditions
- Access from mobile device or responsive mode
- Viewport width: 375px (iPhone SE) to 428px (iPhone Pro Max)
### Test Steps
1. Open homepage on mobile device
**Expected:** Hamburger menu icon visible (top-right)
2. Tap hamburger icon
**Expected:**
- Menu slides in from right
- Overlay appears over content
- Close (X) button visible
3. Tap menu item
**Expected:** Navigate to section, menu closes
4. Compare against Figma mobile design [link]
**Expected:**
- Menu width: 280px
- Slide animation: 300ms ease-out
- Overlay opacity: 0.5, color #000000
- Font size: 16px, line-height 24px
### Breakpoints to Test
- 375px (iPhone SE)
- 390px (iPhone 14)
- 428px (iPhone 14 Pro Max)
- 360px (Galaxy S21)
```
## Summary
This QA Test Planner skill provides:
- **Structured test planning** - Comprehensive test strategies
- **Manual test case generation** - Detailed, repeatable tests
- **Regression testing** - Protect against breaking changes
- **Figma validation** - Design-implementation verification
- **Bug documentation** - Clear, actionable reports
- **Coverage analysis** - Identify testing gaps
**Remember:** Quality is everyone's responsibility, but QA ensures it's systematically verified.
---
**"Testing shows the presence, not the absence of bugs." - Edsger Dijkstra**
**"Quality is not an act, it is a habit." - Aristotle**

345
skills/qa-test-planner/references/figma_validation.md Normal file

@@ -0,0 +1,345 @@
# Figma Design Validation with MCP
Guide for validating UI implementation against Figma designs using Figma MCP.
---
## Prerequisites
**Required:**
- Figma MCP server configured
- Access to Figma design files
- Figma URLs for components/pages
**Setup:**
```bash
# Install Figma MCP (follow official docs)
# Configure API token
# Verify access to design files
```
---
## Validation Workflow
### Step 1: Get Design Specifications
**Using Figma MCP:**
```
"Get the specifications for the primary button from Figma file at [URL]"
Response includes:
- Dimensions (width, height)
- Colors (background, text, border)
- Typography (font, size, weight)
- Spacing (padding, margin)
- Border radius
- States (default, hover, active, disabled)
```
### Step 2: Inspect Implementation
**Browser DevTools:**
1. Inspect element
2. Check computed styles
3. Verify dimensions
4. Compare colors (use color picker)
5. Check typography
6. Test interactive states
### Step 3: Document Discrepancies
**Create test case or bug:**
```
TC-UI-001: Primary Button Visual Validation
Design (Figma):
- Size: 120x40px
- Background: #0066FF
- Border-radius: 8px
- Font: 16px Medium #FFFFFF
Implementation:
- Size: 120x40px ✓
- Background: #0052CC ✗ (wrong shade)
- Border-radius: 8px ✓
- Font: 16px Regular #FFFFFF ✗ (wrong weight)
Status: FAIL
Bugs: BUG-234, BUG-235
```
---
## What to Validate
### Layout & Spacing
- [ ] Component dimensions
- [ ] Padding (all sides)
- [ ] Margins
- [ ] Grid alignment
- [ ] Responsive breakpoints
- [ ] Container max-width
**Example Query:**
```
"Extract spacing values for the card component from Figma"
```
### Typography
- [ ] Font family
- [ ] Font size
- [ ] Font weight
- [ ] Line height
- [ ] Letter spacing
- [ ] Text color
- [ ] Text alignment
**Example Query:**
```
"Get typography specifications for all heading levels from Figma design system"
```
### Colors
- [ ] Background colors
- [ ] Text colors
- [ ] Border colors
- [ ] Shadow colors
- [ ] Gradient values
- [ ] Opacity values
**Example Query:**
```
"List all color tokens used in the navigation component"
```
### Components
- [ ] Icon sizes and colors
- [ ] Button states
- [ ] Input field styling
- [ ] Checkbox/radio appearance
- [ ] Dropdown styling
- [ ] Card components
**Example Query:**
```
"Compare the implemented dropdown menu with Figma design at [URL]"
```
### Interactive States
- [ ] Default state
- [ ] Hover state
- [ ] Active/pressed state
- [ ] Focus state
- [ ] Disabled state
- [ ] Loading state
- [ ] Error state
---
## Common Discrepancies
### Typography Mismatches
- Wrong font weight (e.g., Regular instead of Medium)
- Incorrect font size
- Missing line-height
- Color hex codes off by a shade
### Spacing Issues
- Padding not matching
- Inconsistent margins
- Grid misalignment
- Component spacing varies
### Color Differences
- Hex values off (#0066FF vs #0052CC)
- Opacity not applied
- Gradient angles wrong
- Shadow colors incorrect
### Responsive Behavior
- Breakpoints don't match
- Mobile layout different
- Tablet view inconsistent
- Scaling not as designed
---
## Test Case Template
```markdown
## TC-UI-XXX: [Component] Visual Validation
**Figma Design:** [URL to specific component]
### Desktop (1920x1080)
**Layout:**
- [ ] Width: XXXpx
- [ ] Height: XXXpx
- [ ] Padding: XXpx XXpx XXpx XXpx
- [ ] Margin: XXpx
**Typography:**
- [ ] Font: [Family] [Weight]
- [ ] Size: XXpx
- [ ] Line-height: XXpx
- [ ] Color: #XXXXXX
**Colors:**
- [ ] Background: #XXXXXX
- [ ] Border: Xpx solid #XXXXXX
- [ ] Shadow: XXpx XXpx XXpx rgba(X,X,X,X)
**Interactive States:**
- [ ] Hover: [changes]
- [ ] Active: [changes]
- [ ] Focus: [changes]
- [ ] Disabled: [changes]
### Tablet (768px)
- [ ] [Responsive changes]
### Mobile (375px)
- [ ] [Responsive changes]
### Status
- [ ] PASS - All match
- [ ] FAIL - Discrepancies found
- [ ] BLOCKED - Design incomplete
```
---
## Figma MCP Queries
### Component Specifications
```
"Get complete specifications for the [component name] from Figma at [URL]"
"Extract all button variants from the design system"
"List typography styles defined in Figma"
```
### Color System
```
"Show me all color tokens in the Figma design system"
"What colors are used in the navigation bar design?"
"Get the exact hex values for primary, secondary, and accent colors"
```
### Spacing & Layout
```
"What are the padding values for the card component?"
"Extract grid specifications from the page layout"
"Get spacing tokens (8px, 16px, 24px, etc.)"
```
### Responsive Breakpoints
```
"What are the defined breakpoints in this Figma design?"
"Show mobile vs desktop layout differences for [component]"
```
---
## Bug Report for UI Discrepancies
```markdown
# BUG-XXX: [Component] doesn't match Figma design
**Severity:** Medium (UI)
**Type:** Visual
## Design vs Implementation
**Figma Design:** [URL]
**Expected (from Figma):**
- Button background: #0066FF
- Font weight: 600 (Semi-bold)
- Padding: 12px 24px
**Actual (in implementation):**
- Button background: #0052CC
- Font weight: 400 (Regular) ❌
- Padding: 12px 24px ✓
## Screenshots
- Figma design: [attach]
- Current implementation: [attach]
- Side-by-side comparison: [attach]
## Impact
Users see inconsistent branding. Button appears less prominent than designed.
```
---
## Automation Ideas
### Visual Regression Testing
- Capture screenshots
- Compare against Figma exports
- Highlight pixel differences
- Tools: Percy, Chromatic, BackstopJS (a rough CLI starting point is sketched below)
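Dedicated tools handle baselines, review workflows, and reporting; for a rough first pass, a pixel diff can also be scripted with ImageMagick. A minimal sketch, assuming a Figma frame exported as PNG and a screenshot captured at the same resolution (file names are placeholders):
```bash
#!/bin/bash
# Rough pixel diff between a Figma export and an implementation screenshot.
# ImageMagick's `compare -metric AE` reports the number of differing pixels
# (on stderr) and writes an image with the differences highlighted.
DESIGN="figma-export.png"        # placeholder: exported Figma frame
ACTUAL="implementation.png"      # placeholder: screenshot at the same size

diff_pixels=$(compare -metric AE "$DESIGN" "$ACTUAL" diff.png 2>&1 || true)
echo "Differing pixels: ${diff_pixels} (inspect diff.png for highlighted regions)"
```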
### Design Token Validation
- Extract Figma design tokens
- Compare with CSS variables
- Flag mismatches
- Automate with scripts (see the sketch below)
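If design tokens are exported from Figma as JSON (for example via a tokens plugin), the comparison against CSS custom properties can be scripted. A minimal sketch using `jq`, assuming a flat `{"token-name": "value"}` export and hypothetical file paths:
```bash
#!/bin/bash
# Flag exported design tokens whose value never appears in the stylesheet.
# Token file format, file names, and paths are assumptions; adjust the jq
# filter to match how your tokens are actually exported from Figma.
TOKENS="figma-tokens.json"        # e.g. {"color-primary": "#0066FF", "spacing-md": "16px"}
CSS="src/styles/variables.css"

jq -r 'to_entries[] | "\(.key) \(.value)"' "$TOKENS" | while read -r name value; do
  if ! grep -qi -- "$value" "$CSS"; then
    echo "MISMATCH: token '${name}' expects '${value}' but it was not found in ${CSS}"
  fi
done
```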
---
## Best Practices
**DO:**
- ✅ Always reference specific Figma URLs
- ✅ Test all component states
- ✅ Check responsive breakpoints
- ✅ Document exact values (not "close enough")
- ✅ Screenshot both design and implementation
- ✅ Test in multiple browsers
**DON'T:**
- ❌ Assume "it looks right"
- ❌ Skip hover/active states
- ❌ Ignore small color differences
- ❌ Test only on one screen size
- ❌ Forget to check typography
- ❌ Miss spacing issues
---
## Checklist for UI Test Cases
Per component:
- [ ] Figma URL documented
- [ ] Desktop layout validated
- [ ] Mobile/tablet responsive checked
- [ ] All interactive states tested
- [ ] Colors match exactly (use color picker)
- [ ] Typography specifications correct
- [ ] Spacing (padding/margins) accurate
- [ ] Icons match design
- [ ] Shadows/borders match
- [ ] Animations match timing/easing
---
## Quick Reference
| Element | What to Check | Tool |
|---------|---------------|------|
| Colors | Hex values exact | Browser color picker |
| Spacing | Padding/margin px | DevTools computed styles |
| Typography | Font, size, weight | DevTools font panel |
| Layout | Width, height, position | DevTools box model |
| States | Hover, active, focus | Manual interaction |
| Responsive | Breakpoint behavior | DevTools device mode |
---
**Remember:** Pixel-perfect implementation builds user trust and brand consistency.

371
skills/qa-test-planner/references/regression_testing.md Normal file

@@ -0,0 +1,371 @@
# Regression Testing Guide
Comprehensive guide to regression testing strategies and execution.
---
## What is Regression Testing?
**Definition:** Re-testing existing functionality to ensure new changes haven't broken anything.
**When to run:**
- Before every release
- After bug fixes
- After new features
- After refactoring
- Weekly/nightly builds
---
## Regression Test Suite Structure
### 1. Smoke Test Suite (15-30 min)
**Purpose:** Quick sanity check
**When:** Daily, before detailed testing
**Coverage:**
- Critical user paths
- Core functionality
- System health checks
- Build stability
**Example Smoke Suite:**
```
SMOKE-001: User can log in
SMOKE-002: User can navigate to main features
SMOKE-003: Critical API endpoints respond
SMOKE-004: Database connectivity works
SMOKE-005: User can complete primary action
SMOKE-006: User can log out
```
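The infrastructure checks in this suite (endpoint health, connectivity) can run unattended before the manual steps begin. A minimal sketch with `curl`, using placeholder URLs:
```bash
#!/bin/bash
# Automated slice of the smoke suite: fail fast if core endpoints are down.
# BASE_URL and the endpoint paths are placeholders for your environment.
BASE_URL="${BASE_URL:-https://staging.example.com}"
ENDPOINTS=("/health" "/api/status" "/login")

failed=0
for path in "${ENDPOINTS[@]}"; do
  if curl -fsS -o /dev/null --max-time 10 "${BASE_URL}${path}"; then
    echo "PASS  ${path}"
  else
    echo "FAIL  ${path}"
    failed=1
  fi
done
exit "$failed"
```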
### 2. Full Regression Suite (2-4 hours)
**Purpose:** Comprehensive validation
**When:** Before releases, weekly
**Coverage:**
- All functional test cases
- Integration scenarios
- UI validation
- Data integrity
- Security checks
### 3. Targeted Regression (30-60 min)
**Purpose:** Test impacted areas
**When:** After specific changes
**Coverage:**
- Modified feature area
- Related components
- Integration points
- Dependent functionality
---
## Building a Regression Suite
### Step 1: Identify Critical Paths
**Questions:**
- What can users absolutely NOT live without?
- What generates revenue?
- What handles sensitive data?
- What's used most frequently?
**Example Critical Paths:**
- User authentication
- Payment processing
- Data submission
- Report generation
- Core business logic
### Step 2: Prioritize Test Cases
**P0 (Must Run):**
- Business-critical functionality
- Security-related tests
- Data integrity checks
- Revenue-impacting features
**P1 (Should Run):**
- Major features
- Common user flows
- Integration points
- Performance checks
**P2 (Nice to Run):**
- Minor features
- Edge cases
- UI polish
- Optional functionality
### Step 3: Group by Feature Area
```
Authentication & Authorization
├─ Login/Logout
├─ Password reset
├─ Session management
└─ Permissions
Payment Processing
├─ Checkout flow
├─ Payment methods
├─ Refunds
└─ Receipt generation
User Management
├─ Profile updates
├─ Preferences
├─ Account settings
└─ Data export
```
---
## Regression Suite Examples
### E-commerce Regression Suite
**Smoke Tests (20 min):**
1. Homepage loads
2. User can log in
3. Product search works
4. Add to cart functions
5. Checkout accessible
6. Payment gateway responds
**Full Regression (3 hours):**
**User Account (30 min):**
- Registration
- Login/Logout
- Password reset
- Profile updates
- Address management
**Product Catalog (45 min):**
- Browse categories
- Search functionality
- Filters and sorting
- Product details
- Image zoom
- Reviews display
**Shopping Cart (30 min):**
- Add items
- Update quantities
- Remove items
- Apply discounts
- Save for later
- Cart persistence
**Checkout & Payment (45 min):**
- Guest checkout
- Registered user checkout
- Multiple addresses
- Payment methods
- Order confirmation
- Email notifications
**Order Management (30 min):**
- Order history
- Order tracking
- Cancellations
- Returns/Refunds
- Reorders
---
## Execution Strategy
### Test Execution Order
**1. Smoke first**
- If smoke fails → stop, fix build
- If smoke passes → proceed to full regression
**2. P0 tests next**
- Critical functionality
- Must pass before proceeding
**3. P1 then P2**
- Complete remaining tests
- Track failures
**4. Exploratory**
- Unscripted testing
- Find unexpected issues
### Pass/Fail Criteria
**PASS:**
- All P0 tests pass
- 90%+ P1 tests pass
- No critical bugs open
- Performance acceptable
**FAIL (Block Release):**
- Any P0 test fails
- Critical bug discovered
- Security vulnerability
- Data loss scenario
**CONDITIONAL PASS:**
- P1 failures with workarounds
- Known issues documented
- Fix plan in place (a scripted version of these checks is sketched below)
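The countable parts of these criteria (all P0 passing, 90%+ P1 pass rate) can be enforced by a small gate script once results are tallied; open bugs and performance still need a human call. A minimal sketch, taking the counts as arguments:
```bash
#!/bin/bash
# Usage: ./release_gate.sh <p0_total> <p0_pass> <p1_total> <p1_pass>
# Encodes only the countable criteria above.
P0_TOTAL=$1; P0_PASS=$2; P1_TOTAL=$3; P1_PASS=$4

if [ "$P0_PASS" -lt "$P0_TOTAL" ]; then
  echo "FAIL: $((P0_TOTAL - P0_PASS)) P0 test(s) not passing - block the release"
  exit 1
fi
if [ "$P1_TOTAL" -gt 0 ] && [ $((100 * P1_PASS / P1_TOTAL)) -lt 90 ]; then
  echo "CONDITIONAL: P1 pass rate below 90% - document workarounds and a fix plan"
  exit 2
fi
echo "PASS: all P0 tests pass and P1 pass rate is at least 90%"
```
For example, `./release_gate.sh 25 25 50 47` passes (94% P1 pass rate), while `./release_gate.sh 25 23 50 47` blocks the release.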
---
## Regression Test Management
### Test Suite Maintenance
**Monthly Review:**
- Remove obsolete tests (a quick staleness check is sketched below)
- Update changed functionality
- Add new critical paths
- Optimize slow tests
**After Each Release:**
- Update test data
- Fix broken tests
- Add regression for bugs found
- Document changes
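One cheap maintenance signal is file age: test cases untouched for months are good candidates for the obsolete-test review mentioned above. A minimal sketch, assuming a hypothetical `testcases/` directory:
```bash
# List test case files not modified in roughly 90 days as review candidates.
# The directory name is a placeholder.
find testcases -name 'TC-*.md' -mtime +90 -print | sort
```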
### Automation Considerations
**Good Candidates for Automation:**
- Stable, repetitive tests
- Smoke tests
- API tests
- Data validation
- Cross-browser checks
**Keep Manual:**
- Exploratory testing
- Usability evaluation
- Visual design validation
- Complex user scenarios
---
## Regression Test Execution Report
```markdown
# Regression Test Report: Release 2.5.0
**Date:** 2024-01-15
**Build:** v2.5.0-rc1
**Tester:** QA Team
**Environment:** Staging
## Summary
| Suite | Total | Pass | Fail | Blocked | Pass Rate |
|-------|-------|------|------|---------|-----------|
| Smoke | 10 | 10 | 0 | 0 | 100% |
| P0 Critical | 25 | 23 | 2 | 0 | 92% |
| P1 High | 50 | 47 | 2 | 1 | 94% |
| P2 Medium | 40 | 38 | 1 | 1 | 95% |
| **TOTAL** | **125** | **118** | **5** | **2** | **94%** |
## Critical Failures (P0)
### BUG-234: Payment processing fails for Visa
- **Test:** TC-PAY-001
- **Impact:** High - Blocks 40% of transactions
- **Status:** In Progress
- **ETA:** 2024-01-16
### BUG-235: User session expires prematurely
- **Test:** TC-AUTH-045
- **Impact:** Medium - Users logged out unexpectedly
- **Status:** Under investigation
## Recommendation
**Status:** ⚠️ CONDITIONAL GO
- Fix BUG-234 (payment) before release
- BUG-235 acceptable with documented workaround
- Retest after fixes
- Final regression run before production deployment
## Risks
- Payment issue could impact revenue
- Session bug may frustrate users
- Limited time before release deadline
## Next Steps
1. Fix BUG-234 by EOD
2. Retest payment flow
3. Document session workaround
4. Final smoke test before release
```
---
## Common Pitfalls
**❌ Don't:**
- Run same tests without updating
- Skip regression "to save time"
- Ignore failures in low-priority tests
- Test only happy paths
- Forget to update test data
- Run regression once and forget
**✅ Do:**
- Maintain suite regularly
- Run regression consistently
- Investigate all failures
- Include edge cases
- Keep test data fresh
- Automate repetitive tests
---
## Regression Checklist
**Before Execution:**
- [ ] Test environment ready
- [ ] Build deployed
- [ ] Test data prepared
- [ ] Previous bugs verified fixed
- [ ] Test suite reviewed/updated
**During Execution:**
- [ ] Follow test execution order
- [ ] Document all failures
- [ ] Screenshot/record issues
- [ ] Note unexpected behavior
- [ ] Track blockers
**After Execution:**
- [ ] Compile results
- [ ] File new bugs
- [ ] Update test cases
- [ ] Report to stakeholders
- [ ] Archive artifacts
---
## Quick Reference
| Suite Type | Duration | Frequency | Coverage |
|------------|----------|-----------|----------|
| Smoke | 15-30 min | Daily | Critical paths |
| Targeted | 30-60 min | Per change | Affected areas |
| Full | 2-4 hours | Weekly/Release | Comprehensive |
| Sanity | 10-15 min | After hotfix | Quick validation |
**Remember:** Regression testing is insurance against breaking existing functionality.

276
skills/qa-test-planner/scripts/create_bug_report.sh Normal file

@@ -0,0 +1,276 @@
#!/bin/bash
# Bug Report Generator
# Create structured, reproducible bug reports
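# Usage: ./create_bug_report.sh [output_dir]   (defaults to the current directory)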
set -e
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m'
echo -e "${RED}╔══════════════════════════════════════════════════╗${NC}"
echo -e "${RED}║ Bug Report Generator ║${NC}"
echo -e "${RED}╚══════════════════════════════════════════════════╝${NC}"
echo ""
prompt_input() {
    local prompt_text="$1"
    local var_name="$2"
    local required="$3"
    while true; do
        echo -e "${CYAN}${prompt_text}${NC}"
        read -r input
        if [ -n "$input" ]; then
            # printf -v assigns safely even when the input contains quotes
            printf -v "$var_name" '%s' "$input"
            break
        elif [ "$required" != "true" ]; then
            printf -v "$var_name" '%s' ""
            break
        else
            echo -e "${RED}This field is required.${NC}"
        fi
    done
}
# Bug ID
BUG_ID="BUG-$(date +%Y%m%d%H%M%S)"
echo -e "${YELLOW}Auto-generated Bug ID: $BUG_ID${NC}"
echo ""
# Basic Info
prompt_input "Bug title (clear, specific):" BUG_TITLE true
echo ""
echo "Severity:"
echo "1) Critical - System crash, data loss, security issue"
echo "2) High - Major feature broken, no workaround"
echo "3) Medium - Feature partially broken, workaround exists"
echo "4) Low - Cosmetic, minor inconvenience"
echo ""
prompt_input "Select severity (1-4):" SEVERITY_NUM true
case $SEVERITY_NUM in
1) SEVERITY="Critical" ;;
2) SEVERITY="High" ;;
3) SEVERITY="Medium" ;;
4) SEVERITY="Low" ;;
*) SEVERITY="Medium" ;;
esac
echo ""
echo "Priority:"
echo "1) P0 - Blocks release"
echo "2) P1 - Fix before release"
echo "3) P2 - Fix in next release"
echo "4) P3 - Fix when possible"
echo ""
prompt_input "Select priority (1-4):" PRIORITY_NUM true
case $PRIORITY_NUM in
1) PRIORITY="P0" ;;
2) PRIORITY="P1" ;;
3) PRIORITY="P2" ;;
4) PRIORITY="P3" ;;
*) PRIORITY="P2" ;;
esac
echo ""
echo "Bug Type:"
echo "1) Functional"
echo "2) UI"
echo "3) Performance"
echo "4) Security"
echo ""
prompt_input "Select bug type (1-4):" TYPE_NUM true
case $TYPE_NUM in
1) TEST_TYPE="Functional" ;;
2) TEST_TYPE="UI" ;;
3) TEST_TYPE="Performance" ;;
4) TEST_TYPE="Security" ;;
*) TEST_TYPE="Functional" ;;
esac
# Environment
echo ""
echo -e "${MAGENTA}━━━ Environment Details ━━━${NC}"
echo ""
prompt_input "Operating System (e.g., Windows 11, macOS 14):" OS true
prompt_input "Browser & Version (e.g., Chrome 120, Firefox 121):" BROWSER true
prompt_input "Device (e.g., Desktop, iPhone 15):" DEVICE false
prompt_input "Build/Version number:" BUILD true
prompt_input "URL or page where bug occurs:" URL false
# Bug Description
echo ""
echo -e "${MAGENTA}━━━ Bug Description ━━━${NC}"
echo ""
prompt_input "Brief description of the issue:" DESCRIPTION true
# Reproduction Steps
echo ""
echo -e "${MAGENTA}━━━ Steps to Reproduce ━━━${NC}"
echo ""
echo "Enter reproduction steps (one per line, press Enter twice when done):"
REPRO_STEPS=""
STEP_NUM=1
while true; do
    read -r line
    if [ -z "$line" ]; then
        break
    fi
    # Append a real newline; a literal "\n" inside double quotes is not expanded
    REPRO_STEPS="${REPRO_STEPS}${STEP_NUM}. ${line}"$'\n'
    STEP_NUM=$((STEP_NUM + 1))
done
# Expected vs Actual
echo ""
prompt_input "Expected behavior:" EXPECTED true
prompt_input "Actual behavior:" ACTUAL true
# Additional Info
echo ""
echo -e "${MAGENTA}━━━ Additional Information ━━━${NC}"
echo ""
prompt_input "Console errors (paste if any):" CONSOLE_ERRORS false
prompt_input "Frequency (Always/Sometimes/Rare):" FREQUENCY false
prompt_input "How many users affected (estimate):" USER_IMPACT false
prompt_input "Workaround available? (describe if yes):" WORKAROUND false
prompt_input "Related test case ID:" TEST_CASE false
prompt_input "Figma design link (if UI bug):" FIGMA_LINK false
prompt_input "First noticed (date/build):" FIRST_NOTICED false
FILENAME="${BUG_ID}.md"
# Optional: first argument overrides the output directory
OUTPUT_DIR="${1:-.}"
OUTPUT_FILE="$OUTPUT_DIR/$FILENAME"
echo ""
echo -e "${BLUE}Generating bug report...${NC}"
echo ""
cat > "$OUTPUT_FILE" << EOF
# ${BUG_ID}: ${BUG_TITLE}
**Severity:** ${SEVERITY}
**Priority:** ${PRIORITY}
**Type:** ${TEST_TYPE:-Functional}
**Status:** Open
**Reported:** $(date +%Y-%m-%d)
**Reporter:** [Your Name]
---
## Environment
- **OS:** ${OS}
- **Browser:** ${BROWSER}
- **Device:** ${DEVICE:-Desktop}
- **Build:** ${BUILD}
- **URL:** ${URL:-N/A}
---
## Description
${DESCRIPTION}
---
## Steps to Reproduce
${REPRO_STEPS}
---
## Expected Behavior
${EXPECTED}
---
## Actual Behavior
${ACTUAL}
---
## Visual Evidence
- [ ] Screenshot attached
- [ ] Screen recording attached
- [ ] Console logs attached
**Console Errors:**
\`\`\`
${CONSOLE_ERRORS:-None}
\`\`\`
---
## Impact
- **Frequency:** ${FREQUENCY:-Unknown}
- **User Impact:** ${USER_IMPACT:-Unknown}
- **Workaround:** ${WORKAROUND:-None available}
---
## Additional Context
${FIGMA_LINK:+**Figma Design:** ${FIGMA_LINK}}
${TEST_CASE:+**Related Test Case:** ${TEST_CASE}}
${FIRST_NOTICED:+**First Noticed:** ${FIRST_NOTICED}}
**Is this a regression?** [Yes/No - if yes, since when]
---
## Root Cause
[To be filled by developer]
---
## Fix
[To be filled by developer]
---
## Verification
- [ ] Bug fix verified in dev environment
- [ ] Regression testing completed
- [ ] Related test cases passing
- [ ] Ready for release
**Verified By:** ___________
**Date:** ___________
---
## Comments
[Discussion and updates]
EOF
echo -e "${GREEN}✅ Bug report generated successfully!${NC}"
echo ""
echo -e "File location: ${BLUE}$OUTPUT_FILE${NC}"
echo ""
echo -e "${RED}⚠️ IMPORTANT NEXT STEPS:${NC}"
echo "1. Attach screenshots/screen recordings"
echo "2. Add console errors if available"
echo "3. Verify reproduction steps work"
echo "4. Submit to bug tracking system"
if [ -n "$FIGMA_LINK" ]; then
echo "5. Verify against Figma design"
fi
echo ""
echo -e "${CYAN}Tip: Clear, reproducible steps = faster fixes${NC}"
echo ""

302
skills/qa-test-planner/scripts/generate_test_cases.sh Normal file

@@ -0,0 +1,302 @@
#!/bin/bash
# Manual Test Case Generator
# Interactive workflow for creating comprehensive test cases
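# Usage: ./generate_test_cases.sh [output_dir]   (defaults to the current directory)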
set -e
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m'
echo -e "${BLUE}╔══════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Manual Test Case Generator ║${NC}"
echo -e "${BLUE}╚══════════════════════════════════════════════════╝${NC}"
echo ""
# Helper functions
prompt_input() {
    local prompt_text="$1"
    local var_name="$2"
    local required="$3"
    while true; do
        echo -e "${CYAN}${prompt_text}${NC}"
        read -r input
        if [ -n "$input" ]; then
            # printf -v assigns safely even when the input contains quotes
            printf -v "$var_name" '%s' "$input"
            break
        elif [ "$required" != "true" ]; then
            printf -v "$var_name" '%s' ""
            break
        else
            echo -e "${RED}This field is required.${NC}"
        fi
    done
}
# Step 1: Basic Info
echo -e "${MAGENTA}━━━ Step 1: Test Case Basics ━━━${NC}"
echo ""
prompt_input "Test Case ID (e.g., TC-LOGIN-001):" TC_ID true
prompt_input "Test Case Title:" TC_TITLE true
echo ""
echo "Priority:"
echo "1) P0 - Critical (blocks release)"
echo "2) P1 - High (important features)"
echo "3) P2 - Medium (nice to have)"
echo "4) P3 - Low (minor issues)"
echo ""
prompt_input "Select priority (1-4):" PRIORITY_NUM true
case $PRIORITY_NUM in
1) PRIORITY="P0 (Critical)" ;;
2) PRIORITY="P1 (High)" ;;
3) PRIORITY="P2 (Medium)" ;;
4) PRIORITY="P3 (Low)" ;;
*) PRIORITY="P2 (Medium)" ;;
esac
echo ""
echo "Test Type:"
echo "1) Functional"
echo "2) UI/Visual"
echo "3) Integration"
echo "4) Regression"
echo "5) Performance"
echo "6) Security"
echo ""
prompt_input "Select test type (1-6):" TYPE_NUM true
case $TYPE_NUM in
1) TEST_TYPE="Functional" ;;
2) TEST_TYPE="UI/Visual" ;;
3) TEST_TYPE="Integration" ;;
4) TEST_TYPE="Regression" ;;
5) TEST_TYPE="Performance" ;;
6) TEST_TYPE="Security" ;;
*) TEST_TYPE="Functional" ;;
esac
prompt_input "Estimated test time (minutes):" EST_TIME false
# Step 2: Objective and Description
echo ""
echo -e "${MAGENTA}━━━ Step 2: Test Objective ━━━${NC}"
echo ""
prompt_input "What are you testing? (objective):" OBJECTIVE true
prompt_input "Why is this test important?" WHY_IMPORTANT false
# Step 3: Preconditions
echo ""
echo -e "${MAGENTA}━━━ Step 3: Preconditions ━━━${NC}"
echo ""
echo "Enter preconditions (one per line, press Enter twice when done):"
PRECONDITIONS=""
while true; do
    read -r line
    if [ -z "$line" ]; then
        break
    fi
    # Append a real newline; a literal "\n" inside double quotes is not expanded
    PRECONDITIONS="${PRECONDITIONS}- ${line}"$'\n'
done
# Step 4: Test Steps
echo ""
echo -e "${MAGENTA}━━━ Step 4: Test Steps ━━━${NC}"
echo ""
echo "Enter test steps (format: action | expected result)"
echo "Type 'done' when finished"
echo ""
TEST_STEPS=""
STEP_NUM=1
while true; do
    echo -e "${YELLOW}Step $STEP_NUM:${NC}"
    prompt_input "Action:" ACTION false
    if [ "$ACTION" = "done" ] || [ -z "$ACTION" ]; then
        break
    fi
    prompt_input "Expected result:" EXPECTED true
    # Build real newlines; a literal "\n" inside double quotes is not expanded
    TEST_STEPS="${TEST_STEPS}${STEP_NUM}. ${ACTION}"$'\n'"   **Expected:** ${EXPECTED}"$'\n\n'
    STEP_NUM=$((STEP_NUM + 1))
done
# Step 5: Test Data
echo ""
echo -e "${MAGENTA}━━━ Step 5: Test Data ━━━${NC}"
echo ""
prompt_input "Test data required (e.g., user credentials, sample data):" TEST_DATA false
# Step 6: Figma Design (if UI test)
echo ""
if [ "$TEST_TYPE" = "UI/Visual" ]; then
echo -e "${MAGENTA}━━━ Step 6: Figma Design Validation ━━━${NC}"
echo ""
prompt_input "Figma design URL (if applicable):" FIGMA_URL false
prompt_input "Visual elements to validate:" VISUAL_CHECKS false
fi
# Step 7: Edge Cases
echo ""
echo -e "${MAGENTA}━━━ Step 7: Additional Info ━━━${NC}"
echo ""
prompt_input "Edge cases or variations to consider:" EDGE_CASES false
prompt_input "Related test cases (IDs):" RELATED_TCS false
prompt_input "Notes or comments:" NOTES false
# Generate filename
FILENAME="${TC_ID}.md"
FILENAME="${FILENAME//[^a-zA-Z0-9_-]/}"
OUTPUT_DIR="."
if [ ! -z "$1" ]; then
OUTPUT_DIR="$1"
fi
OUTPUT_FILE="$OUTPUT_DIR/$FILENAME"
# Generate test case
echo ""
echo -e "${BLUE}Generating test case...${NC}"
echo ""
cat > "$OUTPUT_FILE" << EOF
# ${TC_ID}: ${TC_TITLE}
**Priority:** ${PRIORITY}
**Type:** ${TEST_TYPE}
**Status:** Not Run
**Estimated Time:** ${EST_TIME:-TBD} minutes
**Created:** $(date +%Y-%m-%d)
---
## Objective
${OBJECTIVE}
${WHY_IMPORTANT:+**Why this matters:** ${WHY_IMPORTANT}}
---
## Preconditions
${PRECONDITIONS}
---
## Test Steps
${TEST_STEPS}
---
## Test Data
${TEST_DATA:-No specific test data required}
---
EOF
# Add Figma section if UI test
if [ "$TEST_TYPE" = "UI/Visual" ] && [ -n "$FIGMA_URL" ]; then
cat >> "$OUTPUT_FILE" << EOF
## Visual Validation (Figma)
**Design Reference:** ${FIGMA_URL}
**Elements to validate:**
${VISUAL_CHECKS}
**Verification checklist:**
- [ ] Layout matches Figma design
- [ ] Spacing (padding/margins) accurate
- [ ] Typography (font, size, weight, color) correct
- [ ] Colors match design system
- [ ] Component states (hover, active, disabled) implemented
- [ ] Responsive behavior as designed
---
EOF
fi
cat >> "$OUTPUT_FILE" << EOF
## Post-conditions
- [Describe system state after test execution]
- [Any cleanup required]
---
## Edge Cases & Variations
${EDGE_CASES:-Consider boundary values, null inputs, special characters, concurrent users}
---
## Related Test Cases
${RELATED_TCS:-None}
---
## Execution History
| Date | Tester | Build | Result | Notes |
|------|--------|-------|--------|-------|
| | | | Not Run | |
---
## Notes
${NOTES:-None}
---
## Attachments
- [ ] Screenshots
- [ ] Screen recordings
- [ ] Console logs
- [ ] Network traces
EOF
echo -e "${GREEN}✅ Test case generated successfully!${NC}"
echo ""
echo -e "File location: ${BLUE}$OUTPUT_FILE${NC}"
echo ""
echo -e "${YELLOW}Next steps:${NC}"
echo "1. Review test case for completeness"
echo "2. Add to test suite"
echo "3. Execute test and update results"
if [ "$TEST_TYPE" = "UI/Visual" ] && [ -n "$FIGMA_URL" ]; then
echo "4. Validate against Figma design using MCP"
fi
echo ""
echo -e "${CYAN}Tip: Create multiple test cases for comprehensive coverage${NC}"
echo ""