Initial commit

Zhongwei Li
2025-11-29 18:46:04 +08:00
commit dc6f26607c
8 changed files with 1184 additions and 0 deletions


@@ -0,0 +1,522 @@
---
name: a11y-checker-ci
description: Adds comprehensive accessibility testing to CI/CD pipelines using axe-core Playwright integration or pa11y-ci. Automatically generates markdown reports for pull requests showing WCAG violations with severity levels, affected elements, and remediation guidance. This skill should be used when implementing accessibility CI checks, adding a11y tests to pipelines, generating accessibility reports, enforcing WCAG compliance, automating accessibility scans, or setting up PR accessibility gates. Trigger terms include a11y ci, accessibility pipeline, wcag ci, axe-core ci, pa11y ci, accessibility reports, a11y automation, accessibility gate, compliance check.
---
# A11y Checker CI
Automated accessibility testing in CI/CD pipelines with comprehensive reporting.
## Overview
To enforce accessibility standards in continuous integration, this skill configures automated WCAG compliance checks using industry-standard tools and generates detailed reports for every pull request.
## When to Use
Use this skill when:
- Adding accessibility testing to CI/CD pipelines
- Enforcing WCAG compliance in automated builds
- Generating accessibility reports for pull requests
- Setting up quality gates based on accessibility
- Automating accessibility audits
- Tracking accessibility improvements over time
- Ensuring new features meet accessibility standards
## Supported Tools
### @axe-core/playwright
Industry-standard accessibility testing engine with Playwright integration.
**Advantages:**
- Comprehensive WCAG rule coverage
- Fast execution in parallel with E2E tests
- Detailed violation reporting
- Active maintenance and updates
### pa11y-ci
Command-line accessibility testing tool for multiple URLs.
**Advantages:**
- Simple configuration
- Standalone execution (no test framework integration required)
- Multiple URL scanning
- Custom rule configuration
## Implementation Steps
### 1. Choose Testing Approach
To select the appropriate tool:
**Use @axe-core/playwright when:**
- Already using Playwright for E2E tests
- Need integration with existing test suites
- Want to test dynamic/authenticated pages
- Require detailed test context
**Use pa11y-ci when:**
- Need simple URL-based scanning
- Want standalone accessibility checks
- Testing static pages or public URLs
- Prefer configuration-based approach
### 2. Install Dependencies
For @axe-core/playwright:
```bash
npm install -D @axe-core/playwright
```
For pa11y-ci:
```bash
npm install -D pa11y-ci
```
### 3. Create Test Configuration
#### Option A: @axe-core/playwright
Create test file using `assets/a11y-test.spec.ts`:
```typescript
import { test, expect } from '@playwright/test'
import AxeBuilder from '@axe-core/playwright'
test.describe('Accessibility Tests', () => {
test('homepage meets WCAG standards', async ({ page }) => {
await page.goto('/')
const accessibilityScanResults = await new AxeBuilder({ page })
.withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
.analyze()
expect(accessibilityScanResults.violations).toEqual([])
})
})
```
#### Option B: pa11y-ci
Create configuration using `assets/pa11y-config.json`:
```json
{
"defaults": {
"timeout": 30000,
"chromeLaunchConfig": {
"executablePath": "/usr/bin/chromium-browser",
"args": ["--no-sandbox"]
},
"standard": "WCAG2AA",
"runners": ["axe", "htmlcs"],
"ignore": []
},
"urls": [
"http://localhost:3000",
"http://localhost:3000/entities",
"http://localhost:3000/timeline"
]
}
```
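Run the scan locally with `npx pa11y-ci --config .pa11yci.json`. pa11y-ci exits non-zero when a page exceeds its issue threshold, which is the signal the CI gate keys on.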
### 4. Generate Report Script
Convert the raw results into a markdown report with the bundled `scripts/generate_a11y_report.py`:
```bash
python scripts/generate_a11y_report.py \
--input test-results/a11y-results.json \
--output accessibility-report.md \
--format github
```
The script generates markdown reports with:
- Executive summary with pass/fail status
- Violation count by severity (critical, serious, moderate, minor)
- Detailed violation list with:
- Rule ID and description
- WCAG criteria
- Impact level
- Affected elements
- Remediation guidance
- Historical comparison (if available)
### 5. Configure CI Pipeline
#### GitHub Actions
Use template from `assets/github-actions-a11y.yml`:
```yaml
name: Accessibility Tests
on:
pull_request:
branches: [main, master]
push:
branches: [main, master]
jobs:
a11y:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
      - name: Build application
        run: npm run build
      - name: Install Playwright browsers
        run: npx playwright install --with-deps chromium
      - name: Start server
        run: npm start &
      - name: Wait for server
        run: npx wait-on http://localhost:3000 -t 60000
      - name: Run accessibility tests
        id: a11y_tests
        continue-on-error: true
        run: npm run test:a11y
- name: Generate report
if: always()
run: |
python scripts/generate_a11y_report.py \
--input test-results/a11y-results.json \
--output accessibility-report.md \
--format github
- name: Comment PR
if: github.event_name == 'pull_request' && always()
uses: actions/github-script@v7
with:
script: |
const fs = require('fs')
const report = fs.readFileSync('accessibility-report.md', 'utf8')
            await github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
})
- name: Upload report
if: always()
uses: actions/upload-artifact@v4
with:
name: accessibility-report
path: |
accessibility-report.md
test-results/
      - name: Fail on violations
        if: steps.a11y_tests.outcome == 'failure'
        run: exit 1
```
#### GitLab CI
Use template from `assets/gitlab-ci-a11y.yml`:
```yaml
accessibility-test:
stage: test
image: mcr.microsoft.com/playwright:v1.40.0-focal
script:
- npm ci
- npm run build
- npm start &
- npx wait-on http://localhost:3000 -t 60000
- npm run test:a11y
- python scripts/generate_a11y_report.py
--input test-results/a11y-results.json
--output accessibility-report.md
--format gitlab
artifacts:
when: always
paths:
- accessibility-report.md
- test-results/
reports:
junit: test-results/junit.xml
only:
- merge_requests
- main
```
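The report step assumes a Python interpreter is available inside the Playwright image. The image is Ubuntu-based, so if `python` is missing, install it first (e.g. `apt-get update && apt-get install -y python3`) or move report generation into a separate job that uses a Python image.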
### 6. Add Package Scripts
Add to package.json:
```json
{
"scripts": {
"test:a11y": "playwright test a11y.spec.ts",
"test:a11y:ci": "playwright test a11y.spec.ts --reporter=json",
"pa11y": "pa11y-ci --config .pa11yci.json"
}
}
```
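Note that Playwright's `json` reporter writes to stdout unless `PLAYWRIGHT_JSON_OUTPUT_NAME` points it at a file; the `test-results/a11y-results.json` file consumed by the report generator is written by the test spec's own `afterAll` hook, so the reporter setting does not affect it.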
## Report Format
### Executive Summary
```markdown
# Accessibility Test Report
**Status:** [FAIL] Failed
**Total Violations:** 12
**Pages Tested:** 5
**WCAG Level:** AA
**Date:** 2025-01-15
## Summary by Severity
- [CRITICAL] Critical: 2
- [SERIOUS] Serious: 5
- [MODERATE] Moderate: 3
- [MINOR] Minor: 2
```
### Violation Details
````markdown
## Violations
### [CRITICAL] Critical (2)
#### 1. Form elements must have labels (label)
**WCAG Criteria:** 4.1.2 (Level A)
**Impact:** Critical
**Occurrences:** 3 elements
**Description:**
Each form field must have a programmatically associated label element.
**Affected Elements:**
- Line 45: `<input type="text" name="entity-name">`
- Line 67: `<input type="email" name="user-email">`
- Line 89: `<select name="entity-type">`
**How to Fix:**
Add a `<label>` element with a `for` attribute matching the input's `id`:
```html
<label for="entity-name">Entity Name</label>
<input id="entity-name" type="text" name="entity-name">
```
**More Info:** https://dequeuniversity.com/rules/axe/4.7/label
---
````
### Historical Comparison
```markdown
## Progress
| Metric | Previous | Current | Change |
|--------|----------|---------|--------|
| Total Violations | 15 | 12 | [OK] -3 |
| Critical | 3 | 2 | [OK] -1 |
| Serious | 7 | 5 | [OK] -2 |
| Moderate | 4 | 3 | [OK] -1 |
| Minor | 1 | 2 | [ERROR] +1 |
```
## Quality Gates
### Blocking Violations
To fail builds on specific violations, configure thresholds:
```typescript
const results = await new AxeBuilder({ page }).analyze()
// Fail on any critical violations
const critical = results.violations.filter(v => v.impact === 'critical')
expect(critical).toHaveLength(0)
// Allow up to 5 moderate violations
const moderate = results.violations.filter(v => v.impact === 'moderate')
expect(moderate.length).toBeLessThanOrEqual(5)
```
### Configuration File
Use `assets/a11y-thresholds.json`:
```json
{
"thresholds": {
"critical": 0,
"serious": 0,
"moderate": 5,
"minor": 10
},
"allowedViolations": [
"color-contrast"
],
"ignoreSelectors": [
"#third-party-widget",
"[data-testid='external-embed']"
]
}
```
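The assets do not prescribe how tests consume this file. A minimal sketch, assuming the file sits at the repository root (the loader and test below are illustrative, not part of the skill's assets):
```typescript
import { test, expect } from '@playwright/test'
import AxeBuilder from '@axe-core/playwright'
import * as fs from 'fs'

// Illustrative loader; the path is an assumption
const config = JSON.parse(fs.readFileSync('a11y-thresholds.json', 'utf8'))

test('homepage stays within violation thresholds', async ({ page }) => {
  await page.goto('/')
  const builder = new AxeBuilder({ page })
  // Map ignoreSelectors onto axe's element exclusion
  for (const selector of config.ignoreSelectors) builder.exclude(selector)
  const { violations } = await builder.analyze()
  // Enforce per-impact limits, skipping explicitly allowed rules
  for (const [impact, limit] of Object.entries(config.thresholds)) {
    const count = violations.filter(
      v => v.impact === impact && !config.allowedViolations.includes(v.id)
    ).length
    expect(count, `${impact} violations over threshold`).toBeLessThanOrEqual(limit as number)
  }
})
```
Whether thresholds apply per page or across the whole run is a design choice; enforcing them per page keeps failures attributable to a specific URL.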
## Advanced Configuration
### Custom Rules
To disable specific rules, or to restrict a scan to an explicit list of rule IDs (`withRules` takes an array of rule IDs, not an object):
```typescript
// Run everything except color-contrast
const withoutContrast = await new AxeBuilder({ page })
  .disableRules(['color-contrast'])
  .analyze()

// Or run only the named rules
const labelsOnly = await new AxeBuilder({ page })
  .withRules(['label', 'button-name', 'link-name'])
  .analyze()
```
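For per-rule configuration beyond an allow/deny list, `AxeBuilder` also accepts raw axe-core run options via `.options()`, e.g. `.options({ rules: { 'color-contrast': { enabled: false } } })`.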
### Page-Specific Tests
Test different page types:
```typescript
const pages = [
{ url: '/', name: 'Homepage' },
{ url: '/entities', name: 'Entity List' },
{ url: '/timeline', name: 'Timeline View' }
]
for (const { url, name } of pages) {
test(`${name} accessibility`, async ({ page }) => {
await page.goto(url)
const results = await new AxeBuilder({ page }).analyze()
expect(results.violations).toEqual([])
})
}
```
### Authenticated Pages
Test pages requiring authentication:
```typescript
test.use({ storageState: 'auth.json' })
test('dashboard accessibility', async ({ page }) => {
await page.goto('/dashboard')
const results = await new AxeBuilder({ page }).analyze()
expect(results.violations).toEqual([])
})
```
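The `auth.json` storage state must be produced before the suite runs. A minimal sketch of a Playwright global setup that logs in and saves it — the login route, field selectors, and environment variables are assumptions about the application under test:
```typescript
// global-setup.ts — register via `globalSetup` in playwright.config.ts
import { chromium, FullConfig } from '@playwright/test'

async function globalSetup(_config: FullConfig) {
  const browser = await chromium.launch()
  const page = await browser.newPage()
  // Assumed login route and field selectors
  await page.goto('http://localhost:3000/login')
  await page.fill('#email', process.env.A11Y_TEST_USER ?? 'test@example.com')
  await page.fill('#password', process.env.A11Y_TEST_PASSWORD ?? 'password')
  await page.click('button[type="submit"]')
  await page.waitForURL('**/dashboard')
  // Persist cookies and localStorage for reuse via test.use({ storageState })
  await page.context().storageState({ path: 'auth.json' })
  await browser.close()
}

export default globalSetup
```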
## Report Customization
### Custom Templates
Create custom report templates in `assets/report-templates/`:
- `github-template.md` - GitHub PR comments
- `gitlab-template.md` - GitLab MR comments
- `slack-template.md` - Slack notifications
- `html-template.html` - HTML reports
### Report Destinations
Configure report distribution:
```bash
python scripts/generate_a11y_report.py \
--input results.json \
--output-dir reports/ \
--formats github gitlab slack html \
--slack-webhook $SLACK_WEBHOOK \
--github-token $GITHUB_TOKEN
```
## Monitoring and Tracking
### Historical Data
Store results for trend analysis:
```bash
# Save results with timestamp
python scripts/save_a11y_results.py \
--input test-results/a11y-results.json \
--database a11y-history.db
# Generate trend report
python scripts/generate_trend_report.py \
--database a11y-history.db \
--days 30 \
--output a11y-trends.md
```
### Metrics Dashboard
Generate metrics for dashboards:
```json
{
"timestamp": "2025-01-15T10:30:00Z",
"commit": "abc123",
"branch": "feature/new-ui",
"violations": {
"critical": 2,
"serious": 5,
"moderate": 3,
"minor": 2
},
"wcagCompliance": {
"a": false,
"aa": false,
"aaa": false
},
"pagesTested": 5,
"totalElements": 1247,
"testedElements": 1247
}
```
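No bundled script emits this exact payload; a minimal converter from the Playwright spec's saved results might look like the sketch below. Field names follow the example above; `GITHUB_SHA` and `GITHUB_REF_NAME` are standard GitHub Actions variables, and the element counts are omitted because axe results do not expose them directly.
```typescript
// metrics.ts — an illustrative sketch, not part of the skill's assets
import * as fs from 'fs'

// Input shape matches assets/a11y-test.spec.ts: an array of per-page scans
const scans = JSON.parse(fs.readFileSync('test-results/a11y-results.json', 'utf8'))
const counts: Record<string, number> = { critical: 0, serious: 0, moderate: 0, minor: 0 }

for (const scan of scans) {
  for (const v of scan.violations) {
    const impact = v.impact ?? 'minor'
    if (impact in counts) counts[impact] += 1
  }
}

const metrics = {
  timestamp: new Date().toISOString(),
  commit: process.env.GITHUB_SHA ?? 'unknown',
  branch: process.env.GITHUB_REF_NAME ?? 'unknown',
  violations: counts,
  pagesTested: scans.length,
}

fs.writeFileSync('a11y-metrics.json', JSON.stringify(metrics, null, 2))
```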
## Resources
Consult the following resources for detailed information:
- `scripts/generate_a11y_report.py` - Report generator
- `scripts/save_a11y_results.py` - Historical data storage
- `scripts/generate_trend_report.py` - Trend analysis
- `assets/a11y-test.spec.ts` - Playwright test template
- `assets/pa11y-config.json` - pa11y-ci configuration
- `assets/github-actions-a11y.yml` - GitHub Actions workflow
- `assets/gitlab-ci-a11y.yml` - GitLab CI configuration
- `assets/a11y-thresholds.json` - Violation thresholds
- `references/wcag-criteria.md` - WCAG standards reference
- `references/common-violations.md` - Common issues and fixes
## Best Practices
- Run accessibility tests on every pull request
- Set appropriate thresholds for violations
- Generate readable reports for developers
- Track accessibility metrics over time
- Test authenticated and dynamic pages
- Include accessibility in definition of done
- Review and update ignored rules periodically
- Provide remediation guidance in reports
- Celebrate accessibility improvements

assets/a11y-test.spec.ts

@@ -0,0 +1,153 @@
import { test, expect } from '@playwright/test'
import AxeBuilder from '@axe-core/playwright'
import * as fs from 'fs'
import * as path from 'path'
/**
* Accessibility test suite using axe-core
*
* Tests pages against WCAG 2.1 Level AA standards
* Generates JSON results for report generation
*/
// Pages to test
const PAGES = [
{ url: '/', name: 'Homepage' },
{ url: '/entities', name: 'Entity List' },
{ url: '/timeline', name: 'Timeline' },
{ url: '/about', name: 'About' }
]
// WCAG levels to test
const WCAG_TAGS = ['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa']
// Result storage
const results: any[] = []
test.describe('Accessibility Tests', () => {
test.afterAll(async () => {
// Save results to file for report generation
const resultsDir = path.join(process.cwd(), 'test-results')
if (!fs.existsSync(resultsDir)) {
fs.mkdirSync(resultsDir, { recursive: true })
}
const resultsFile = path.join(resultsDir, 'a11y-results.json')
fs.writeFileSync(resultsFile, JSON.stringify(results, null, 2))
console.log(`Accessibility results saved to: ${resultsFile}`)
})
for (const { url, name } of PAGES) {
test(`${name} meets WCAG standards`, async ({ page }) => {
await page.goto(url)
// Wait for page to be fully loaded
await page.waitForLoadState('networkidle')
// Run accessibility scan
const accessibilityScanResults = await new AxeBuilder({ page })
.withTags(WCAG_TAGS)
.analyze()
// Store results
results.push({
url,
name,
timestamp: new Date().toISOString(),
violations: accessibilityScanResults.violations,
passes: accessibilityScanResults.passes,
incomplete: accessibilityScanResults.incomplete
})
// Log violations for immediate feedback
if (accessibilityScanResults.violations.length > 0) {
console.log(`\n[ERROR] ${name} has ${accessibilityScanResults.violations.length} violations:`)
accessibilityScanResults.violations.forEach(violation => {
console.log(` - [${violation.impact}] ${violation.id}: ${violation.description}`)
console.log(` Affected: ${violation.nodes.length} elements`)
})
} else {
console.log(`\n[OK] ${name} has no violations`)
}
// Fail test if violations found
expect(accessibilityScanResults.violations).toEqual([])
})
}
test('scan for color contrast issues', async ({ page }) => {
await page.goto('/')
const results = await new AxeBuilder({ page })
.withRules(['color-contrast'])
.analyze()
expect(results.violations).toEqual([])
})
test('scan for keyboard navigation', async ({ page }) => {
await page.goto('/')
    // axe-core has no rule with the id 'keyboard'; scan the keyboard category instead
    const results = await new AxeBuilder({ page })
      .withTags(['cat.keyboard'])
      .analyze()
expect(results.violations).toEqual([])
})
test('scan for ARIA usage', async ({ page }) => {
await page.goto('/')
const results = await new AxeBuilder({ page })
.withTags(['cat.aria'])
.analyze()
expect(results.violations).toEqual([])
})
})
test.describe('Form Accessibility', () => {
test('forms have proper labels', async ({ page }) => {
// Test pages with forms
const formPages = ['/signup', '/login', '/entities/new']
    for (const url of formPages) {
      // Guard only the navigation; a failed assertion must still fail the test
      const response = await page.goto(url, { timeout: 5000 }).catch(() => null)
      if (!response || !response.ok()) {
        console.log(`Skipping ${url} - page not available`)
        continue
      }
      const results = await new AxeBuilder({ page })
        .withRules(['label', 'label-title-only'])
        .analyze()
      expect(results.violations).toEqual([])
    }
})
})
test.describe('Interactive Elements', () => {
test('buttons have accessible names', async ({ page }) => {
await page.goto('/')
const results = await new AxeBuilder({ page })
.withRules(['button-name'])
.analyze()
expect(results.violations).toEqual([])
})
test('links have accessible text', async ({ page }) => {
await page.goto('/')
const results = await new AxeBuilder({ page })
.withRules(['link-name'])
.analyze()
expect(results.violations).toEqual([])
})
})

assets/github-actions-a11y.yml

@@ -0,0 +1,134 @@
name: Accessibility Tests
on:
pull_request:
branches: [main, master]
push:
branches: [main, master]
jobs:
accessibility:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Build application
run: npm run build
- name: Install Playwright browsers
run: npx playwright install --with-deps chromium
- name: Start application server
run: |
npm start &
echo $! > .app-pid
- name: Wait for server to be ready
run: npx wait-on http://localhost:3000 -t 60000
- name: Run accessibility tests
id: a11y_tests
continue-on-error: true
run: npm run test:a11y
- name: Setup Python
if: always()
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Generate accessibility report
if: always()
run: |
python scripts/generate_a11y_report.py \
--input test-results/a11y-results.json \
--output accessibility-report.md \
--format github
- name: Comment PR with report
if: github.event_name == 'pull_request' && always()
uses: actions/github-script@v7
with:
script: |
const fs = require('fs')
try {
const report = fs.readFileSync('accessibility-report.md', 'utf8')
// Find existing comment
const { data: comments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number
})
const botComment = comments.find(comment =>
comment.user.type === 'Bot' &&
comment.body.includes('Accessibility Test Report')
)
const commentBody = report + '\n\n---\n*Automated accessibility check*'
if (botComment) {
// Update existing comment
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: botComment.id,
body: commentBody
})
} else {
// Create new comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: commentBody
})
}
} catch (error) {
console.error('Error posting comment:', error)
}
- name: Upload accessibility report
if: always()
uses: actions/upload-artifact@v4
with:
name: accessibility-report
path: |
accessibility-report.md
test-results/
retention-days: 30
- name: Upload Playwright report
if: always()
uses: actions/upload-artifact@v4
with:
name: playwright-report
path: playwright-report/
retention-days: 30
- name: Stop application server
if: always()
run: |
if [ -f .app-pid ]; then
kill $(cat .app-pid) || true
fi
- name: Fail job if violations found
if: steps.a11y_tests.outcome == 'failure'
run: |
echo "[ERROR] Accessibility violations found"
exit 1

assets/pa11y-config.json

@@ -0,0 +1,52 @@
{
"defaults": {
"timeout": 30000,
"wait": 1000,
"chromeLaunchConfig": {
"executablePath": "/usr/bin/chromium-browser",
"args": [
"--no-sandbox",
"--disable-setuid-sandbox",
"--disable-dev-shm-usage"
]
},
"standard": "WCAG2AA",
"runners": [
"axe",
"htmlcs"
],
"hideElements": "iframe, [role='presentation']",
"ignore": [
"notice",
"warning"
],
"includeNotices": false,
"includeWarnings": false,
"level": "error",
"reporter": "json",
"threshold": 0,
"screenCapture": "reports/screenshots/{url}-{datetime}.png",
"actions": [
"wait for element body to be visible",
"wait for element #main-content to be visible"
]
},
"urls": [
{
"url": "http://localhost:3000",
"screenCapture": "reports/screenshots/homepage.png"
},
{
"url": "http://localhost:3000/entities",
"screenCapture": "reports/screenshots/entities.png"
},
{
"url": "http://localhost:3000/timeline",
"screenCapture": "reports/screenshots/timeline.png"
},
{
"url": "http://localhost:3000/about",
"screenCapture": "reports/screenshots/about.png"
}
]
}

scripts/generate_a11y_report.py

@@ -0,0 +1,247 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Generate accessibility test reports from axe-core results."""
import argparse
import io
import json
import sys
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any
# Configure stdout for UTF-8 encoding (prevents Windows encoding errors)
if sys.platform == 'win32':
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
class A11yReportGenerator:
"""Generate markdown reports from accessibility test results."""
SEVERITY_ICONS = {
'critical': '[CRITICAL]',
'serious': '[SERIOUS]',
'moderate': '[MODERATE]',
'minor': '[MINOR]'
}
SEVERITY_ORDER = ['critical', 'serious', 'moderate', 'minor']
def __init__(self, results_file: str):
"""Initialize with results file."""
        with open(results_file, 'r', encoding='utf-8') as f:
self.results = json.load(f)
def generate_report(self, format_type: str = 'github') -> str:
"""Generate report in specified format."""
if format_type == 'github':
return self._generate_github_report()
elif format_type == 'gitlab':
return self._generate_gitlab_report()
elif format_type == 'slack':
return self._generate_slack_report()
else:
return self._generate_github_report()
def _generate_github_report(self) -> str:
"""Generate report formatted for GitHub PR comments."""
violations = self._extract_violations()
summary = self._generate_summary(violations)
report = "# Accessibility Test Report\n\n"
report += summary + "\n\n"
if violations:
report += "## Violations\n\n"
report += self._generate_violations_section(violations)
else:
report += "**[PASS] No accessibility violations found!**\n\n"
report += "All tested pages meet WCAG 2.1 Level AA standards.\n"
report += "\n---\n"
report += f"*Report generated on {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}*\n"
return report
def _generate_gitlab_report(self) -> str:
"""Generate report formatted for GitLab MR comments."""
# Similar to GitHub but with GitLab-specific formatting
return self._generate_github_report()
def _generate_slack_report(self) -> str:
"""Generate report formatted for Slack."""
violations = self._extract_violations()
total = sum(len(v) for v in violations.values())
if total == 0:
return "[PASS] Accessibility tests passed! No violations found."
report = f"[WARN] Accessibility Report: {total} violations found\n\n"
for severity in self.SEVERITY_ORDER:
if violations.get(severity):
count = len(violations[severity])
icon = self.SEVERITY_ICONS[severity]
report += f"{icon} *{severity.title()}*: {count}\n"
return report
def _extract_violations(self) -> Dict[str, List[Dict]]:
"""Extract and organize violations by severity."""
violations = {
'critical': [],
'serious': [],
'moderate': [],
'minor': []
}
# Handle different result formats
if isinstance(self.results, list):
# Playwright format
for result in self.results:
if 'violations' in result:
for violation in result['violations']:
                        # axe may emit null for impact; default to 'minor'
                        impact = violation.get('impact') or 'minor'
                        violations.setdefault(impact, []).append(violation)
elif isinstance(self.results, dict):
# Single result format
if 'violations' in self.results:
for violation in self.results['violations']:
                    # axe may emit null for impact; default to 'minor'
                    impact = violation.get('impact') or 'minor'
                    violations.setdefault(impact, []).append(violation)
return violations
def _generate_summary(self, violations: Dict[str, List[Dict]]) -> str:
"""Generate executive summary."""
total = sum(len(v) for v in violations.values())
status = "[FAIL] Failed" if total > 0 else "[PASS] Passed"
summary = f"**Status:** {status}\n"
summary += f"**Total Violations:** {total}\n"
summary += f"**WCAG Level:** AA\n\n"
summary += "## Summary by Severity\n\n"
for severity in self.SEVERITY_ORDER:
count = len(violations.get(severity, []))
icon = self.SEVERITY_ICONS[severity]
summary += f"- {icon} **{severity.title()}:** {count}\n"
return summary
def _generate_violations_section(self, violations: Dict[str, List[Dict]]) -> str:
"""Generate detailed violations section."""
section = ""
for severity in self.SEVERITY_ORDER:
severity_violations = violations.get(severity, [])
if not severity_violations:
continue
icon = self.SEVERITY_ICONS[severity]
count = len(severity_violations)
section += f"### {icon} {severity.title()} ({count})\n\n"
for i, violation in enumerate(severity_violations, 1):
section += self._format_violation(i, violation, severity)
section += "\n---\n\n"
return section
def _format_violation(self, index: int, violation: Dict, severity: str) -> str:
"""Format a single violation."""
rule_id = violation.get('id', 'unknown')
description = violation.get('description', 'No description')
help_text = violation.get('help', '')
help_url = violation.get('helpUrl', '')
output = f"#### {index}. {description}\n\n"
output += f"**Rule ID:** `{rule_id}`\n"
output += f"**Impact:** {severity.title()}\n"
# WCAG tags
tags = [tag for tag in violation.get('tags', []) if 'wcag' in tag.lower()]
if tags:
output += f"**WCAG:** {', '.join(tags)}\n"
output += "\n"
if help_text:
output += f"**Description:**\n{help_text}\n\n"
# Affected elements
nodes = violation.get('nodes', [])
if nodes:
output += f"**Affected Elements:** {len(nodes)}\n\n"
output += "<details>\n<summary>Show affected elements</summary>\n\n"
output += "```html\n"
for node in nodes[:5]: # Limit to first 5
html = node.get('html', '')
if html:
output += f"{html}\n"
if len(nodes) > 5:
output += f"\n... and {len(nodes) - 5} more\n"
output += "```\n"
output += "</details>\n\n"
# Remediation
if help_url:
output += f"**More Information:** [{help_url}]({help_url})\n\n"
return output
def main():
parser = argparse.ArgumentParser(
description="Generate accessibility test reports"
)
parser.add_argument(
'--input',
required=True,
help='Input JSON file with test results'
)
parser.add_argument(
'--output',
default='accessibility-report.md',
help='Output markdown file'
)
parser.add_argument(
'--format',
choices=['github', 'gitlab', 'slack'],
default='github',
help='Report format'
)
args = parser.parse_args()
# Check input file exists
input_path = Path(args.input)
if not input_path.exists():
print(f"Error: Input file not found: {args.input}", file=sys.stderr)
sys.exit(1)
# Generate report
try:
generator = A11yReportGenerator(args.input)
report = generator.generate_report(args.format)
# Write output
output_path = Path(args.output)
output_path.parent.mkdir(parents=True, exist_ok=True)
        with open(output_path, 'w', encoding='utf-8') as f:
f.write(report)
print(f"Report generated: {output_path}")
except Exception as e:
print(f"Error generating report: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == '__main__':
main()