Initial commit

Zhongwei Li
2025-11-30 08:23:44 +08:00
commit 9b9d90f8d3
11 changed files with 541 additions and 0 deletions

View File

@@ -0,0 +1,54 @@
---
name: generating-test-reports
description: |
This skill generates comprehensive test reports with coverage metrics, trends, and stakeholder-friendly formats (HTML, PDF, JSON). It aggregates test results from various frameworks, calculates key metrics (coverage, pass rate, duration), and performs trend analysis. Use this skill when the user requests a test report, coverage analysis, failure analysis, or historical comparisons of test runs. Trigger terms include "test report", "coverage report", "testing trends", "failure analysis", and "historical test data".
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
version: 1.0.0
---
## Overview
This skill empowers Claude to create detailed test reports, providing insights into code coverage, test performance trends, and failure analysis. It supports multiple output formats for easy sharing and analysis.
## How It Works
1. **Aggregating Results**: Collects test results from various test frameworks used in the project.
2. **Calculating Metrics**: Computes coverage metrics, pass rates, test duration, and identifies trends.
3. **Generating Report**: Produces comprehensive reports in HTML, PDF, or JSON format based on the user's preference.
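The metric-calculation step can be sketched in a few lines of Python. The `TestResult` structure and field names here are illustrative assumptions, not the plugin's actual data model:

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    status: str          # "passed", "failed", or "skipped"
    duration_seconds: float

def summarize(results: list[TestResult]) -> dict:
    """Compute the headline metrics shown in the report summary."""
    total = len(results)
    passed = sum(1 for r in results if r.status == "passed")
    failed = sum(1 for r in results if r.status == "failed")
    skipped = sum(1 for r in results if r.status == "skipped")
    return {
        "total_tests": total,
        "passed": passed,
        "failed": failed,
        "skipped": skipped,
        # Pass rate as a percentage, guarding against an empty run.
        "pass_rate": round(100.0 * passed / total, 1) if total else 0.0,
        "total_duration_seconds": round(sum(r.duration_seconds for r in results), 1),
    }
```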
## When to Use This Skill
This skill activates when you need to:
- Generate a test report after a test run.
- Analyze code coverage to identify areas needing more testing.
- Identify trends in test performance over time.
## Examples
### Example 1: Generating an HTML Test Report
User request: "Generate an HTML test report showing code coverage and failure analysis."
The skill will:
1. Aggregate test results from all available frameworks.
2. Calculate code coverage and identify failing tests.
3. Generate an HTML report summarizing the findings.
### Example 2: Comparing Test Results Over Time
User request: "Create a report comparing the test results from the last two CI/CD runs."
The skill will:
1. Retrieve test results from the two most recent CI/CD runs.
2. Compare key metrics like pass rate and duration.
3. Generate a report highlighting any regressions or improvements.
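The comparison in step 2 reduces to a per-metric delta between two summary dicts. This sketch assumes summaries shaped like the plugin's report summary (metric names are illustrative):

```python
def compare_runs(previous: dict, current: dict,
                 keys: tuple = ("pass_rate", "total_duration_seconds")) -> dict:
    """Return the delta (current - previous) for each metric.

    A positive pass_rate delta is an improvement; a negative
    duration delta means the run got faster.
    """
    return {k: round(current[k] - previous[k], 2) for k in keys}
```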
## Best Practices
- **Clarity**: Specify the desired output format (HTML, PDF, JSON) for the report.
- **Scope**: Define the scope of the report (e.g., specific test suite, time period).
- **Context**: Provide context about the project and testing environment to improve accuracy.
## Integration
This skill can integrate with CI/CD pipelines to automatically generate and share test reports after each build. It also works well with other analysis plugins to provide more comprehensive insights.

View File

@@ -0,0 +1,7 @@
# Assets
Bundled resources for the test-report-generator skill.
- [ ] report_template.html: HTML template for generating test reports.
- [ ] example_config.yaml: Example configuration file for the test report generator.
- [ ] sample_test_results.json: Sample test results in JSON format.

View File

@@ -0,0 +1,51 @@
# Configuration file for the test-report-generator plugin
# --- General Settings ---
report_name: "Automated Test Report" # Name of the report
output_directory: "./reports" # Directory to store generated reports
report_format: "html" # Report output format (html, pdf, json)
timestamp_format: "%Y-%m-%d %H:%M:%S" # Format for timestamps in the report
# --- Data Sources ---
test_results:
  type: "junit"                  # Type of test result files (junit, pytest, etc.)
  path: "./test-results/*.xml"   # Path to the test result files (supports glob patterns)
# For multiple data sources, make test_results a list of entries instead:
# test_results:
#   - type: "junit"
#     path: "./test-results/*.xml"
#   - type: "pytest"
#     path: "./pytest-results/results.xml"
coverage_data:
  type: "cobertura"              # Type of coverage data (cobertura, lcov)
  path: "./coverage.xml"         # Path to the coverage data file
# If no coverage data is available, comment out the coverage_data section entirely.
# --- Report Customization ---
logo_path: "YOUR_LOGO_PATH_HERE" # Path to a logo to include in the report (optional)
report_title: "REPLACE_ME - Project Test Report" # Title displayed in the report
report_description: "Comprehensive test report for REPLACE_ME project, including test results and code coverage." # Description of the report
# --- Thresholds and Alerts ---
pass_rate_threshold: 0.95 # Minimum pass rate (as a fraction) to be considered successful
coverage_threshold: 0.80 # Minimum code coverage (as a fraction) to be considered successful
alert_on_failure: true # Send alerts if any tests fail
alert_recipients: # List of email addresses or channels to send alerts to
- "YOUR_EMAIL_ADDRESS_HERE"
# - "another_recipient@example.com"
# --- Historical Data ---
historical_data:
  enabled: false # Enable historical data tracking and comparisons
  database_url: "sqlite:///test_report_history.db" # URL for the database storing historical data (e.g., SQLite, PostgreSQL)
  retention_period: "30d" # How long to retain historical data (e.g., 30d, 90d, 1y)
# --- Advanced Options ---
include_stacktraces: true # Include stack traces in the report
collapse_long_messages: true # Collapse long test messages for readability
debug_mode: false # Enable debug mode for more verbose logging
# --- Plugin-Specific Options ---
# These options are specific to this test report generator plugin
failure_analysis_enabled: true # Enables automated failure analysis
stakeholder_format: "executive" # Format for stakeholder-friendly report (executive, detailed)
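The thresholds above imply a simple gate on the report results. This sketch shows how a consumer might apply them; the config is passed as a plain dict (to avoid assuming a YAML library) and the summary/coverage field names follow the sample results file:

```python
def check_thresholds(config: dict, summary: dict, coverage: dict) -> list[str]:
    """Return a list of human-readable threshold violations (empty means success)."""
    violations = []
    # Config thresholds are fractions; the summary carries raw counts.
    pass_rate = summary["passed"] / summary["total_tests"]
    if pass_rate < config["pass_rate_threshold"]:
        violations.append(
            f"pass rate {pass_rate:.2%} below threshold {config['pass_rate_threshold']:.0%}"
        )
    # Coverage is reported as a percentage, so normalize before comparing.
    if coverage["line_coverage"] / 100.0 < config["coverage_threshold"]:
        violations.append(
            f"line coverage {coverage['line_coverage']:.1f}% below "
            f"threshold {config['coverage_threshold']:.0%}"
        )
    if config.get("alert_on_failure") and summary["failed"] > 0:
        violations.append(f"{summary['failed']} test(s) failed")
    return violations
```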

View File

@@ -0,0 +1,145 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{report_title}}</title>
<style>
/* Inline CSS for consistent rendering */
body {
font-family: sans-serif;
margin: 0;
padding: 0;
background-color: #f4f4f4;
color: #333;
}
.container {
max-width: 960px;
margin: 20px auto;
padding: 20px;
background-color: #fff;
box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}
h1 {
text-align: center;
color: #007bff;
}
h2 {
margin-top: 20px;
border-bottom: 2px solid #eee;
padding-bottom: 5px;
}
table {
width: 100%;
border-collapse: collapse;
margin-top: 10px;
}
th, td {
padding: 8px;
text-align: left;
border-bottom: 1px solid #ddd;
}
th {
background-color: #f2f2f2;
}
.passed {
color: green;
}
.failed {
color: red;
}
.skipped {
color: orange;
}
.coverage-bar {
height: 10px;
background-color: #ddd;
margin-top: 5px;
border-radius: 5px;
overflow: hidden; /* Ensure the inner bar stays within the container */
}
.coverage-bar-fill {
height: 10px;
background-color: #28a745; /* Green for coverage */
width: {{coverage_percentage}}%; /* Dynamic width based on coverage */
display: block; /* Make it a block-level element to control height and width */
}
.trend-chart {
width: 100%;
height: 300px; /* Adjust height as needed */
background-color: #eee; /* Placeholder for the chart */
text-align: center;
line-height: 300px;
color: #777;
font-style: italic;
}
/* Responsive adjustments */
@media (max-width: 768px) {
.container {
padding: 10px;
}
table {
font-size: 0.9em;
}
}
</style>
</head>
<body>
<div class="container">
<h1>{{report_title}}</h1>
<p>Generated on: {{report_date}}</p>
<h2>Summary</h2>
<p>Total Tests: {{total_tests}}</p>
<p class="passed">Passed: {{passed_tests}}</p>
<p class="failed">Failed: {{failed_tests}}</p>
<p class="skipped">Skipped: {{skipped_tests}}</p>
<h2>Test Results</h2>
<table>
<thead>
<tr>
<th>Test Name</th>
<th>Status</th>
<th>Duration (ms)</th>
<th>Error Message</th>
</tr>
</thead>
<tbody>
{{test_results}}
</tbody>
</table>
<h2>Coverage</h2>
<p>Overall Coverage: {{coverage_percentage}}%</p>
<div class="coverage-bar">
<div class="coverage-bar-fill"></div>
</div>
<h2>Trend Analysis</h2>
<div class="trend-chart">
<!-- Placeholder for Trend Chart. Replace with actual chart implementation -->
Trend Chart Placeholder
</div>
<h2>Failure Analysis</h2>
<p>{{failure_analysis}}</p>
<p>Generated by Test Report Generator Plugin</p>
</div>
</body>
</html>
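The `{{placeholder}}` tokens in the template above can be filled with a minimal substitution pass. This regex-based sketch assumes all placeholders are simple names with no nesting, loops, or logic:

```python
import re

def render_template(template: str, context: dict) -> str:
    """Replace every {{name}} token with str(context[name]).

    Unknown placeholders are left intact so a missing value is
    visible in the output instead of failing silently.
    """
    def sub(match: re.Match) -> str:
        key = match.group(1)
        return str(context[key]) if key in context else match.group(0)

    return re.sub(r"\{\{(\w+)\}\}", sub, template)
```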

View File

@@ -0,0 +1,128 @@
{
"_comment": "Sample test results for test-report-generator plugin",
"report_metadata": {
"report_id": "test-report-2024-10-27-1430",
"generated_at": "2024-10-27T14:30:00Z",
"test_environment": "Production",
"report_version": "1.0",
"_comment": "Version of the report format. Allows for future schema changes."
},
"summary": {
"total_tests": 100,
"passed": 95,
"failed": 3,
"skipped": 2,
"pass_rate": 95.0,
"fail_rate": 3.0,
"skip_rate": 2.0,
"total_duration_seconds": 120.5,
"_comment": "Overall summary of the test run"
},
"test_suites": [
{
"suite_name": "User Authentication",
"total_tests": 20,
"passed": 19,
"failed": 1,
"skipped": 0,
"duration_seconds": 25.2,
"tests": [
{
"test_name": "Valid login",
"status": "passed",
"duration_seconds": 1.2,
"error_message": null,
"stack_trace": null
},
{
"test_name": "Invalid password",
"status": "failed",
"duration_seconds": 0.8,
"error_message": "Authentication failed",
"stack_trace": "at UserAuthentication.InvalidPasswordTest(UserAuthentication.cs:25)"
}
]
},
{
"suite_name": "Payment Processing",
"total_tests": 30,
"passed": 28,
"failed": 1,
"skipped": 1,
"duration_seconds": 45.8,
"tests": [
{
"test_name": "Successful payment",
"status": "passed",
"duration_seconds": 1.5,
"error_message": null,
"stack_trace": null
},
{
"test_name": "Insufficient funds",
"status": "failed",
"duration_seconds": 1.1,
"error_message": "Payment declined: Insufficient funds",
"stack_trace": "at PaymentProcessing.InsufficientFundsTest(PaymentProcessing.cs:42)"
},
{
"test_name": "Timeout",
"status": "skipped",
"duration_seconds": 0,
"error_message": "External API timeout",
"stack_trace": null
}
]
},
{
"suite_name": "Data Validation",
"total_tests": 50,
"passed": 48,
"failed": 1,
"skipped": 1,
"duration_seconds": 49.5,
"tests": [
{
"test_name": "Valid data input",
"status": "passed",
"duration_seconds": 0.7,
"error_message": null,
"stack_trace": null
},
{
"test_name": "Invalid email format",
"status": "failed",
"duration_seconds": 0.9,
"error_message": "Invalid email format",
"stack_trace": "at DataValidation.InvalidEmailTest(DataValidation.cs:17)"
},
{
"test_name": "Empty field",
"status": "skipped",
"duration_seconds": 0,
"error_message": "Pending implementation",
"stack_trace": null
}
]
}
],
"coverage": {
"line_coverage": 85.2,
"branch_coverage": 70.5,
"function_coverage": 90.1,
"_comment": "Code coverage metrics"
},
"trends": {
"pass_rate_history": [
{"date": "2024-10-20", "pass_rate": 92.0},
{"date": "2024-10-21", "pass_rate": 93.5},
{"date": "2024-10-22", "pass_rate": 94.0},
{"date": "2024-10-23", "pass_rate": 94.5},
{"date": "2024-10-24", "pass_rate": 94.7},
{"date": "2024-10-25", "pass_rate": 94.8},
{"date": "2024-10-26", "pass_rate": 94.9},
{"date": "2024-10-27", "pass_rate": 95.0}
],
"_comment": "Historical trend of pass rates over time"
}
}
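A consumer of this format might sanity-check that the per-suite counts roll up to the top-level summary. A sketch assuming the structure shown above:

```python
def validate_rollup(report: dict) -> bool:
    """Check that suite-level counts sum to the report summary."""
    summary = report["summary"]
    suites = report["test_suites"]
    return all(
        summary[key] == sum(suite[key] for suite in suites)
        for key in ("total_tests", "passed", "failed", "skipped")
    )
```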

View File

@@ -0,0 +1,7 @@
# References
Bundled resources for the test-report-generator skill.
- [ ] test_framework_api.md: API documentation for interacting with various test frameworks (pytest, JUnit, etc.).
- [ ] report_format_schema.json: JSON schema for the test report format.
- [ ] coverage_metrics_definition.md: Definitions of coverage metrics used in the test report (e.g., line coverage, branch coverage).

View File

@@ -0,0 +1,8 @@
# Scripts
Bundled resources for the test-report-generator skill.
- [ ] aggregate_results.py: Aggregates test results from various frameworks (pytest, JUnit, etc.) into a unified data structure.
- [ ] generate_report.py: Generates test reports in various formats (HTML, PDF, JSON) based on the aggregated test results.
- [ ] analyze_trends.py: Analyzes test results over time to identify trends in coverage, pass rate, and duration.
- [ ] validate_config.py: Validates the configuration file for the test report generator to ensure it is properly formatted and contains all required information.
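As an illustration of what aggregate_results.py might do for the JUnit case, the sketch below flattens a JUnit XML document into per-test records. Element and attribute names follow the common JUnit XML convention; treat this as an assumption, not the script's actual implementation:

```python
import xml.etree.ElementTree as ET

def parse_junit(xml_text: str) -> list[dict]:
    """Flatten a JUnit XML document into a list of per-test dicts."""
    root = ET.fromstring(xml_text)
    results = []
    for case in root.iter("testcase"):
        # A <failure> or <error> child marks a failed test; <skipped> marks a skip.
        if case.find("failure") is not None or case.find("error") is not None:
            status = "failed"
        elif case.find("skipped") is not None:
            status = "skipped"
        else:
            status = "passed"
        results.append({
            "test_name": case.get("name"),
            "status": status,
            "duration_seconds": float(case.get("time", 0)),
        })
    return results
```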