Initial commit
14  .claude-plugin/plugin.json  Normal file
@@ -0,0 +1,14 @@
{
  "name": "component-health",
  "description": "Analyze component health using regression and jira data",
  "version": "0.0.1",
  "author": {
    "name": "github.com/openshift-eng"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}
3  README.md  Normal file
@@ -0,0 +1,3 @@
# component-health

Analyze component health using regression and jira data
383  commands/analyze.md  Normal file
@@ -0,0 +1,383 @@
---
description: Analyze and grade component health based on regression and JIRA bug metrics
argument-hint: <release> [--components comp1 comp2 ...] [--project JIRAPROJECT]
---

## Name

component-health:analyze

## Synopsis

```
/component-health:analyze <release> [--components comp1 comp2 ...] [--project JIRAPROJECT]
```

## Description

The `component-health:analyze` command provides comprehensive component health analysis for a specified OpenShift release by **automatically combining** regression management metrics with JIRA bug backlog data.

**CRITICAL**: This command REQUIRES and AUTOMATICALLY fetches BOTH data sources:

1. Regression data (via summarize-regressions)
2. JIRA bug data (via summarize-jiras)

The analysis is INCOMPLETE without both data sources. Both are fetched automatically without user prompting.

The command evaluates component health based on:

1. **Regression Management** (ALWAYS fetched automatically): How well components are managing test regressions
   - Triage coverage (% of regressions triaged to JIRA bugs)
   - Triage timeliness (average time from detection to triage)
   - Resolution speed (average time from detection to closure)

2. **Bug Backlog Health** (ALWAYS fetched automatically): Current state of open bugs for components
   - Open bug counts by component
   - Bug age distribution
   - Bug priority breakdown
   - Recent bug flow (opened vs closed in last 30 days)

This command is useful for:

- **Grading overall component health** using multiple quality metrics
- **Identifying components** that need help with regression or bug management
- **Tracking quality trends** across releases
- **Generating comprehensive quality scorecards** for stakeholders
- **Prioritizing engineering investment** based on data-driven insights

Grading is subjective and not meant as a critique of team performance; it is intended to help identify where help is needed and to track progress as quality practices improve.

## Implementation

**CRITICAL WORKFLOW**: The analyze command MUST execute steps 3 and 4 (fetch regression data AND fetch JIRA data) automatically, without waiting for the user to ask. Both data sources are required for a complete analysis.

1. **Parse Arguments**: Extract the release version and optional filters from arguments

   - Release format: "X.Y" (e.g., "4.17", "4.21")
   - Optional filters:
     - `--components`: Space-separated list of component search strings (fuzzy match)
     - `--project`: JIRA project key (default: "OCPBUGS")

2. **Resolve Component Names**: Use fuzzy matching to find actual component names

   - Run list_components.py to get all available components:
     ```bash
     python3 plugins/component-health/skills/list-components/list_components.py --release <release>
     ```
   - If `--components` was provided:
     - For each search string, find all components containing that string (case-insensitive)
     - Combine all matches into a single list
     - Remove duplicates
     - If no matches are found for a search string, warn the user and show the available components
   - If `--components` was NOT provided:
     - Use all available components from the list
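A minimal sketch of the fuzzy matching described in step 2 (the function name is illustrative, not part of the plugin):

```python
def resolve_components(search_strings, available):
    """Case-insensitive substring match; combines matches and removes duplicates."""
    matched = []
    for needle in search_strings:
        hits = [c for c in available if needle.lower() in c.lower()]
        if not hits:
            # Warn and fall through; the command would also show the available components
            print(f"warning: no components match '{needle}'")
        matched.extend(hits)
    seen = set()
    # Keep first-seen order while dropping duplicates
    return [c for c in matched if not (c in seen or seen.add(c))]
```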

3. **Fetch Regression Summary**: REQUIRED - Always call the summarize-regressions command

   **IMPORTANT**: This step is REQUIRED for the analyze command. Regression data must ALWAYS be fetched automatically without user prompting. The analyze command combines both regression and bug metrics; it is incomplete without both data sources.

   - **ALWAYS execute this step** - do not skip it or wait for the user to request it
   - Execute: `/component-health:summarize-regressions <release> [--components ...]`
   - Pass the resolved component names
   - Extract regression metrics:
     - Total regressions, triage percentages, timing metrics
     - Per-component breakdowns
     - Open vs closed regression counts
   - Note development window dates for context
   - If the regression API is unreachable, inform the user, note this in the report, and continue with bug-only analysis

4. **Fetch JIRA Bug Summary**: REQUIRED - Always call the summarize-jiras command

   **IMPORTANT**: This step is REQUIRED for the analyze command. JIRA bug data must ALWAYS be fetched automatically without user prompting. The analyze command combines both regression and bug metrics; it is incomplete without both data sources.

   - **ALWAYS execute this step** - do not skip it or wait for the user to request it
   - For each resolved component name:
     - Execute: `/component-health:summarize-jiras --project <project> --component "<component>" --limit 1000`
     - Note: Must iterate over components because JIRA queries can otherwise be too large
   - Aggregate bug metrics across all components:
     - Total open bugs by component
     - Bug age distribution
     - Opened vs closed in last 30 days
     - Priority breakdowns
   - If JIRA authentication is not configured, inform the user and provide setup instructions
   - If JIRA queries fail, note this in the report and continue with regression-only analysis

5. **Calculate Combined Health Grades**: REQUIRED - Analyze BOTH regression and bug data

   **IMPORTANT**: This step requires data from BOTH step 3 (regressions) AND step 4 (JIRA bugs). Do not perform the analysis with only one data source unless the other failed to fetch.

   **For each component, grade based on:**

   a. **Regression Health** (from step 3: summarize-regressions):
      - Triage Coverage: % of regressions triaged
        - 90-100%: Excellent ✅
        - 70-89%: Good ⚠️
        - 50-69%: Needs Improvement ⚠️
        - <50%: Poor ❌
      - Triage Timeliness: Average hours to triage
        - <24 hours: Excellent ✅
        - 24-72 hours: Good ⚠️
        - 72-168 hours (1 week): Needs Improvement ⚠️
        - >168 hours: Poor ❌
      - Resolution Speed: Average hours to close
        - <168 hours (1 week): Excellent ✅
        - 168-336 hours (1-2 weeks): Good ⚠️
        - 336-720 hours (2-4 weeks): Needs Improvement ⚠️
        - >720 hours (4+ weeks): Poor ❌

   b. **Bug Backlog Health** (from step 4: summarize-jiras):
      - Open Bug Count: Total open bugs
        - Component-relative thresholds (compare across components)
      - Bug Age: Average/maximum age of open bugs
        - <30 days average: Excellent ✅
        - 30-90 days: Good ⚠️
        - 90-180 days: Needs Improvement ⚠️
        - >180 days: Poor ❌
      - Bug Flow: Opened vs closed in last 30 days
        - More closed than opened: Positive trend ✅
        - Equal: Stable ⚠️
        - More opened than closed: Growing backlog ❌

   c. **Combined Health Score**: Weighted average of regression and bug health
      - Weight regression health more heavily (e.g., 60%), as it is more actionable
      - Bug backlog provides context (40%)
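The thresholds and weights above could be sketched as follows. The numeric point scale (3/2/1/0) is an assumption; the command specifies grades and weights but not an exact scoring formula:

```python
def grade_triage_coverage(pct):
    """Map triage coverage (%) to a grade per the thresholds above."""
    if pct >= 90:
        return "Excellent"
    if pct >= 70:
        return "Good"
    if pct >= 50:
        return "Needs Improvement"
    return "Poor"

def grade_triage_timeliness(hours):
    """Map average hours-to-triage to a grade per the thresholds above."""
    if hours < 24:
        return "Excellent"
    if hours <= 72:
        return "Good"
    if hours <= 168:
        return "Needs Improvement"
    return "Poor"

# Assumed point scale for combining grades into one number.
GRADE_POINTS = {"Excellent": 3, "Good": 2, "Needs Improvement": 1, "Poor": 0}

def combined_score(regression_points, bug_points):
    """Weighted average: regression health 60%, bug backlog health 40%."""
    return 0.6 * regression_points + 0.4 * bug_points
```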

6. **Display Overall Health Report**: Present a comprehensive analysis combining BOTH data sources

   **IMPORTANT**: The report MUST include BOTH regression metrics AND JIRA bug metrics. Do not present regression-only analysis unless the JIRA data fetch failed.

   - Show which components were matched (if fuzzy search was used)
   - Inform the user that both regression and bug data were analyzed

   **Section 1: Overall Release Health**
   - Release version and development window
   - Overall regression metrics (from summarize-regressions):
     - Total regressions, triage %, timing metrics
   - Overall bug metrics (from summarize-jiras):
     - Total open bugs, opened/closed last 30 days, priority breakdown
   - High-level combined health grade

   **Section 2: Per-Component Health Scorecard**
   - Ranked table of components from best to worst combined health
   - Key metrics per component (BOTH regression AND bug data):
     - Regression triage coverage
     - Average triage time
     - Average resolution time
     - Open bug count (from JIRA)
     - Bug age metrics (from JIRA)
     - Bug flow (opened vs closed, from JIRA)
   - Combined health grade
   - Visual indicators (✅ ⚠️ ❌) for quick assessment

   **Section 3: Components Needing Attention**
   - Prioritized list of components with specific issues from BOTH sources
   - Actionable recommendations for each component:
     - "X open untriaged regressions need triage" (only OPEN, not closed)
     - "High bug backlog: X open bugs (Y older than 90 days)" (from JIRA)
     - "Growing bug backlog: +X net bugs in last 30 days" (from JIRA)
     - "Slow regression triage: X hours average"
   - Context for each issue

7. **Offer HTML Report Generation** (AFTER displaying the text report):
   - Ask the user if they would like an interactive HTML report
   - If yes, generate an HTML report combining both data sources
   - Use the template from: `plugins/component-health/skills/analyze-regressions/report_template.html`
   - Enhance the template to include bug backlog metrics
   - Save the report to: `.work/component-health-{release}/health-report.html`
   - Open the report in the user's default browser
   - Display the file path to the user

8. **Error Handling**: Handle common error scenarios

   - Network connectivity issues
   - Invalid release format
   - Missing regression or JIRA data
   - API errors
   - No matches for the component filter
   - JIRA authentication issues

## Return Value

The command outputs a **Comprehensive Component Health Report**:

### Overall Health Grade

From combined regression and bug data:

- **Release**: OpenShift version and development window
- **Regression Metrics**:
  - Total regressions: X (Y% triaged)
  - Average triage time: X hours
  - Average resolution time: X hours
  - Open vs closed breakdown
- **Bug Backlog Metrics**:
  - Total open bugs: X across all components
  - Bugs opened/closed in last 30 days
  - Priority distribution
- **Overall Health**: Combined grade (Excellent/Good/Needs Improvement/Poor)

### Per-Component Health Scorecard

Ranked table combining both metrics:

| Component | Regression Triage | Triage Time | Resolution Time | Open Bugs | Bug Age | Health Grade |
|-----------|-------------------|-------------|-----------------|-----------|---------|--------------|
| kube-apiserver | 100.0% | 58 hrs | 144 hrs | 15 | 45d avg | ✅ Excellent |
| etcd | 95.0% | 84 hrs | 192 hrs | 8 | 30d avg | ✅ Good |
| Monitoring | 86.7% | 68 hrs | 156 hrs | 23 | 120d avg | ⚠️ Needs Improvement |

### Components Needing Attention

Prioritized list with actionable items:

```
1. Monitoring (Needs Improvement):
   - 1 open untriaged regression (needs triage)
   - High bug backlog: 23 open bugs (8 older than 90 days)
   - Growing backlog: +5 net bugs in last 30 days
   - Recommendation: Focus on triaging the open regression and addressing the oldest bugs

2. Example-Component (Poor):
   - 5 open untriaged regressions (urgent triage needed)
   - Slow triage response: 120 hours average
   - Very high bug backlog: 45 open bugs (15 older than 180 days)
   - Recommendation: Immediate triage sprint needed; consider a bug backlog cleanup initiative
```

**IMPORTANT**: When listing untriaged regressions:
- **Only list OPEN untriaged regressions** - these are actionable
- **Do NOT recommend triaging closed regressions** - the tooling does not support retroactive triage
- Calculate the actionable count as: `open.total - open.triaged`
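For instance, using the field names quoted above (`open.total`, `open.triaged`), the actionable count could be computed as:

```python
def actionable_untriaged(open_total, open_triaged):
    """Only OPEN regressions can still be triaged; closed ones are never counted."""
    return max(open_total - open_triaged, 0)
```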

### Additional Sections

If requested:
- Detailed regression metrics by component
- Detailed bug breakdowns by status and priority
- Links to Sippy dashboards for regression analysis
- Links to JIRA queries for bug investigation
- Trends compared to previous releases (if available)

## Examples

1. **Analyze overall component health for a release**:

   ```
   /component-health:analyze 4.17
   ```

   Automatically fetches and analyzes BOTH data sources for release 4.17:
   - Regression management metrics (via summarize-regressions)
   - JIRA bug backlog metrics (via summarize-jiras)
   - Combined health grades based on both sources
   - Prioritized recommendations using both regression and bug data

2. **Analyze specific components (exact match)**:

   ```
   /component-health:analyze 4.21 --components Monitoring Etcd
   ```

   Automatically fetches BOTH regression and bug data for Monitoring and Etcd:
   - Compares combined health between the two components
   - Shows regression metrics AND bug backlog for each
   - Identifies which component needs more attention
   - Provides targeted recommendations based on both data sources

3. **Analyze by fuzzy search**:

   ```
   /component-health:analyze 4.21 --components network
   ```

   Automatically fetches BOTH data sources for all components containing "network":
   - Finds all networking components (e.g., "Networking / ovn-kubernetes", "Networking / DNS", etc.)
   - Compares combined health across all networking components
   - Shows regression metrics AND bug backlog for each
   - Identifies networking-related quality issues from both sources
   - Provides targeted recommendations

4. **Analyze with a custom JIRA project**:

   ```
   /component-health:analyze 4.21 --project OCPSTRAT
   ```

   Analyzes health using bugs from the OCPSTRAT project instead of the default OCPBUGS.

5. **In-development release analysis**:

   ```
   /component-health:analyze 4.21
   ```

   Automatically fetches BOTH data sources for an in-development release:
   - Shows the current regression management state
   - Shows the current bug backlog state
   - Tracks bug flow trends (opened vs closed)
   - Identifies areas to focus on before GA, based on both regression and bug metrics

## Arguments

- `$1` (required): Release version
  - Format: "X.Y" (e.g., "4.17", "4.21")
  - Must be a valid OpenShift release number

- `$2+` (optional): Filter flags
  - `--components <search1> [search2 ...]`: Filter by component names using fuzzy search
    - Space-separated list of component search strings
    - Case-insensitive substring matching
    - Each search string matches all components containing that substring
    - If no components are provided, all components are analyzed
    - Applied to both regression and bug queries
    - Example: "network" matches "Networking / ovn-kubernetes", "Networking / DNS", etc.
    - Example: "kube-" matches "kube-apiserver", "kube-controller-manager", etc.

  - `--project <PROJECT>`: JIRA project key
    - Default: "OCPBUGS"
    - Use an alternative project if component bugs are tracked elsewhere
    - Examples: "OCPSTRAT", "OCPQE"

## Prerequisites

1. **Python 3**: Required to run the underlying data-fetching scripts

   - Check: `which python3`
   - Version: 3.6 or later

2. **JIRA Authentication**: Environment variables must be configured for bug data

   - `JIRA_URL`: Your JIRA instance URL
   - `JIRA_PERSONAL_TOKEN`: Your JIRA bearer token or personal access token
   - See `/component-health:summarize-jiras` for setup instructions

3. **Network Access**: Must be able to reach both the component health API and JIRA

   - Ensure HTTPS requests can be made to both services
   - Check firewall and VPN settings if needed

## Notes

- **CRITICAL**: This command AUTOMATICALLY fetches data from TWO sources:
  1. Regression API (via `/component-health:summarize-regressions`)
  2. JIRA API (via `/component-health:summarize-jiras`)
- Both data sources are REQUIRED and fetched automatically without user prompting
- The analysis is incomplete without both regression and bug data
- Health grades are subjective and intended as guidance, not criticism
- Recommendations focus on actionable items (open untriaged regressions, not closed ones)
- Infrastructure regressions are automatically filtered from regression counts
- JIRA queries default to open bugs plus bugs closed in the last 30 days
- HTML reports provide interactive visualizations combining both data sources
- If one data source fails, the command continues with the available data and notes the failure
- For detailed regression data only, use `/component-health:list-regressions`
- For detailed JIRA data only, use `/component-health:list-jiras`
- This command provides the most comprehensive view by combining both sources

## See Also

- Related Command: `/component-health:summarize-regressions` (regression metrics)
- Related Command: `/component-health:summarize-jiras` (bug backlog metrics)
- Related Command: `/component-health:list-regressions` (raw regression data)
- Related Command: `/component-health:list-jiras` (raw JIRA data)
- Skill Documentation: `plugins/component-health/skills/analyze-regressions/SKILL.md`
- Script: `plugins/component-health/skills/list-regressions/list_regressions.py`
- Script: `plugins/component-health/skills/summarize-jiras/summarize_jiras.py`
147  commands/list-components.md  Normal file
@@ -0,0 +1,147 @@
---
description: List all components tracked in Sippy for a release
argument-hint: <release>
---

## Name

component-health:list-components

## Synopsis

```
/component-health:list-components <release>
```

## Description

The `component-health:list-components` command fetches and displays all component names tracked in the Sippy component readiness system for a specified OpenShift release.

This command is useful for:

- Discovering available components for a release
- Validating component names before analysis
- Understanding which teams/components are tracked
- Generating component lists for reports
- Finding exact component names for use in other commands

## Implementation

1. **Verify Prerequisites**: Check that Python 3 is installed

   - Run: `python3 --version`
   - Verify version 3.6 or later is available

2. **Parse Arguments**: Extract the release version from arguments

   - Release format: "X.Y" (e.g., "4.17", "4.21")

3. **Execute Python Script**: Run the list_components.py script

   - Script location: `plugins/component-health/skills/list-components/list_components.py`
   - Pass the release as the `--release` argument
   - The script automatically appends a "-main" suffix to construct the view
   - Capture JSON output from stdout

4. **Parse Output**: Process the JSON response

   - Extract the component count and component list
   - Components are returned alphabetically sorted and unique
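Steps 3-4 could be sketched as below. The subprocess invocation mirrors the command line above, while the JSON key name (`components`) is an assumption about the script's output shape:

```python
import json
import subprocess

SCRIPT = "plugins/component-health/skills/list-components/list_components.py"

def fetch_components(release):
    """Step 3: run the script for a release and parse its stdout."""
    result = subprocess.run(
        ["python3", SCRIPT, "--release", release],
        capture_output=True, text=True, check=True,
    )
    return parse_components(result.stdout)

def parse_components(stdout_text):
    """Step 4: parse JSON from stdout, keeping components sorted and unique."""
    data = json.loads(stdout_text)
    return sorted(set(data.get("components", [])))
```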

5. **Present Results**: Display components in a readable format

   - Show the total count
   - Display components in a numbered or bulleted list
   - Optionally group by category (e.g., Networking, Storage, etc.)

6. **Error Handling**: Handle common error scenarios

   - Network connectivity issues
   - Invalid release format
   - API errors (400, 404, 500, etc.)
   - Empty results

## Return Value

The command outputs a **Component List** with the following information:

### Component Summary

- **Release**: The release version queried
- **View**: The constructed view parameter (release + "-main")
- **Total Components**: Count of unique components found

### Component List

An alphabetically sorted list of all components, for example:

```
1. Bare Metal Hardware Provisioning
2. Build
3. Cloud Compute / Cloud Controller Manager
4. Cluster Version Operator
5. Etcd
6. HyperShift
7. Image Registry
8. Installer / openshift-installer
9. kube-apiserver
10. Machine Config Operator
11. Management Console
12. Monitoring
13. Networking / ovn-kubernetes
14. OLM
15. Storage
...
```

## Examples

1. **List all components for release 4.21**:

   ```
   /component-health:list-components 4.21
   ```

   Displays all components tracked in Sippy for release 4.21.

2. **List components for release 4.20**:

   ```
   /component-health:list-components 4.20
   ```

   Displays all components for the 4.20 release.

## Arguments

- `$1` (required): Release version
  - Format: "X.Y" (e.g., "4.17", "4.21")
  - Must be a valid OpenShift release number

## Prerequisites

1. **Python 3**: Required to run the data-fetching script

   - Check: `which python3`
   - Version: 3.6 or later

2. **Network Access**: Must be able to reach the Sippy API

   - Ensure HTTPS requests can be made to `sippy.dptools.openshift.org`

## Notes

- The script automatically appends "-main" to the release version
- Component names are case-sensitive
- Component names are returned in alphabetical order
- Some components use hierarchical names with a "/" separator (e.g., "Networking / ovn-kubernetes")
- The script has a 30-second timeout for HTTP requests
- Component names returned can be used directly in other component-health commands

## See Also

- Skill Documentation: `plugins/component-health/skills/list-components/SKILL.md`
- Script: `plugins/component-health/skills/list-components/list_components.py`
- Related Command: `/component-health:list-regressions` (for regression data)
- Related Command: `/component-health:summarize-jiras` (for bug data)
- Related Command: `/component-health:analyze` (for health analysis)
335  commands/list-jiras.md  Normal file
@@ -0,0 +1,335 @@
---
description: Query and list raw JIRA bug data for a specific project
argument-hint: <project> [--component comp1 comp2 ...] [--status status1 status2 ...] [--include-closed] [--limit N]
---

## Name

component-health:list-jiras

## Synopsis

```
/component-health:list-jiras <project> [--component comp1 comp2 ...] [--status status1 status2 ...] [--include-closed] [--limit N]
```

## Description

The `component-health:list-jiras` command queries JIRA bugs for a specified project and returns raw issue data. It fetches JIRA issues with all their fields and metadata, without performing any summarization or aggregation.

By default, the command includes:
- All currently open bugs
- Bugs closed in the last 30 days (to track recent closure activity)
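The default scope might translate into JQL along these lines. This is an illustrative reconstruction using standard JQL fields, not the exact query that list_jiras.py builds:

```python
def build_jql(project, components=None, include_closed=False):
    """Default scope: open bugs, plus bugs resolved within the last 30 days."""
    clauses = [f"project = {project}", "type = Bug"]
    if components:
        quoted = ", ".join(f'"{c}"' for c in components)
        clauses.append(f"component in ({quoted})")
    if not include_closed:
        # Open bugs OR recently resolved ones
        clauses.append("(statusCategory != Done OR resolved >= -30d)")
    return " AND ".join(clauses) + " ORDER BY created DESC"
```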

This command is useful for:

- Fetching raw JIRA issue data for further processing
- Accessing complete issue details, including all fields
- Building custom analysis workflows
- Providing data to other commands (like `summarize-jiras`)
- Exporting JIRA data for offline analysis

## Implementation

1. **Verify Prerequisites**: Check that Python 3 is installed

   - Run: `python3 --version`
   - Verify version 3.6 or later is available

2. **Verify Environment Variables**: Ensure JIRA authentication is configured

   - Check that the following environment variables are set:
     - `JIRA_URL`: Base URL for the JIRA instance (e.g., "https://issues.redhat.com")
     - `JIRA_PERSONAL_TOKEN`: Your JIRA bearer token or personal access token

   - Verify with:
     ```bash
     echo "JIRA_URL: ${JIRA_URL}"
     echo "JIRA_PERSONAL_TOKEN: ${JIRA_PERSONAL_TOKEN:+***set***}"
     ```

   - If missing, guide the user to set them:
     ```bash
     export JIRA_URL="https://issues.redhat.com"
     export JIRA_PERSONAL_TOKEN="your-token-here"
     ```

3. **Parse Arguments**: Extract the project key and optional filters from arguments

   - Project key: Required first argument (e.g., "OCPBUGS", "OCPSTRAT")
   - Optional filters:
     - `--component`: Space-separated list of component search strings (fuzzy match)
     - `--status`: Space-separated list of status values
     - `--include-closed`: Flag to include closed bugs
     - `--limit`: Maximum number of issues to fetch per component (default: 1000, max: 1000)

4. **Resolve Component Names** (if a component filter is provided): Use fuzzy matching to find actual component names

   - Extract the release from context, or ask the user for the release version
   - Run list_components.py to get all available components:
     ```bash
     python3 plugins/component-health/skills/list-components/list_components.py --release <release>
     ```
   - For each search string in `--component`:
     - Find all components containing that string (case-insensitive)
   - Combine all matches into a single list
   - Remove duplicates
   - If no matches are found for a search string, warn the user and show the available components

5. **Execute Python Script**: Run the list_jiras.py script for each component

   - Script location: `plugins/component-health/skills/list-jiras/list_jiras.py`
   - **Important**: Iterate over each resolved component separately to avoid overly large queries
   - For each component:
     - Build the command with the project, a single component, and any other filters
     - Execute: `python3 list_jiras.py --project <project> --component "<component>" [other args]`
     - Capture JSON output from stdout
   - Aggregate results from all components into a combined response
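The per-component aggregation in step 5 could look like this sketch. The result keys (`total_count`, `fetched_count`, `issues`) follow the Return Value section below, while the `truncated_components` field is an illustrative addition:

```python
def aggregate_results(per_component):
    """Merge per-component script outputs into one combined response."""
    combined = {"total_count": 0, "fetched_count": 0,
                "issues": [], "truncated_components": []}
    for component, result in per_component.items():
        combined["total_count"] += result["total_count"]
        combined["fetched_count"] += result["fetched_count"]
        combined["issues"].extend(result["issues"])
        if result["fetched_count"] < result["total_count"]:
            # Fewer issues fetched than matched: flag so we can suggest a higher --limit
            combined["truncated_components"].append(component)
    return combined
```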
|
||||||
|
|
||||||
|
6. **Parse Output**: Process the aggregated JSON response
|
||||||
|
|
||||||
|
- Extract metadata:
|
||||||
|
- `project`: Project key queried
|
||||||
|
- `total_count`: Total matching issues in JIRA
|
||||||
|
- `fetched_count`: Number of issues actually fetched
|
||||||
|
- `query`: JQL query that was executed
|
||||||
|
- `filters`: Applied filters
|
||||||
|
- Extract raw issues array:
|
||||||
|
- `issues`: Array of complete JIRA issue objects with all fields
|
||||||
|
|
||||||
|
7. **Present Results**: Display or store the raw JIRA data
|
||||||
|
|
||||||
|
- Show which components were matched (if fuzzy search was used)
|
||||||
|
- The command returns the aggregated JSON response with metadata and raw issues from all components
|
||||||
|
- Inform the user about total count vs fetched count per component
|
||||||
|
- The raw issue data can be passed to other commands for analysis
|
||||||
|
- Suggest using `/component-health:summarize-jiras` for summary statistics
|
||||||
|
- Highlight any truncation (if fetched_count < total_count for any component)
|
||||||
|
- Suggest increasing --limit if results are truncated
|
||||||
|
|
||||||
|
8. **Error Handling**: Handle common error scenarios
|
||||||
|
|
||||||
|
- Network connectivity issues
|
||||||
|
- Invalid JIRA credentials
|
||||||
|
- Invalid project key
|
||||||
|
- HTTP errors (401, 404, 500, etc.)
|
||||||
|
- Rate limiting (429)
|
||||||
|
|
||||||
|
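The per-component fetch-and-aggregate loop in steps 5–6 can be sketched as below. The script path and flags mirror the command above; the helper names and the exact merge logic are illustrative assumptions, not the plugin's actual implementation:

```python
import json
import subprocess

# Path taken from the command documentation above.
SCRIPT = "plugins/component-health/skills/list-jiras/list_jiras.py"

def run_list_jiras(project, component, extra_args=()):
    """Fetch raw JIRA data for a single component (script prints JSON on stdout)."""
    cmd = ["python3", SCRIPT, "--project", project, "--component", component, *extra_args]
    return json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)

def aggregate(project, responses):
    """Merge per-component responses into the combined shape described under Return Value."""
    combined = {"project": project, "total_count": 0, "fetched_count": 0,
                "component_queries": [], "issues": []}
    for comp, resp in responses.items():
        combined["total_count"] += resp["total_count"]
        combined["fetched_count"] += resp["fetched_count"]
        combined["component_queries"].append({
            "component": comp,
            "query": resp["query"],
            "total_count": resp["total_count"],
            "fetched_count": resp["fetched_count"],
        })
        combined["issues"].extend(resp["issues"])
    if combined["fetched_count"] < combined["total_count"]:
        # Surface truncation, per step 7.
        combined["note"] = ("Showing first %d of %d total results. "
                            "Increase --limit for more data."
                            % (combined["fetched_count"], combined["total_count"]))
    return combined
```

Querying one component at a time keeps each JQL query small, at the cost of one HTTP round trip per component.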
## Return Value

The command outputs **raw JIRA issue data** in JSON format with the following structure:

### Metadata

- **project**: JIRA project key that was queried
- **total_count**: Total number of matching issues in JIRA
- **fetched_count**: Number of issues actually fetched (may be less than total if limited)
- **query**: JQL query that was executed (includes filters)
- **filters**: Object containing applied filters:
  - `components`: List of component filters or null
  - `statuses`: List of status filters or null
  - `include_closed`: Boolean indicating if closed bugs were included
  - `limit`: Maximum number of issues fetched

### Issues Array

- **issues**: Array of raw JIRA issue objects, each containing:
  - `key`: Issue key (e.g., "OCPBUGS-12345")
  - `fields`: Object containing all issue fields:
    - `summary`: Issue title/summary
    - `status`: Status object with name and ID
    - `priority`: Priority object with name and ID
    - `components`: Array of component objects
    - `assignee`: Assignee object with user details
    - `created`: Creation timestamp
    - `updated`: Last updated timestamp
    - `resolutiondate`: Resolution timestamp (if closed)
    - `versions`: Affects Version/s array
    - `fixVersions`: Fix Version/s array
    - `customfield_12319940`: Target Version (custom field)
    - And other JIRA fields as applicable

### Additional Information

- **note**: (Optional) If results are truncated, includes a note suggesting to increase the limit
- **component_queries**: (Optional) When multiple components are queried, this array shows the individual query executed for each component. Each entry contains:
  - `component`: The component name
  - `query`: The JQL query executed for this component
  - `total_count`: Total matching issues for this component
  - `fetched_count`: Number of issues fetched for this component
### Example Output Structure

```json
{
  "project": "OCPBUGS",
  "total_count": 1500,
  "fetched_count": 100,
  "query": "project = OCPBUGS AND (status != Closed OR (status = Closed AND resolved >= \"2025-10-11\"))",
  "filters": {
    "components": null,
    "statuses": null,
    "include_closed": false,
    "limit": 100
  },
  "component_queries": [
    {
      "component": "kube-apiserver",
      "query": "project = OCPBUGS AND component = \"kube-apiserver\" AND ...",
      "total_count": 800,
      "fetched_count": 50
    },
    {
      "component": "kube-controller-manager",
      "query": "project = OCPBUGS AND component = \"kube-controller-manager\" AND ...",
      "total_count": 700,
      "fetched_count": 50
    }
  ],
  "issues": [
    {
      "key": "OCPBUGS-12345",
      "fields": {
        "summary": "Bug title here",
        "status": {"name": "New", "id": "1"},
        "priority": {"name": "Major", "id": "3"},
        "components": [{"name": "kube-apiserver"}],
        "created": "2025-11-01T10:30:00.000+0000",
        ...
      }
    },
    ...
  ],
  "note": "Showing first 100 of 1500 total results. Increase --limit for more data."
}
```
## Examples

1. **List all open bugs for a project**:

   ```
   /component-health:list-jiras OCPBUGS
   ```

   Fetches all open bugs in the OCPBUGS project (up to the default limit of 1000) and returns raw issue data.

2. **Filter by specific component (exact match)**:

   ```
   /component-health:list-jiras OCPBUGS --component "kube-apiserver"
   ```

   Returns raw data for bugs in the kube-apiserver component only.

3. **Filter by fuzzy search**:

   ```
   /component-health:list-jiras OCPBUGS --component network
   ```

   Finds all components containing "network" (case-insensitive) and returns bugs for all matches (e.g., "Networking / ovn-kubernetes", "Networking / DNS", etc.). Makes separate JIRA queries for each component and aggregates results.

4. **Filter by multiple search strings**:

   ```
   /component-health:list-jiras OCPBUGS --component etcd kube-
   ```

   Finds all components containing "etcd" OR "kube-" and returns combined bug data. Iterates over each component separately to avoid overly large queries.

5. **Include closed bugs**:

   ```
   /component-health:list-jiras OCPBUGS --include-closed --limit 500
   ```

   Returns both open and closed bugs, fetching up to 500 issues per component.

6. **Filter by status**:

   ```
   /component-health:list-jiras OCPBUGS --status New "In Progress" Verified
   ```

   Returns only bugs in New, In Progress, or Verified status.

7. **Combine fuzzy search with other filters**:

   ```
   /component-health:list-jiras OCPBUGS --component network --status New Assigned --limit 200
   ```

   Returns bugs for all networking components that are in New or Assigned status.
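The fuzzy component matching used in examples 3 and 4 (case-insensitive substring, de-duplicated across search strings) can be sketched as a small pure function; the name is illustrative, not the plugin's actual code:

```python
def resolve_components(search_strings, available):
    """Return every available component whose name contains any search
    string (case-insensitive), de-duplicated and in catalog order."""
    matched = []
    for comp in available:
        if any(s.lower() in comp.lower() for s in search_strings):
            matched.append(comp)
    return matched
```

Search strings that match nothing would be reported separately so the user can be shown the available components.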
## Arguments

- `$1` (required): JIRA project key
  - Format: Project key in uppercase (e.g., "OCPBUGS", "OCPSTRAT")
  - Must be a valid JIRA project you have access to

- `$2+` (optional): Filter flags
  - `--component <search1> [search2 ...]`: Filter by component names using fuzzy search
    - Space-separated list of component search strings
    - Case-insensitive substring matching
    - Each search string matches all components containing that substring
    - Makes separate JIRA queries for each matched component to avoid overly large results
    - Example: "network" matches "Networking / ovn-kubernetes", "Networking / DNS", etc.
    - Example: "kube-" matches "kube-apiserver", "kube-controller-manager", etc.
    - Note: Requires release context (inferred from recent commands or specified by the user)

  - `--status <status1> [status2 ...]`: Filter by status values
    - Space-separated list of status names
    - Examples: `New`, `"In Progress"`, `Verified`, `Modified`, `ON_QA`

  - `--include-closed`: Include closed bugs in results
    - By default, only open bugs are returned
    - When specified, closed bugs are included

  - `--limit <N>`: Maximum number of issues to fetch per component
    - Default: 1000
    - Range: 1-1000
    - When using component filters, this limit applies to each component separately
    - Higher values provide more accurate statistics but slower performance
## Prerequisites

1. **Python 3**: Required to run the data fetching script

   - Check: `which python3`
   - Version: 3.6 or later

2. **JIRA Authentication**: Environment variables must be configured

   - `JIRA_URL`: Your JIRA instance URL
   - `JIRA_PERSONAL_TOKEN`: Your JIRA bearer token or personal access token

   How to get a JIRA token:
   - Navigate to JIRA → Profile → Personal Access Tokens
   - Generate a new token with appropriate permissions
   - Export it as an environment variable

3. **Network Access**: Must be able to reach your JIRA instance

   - Ensure HTTPS requests can be made to JIRA_URL
   - Check firewall and VPN settings if needed
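A minimal pre-flight check for the two authentication variables could look like the following; this is an illustrative helper, not part of the plugin:

```python
import os

# Required by the list_jiras.py script, per the Prerequisites above.
REQUIRED_VARS = ("JIRA_URL", "JIRA_PERSONAL_TOKEN")

def missing_jira_env(env=None):
    """Return the names of required JIRA variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```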
## Notes

- The script uses Python's standard library only (no external dependencies)
- Output is JSON format for easy parsing and further processing
- Diagnostic messages are written to stderr, data to stdout
- The script has a 30-second timeout for HTTP requests
- For large projects, consider using component filters to reduce query size
- The returned data includes ALL JIRA fields for each issue, providing complete information
- If you need summary statistics, use `/component-health:summarize-jiras` instead
- If results show truncation, increase the --limit parameter to fetch more issues
## See Also

- Skill Documentation: `plugins/component-health/skills/list-jiras/SKILL.md`
- Script: `plugins/component-health/skills/list-jiras/list_jiras.py`
- Related Command: `/component-health:summarize-jiras` (for summary statistics)
- Related Command: `/component-health:analyze`
342
commands/list-regressions.md
Normal file
@@ -0,0 +1,342 @@
---
description: Fetch and list raw regression data for OpenShift releases
argument-hint: <release> [--components comp1 comp2 ...] [--start YYYY-MM-DD] [--end YYYY-MM-DD]
---

## Name

component-health:list-regressions

## Synopsis

```
/component-health:list-regressions <release> [--components comp1 comp2 ...] [--start YYYY-MM-DD] [--end YYYY-MM-DD]
```

## Description

The `component-health:list-regressions` command fetches regression data for a specified OpenShift release and returns raw regression details without performing any summarization or analysis. It provides complete regression information including test names, timestamps, triages, and metadata.

This command is useful for:

- Fetching raw regression data for further processing
- Accessing complete regression details for specific components
- Building custom analysis workflows
- Providing data to other commands (like `summarize-regressions` and `analyze`)
- Exporting regression data for offline analysis
- Investigating specific test failures across releases
## Implementation

1. **Verify Prerequisites**: Check that Python 3 is installed

   - Run: `python3 --version`
   - Verify version 3.6 or later is available

2. **Parse Arguments**: Extract the release version and optional filters from the arguments

   - Release format: "X.Y" (e.g., "4.17", "4.21")
   - Optional filters:
     - `--components`: Space-separated list of component search strings (fuzzy match)
     - `--start`: Start date for filtering (YYYY-MM-DD)
     - `--end`: End date for filtering (YYYY-MM-DD)
     - `--short`: Exclude regression arrays from output (only summaries)

3. **Resolve Component Names**: Use fuzzy matching to find actual component names

   - Run list_components.py to get all available components:
     ```bash
     python3 plugins/component-health/skills/list-components/list_components.py --release <release>
     ```
   - If `--components` was provided:
     - For each search string, find all components containing that string (case-insensitive)
     - Example: "network" matches "Networking / ovn-kubernetes", "Networking / DNS", etc.
     - Combine all matches into a single list
     - Remove duplicates
     - If no matches found for a search string, warn the user and show available components
   - If `--components` was NOT provided:
     - Use all available components from the list

4. **Fetch Release Dates** (if date filtering is needed): Run the get_release_dates.py script

   - Script location: `plugins/component-health/skills/get-release-dates/get_release_dates.py`
   - Pass the release as the `--release` argument
   - Extract `development_start` and `ga` dates from the JSON output
   - Use these dates for the `--start` and `--end` parameters if not explicitly provided

5. **Execute Python Script**: Run the list_regressions.py script

   - Script location: `plugins/component-health/skills/list-regressions/list_regressions.py`
   - Pass the release as the `--release` argument
   - Pass resolved component names as the `--components` argument
   - Pass the `--start` date if filtering by start date
   - Pass the `--end` date if filtering by end date
   - Capture JSON output from stdout

6. **Parse Output**: Process the JSON response

   - The script outputs JSON with the following structure:
     - `summary`: Overall statistics (total, triaged, percentages, timing metrics)
     - `components`: Dictionary mapping component names to regression data
     - Each component has:
       - `summary`: Component-specific statistics
       - `open`: Array of open regression objects
       - `closed`: Array of closed regression objects
   - **Note**: When using the `--short` flag, regression arrays are excluded (only summaries)

7. **Present Results**: Display or store the raw regression data

   - Show which components were matched (if fuzzy search was used)
   - The command returns the complete JSON response with metadata and raw regressions
   - Inform the user about overall counts from the summary
   - The raw regression data can be passed to other commands for analysis
   - Suggest using `/component-health:summarize-regressions` for summary statistics
   - Suggest using `/component-health:analyze` for health grading

8. **Error Handling**: Handle common error scenarios

   - Network connectivity issues
   - Invalid release format
   - API errors (404, 500, etc.)
   - Empty results
   - No matches for component filter
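The `--start`/`--end` window in steps 2 and 5 follows the semantics documented under Arguments: drop regressions closed before the start date and regressions opened after the end date. A sketch of that filter (field names follow the Regression Object Structure; this is an illustration, not the script's actual code):

```python
def in_window(regression, start=None, end=None):
    """Keep a regression unless it closed before `start` or opened after `end`.
    Timestamps are ISO-8601 strings, so lexicographic comparison is chronological."""
    closed = regression.get("closed")
    if start and closed and closed < start:
        return False  # closed before the development window started
    if end and regression["opened"] > end:
        return False  # opened after GA
    return True
```

Open regressions (`closed` is null) are never excluded by `--start`, only by `--end`.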
## Return Value

The command outputs **raw regression data** in JSON format with the following structure:

### Overall Summary

- `summary.total`: Total number of regressions
- `summary.triaged`: Total number of regressions triaged to JIRA bugs
- `summary.triage_percentage`: Percentage of regressions that have been triaged
- `summary.filtered_suspected_infra_regressions`: Count of infrastructure regressions filtered
- `summary.time_to_triage_hrs_avg`: Average hours from opened to first triage
- `summary.time_to_triage_hrs_max`: Maximum hours from opened to first triage
- `summary.time_to_close_hrs_avg`: Average hours from opened to closed (closed only)
- `summary.time_to_close_hrs_max`: Maximum hours from opened to closed (closed only)
- `summary.open`: Summary statistics for open regressions
  - `total`: Number of open regressions
  - `triaged`: Number of open regressions triaged
  - `triage_percentage`: Percentage of open regressions triaged
  - `time_to_triage_hrs_avg`, `time_to_triage_hrs_max`: Triage timing metrics
  - `open_hrs_avg`, `open_hrs_max`: How long regressions have been open
- `summary.closed`: Summary statistics for closed regressions
  - `total`: Number of closed regressions
  - `triaged`: Number of closed regressions triaged
  - `triage_percentage`: Percentage of closed regressions triaged
  - `time_to_triage_hrs_avg`, `time_to_triage_hrs_max`: Triage timing metrics
  - `time_to_close_hrs_avg`, `time_to_close_hrs_max`: Time to close metrics
  - `time_triaged_closed_hrs_avg`, `time_triaged_closed_hrs_max`: Time from triage to close

### Per-Component Data

- `components`: Dictionary mapping component names to objects containing:
  - `summary`: Component-specific statistics (same structure as the overall summary)
  - `open`: Array of open regression objects
  - `closed`: Array of closed regression objects

### Regression Object Structure

Each regression object (in `components.*.open` or `components.*.closed` arrays) contains:

- `id`: Unique regression identifier
- `view`: Release view (e.g., "4.21-main")
- `release`: Release version
- `base_release`: Base release for comparison
- `component`: Component name
- `capability`: Test capability/area
- `test_name`: Full test name
- `variants`: Array of test variants where the regression occurred
- `opened`: Timestamp when the regression was first detected
- `closed`: Timestamp when the regression was closed (null if still open)
- `triages`: Array of triage objects (JIRA bugs linked to this regression)
  - Each triage has `jira_key`, `created_at`, and `url` fields
- `last_failure`: Timestamp of the most recent test failure
- `max_failures`: Maximum number of failures detected
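The timing metrics above are hours between timestamps; for a single regression, time-to-triage is the gap between `opened` and the earliest triage's `created_at`. A sketch of that per-regression calculation (an assumption about how the metric is derived, not the API's actual code):

```python
from datetime import datetime

def _parse(ts):
    # Accept the "Z" suffix used in the example output below.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def time_to_triage_hrs(regression):
    """Hours from `opened` to the earliest triage, or None if untriaged."""
    triages = regression.get("triages") or []
    if not triages:
        return None
    first = min(_parse(t["created_at"]) for t in triages)
    return (first - _parse(regression["opened"])).total_seconds() / 3600
```

Averaging this value over all triaged regressions would yield `time_to_triage_hrs_avg`; taking the maximum yields `time_to_triage_hrs_max`.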
### Example Output Structure

```json
{
  "summary": {
    "total": 62,
    "triaged": 59,
    "triage_percentage": 95.2,
    "filtered_suspected_infra_regressions": 8,
    "time_to_triage_hrs_avg": 68,
    "time_to_triage_hrs_max": 240,
    "time_to_close_hrs_avg": 168,
    "time_to_close_hrs_max": 480,
    "open": { "total": 2, "triaged": 1, ... },
    "closed": { "total": 60, "triaged": 58, ... }
  },
  "components": {
    "Monitoring": {
      "summary": {
        "total": 15,
        "triaged": 13,
        "triage_percentage": 86.7,
        ...
      },
      "open": [
        {
          "id": 12894,
          "component": "Monitoring",
          "test_name": "[sig-instrumentation] Prometheus ...",
          "opened": "2025-10-15T10:30:00Z",
          "closed": null,
          "triages": [],
          ...
        }
      ],
      "closed": [...]
    },
    "etcd": {
      "summary": { "total": 20, "triaged": 19, ... },
      "open": [],
      "closed": [...]
    }
  }
}
```

**Note**: When using the `--short` flag, the `open` and `closed` arrays are excluded from component objects to reduce response size.
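The `--short` behavior in the note above amounts to dropping the per-regression arrays while keeping every summary block; an illustrative sketch:

```python
def shorten(response):
    """Return a copy of the response with per-component `open`/`closed`
    arrays removed, keeping only the summary blocks."""
    return {
        "summary": response["summary"],
        "components": {
            name: {"summary": data["summary"]}
            for name, data in response["components"].items()
        },
    }
```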
## Examples

1. **List all regressions for a release**:

   ```
   /component-health:list-regressions 4.17
   ```

   Fetches all regression data for release 4.17, including all components.

2. **Filter by specific component (exact match)**:

   ```
   /component-health:list-regressions 4.21 --components Monitoring
   ```

   Returns regression data for only the Monitoring component.

3. **Filter by fuzzy search**:

   ```
   /component-health:list-regressions 4.21 --components network
   ```

   Finds all components containing "network" (case-insensitive):
   - Networking / ovn-kubernetes
   - Networking / DNS
   - Networking / router
   - Networking / cluster-network-operator
   - ... and returns regression data for all matches

4. **Filter by multiple search strings**:

   ```
   /component-health:list-regressions 4.21 --components etcd kube-
   ```

   Finds all components containing "etcd" OR "kube-":
   - Etcd
   - kube-apiserver
   - kube-controller-manager
   - kube-scheduler
   - kube-storage-version-migrator

5. **Filter by development window** (GA'd release):

   ```
   /component-health:list-regressions 4.17 --start 2024-05-17 --end 2024-10-29
   ```

   Fetches regressions within the development window:
   - Excludes regressions closed before 2024-05-17
   - Excludes regressions opened after 2024-10-29

6. **Filter for an in-development release**:

   ```
   /component-health:list-regressions 4.21 --start 2025-09-02
   ```

   Fetches regressions for an in-development release:
   - Excludes regressions closed before development started
   - No end date (the release is still in development)

7. **Combine fuzzy component search and date filters**:

   ```
   /component-health:list-regressions 4.21 --components network --start 2025-09-02
   ```

   Returns regressions for all networking components from the development window.
## Arguments

- `$1` (required): Release version
  - Format: "X.Y" (e.g., "4.17", "4.21")
  - Must be a valid OpenShift release number

- `$2+` (optional): Filter flags
  - `--components <search1> [search2 ...]`: Filter by component names using fuzzy search
    - Space-separated list of component search strings
    - Case-insensitive substring matching
    - Each search string matches all components containing that substring
    - If no components are provided, all components are analyzed
    - Example: "network" matches "Networking / ovn-kubernetes", "Networking / DNS", etc.
    - Example: "kube-" matches "kube-apiserver", "kube-controller-manager", etc.

  - `--start <YYYY-MM-DD>`: Filter regressions by start date
    - Excludes regressions closed before this date
    - Typically the development_start date from release metadata

  - `--end <YYYY-MM-DD>`: Filter regressions by end date
    - Excludes regressions opened after this date
    - Typically the GA date for released versions
    - Omit for in-development releases

  - `--short`: Exclude regression arrays from output
    - Only include summary statistics
    - Significantly reduces response size for large datasets
    - Use when you only need counts and metrics, not individual regressions
## Prerequisites

1. **Python 3**: Required to run the data fetching script

   - Check: `which python3`
   - Version: 3.6 or later

2. **Network Access**: Must be able to reach the component health API

   - Ensure HTTPS requests can be made
   - Check firewall and VPN settings if needed

3. **API Configuration**: The API endpoint must be configured in the script

   - Location: `plugins/component-health/skills/list-regressions/list_regressions.py`
   - The script should have the correct API base URL
## Notes

- The script uses Python's standard library only (no external dependencies)
- Output is JSON format for easy parsing and further processing
- Diagnostic messages are written to stderr, data to stdout
- The script has a 30-second timeout for HTTP requests
- For large result sets, consider using component filters or the `--short` flag
- Date filtering helps focus on relevant regressions within the development window
- Infrastructure regressions (closed quickly on high-volume days) are automatically filtered
- The returned data includes complete regression information, not summaries
- If you need summary statistics, use `/component-health:summarize-regressions` instead
- If you need health grading, use `/component-health:analyze` instead
## See Also

- Skill Documentation: `plugins/component-health/skills/list-regressions/SKILL.md`
- Script: `plugins/component-health/skills/list-regressions/list_regressions.py`
- Related Command: `/component-health:summarize-regressions` (for summary statistics)
- Related Command: `/component-health:analyze` (for health grading and analysis)
- Related Skill: `get-release-dates` (for fetching development window dates)
302
commands/summarize-jiras.md
Normal file
@@ -0,0 +1,302 @@
---
description: Query and summarize JIRA bugs for a specific project with counts by component
argument-hint: --project <project> [--component comp1 comp2 ...] [--status status1 status2 ...] [--include-closed] [--limit N]
---

## Name

component-health:summarize-jiras

## Synopsis

```
/component-health:summarize-jiras --project <project> [--component comp1 comp2 ...] [--status status1 status2 ...] [--include-closed] [--limit N]
```

## Description

The `component-health:summarize-jiras` command queries JIRA bugs for a specified project and generates summary statistics. It leverages the `list-jiras` command to fetch raw JIRA data and then calculates counts by status, priority, and component to help understand the bug backlog at a glance.

By default, the command includes:
- All currently open bugs
- Bugs closed in the last 30 days (to track recent closure activity)

This command is useful for:

- Getting a quick count of open bugs in a JIRA project
- Analyzing bug distribution by status, priority, or component
- Tracking recent bug flow (opened vs. closed in the last 30 days)
- Generating summary reports for the bug backlog
- Monitoring bug velocity and closure rates by component
- Comparing bug counts across different components
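The default scope described above (open bugs plus bugs closed in the last 30 days) corresponds to the JQL shape shown in the list-jiras example output. A sketch of building that clause (an illustration of the query shape, not the script's actual code):

```python
from datetime import date, timedelta

def default_scope_jql(project, today=None):
    """JQL for open bugs plus bugs resolved within the last 30 days."""
    today = today or date.today()
    cutoff = (today - timedelta(days=30)).isoformat()
    return ('project = %s AND (status != Closed OR '
            '(status = Closed AND resolved >= "%s"))' % (project, cutoff))
```

Component and status filters would be AND-ed onto this base clause.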
## Implementation
|
||||||
|
|
||||||
|
1. **Verify Prerequisites**: Check that Python 3 is installed
|
||||||
|
|
||||||
|
- Run: `python3 --version`
|
||||||
|
- Verify version 3.6 or later is available
|
||||||
|
|
||||||
|
2. **Verify Environment Variables**: Ensure JIRA authentication is configured
|
||||||
|
|
||||||
|
- Check that the following environment variables are set:
|
||||||
|
- `JIRA_URL`: Base URL for JIRA instance (e.g., "https://issues.redhat.com")
|
||||||
|
- `JIRA_PERSONAL_TOKEN`: Your JIRA bearer token or personal access token
|
||||||
|
|
||||||
|
- Verify with:
|
||||||
|
```bash
|
||||||
|
echo "JIRA_URL: ${JIRA_URL}"
|
||||||
|
echo "JIRA_PERSONAL_TOKEN: ${JIRA_PERSONAL_TOKEN:+***set***}"
|
||||||
|
```
|
||||||
|
|
||||||
|
- If missing, guide the user to set them:
|
||||||
|
```bash
|
||||||
|
export JIRA_URL="https://issues.redhat.com"
|
||||||
|
export JIRA_PERSONAL_TOKEN="your-token-here"
|
||||||
|
```
|
||||||
|
|
||||||
|
3. **Parse Arguments**: Extract project key and optional filters from arguments
|
||||||
|
|
||||||
|
- Project key: Required `--project` flag (e.g., "OCPBUGS", "OCPSTRAT")
|
||||||
|
- Optional filters:
|
||||||
|
- `--component`: Space-separated list of component search strings (fuzzy match)
|
||||||
|
- `--status`: Space-separated list of status values
|
||||||
|
- `--include-closed`: Flag to include closed bugs
|
||||||
|
- `--limit`: Maximum number of issues to fetch per component (default: 1000, max: 1000)
|
||||||
|
|
||||||
|
4. **Resolve Component Names** (if component filter provided): Use fuzzy matching to find actual component names
|
||||||
|
|
||||||
|
- Extract release from context or ask user for release version
|
||||||
|
- Run list_components.py to get all available components:
|
||||||
|
```bash
|
||||||
|
python3 plugins/component-health/skills/list-components/list_components.py --release <release>
|
||||||
|
```
|
||||||
|
- For each search string in `--component`:
|
||||||
|
- Find all components containing that string (case-insensitive)
|
||||||
|
- Combine all matches into a single list
|
||||||
|
- Remove duplicates
|
||||||
|
- If no matches found for a search string, warn the user and show available components
|
||||||
|
|
||||||
|
5. **Execute Python Script**: Run the summarize_jiras.py script for each component
|
||||||
|
|
||||||
|
- Script location: `plugins/component-health/skills/summarize-jiras/summarize_jiras.py`
|
||||||
|
- The script internally calls `list_jiras.py` to fetch raw data
|
||||||
|
- **Important**: Iterate over each resolved component separately to avoid overly large queries
|
||||||
|
- For each component:
|
||||||
|
- Build command with project, single component, and other filters
|
||||||
|
- Execute: `python3 summarize_jiras.py --project <project> --component "<component>" [other args]`
|
||||||
|
- Capture JSON output from stdout
|
||||||
|
- Aggregate summary statistics from all components into a combined response
|
||||||
|
|
||||||
|
6. **Parse Output**: Process the aggregated JSON response
|
||||||
|
|
||||||
|
- Extract summary statistics:
|
||||||
|
- `total_count`: Total matching issues in JIRA
|
||||||
|
- `fetched_count`: Number of issues actually fetched
|
||||||
|
- `summary.by_status`: Count of issues per status
|
||||||
|
- `summary.by_priority`: Count of issues per priority
|
||||||
|
- `summary.by_component`: Count of issues per component
|
||||||
|
- Extract per-component breakdowns:
|
||||||
|
- Each component has its own counts by status and priority
|
||||||
|
- Includes opened/closed in last 30 days per component
|
||||||
|
|
||||||
|
7. **Present Results**: Display the summary in a clear format

   - Show which components were matched (if fuzzy search was used)
   - Show the total bug count across all components
   - Display the status breakdown (e.g., New, In Progress, Verified)
   - Display the priority breakdown (Critical, Major, Normal, Minor, etc.)
   - Display the component distribution
   - Show per-component breakdowns with status and priority counts
   - Highlight any truncation (if `fetched_count < total_count` for any component)
   - Suggest increasing `--limit` if results are truncated

8. **Error Handling**: Handle common error scenarios

   - Network connectivity issues
   - Invalid JIRA credentials
   - Invalid project key
   - HTTP errors (401, 404, 500, etc.)
   - Rate limiting (429)
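A sketch of how these HTTP error cases might be mapped to actionable messages; the status-to-message mapping is illustrative, not the script's actual output:

```python
def classify_http_error(status):
    """Map common JIRA HTTP status codes to user-facing guidance (illustrative)."""
    if status == 401:
        return "Authentication failed: check JIRA_PERSONAL_TOKEN"
    if status == 404:
        return "Not found: check JIRA_URL and the project key"
    if status == 429:
        return "Rate limited: wait and retry, or reduce query size"
    if 500 <= status < 600:
        return "Server error: retry later"
    return f"Unexpected HTTP status {status}"
```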
## Return Value

The command outputs a **JIRA Bug Summary** with the following information:

### Project Overview

- **Project**: JIRA project key
- **Total Count**: Total number of matching bugs (open + recently closed)
- **Query**: JQL query that was executed (includes the 30-day closed-bug filter)
- **Fetched Count**: Number of bugs actually fetched (may be less than the total if limited)

### Summary Statistics

**Overall Metrics**:

- Total bugs fetched
- Bugs opened in the last 30 days
- Bugs closed in the last 30 days

**By Status**: Count of bugs in each status (includes recently closed)

| Status | Count |
|--------|-------|
| New | X |
| In Progress | X |
| Verified | X |
| Closed | X |
| ... | ... |

**By Priority**: Count of bugs by priority level

| Priority | Count |
|----------|-------|
| Critical | X |
| Major | X |
| Normal | X |
| Minor | X |
| Undefined | X |

**By Component**: Count of bugs per component

| Component | Count |
|-----------|-------|
| kube-apiserver | X |
| Management Console | X |
| Networking | X |
| ... | ... |

### Per-Component Breakdown

For each component:

- **Total**: Number of bugs assigned to this component
- **Opened (30d)**: Bugs created in the last 30 days
- **Closed (30d)**: Bugs closed in the last 30 days
- **By Status**: Status distribution for this component
- **By Priority**: Priority distribution for this component

### Additional Information

- **Filters Applied**: Lists any component, status, or other filters used
- **Note**: If results are truncated, suggests increasing the limit
- **Query Scope**: By default includes open bugs and bugs closed in the last 30 days
## Examples

1. **Summarize all open bugs for a project**:

   ```
   /component-health:summarize-jiras --project OCPBUGS
   ```

   Fetches all open bugs in the OCPBUGS project (up to the default limit of 1000) and displays summary statistics.

2. **Filter by specific component**:

   ```
   /component-health:summarize-jiras --project OCPBUGS --component "kube-apiserver"
   ```

   Shows bug counts for only the kube-apiserver component.

3. **Filter by multiple components**:

   ```
   /component-health:summarize-jiras --project OCPBUGS --component "kube-apiserver" "etcd" "Networking"
   ```

   Shows bug counts for the kube-apiserver, etcd, and Networking components.

4. **Include closed bugs**:

   ```
   /component-health:summarize-jiras --project OCPBUGS --include-closed --limit 500
   ```

   Includes both open and closed bugs, fetching up to 500 issues.

5. **Filter by status**:

   ```
   /component-health:summarize-jiras --project OCPBUGS --status New "In Progress" Verified
   ```

   Shows only bugs in New, In Progress, or Verified status.

6. **Combine multiple filters**:

   ```
   /component-health:summarize-jiras --project OCPBUGS --component "Management Console" --status New Assigned --limit 200
   ```

   Shows bugs for the Management Console component that are in New or Assigned status.
## Arguments

- `--project <project>` (required): JIRA project key
  - Format: Project key in uppercase (e.g., "OCPBUGS", "OCPSTRAT")
  - Must be a valid JIRA project you have access to

- Additional optional flags:
  - `--component <search1> [search2 ...]`: Filter by component names using fuzzy search
    - Space-separated list of component search strings
    - Case-insensitive substring matching
    - Each search string matches all components containing that substring
    - Makes separate JIRA queries for each matched component to avoid overly large results
    - Example: "network" matches "Networking / ovn-kubernetes", "Networking / DNS", etc.
    - Example: "kube-" matches "kube-apiserver", "kube-controller-manager", etc.
    - Note: Requires release context (inferred from recent commands or specified by the user)

  - `--status <status1> [status2 ...]`: Filter by status values
    - Space-separated list of status names
    - Examples: `New`, `"In Progress"`, `Verified`, `Modified`, `ON_QA`

  - `--include-closed`: Include closed bugs in results
    - By default, only open bugs are returned
    - When specified, closed bugs are included

  - `--limit <N>`: Maximum number of issues to fetch per component
    - Default: 1000
    - Range: 1-1000
    - When using component filters, this limit applies to each component separately
    - Higher values provide more accurate statistics but slower performance
## Prerequisites

1. **Python 3**: Required to run the data fetching and summarization scripts

   - Check: `which python3`
   - Version: 3.6 or later

2. **JIRA Authentication**: Environment variables must be configured

   - `JIRA_URL`: Your JIRA instance URL
   - `JIRA_PERSONAL_TOKEN`: Your JIRA bearer token or personal access token

   How to get a JIRA token:
   - Navigate to JIRA → Profile → Personal Access Tokens
   - Generate a new token with appropriate permissions
   - Export it as an environment variable
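A minimal example of exporting the two variables; the URL and token below are placeholders, not real values:

```shell
# Hypothetical values -- substitute your own JIRA instance URL and token
export JIRA_URL="https://issues.example.com"
export JIRA_PERSONAL_TOKEN="your-token-here"
```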
3. **Network Access**: Must be able to reach your JIRA instance

   - Ensure HTTPS requests can be made to JIRA_URL
   - Check firewall and VPN settings if needed

## Notes

- The script uses Python's standard library only (no external dependencies)
- Output is JSON format for easy parsing
- Diagnostic messages are written to stderr, data to stdout
- The script has a 30-second timeout for HTTP requests
- For large projects, consider using component filters to reduce query size
- Summary statistics are based on fetched issues (controlled by `--limit`), not total matching issues
- If results show truncation, increase the `--limit` parameter for more accurate statistics
- This command internally uses `/component-health:list-jiras` to fetch raw data
## See Also

- Skill Documentation: `plugins/component-health/skills/summarize-jiras/SKILL.md`
- Script: `plugins/component-health/skills/summarize-jiras/summarize_jiras.py`
- Related Command: `/component-health:list-jiras` (for raw JIRA data)
- Related Command: `/component-health:analyze`
285
commands/summarize-regressions.md
Normal file
@@ -0,0 +1,285 @@
---
description: Query and summarize regression data for OpenShift releases with counts and metrics
argument-hint: <release> [--components comp1 comp2 ...] [--start YYYY-MM-DD] [--end YYYY-MM-DD]
---

## Name

component-health:summarize-regressions

## Synopsis

```
/component-health:summarize-regressions <release> [--components comp1 comp2 ...] [--start YYYY-MM-DD] [--end YYYY-MM-DD]
```

## Description

The `component-health:summarize-regressions` command queries regression data for a specified OpenShift release and generates summary statistics. It leverages the `list-regressions` command to fetch raw regression data and then presents counts, percentages, and timing metrics to help understand regression trends at a glance.

By default, the command analyzes:
- All regressions within the release development window
- Both open and closed regressions
- Triage coverage and timing metrics
- Per-component breakdowns

This command is useful for:

- Getting a quick count of regressions in a release
- Analyzing regression distribution by component
- Tracking triage coverage and response times
- Generating summary reports for regression management
- Monitoring regression resolution speed by component
- Comparing regression metrics across different components
- Understanding the open vs. closed regression breakdown

## Implementation

1. **Verify Prerequisites**: Check that Python 3 is installed

   - Run: `python3 --version`
   - Verify version 3.6 or later is available

2. **Parse Arguments**: Extract the release version and optional filters from the arguments

   - Release format: "X.Y" (e.g., "4.17", "4.21")
   - Optional filters:
     - `--components`: Space-separated list of component search strings (fuzzy match)
     - `--start`: Start date for filtering (YYYY-MM-DD)
     - `--end`: End date for filtering (YYYY-MM-DD)
3. **Resolve Component Names**: Use fuzzy matching to find actual component names

   - Run `list_components.py` to get all available components:

     ```bash
     python3 plugins/component-health/skills/list-components/list_components.py --release <release>
     ```

   - If `--components` was provided:
     - For each search string, find all components containing that string (case-insensitive)
     - Combine all matches into a single list
     - Remove duplicates
     - If no matches are found for a search string, warn the user and show the available components
   - If `--components` was NOT provided:
     - Use all available components from the list
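The matching rules above can be sketched as a small function (the function name is hypothetical; the semantics are substring match, case-insensitive, duplicates removed, order preserved):

```python
def resolve_components(available, searches):
    """Case-insensitive substring match of search strings against component names."""
    matched = []
    for needle in searches:
        for comp in available:
            # Keep first occurrence only, preserving the order of `available`
            if needle.lower() in comp.lower() and comp not in matched:
                matched.append(comp)
    return matched
```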
4. **Fetch Release Dates**: Run the get_release_dates.py script to get development window dates

   - Script location: `plugins/component-health/skills/get-release-dates/get_release_dates.py`
   - Pass the release as the `--release` argument
   - Extract `development_start` and `ga` dates from the JSON output
   - Convert timestamps to simple date format (YYYY-MM-DD)
   - Use these dates if `--start` and `--end` are not explicitly provided

5. **Execute Python Script**: Run the list_regressions.py script with appropriate arguments

   - Script location: `plugins/component-health/skills/list-regressions/list_regressions.py`
   - Pass the release as the `--release` argument
   - Pass resolved component names as the `--components` argument
   - Pass the `development_start` date as the `--start` argument (if available)
     - Always applied (for both GA'd and in-development releases)
     - Excludes regressions closed before development started
   - Pass the `ga` date as the `--end` argument (only if the GA date is not null)
     - Only applied for GA'd releases
     - Excludes regressions opened after GA
     - For in-development releases (null GA date), no end date filtering is applied
   - **Always pass the `--short` flag** to exclude regression arrays (only summaries)
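The conditional command assembly in step 5 can be sketched like this; the helper name is hypothetical, while the flag names are taken from the step above:

```python
def build_command(release, components, start, ga):
    """Assemble the list_regressions.py argv; --end is only added for GA'd releases."""
    cmd = ["python3", "plugins/component-health/skills/list-regressions/list_regressions.py",
           "--release", release, "--components", *components,
           "--start", start, "--short"]
    if ga is not None:  # in-development releases have a null GA date
        cmd += ["--end", ga]
    return cmd
```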
6. **Parse Output**: Process the JSON output from the script

   - The script writes JSON to stdout with a summary structure:
     - `summary`: Overall statistics (total, triaged, percentages, timing)
     - `components`: Per-component summary statistics
   - **ALWAYS use the summary fields** for counts and metrics
   - Regression arrays are not included (due to the `--short` flag)

7. **Present Results**: Display the summary in a clear, readable format

   - Show which components were matched (if fuzzy search was used)
   - Show overall summary statistics
   - Display per-component breakdowns
   - Highlight key metrics:
     - Triage coverage percentages
     - Average time to triage
     - Average time to close (for closed regressions)
     - Open vs. closed counts
   - Present data in tables or a structured format
   - Note any date filtering applied

8. **Error Handling**: Handle common error scenarios

   - Network connectivity issues
   - Invalid release format
   - API errors (404, 500, etc.)
   - Empty results
   - No matches for a component filter
   - Release dates not found
## Return Value

The command outputs a **Regression Summary Report** with the following information:

### Overall Summary

- **Release**: OpenShift release version
- **Development Window**: Start and end dates (or "In Development" if no GA date)
- **Total Regressions**: `summary.total`
- **Filtered Infrastructure Regressions**: `summary.filtered_suspected_infra_regressions`
- **Triaged**: `summary.triaged` regressions (`summary.triage_percentage`%)
- **Open**: `summary.open.total` regressions (`summary.open.triage_percentage`% triaged)
- **Closed**: `summary.closed.total` regressions (`summary.closed.triage_percentage`% triaged)

### Timing Metrics

**Overall Metrics**:
- **Average Time to Triage**: `summary.time_to_triage_hrs_avg` hours
- **Maximum Time to Triage**: `summary.time_to_triage_hrs_max` hours
- **Average Time to Close**: `summary.time_to_close_hrs_avg` hours (closed regressions only)
- **Maximum Time to Close**: `summary.time_to_close_hrs_max` hours (closed regressions only)

**Open Regression Metrics**:
- **Average Open Duration**: `summary.open.open_hrs_avg` hours
- **Maximum Open Duration**: `summary.open.open_hrs_max` hours
- **Average Time to Triage** (open): `summary.open.time_to_triage_hrs_avg` hours
- **Maximum Time to Triage** (open): `summary.open.time_to_triage_hrs_max` hours

**Closed Regression Metrics**:
- **Average Time to Close**: `summary.closed.time_to_close_hrs_avg` hours
- **Maximum Time to Close**: `summary.closed.time_to_close_hrs_max` hours
- **Average Time to Triage** (closed): `summary.closed.time_to_triage_hrs_avg` hours
- **Maximum Time to Triage** (closed): `summary.closed.time_to_triage_hrs_max` hours
- **Average Triage-to-Close Time**: `summary.closed.time_triaged_closed_hrs_avg` hours
- **Maximum Triage-to-Close Time**: `summary.closed.time_triaged_closed_hrs_max` hours

### Per-Component Summary

For each component (from `components.*.summary`):

| Component | Total | Open | Closed | Triaged | Triage % | Avg Time to Triage | Avg Time to Close |
|-----------|-------|------|--------|---------|----------|--------------------|-------------------|
| Monitoring | 15 | 1 | 14 | 13 | 86.7% | 68 hrs | 156 hrs |
| etcd | 20 | 0 | 20 | 19 | 95.0% | 84 hrs | 192 hrs |
| kube-apiserver | 27 | 1 | 26 | 27 | 100.0% | 58 hrs | 144 hrs |

### Additional Information

- **Filters Applied**: Lists any component or date filters used
- **Data Scope**: Notes which regressions are included based on date filtering
  - For GA'd releases: Regressions within the development window (start to GA)
  - For in-development releases: Regressions from development start onwards
## Examples

1. **Summarize all regressions for a release**:

   ```
   /component-health:summarize-regressions 4.17
   ```

   Fetches and summarizes all regressions for release 4.17, automatically applying development window date filtering.

2. **Filter by specific component (exact match)**:

   ```
   /component-health:summarize-regressions 4.21 --components Monitoring
   ```

   Shows summary statistics for only the Monitoring component in release 4.21.

3. **Filter by fuzzy search**:

   ```
   /component-health:summarize-regressions 4.21 --components network
   ```

   Finds all components containing "network" (case-insensitive) and shows summary statistics for all matches (e.g., "Networking / ovn-kubernetes", "Networking / DNS", etc.).

4. **Filter by multiple search strings**:

   ```
   /component-health:summarize-regressions 4.21 --components etcd kube-
   ```

   Finds all components containing "etcd" OR "kube-" and shows combined summary statistics.

5. **Specify a custom date range**:

   ```
   /component-health:summarize-regressions 4.17 --start 2024-05-17 --end 2024-10-29
   ```

   Summarizes regressions within a specific date range:
   - Excludes regressions closed before 2024-05-17
   - Excludes regressions opened after 2024-10-29

6. **In-development release**:

   ```
   /component-health:summarize-regressions 4.21
   ```

   Summarizes regressions for an in-development release:
   - Automatically fetches the development_start date
   - No end date filtering (release not yet GA'd)
   - Shows the current state of regression management
## Arguments

- `$1` (required): Release version
  - Format: "X.Y" (e.g., "4.17", "4.21")
  - Must be a valid OpenShift release number

- `$2+` (optional): Filter flags
  - `--components <search1> [search2 ...]`: Filter by component names using fuzzy search
    - Space-separated list of component search strings
    - Case-insensitive substring matching
    - Each search string matches all components containing that substring
    - If no components are provided, all components are analyzed
    - Example: "network" matches "Networking / ovn-kubernetes", "Networking / DNS", etc.
    - Example: "kube-" matches "kube-apiserver", "kube-controller-manager", etc.

  - `--start <YYYY-MM-DD>`: Filter by start date
    - Excludes regressions closed before this date
    - Defaults to development_start from release metadata if not provided

  - `--end <YYYY-MM-DD>`: Filter by end date
    - Excludes regressions opened after this date
    - Defaults to the GA date from release metadata if not provided and the release is GA'd
    - Omitted for in-development releases
## Prerequisites

1. **Python 3**: Required to run the data fetching script

   - Check: `which python3`
   - Version: 3.6 or later

2. **Network Access**: Must be able to reach the component health API

   - Ensure HTTPS requests can be made
   - Check firewall and VPN settings if needed

3. **API Configuration**: The API endpoint must be configured in the script

   - Location: `plugins/component-health/skills/list-regressions/list_regressions.py`
   - The script should have the correct API base URL

## Notes

- The script uses Python's standard library only (no external dependencies)
- Output presents summary statistics in a readable format
- Diagnostic messages are written to stderr
- The script has a 30-second timeout for HTTP requests
- Summary statistics are based on all matching regressions (not limited by pagination)
- The `--short` flag is always used internally to optimize performance
- Infrastructure regressions are automatically filtered from counts
- Date filtering focuses the analysis on the development window for accuracy
- This command internally uses `/component-health:list-regressions` to fetch data
- For raw regression data, use `/component-health:list-regressions` instead
- For health grading and analysis, use `/component-health:analyze` instead
## See Also

- Skill Documentation: `plugins/component-health/skills/list-regressions/SKILL.md`
- Script: `plugins/component-health/skills/list-regressions/list_regressions.py`
- Related Command: `/component-health:list-regressions` (for raw regression data)
- Related Command: `/component-health:analyze` (for health grading and analysis)
- Related Skill: `get-release-dates` (for fetching development window dates)
129
plugin.lock.json
Normal file
@@ -0,0 +1,129 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:openshift-eng/ai-helpers:plugins/component-health",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "96b9864a63d58b57b15ba10ceeeba3cd4c3a5a14",
    "treeHash": "d8db1ce91d54578ea65c9a85a7baae3fdf31842272b5b6ba5f6bb178d699f29c",
    "generatedAt": "2025-11-28T10:27:28.443041Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "component-health",
    "description": "Analyze component health using regression and jira data",
    "version": "0.0.1"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "e4634295b1b7f095ffe83e4d30bb93a7dce6e13bff06816d82433368b3cb1258"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "1c7fad47827f539ccf919e3bafa16ef817d48bd7f33e6e44d005c6c1fc27941c"
      },
      {
        "path": "commands/summarize-jiras.md",
        "sha256": "12f2df655c5415d35a8c3ea29fe0435b280ec0a9a280ea8caebed1d5cb6fd9cb"
      },
      {
        "path": "commands/analyze.md",
        "sha256": "985691a7980fd68932e876c8e13918f7ab7d59779f0b70c10ec9030469ae065f"
      },
      {
        "path": "commands/list-regressions.md",
        "sha256": "3482c63d9a3db8fea51af4fe6c6474ee8c92e3cab7be9857942ef9b4ca65910c"
      },
      {
        "path": "commands/list-jiras.md",
        "sha256": "cdf8428d8c0d032021e70cee335bcaf197242150aeaf60b8897756831ed39607"
      },
      {
        "path": "commands/summarize-regressions.md",
        "sha256": "676ed6ebe193b8c44922c55ac2eb70b2b9bb08820a28aec9190aa4aee72f126d"
      },
      {
        "path": "commands/list-components.md",
        "sha256": "feeec780a999723d00ea9ddde5d8d45a72ed4e43fc222d447d76879e00d04c51"
      },
      {
        "path": "skills/list-jiras/list_jiras.py",
        "sha256": "e1e4c44c1debe748bed68e938f79345317d75b87b3e17fd4af09e9a1283e3126"
      },
      {
        "path": "skills/list-jiras/SKILL.md",
        "sha256": "9c9c333f6d3952edee0695539c8d38c566e8f2908692cd5689cca8a0df2c45fc"
      },
      {
        "path": "skills/list-regressions/list_regressions.py",
        "sha256": "a03078cb80310cb6b6dea657c2c949d72a3a8aa8fe5f93e0404e4041f16bdaca"
      },
      {
        "path": "skills/list-regressions/README.md",
        "sha256": "defcf712908b5af648f89cbbeac4f7f321191f114cd4495ea7a18e9671e6c50a"
      },
      {
        "path": "skills/list-regressions/SKILL.md",
        "sha256": "f47788ff085bd2278691cb86cc0b27629900080046fb551ba720e26b1a6c2584"
      },
      {
        "path": "skills/list-components/list_components.py",
        "sha256": "40896028f09a9d72e6946ac68656477b73c4a5df72c71bdec85c91fb19a6e272"
      },
      {
        "path": "skills/list-components/SKILL.md",
        "sha256": "58857a79bf70902aa14a0747d60c7b77c2d9740043e610b478000e3b0eba9f4d"
      },
      {
        "path": "skills/analyze-regressions/generate_html_report.py",
        "sha256": "188af3502f655c1eaa4bf5e05db544dd63a7cbd2b9cb89311768bf1ff2ed1d7a"
      },
      {
        "path": "skills/analyze-regressions/README.md",
        "sha256": "b890866066a7168c1bd35139a8616af83a31dc294245e3fb870d835245b31e8d"
      },
      {
        "path": "skills/analyze-regressions/SKILL.md",
        "sha256": "63cd5596a388f51ffea18845f0dbb41bd20c141717f1d6cc5ee7a1b8ad2e403b"
      },
      {
        "path": "skills/analyze-regressions/report_template.html",
        "sha256": "66fa92a0f45b223a10a3303a1fd2d7cc62950dfe75b1db89ba6e11f566ddc3fa"
      },
      {
        "path": "skills/get-release-dates/get_release_dates.py",
        "sha256": "5150290c606b136bfdf3c15f106baf091344ffca92632232a5a2026b3d6b6127"
      },
      {
        "path": "skills/get-release-dates/README.md",
        "sha256": "5bae6b29cf7437096c488adcf625b0567500e9dc6045392fd3dd91f0c9efd323"
      },
      {
        "path": "skills/get-release-dates/SKILL.md",
        "sha256": "c14c9f7a62dcbbe89a7c766b725386c0b343000e3663fcab0927c764377269ef"
      },
      {
        "path": "skills/summarize-jiras/summarize_jiras.py",
        "sha256": "5f84b719d5978a6e194a05e815b5511f64b0e83ff0900ae59ca175c0f0bfbf75"
      },
      {
        "path": "skills/summarize-jiras/SKILL.md",
        "sha256": "4ee818cf33a33af35afeb0de30e46bbdb5da3a3121fcdac785f8d16710c07c34"
      }
    ],
    "dirSha256": "d8db1ce91d54578ea65c9a85a7baae3fdf31842272b5b6ba5f6bb178d699f29c"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}
128
skills/analyze-regressions/README.md
Normal file
@@ -0,0 +1,128 @@
# HTML Report Generation for Component Health Analysis

This directory contains resources for generating interactive HTML reports from component health regression data.

## Files

- `report_template.html` - HTML template with placeholders for data
- `generate_html_report.py` - Python script to generate reports from JSON data
- `README.md` - This file

## Template Variables

The HTML template uses the following placeholders (enclosed in `{{}}` double curly braces):

### Overall Metrics
- `{{RELEASE}}` - Release version (e.g., "4.20")
- `{{RELEASE_PERIOD}}` - Development period description
- `{{DATE_RANGE}}` - Date range for the analysis
- `{{GENERATED_DATE}}` - Report generation date

### Triage Coverage Metrics
- `{{TRIAGE_COVERAGE}}` - Percentage (e.g., "25.7")
- `{{TRIAGE_COVERAGE_CLASS}}` - CSS class (good/warning/poor)
- `{{TRIAGE_COVERAGE_GRADE}}` - Grade text with emoji
- `{{TRIAGE_COVERAGE_GRADE_CLASS}}` - Grade CSS class
- `{{TOTAL_REGRESSIONS}}` - Total regression count
- `{{TRIAGED_REGRESSIONS}}` - Triaged count
- `{{UNTRIAGED_REGRESSIONS}}` - Untriaged count

### Triage Timeliness Metrics
- `{{TRIAGE_TIME_AVG}}` - Average hours to triage
- `{{TRIAGE_TIME_AVG_DAYS}}` - Average days to triage
- `{{TRIAGE_TIME_MAX}}` - Maximum hours to triage
- `{{TRIAGE_TIME_MAX_DAYS}}` - Maximum days to triage
- `{{TRIAGE_TIME_CLASS}}` - CSS class
- `{{TRIAGE_TIME_GRADE}}` - Grade text
- `{{TRIAGE_TIME_GRADE_CLASS}}` - Grade CSS class

### Resolution Speed Metrics
- `{{RESOLUTION_TIME_AVG}}` - Average hours to close
- `{{RESOLUTION_TIME_AVG_DAYS}}` - Average days to close
- `{{RESOLUTION_TIME_MAX}}` - Maximum hours to close
- `{{RESOLUTION_TIME_MAX_DAYS}}` - Maximum days to close
- `{{RESOLUTION_TIME_CLASS}}` - CSS class
- `{{RESOLUTION_TIME_GRADE}}` - Grade text
- `{{RESOLUTION_TIME_GRADE_CLASS}}` - Grade CSS class

### Open/Closed Breakdown
- `{{OPEN_REGRESSIONS}}` - Open regression count
- `{{OPEN_TRIAGE_PERCENTAGE}}` - Open triage percentage
- `{{CLOSED_REGRESSIONS}}` - Closed regression count
- `{{CLOSED_TRIAGE_PERCENTAGE}}` - Closed triage percentage
- `{{OPEN_AGE_AVG}}` - Average age of open regressions (hours)
- `{{OPEN_AGE_AVG_DAYS}}` - Average age of open regressions (days)

### Dynamic Content
- `{{COMPONENT_ROWS}}` - HTML table rows for all components
- `{{ATTENTION_SECTIONS}}` - Alert boxes for critical issues
- `{{INSIGHTS}}` - List items for key insights
- `{{RECOMMENDATIONS}}` - List items for recommendations
## Usage with Python Script

### Using data files:
```bash
python3 generate_html_report.py \
  --release 4.20 \
  --data regression_data.json \
  --dates release_dates.json \
  --output report.html
```

### Using stdin:
```bash
cat regression_data.json | python3 generate_html_report.py \
  --release 4.20 \
  --dates release_dates.json \
  --output report.html
```
## Manual Template Usage (for Claude Code)

When generating reports directly in Claude Code without the Python script:

1. Read the template file
2. Replace all `{{VARIABLE}}` placeholders with actual values
3. Generate component rows dynamically
4. Build attention sections based on the data
5. Write the final HTML to `.work/component-health-{release}/report.html`
6. Open with `open` command (macOS) or equivalent
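The placeholder substitution in steps 1-2 can be sketched as a simple string replacement (the template snippet and placeholder names here are illustrative, not the full template):

```python
def render_template(template_text, values):
    """Replace each {{KEY}} placeholder with the corresponding value."""
    for key, value in values.items():
        template_text = template_text.replace("{{" + key + "}}", str(value))
    return template_text

# Minimal example template; the real template lives in report_template.html.
template = "<h1>Release {{RELEASE}}</h1><p>{{TOTAL_REGRESSIONS}} regressions</p>"
html = render_template(template, {"RELEASE": "4.17", "TOTAL_REGRESSIONS": 62})
```

Any placeholder not present in `values` is left untouched, which makes missing data easy to spot in the rendered report.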
## Grading Criteria

### Triage Coverage

- **Excellent (✅)**: 90-100%
- **Good (✅)**: 70-89%
- **Needs Improvement (⚠️)**: 50-69%
- **Poor (❌)**: <50%

### Triage Timeliness

- **Excellent (✅)**: <24 hours
- **Good (⚠️)**: 24-72 hours
- **Needs Improvement (⚠️)**: 72-168 hours (1 week)
- **Poor (❌)**: >168 hours

### Resolution Speed

- **Excellent (✅)**: <168 hours (1 week)
- **Good (⚠️)**: 168-336 hours (1-2 weeks)
- **Needs Improvement (⚠️)**: 336-720 hours (2-4 weeks)
- **Poor (❌)**: >720 hours (4+ weeks)
## Features

- **Interactive Filtering**: Search components by name and filter by health grade
- **Responsive Design**: Works on desktop and mobile devices
- **Visual Indicators**: Color-coded metrics (red/yellow/green)
- **Hover Effects**: Enhanced UX with hover states
- **Alert Sections**: Automatically highlights critical issues
- **Auto-generated Content**: Component rows and alerts generated from data
## Customization

To customize the report appearance:

1. Edit `report_template.html` - Modify CSS in the `<style>` section
2. Update color schemes by changing gradient values
3. Adjust thresholds in the grading logic
4. Add new sections by modifying the template structure
796
skills/analyze-regressions/SKILL.md
Normal file
@@ -0,0 +1,796 @@
---
name: Analyze Regressions
description: Grade component health based on regression triage metrics for OpenShift releases
---

# Analyze Regressions
This skill analyzes and grades component health for OpenShift releases based on regression management metrics. It evaluates how well components are managing their test regressions by analyzing triage coverage, triage timeliness, and resolution speed.

## When to Use This Skill

Use this skill when you need to:

- Grade component health for a specific OpenShift release
- Identify components that need help with regression handling
- Track triage and resolution efficiency across releases
- Generate component quality scorecards
- Produce health reports (text or HTML) for stakeholders

**Important Note**: Grading is subjective and not meant to be a critique of team performance. It is intended to help identify where help is needed and to track progress as we try to improve our regression response rates.
## Prerequisites

1. **Python 3 Installation**

   - Check if installed: `which python3`
   - Python 3.6 or later is required
   - Comes pre-installed on most systems

2. **Network Access**

   - The scripts require network access to reach the component health API and release dates API
   - Ensure you can make HTTPS requests

3. **Required Scripts**

   - `plugins/component-health/skills/get-release-dates/get_release_dates.py`
   - `plugins/component-health/skills/list-regressions/list_regressions.py`
   - `plugins/component-health/skills/analyze-regressions/generate_html_report.py` (for HTML reports)
   - `plugins/component-health/skills/analyze-regressions/report_template.html` (for HTML reports)
## Implementation Steps

### Step 1: Parse Arguments

Extract the release version and optional component filter from the command arguments:

- **Release format**: "X.Y" (e.g., "4.17", "4.21")
- **Components** (optional): List of component names to filter by

**Example argument parsing**:

```
/component-health:analyze-regressions 4.17
/component-health:analyze-regressions 4.21 --components Monitoring etcd
```
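A minimal sketch of this parsing with `argparse`, assuming the slash-command arguments arrive as a plain argument list (the helper name and error messages are illustrative):

```python
import argparse
import re

def parse_args(argv):
    """Parse <release> [--components ...] and validate the release format."""
    parser = argparse.ArgumentParser(prog="component-health:analyze-regressions")
    parser.add_argument("release", help='Release in "X.Y" form, e.g. "4.17"')
    parser.add_argument("--components", nargs="*", default=None,
                        help="Optional component names to filter by")
    args = parser.parse_args(argv)
    if not re.fullmatch(r"\d+\.\d+", args.release):
        parser.error(f"invalid release format: {args.release!r} (expected X.Y)")
    return args

args = parse_args(["4.21", "--components", "Monitoring", "etcd"])
```

Rejecting forms like "v4.17" or "4.17.0" up front avoids the empty-result failure mode described under Error Handling below.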
### Step 2: Fetch Release Dates

Run the `get_release_dates.py` script to determine the development window for the release:

```bash
python3 plugins/component-health/skills/get-release-dates/get_release_dates.py \
  --release 4.17
```

**Expected output** (JSON on stdout):

```json
{
  "release": "4.17",
  "development_start": "2024-05-17T00:00:00Z",
  "feature_freeze": "2024-08-26T00:00:00Z",
  "code_freeze": "2024-09-30T00:00:00Z",
  "ga": "2024-10-29T00:00:00Z"
}
```

**Processing steps**:

1. Parse the JSON output
2. Extract the `development_start` date - convert to YYYY-MM-DD format
3. Extract the `ga` date - convert to YYYY-MM-DD format (may be null for in-development releases)
4. Handle null dates appropriately:
   - `development_start`: Almost always present; if null, omit the `--start` parameter
   - `ga`: Will be null for in-development releases; if null, omit the `--end` parameter

**Date conversion example**:

```
"2024-05-17T00:00:00Z" → "2024-05-17"
null → do not use this parameter
```
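The conversion in steps 2-4 can be sketched as follows; a string slice is enough for the `YYYY-MM-DDTHH:MM:SSZ` timestamps shown above:

```python
def to_date(timestamp):
    """Return YYYY-MM-DD from an ISO timestamp, or None for a null value."""
    if timestamp is None:
        return None
    return timestamp[:10]  # "2024-05-17T00:00:00Z" -> "2024-05-17"

dates = {"development_start": "2024-05-17T00:00:00Z", "ga": None}
start = to_date(dates["development_start"])  # use --start
end = to_date(dates["ga"])                   # None -> omit --end
```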
### Step 3: Execute List Regressions Script

Run the `list_regressions.py` script with the appropriate arguments:

```bash
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.17 \
  --start 2024-05-17 \
  --end 2024-10-29 \
  --short
```

**Parameter rules**:

- `--release`: Always required (from Step 1)
- `--components`: Optional, only if specified by user (from Step 1)
- `--start`: Use `development_start` date from Step 2 (if not null)
  - **Always applied** for both GA'd and in-development releases
  - Excludes regressions closed before development started (not relevant to this release)
- `--end`: Use `ga` date from Step 2 (only if not null)
  - **Only applied for GA'd releases** (when GA date is not null)
  - Excludes regressions opened after GA (post-release regressions, often not monitored/triaged)
  - **Not applied for in-development releases** (when GA date is null)
- `--short`: **Always include** this flag
  - Excludes regression data arrays from the response
  - Only includes summary statistics
  - Prevents truncation problems with large datasets

**Example for GA'd release** (4.17):

```bash
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.17 \
  --start 2024-05-17 \
  --end 2024-10-29 \
  --short
```

**Example for in-development release** (4.21 with null GA):

```bash
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.21 \
  --start 2025-09-02 \
  --short
```

**Example with component filter**:

```bash
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.21 \
  --components Monitoring etcd \
  --start 2025-09-02 \
  --short
```
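The parameter rules above can be captured in a small command builder (a sketch; it assumes the release, dates, and components resolved in Steps 1-2):

```python
def build_command(release, start=None, end=None, components=None):
    """Assemble the list_regressions.py invocation per the parameter rules."""
    cmd = ["python3",
           "plugins/component-health/skills/list-regressions/list_regressions.py",
           "--release", release]
    if components:
        cmd += ["--components", *components]
    if start:                  # always applied when development_start is known
        cmd += ["--start", start]
    if end:                    # only for GA'd releases (ga date not null)
        cmd += ["--end", end]
    cmd.append("--short")      # always include
    return cmd

cmd = build_command("4.21", start="2025-09-02", components=["Monitoring", "etcd"])
```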
### Step 4: Parse Output Structure

The script outputs JSON to stdout with the following structure:

```json
{
  "summary": {
    "total": 62,
    "triaged": 59,
    "triage_percentage": 95.2,
    "filtered_suspected_infra_regressions": 8,
    "time_to_triage_hrs_avg": 68,
    "time_to_triage_hrs_max": 240,
    "time_to_close_hrs_avg": 168,
    "time_to_close_hrs_max": 480,
    "open": {
      "total": 2,
      "triaged": 1,
      "triage_percentage": 50.0,
      "time_to_triage_hrs_avg": 48,
      "time_to_triage_hrs_max": 48,
      "open_hrs_avg": 120,
      "open_hrs_max": 200
    },
    "closed": {
      "total": 60,
      "triaged": 58,
      "triage_percentage": 96.7,
      "time_to_triage_hrs_avg": 72,
      "time_to_triage_hrs_max": 240,
      "time_to_close_hrs_avg": 168,
      "time_to_close_hrs_max": 480,
      "time_triaged_closed_hrs_avg": 96,
      "time_triaged_closed_hrs_max": 240
    }
  },
  "components": {
    "ComponentName": {
      "summary": {
        "total": 15,
        "triaged": 13,
        "triage_percentage": 86.7,
        "filtered_suspected_infra_regressions": 0,
        "time_to_triage_hrs_avg": 68,
        "time_to_triage_hrs_max": 180,
        "time_to_close_hrs_avg": 156,
        "time_to_close_hrs_max": 360,
        "open": {
          "total": 1,
          "triaged": 0,
          "triage_percentage": 0.0,
          "time_to_triage_hrs_avg": null,
          "time_to_triage_hrs_max": null,
          "open_hrs_avg": 72,
          "open_hrs_max": 72
        },
        "closed": {
          "total": 14,
          "triaged": 13,
          "triage_percentage": 92.9,
          "time_to_triage_hrs_avg": 68,
          "time_to_triage_hrs_max": 180,
          "time_to_close_hrs_avg": 156,
          "time_to_close_hrs_max": 360,
          "time_triaged_closed_hrs_avg": 88,
          "time_triaged_closed_hrs_max": 180
        }
      }
    }
  }
}
```

**CRITICAL - Use Summary Counts**:

- **ALWAYS use `summary.total`, `summary.open.total`, `summary.closed.total`** for counts
- **ALWAYS use `components.*.summary.*`** for per-component counts
- Do NOT attempt to count regression arrays (they are excluded with the `--short` flag)
- This ensures accuracy even with large datasets

**Key Metrics to Extract**:

From the `summary` object:

- `summary.total` - Total regressions
- `summary.triaged` - Total triaged regressions
- `summary.triage_percentage` - **KEY HEALTH METRIC**: Percentage triaged
- `summary.filtered_suspected_infra_regressions` - Count of filtered infrastructure regressions
- `summary.time_to_triage_hrs_avg` - **KEY HEALTH METRIC**: Average hours to triage
- `summary.time_to_triage_hrs_max` - Maximum hours to triage
- `summary.time_to_close_hrs_avg` - **KEY HEALTH METRIC**: Average hours to close
- `summary.time_to_close_hrs_max` - Maximum hours to close
- `summary.open.total` - Open regressions count
- `summary.open.triaged` - Open triaged count
- `summary.open.triage_percentage` - Open triage percentage
- `summary.closed.total` - Closed regressions count
- `summary.closed.triaged` - Closed triaged count
- `summary.closed.triage_percentage` - Closed triage percentage

From the `components` object:

- Same fields as `summary`, but per-component
- Use `components.*.summary.*` for all per-component statistics
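Extracting the key metrics from that structure is straightforward with `json.loads`; a sketch against a trimmed-down sample of the output above:

```python
import json

raw = ('{"summary": {"total": 62, "triaged": 59, "triage_percentage": 95.2,'
       ' "open": {"total": 2, "triaged": 1}}, "components": {}}')
data = json.loads(raw)

summary = data["summary"]
total = summary["total"]  # always use the summary counts, never array lengths
open_untriaged = summary["open"]["total"] - summary["open"]["triaged"]
```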
### Step 5: Calculate Health Grades

**IMPORTANT - Closed Regression Triage**:

- **DO NOT recommend retroactively triaging closed regressions** - the tooling does not support this
- When identifying untriaged regressions that need attention, **only consider open regressions**: `summary.open.total - summary.open.triaged`
- Closed regression triage percentages are provided for historical analysis only, not as actionable items

#### Overall Health Grade

Calculate grades based on three key metrics:

**1. Triage Coverage** (`summary.triage_percentage`):

- 90-100%: Excellent ✅
- 70-89%: Good ⚠️
- 50-69%: Needs Improvement ⚠️
- <50%: Poor ❌

**2. Triage Timeliness** (`summary.time_to_triage_hrs_avg`):

- <24 hours: Excellent ✅
- 24-72 hours: Good ⚠️
- 72-168 hours (1 week): Needs Improvement ⚠️
- >168 hours: Poor ❌

**3. Resolution Speed** (`summary.time_to_close_hrs_avg`):

- <168 hours (1 week): Excellent ✅
- 168-336 hours (1-2 weeks): Good ⚠️
- 336-720 hours (2-4 weeks): Needs Improvement ⚠️
- >720 hours (4+ weeks): Poor ❌

#### Per-Component Health Grades

For each component in `components`:

1. Calculate the same three grades using `components.*.summary.*` fields
2. Rank components from best to worst health
3. Highlight components needing attention:
   - Low triage coverage (<50%)
   - Slow triage response (>72 hours average)
   - Slow resolution time (>336 hours / 2 weeks average)
   - High open regression counts
   - High overall regression counts
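Step 3 above can be sketched as a per-component check that only flags actionable (open, untriaged) regressions, per the closed-regression rule (the message wording is illustrative):

```python
def needs_attention(summary):
    """Return a list of attention-worthy issues for one component summary."""
    issues = []
    open_untriaged = summary["open"]["total"] - summary["open"]["triaged"]
    if open_untriaged > 0:  # only open regressions are actionable
        issues.append(f"{open_untriaged} open untriaged regression(s) (needs triage)")
    if summary["triage_percentage"] < 50:
        issues.append(f"Low triage coverage: {summary['triage_percentage']}%")
    avg = summary.get("time_to_triage_hrs_avg")
    if avg is not None and avg > 72:
        issues.append(f"Slow triage response: {avg} hours average")
    return issues

issues = needs_attention({
    "open": {"total": 1, "triaged": 0},
    "triage_percentage": 86.7,
    "time_to_triage_hrs_avg": 68,
})
```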
### Step 6: Display Text Report

Present a well-formatted text report with:

#### Overall Health Grade Section

Display overall statistics from `summary`:

```
=== Overall Health Grade for Release 4.17 ===
Development Window: 2024-05-17 to 2024-10-29 (GA'd release)

Total Regressions: 62
Filtered Infrastructure Regressions: 8
Triaged: 59 (95.2%)
Open: 2 (50.0% triaged)
Closed: 60 (96.7% triaged)

Triage Coverage: ✅ Excellent (95.2%)
Triage Timeliness: ⚠️ Good (68 hours average, 240 hours max)
Resolution Speed: ⚠️ Good (168 hours average, 480 hours max)
```

**Important**: If the GA date is null (in-development release), note:

```
Development Window: 2025-09-02 onwards (In Development)
```

#### Per-Component Health Scorecard

Display a ranked table from `components.*.summary`:

```
=== Component Health Scorecard ===

| Component       | Triage Coverage | Triage Time | Resolution Time | Open | Grade |
|-----------------|-----------------|-------------|-----------------|------|-------|
| kube-apiserver  | 100.0%          | 58 hrs      | 144 hrs         | 1    | ✅    |
| etcd            | 95.0%           | 84 hrs      | 192 hrs         | 0    | ✅    |
| Monitoring      | 86.7%           | 68 hrs      | 156 hrs         | 1    | ⚠️    |
```

#### Components Needing Attention

Highlight specific components with issues:

```
=== Components Needing Attention ===

Monitoring:
- 1 open untriaged regression (needs triage)
- Triage coverage: 86.7% (below 90%)

Example-Component:
- 5 open untriaged regressions (needs triage)
- Slow triage response: 120 hours average
- High open count: 5 open regressions
```

**CRITICAL**: When listing untriaged regressions that need action:

- **Only list OPEN untriaged regressions** - these are actionable
- **Do NOT recommend triaging closed regressions** - the tooling does not support retroactive triage
- Calculate the actionable untriaged count as: `components.*.summary.open.total - components.*.summary.open.triaged`
### Step 7: Offer HTML Report Generation

After displaying the text report, ask the user if they want an interactive HTML report:

```
Would you like me to generate an interactive HTML report? (yes/no)
```

If the user responds affirmatively:

#### Step 7a: Prepare Data for HTML Report

The HTML report requires data in a specific structure. Transform the JSON data:

```python
# Prepare component data for the HTML template.
# 'components' is the parsed "components" object from Step 4.
component_data = []
for component_name, component_obj in components.items():
    summary = component_obj['summary']
    component_data.append({
        'name': component_name,
        'total': summary['total'],
        'open': summary['open']['total'],
        'closed': summary['closed']['total'],
        'triaged': summary['triaged'],
        'triage_percentage': summary['triage_percentage'],
        'time_to_triage_hrs_avg': summary.get('time_to_triage_hrs_avg'),
        'time_to_close_hrs_avg': summary.get('time_to_close_hrs_avg'),
        'health_grade': calculate_health_grade(summary)  # combined grade
    })
```
#### Step 7b: Generate HTML Report

Use the `generate_html_report.py` script (or inline Python code):

```bash
python3 plugins/component-health/skills/analyze-regressions/generate_html_report.py \
  --release 4.17 \
  --data regression_data.json \
  --output .work/component-health-4.17/report.html
```

Or use inline Python with the template:

```python
import json
import os
from datetime import datetime

# Load template
with open('plugins/component-health/skills/analyze-regressions/report_template.html', 'r') as f:
    template = f.read()

# Replace placeholders
template = template.replace('{{RELEASE}}', '4.17')
template = template.replace('{{GENERATED_DATE}}', datetime.now().isoformat())
template = template.replace('{{SUMMARY_DATA}}', json.dumps(summary))
template = template.replace('{{COMPONENT_DATA}}', json.dumps(component_data))

# Write output
output_path = '.work/component-health-4.17/report.html'
os.makedirs(os.path.dirname(output_path), exist_ok=True)
with open(output_path, 'w') as f:
    f.write(template)
```
#### Step 7c: Open the Report

Open the HTML report in the user's default browser:

**macOS**:

```bash
open .work/component-health-4.17/report.html
```

**Linux**:

```bash
xdg-open .work/component-health-4.17/report.html
```

**Windows**:

```bash
start .work/component-health-4.17/report.html
```

Display the file path to the user:

```
HTML report generated: .work/component-health-4.17/report.html
Opening in your default browser...
```
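As a platform-neutral alternative to the per-OS commands above, Python's standard `webbrowser` module can open the file (it silently returns `False` when no browser is available, e.g. in headless environments):

```python
import webbrowser
from pathlib import Path

report = Path(".work/component-health-4.17/report.html").resolve()
opened = webbrowser.open(report.as_uri())  # file:///.../report.html
```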
## Error Handling

### Common Errors

1. **Network Errors**

   - **Symptom**: `URLError` or connection timeout
   - **Solution**: Check network connectivity and firewall rules
   - **Retry**: Both scripts have 30-second timeouts

2. **Invalid Release Format**

   - **Symptom**: Empty results or an error response
   - **Solution**: Verify the release format (e.g., "4.17", not "v4.17" or "4.17.0")

3. **Release Dates Not Found**

   - **Symptom**: `get_release_dates.py` returns an error
   - **Solution**: Verify the release exists in the system; it may be too old or not yet created
   - **Fallback**: Proceed without date filtering (omit the `--start` and `--end` parameters)

4. **No Regressions Found**

   - **Symptom**: Empty components object
   - **Solution**: Verify the release has regression data; it may be too early in development
   - **Action**: Inform the user that no regressions exist yet for this release

5. **Component Filter No Matches**

   - **Symptom**: Empty components object after filtering
   - **Solution**: Check component name spelling; component names are case-insensitive
   - **Action**: List available components from an unfiltered query

6. **HTML Template Not Found**

   - **Symptom**: `FileNotFoundError` when generating the HTML report
   - **Solution**: Verify the template exists at `plugins/component-health/skills/analyze-regressions/report_template.html`
   - **Fallback**: Offer the text report only
### Debugging

Enable verbose output by examining stderr:

```bash
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.17 \
  --short 2>&1 | tee debug.log
```

Diagnostic messages include:

- URL being queried
- Number of regressions fetched
- Number after filtering
- Number of suspected infrastructure regressions filtered
## Output Format

### Text Report Structure

The text report should include:

1. **Header**

   - Release version
   - Development window dates (start and end/GA)
   - Release status (GA'd or In Development)

2. **Overall Health Grade**

   - Total regressions
   - Filtered infrastructure regressions count
   - Open/closed breakdown
   - Triage coverage score with grade
   - Triage timeliness score with grade
   - Resolution speed score with grade

3. **Component Health Scorecard**

   - Ranked table of all components
   - Key metrics per component
   - Health grade per component

4. **Components Needing Attention**

   - List of components with specific issues
   - Actionable recommendations (only for open untriaged regressions)
   - Context for each issue

5. **Footer**

   - Link to Sippy dashboard (if applicable)
   - Timestamp of report generation

### HTML Report Features

The HTML report should include:

- **Interactive table** with sorting and filtering
- **Visual indicators** for health grades (colors, icons)
- **Charts/graphs** showing:
  - Triage coverage by component
  - Time to triage distribution
  - Open vs closed breakdown
- **Detailed metrics** on hover or click
- **Export functionality** (CSV, PDF)
- **Responsive design** for mobile viewing
## Examples

### Example 1: Grade Overall Release Health

```
/component-health:analyze-regressions 4.17
```

**Execution flow**:

1. Fetch release dates for 4.17
2. Run `list_regressions.py` with `--start` and `--end` (GA'd release)
3. Display overall health grade
4. Display per-component scorecard
5. Highlight components needing attention
6. Offer HTML report generation

### Example 2: Grade Specific Components

```
/component-health:analyze-regressions 4.21 --components Monitoring etcd
```

**Execution flow**:

1. Fetch release dates for 4.21 (may have null GA)
2. Run `list_regressions.py` with `--components` and `--start` only (in-development)
3. Display health grades for Monitoring and etcd only
4. Compare the two components
5. Identify which needs more attention

### Example 3: Grade Single Component

```
/component-health:analyze-regressions 4.21 --components "kube-apiserver"
```

**Execution flow**:

1. Fetch release dates for 4.21
2. Run `list_regressions.py` with a single component filter
3. Display detailed health metrics for kube-apiserver
4. Show open vs closed breakdown
5. List the count of open untriaged regressions (if any)
## Health Grade Calculation Details

### Combined Health Grade

To calculate an overall health grade for a component, consider all three metrics:

```python
def calculate_health_grade(summary):
    """Calculate combined health grade based on three key metrics."""
    triage_coverage = summary['triage_percentage']
    triage_time = summary.get('time_to_triage_hrs_avg')
    resolution_time = summary.get('time_to_close_hrs_avg')

    # Score each metric (0-3)
    coverage_score = (
        3 if triage_coverage >= 90 else
        2 if triage_coverage >= 70 else
        1 if triage_coverage >= 50 else
        0
    )

    time_score = 3  # Default to excellent if no data
    if triage_time is not None:
        time_score = (
            3 if triage_time < 24 else
            2 if triage_time < 72 else
            1 if triage_time < 168 else
            0
        )

    resolution_score = 3  # Default to excellent if no data
    if resolution_time is not None:
        resolution_score = (
            3 if resolution_time < 168 else
            2 if resolution_time < 336 else
            1 if resolution_time < 720 else
            0
        )

    # Average the scores
    avg_score = (coverage_score + time_score + resolution_score) / 3

    # Return grade
    if avg_score >= 2.5:
        return "Excellent ✅"
    elif avg_score >= 1.5:
        return "Good ⚠️"
    elif avg_score >= 0.5:
        return "Needs Improvement ⚠️"
    else:
        return "Poor ❌"
```
### Prioritizing Components Needing Attention

Rank components by priority based on:

1. **High open untriaged count** (most urgent)

   - Calculate: `summary.open.total - summary.open.triaged`
   - Threshold: >3 open untriaged regressions

2. **Low triage coverage** (second priority)

   - Use: `summary.triage_percentage`
   - Threshold: <50%

3. **Slow triage response** (third priority)

   - Use: `summary.time_to_triage_hrs_avg`
   - Threshold: >72 hours

4. **High total regression count** (fourth priority)

   - Use: `summary.total`
   - Threshold: Component-relative (top quartile)
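This ordering can be sketched as a sort key over the per-component summaries (a sketch under the priorities above; the sample data is illustrative):

```python
def priority_key(summary):
    """Higher tuples sort first: open untriaged, low coverage, slow triage, volume."""
    open_untriaged = summary["open"]["total"] - summary["open"]["triaged"]
    triage_time = summary.get("time_to_triage_hrs_avg") or 0
    return (open_untriaged,
            -summary["triage_percentage"],  # lower coverage -> higher priority
            triage_time,
            summary["total"])

components = {
    "etcd":       {"open": {"total": 0, "triaged": 0}, "triage_percentage": 95.0,
                   "time_to_triage_hrs_avg": 84, "total": 20},
    "Monitoring": {"open": {"total": 1, "triaged": 0}, "triage_percentage": 86.7,
                   "time_to_triage_hrs_avg": 68, "total": 15},
}
ranked = sorted(components, key=lambda n: priority_key(components[n]), reverse=True)
# Monitoring ranks first because it has an open untriaged regression
```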
## Advanced Features

### Trend Analysis (Future Enhancement)

Compare metrics across releases:

```
/component-health:analyze-regressions 4.17 --compare 4.16
```

### Export to CSV

Generate a CSV report for spreadsheet analysis:

```
/component-health:analyze-regressions 4.17 --export-csv
```

### Custom Thresholds

Allow users to customize health grade thresholds:

```
/component-health:analyze-regressions 4.17 --triage-threshold 80
```
## Integration with Other Commands

This skill can be used by:

- `/component-health:analyze-regressions` command (primary)
- Quality metrics dashboards
- Release readiness reports
- Team performance tracking tools

## Related Skills

- `get-release-dates` - Fetches release development window dates
- `list-regressions` - Fetches raw regression data
- `prow-job:analyze-test-failure` - Analyzes individual test failures
## Notes

- All scripts use Python's standard library only (no external dependencies)
- Output is cached in the `.work/` directory for performance
- Regression data is fetched in real time from the API
- HTML reports are standalone (no external dependencies; embedded CSS/JS)
- The `--short` flag is critical to prevent output truncation with large datasets
- Health grades are subjective and intended as guidance, not criticism
- Infrastructure regressions (closed within 96 hours on high-volume days) are automatically filtered
- Retroactive triage of closed regressions is not supported by the tooling
## Troubleshooting

### Issue: Report Shows 0 Regressions

**Possible causes**:

1. Release is too early in development
2. Date filtering excluded all regressions
3. Component filter didn't match any components

**Solutions**:

1. Check release dates with `get_release_dates.py`
2. Try without date filtering
3. List available components without a filter first

### Issue: Triage Percentages Seem Low

**Context**:

- Many teams are still ramping up regression triage practices
- Low percentages indicate opportunity for improvement, not failure
- Focus on the trend over time rather than absolute numbers

**Actions**:

- Identify specific untriaged open regressions that need attention
- Prioritize by regression severity and frequency
- Track improvement over subsequent releases

### Issue: HTML Report Not Opening

**Possible causes**:

1. Browser security restrictions on local files
2. Incorrect file path
3. Missing file permissions

**Solutions**:

1. Manually open the file from a file explorer
2. Verify the file was created at the expected path
3. Check file permissions: `ls -la .work/component-health-*/report.html`
## Summary
|
||||||
|
|
||||||
|
This skill provides comprehensive component health analysis by:
|
||||||
|
|
||||||
|
1. Fetching release development window dates
|
||||||
|
2. Retrieving regression data filtered to the development window
|
||||||
|
3. Calculating health grades based on triage metrics
|
||||||
|
4. Generating actionable reports (text and HTML)
|
||||||
|
5. Identifying components that need help
|
||||||
|
|
||||||
|
The key focus is on **actionable insights** - particularly identifying open untriaged regressions that need immediate attention, while avoiding recommendations for closed regressions which cannot be retroactively triaged.
|
||||||
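The per-component health grades described above reduce to a small threshold function over the triage percentage. This sketch mirrors the cutoffs used by this plugin's `generate_html_report.py`; the function name here is illustrative.

```python
def component_grade(triage_pct):
    """Map a component's triage percentage to (CSS class, grade text),
    using the same cutoffs as the plugin's get_component_grade."""
    if triage_pct >= 70:
        return 'good', '✅ GOOD'
    elif triage_pct >= 50:
        return 'good', '⚠️ GOOD'
    elif triage_pct >= 25:
        return 'warning', '⚠️ NEEDS IMPROVEMENT'
    else:
        return 'poor', '❌ POOR'

# Example: a component with 30% of its regressions triaged
print(component_grade(30))  # ('warning', '⚠️ NEEDS IMPROVEMENT')
```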
315
skills/analyze-regressions/generate_html_report.py
Normal file
@@ -0,0 +1,315 @@
#!/usr/bin/env python3
"""
Generate HTML component health report from JSON data.

This script reads regression data and generates an interactive HTML report
using the template. It handles all the data processing and HTML generation.

Usage:
    python3 generate_html_report.py --release 4.20 --data data.json --output report.html

Or pipe JSON data directly:
    cat data.json | python3 generate_html_report.py --release 4.20 --output report.html
"""

import json
import sys
import argparse
from pathlib import Path
from datetime import datetime


def format_hours_to_days(hours):
    """Convert hours to days with one decimal place."""
    if hours is None:
        return "N/A"
    return f"{hours / 24:.1f}"


def get_grade_class(value, thresholds, reverse=False):
    """
    Get CSS class based on value and thresholds.

    Args:
        value: The value to grade
        thresholds: Dict with 'excellent', 'good', 'warning' keys
        reverse: If True, lower is better (for time metrics)
    """
    if value is None:
        return "poor"

    if reverse:
        if value < thresholds['excellent']:
            return "good"
        elif value < thresholds['good']:
            return "good"
        elif value < thresholds['warning']:
            return "warning"
        else:
            return "poor"
    else:
        if value >= thresholds['excellent']:
            return "good"
        elif value >= thresholds['good']:
            return "good"
        elif value >= thresholds['warning']:
            return "warning"
        else:
            return "poor"


def get_grade_text(value, thresholds, reverse=False):
    """Get grade text (emoji + text) based on value.

    thresholds is accepted for signature compatibility with get_grade_class,
    but the text grades use the fixed coverage/timeliness cutoffs below.
    """
    if not reverse:  # Coverage
        if value is None:
            return '❌ POOR'
        elif value >= 90:
            return '✅ EXCELLENT'
        elif value >= 70:
            return '✅ GOOD'
        elif value >= 50:
            return '⚠️ NEEDS IMPROVEMENT'
        else:
            return '❌ POOR'
    else:  # Timeliness
        if value is None:
            return 'N/A'
        elif value < 24:
            return '✅ EXCELLENT'
        elif value < 72:
            return '⚠️ GOOD'
        elif value < 168:
            return '⚠️ NEEDS IMPROVEMENT'
        else:
            return '❌ POOR'


def get_component_grade(component_data):
    """Calculate overall grade for a component."""
    triage_pct = component_data['summary']['triage_percentage']

    if triage_pct >= 70:
        return 'good', '✅ GOOD'
    elif triage_pct >= 50:
        return 'good', '⚠️ GOOD'
    elif triage_pct >= 25:
        return 'warning', '⚠️ NEEDS IMPROVEMENT'
    else:
        return 'poor', '❌ POOR'


def format_time_value(value):
    """Format time value, handling None."""
    if value is None:
        return '-'
    return f"{int(value)} hrs"


def format_percentage_value(value):
    """Format percentage value with grade class."""
    if value is None or value == 0:
        return '<span class="grade-poor">0.0%</span>'
    elif value >= 90:
        return f'<span class="grade-excellent">{value:.1f}%</span>'
    elif value >= 70:
        return f'<span class="grade-good">{value:.1f}%</span>'
    elif value >= 50:
        return f'<span class="grade-warning">{value:.1f}%</span>'
    else:
        return f'<span class="grade-poor">{value:.1f}%</span>'


def generate_component_row(name, data):
    """Generate a table row for a component."""
    summary = data['summary']
    grade_class, grade_text = get_component_grade(data)

    return f'''<tr data-grade="{grade_class}">
        <td class="component-name">{name}</td>
        <td>{summary['total']}</td>
        <td>{format_percentage_value(summary['triage_percentage'])}</td>
        <td>{format_time_value(summary['time_to_triage_hrs_avg'])}</td>
        <td>{format_time_value(summary['time_to_close_hrs_avg'])}</td>
        <td>{summary['open']['total']}</td>
        <td class="grade-{grade_class}">{grade_text}</td>
    </tr>'''


def generate_html_report(release, data, release_dates, output_path):
    """Generate HTML report from data."""
    template_path = Path(__file__).parent / "report_template.html"

    with open(template_path, 'r') as f:
        template = f.read()

    # Extract summary data
    summary = data['summary']

    # Calculate derived values
    triage_coverage = summary['triage_percentage']
    triage_time_avg = summary['time_to_triage_hrs_avg'] or 0
    resolution_time_avg = summary['time_to_close_hrs_avg'] or 0

    # Determine release period
    dev_start = release_dates.get('development_start', 'Unknown')
    ga_date = release_dates.get('ga')

    if dev_start != 'Unknown':
        dev_start = dev_start.split('T')[0]

    if ga_date:
        ga_date = ga_date.split('T')[0]
        release_period = f"Development Period: {dev_start} - {ga_date} (GA)"
        date_range = f"{dev_start} (Development Start) to {ga_date} (GA)"
    else:
        release_period = f"Development Period: {dev_start} - Present (In Development)"
        date_range = f"{dev_start} (Development Start) - Present"

    # Calculate grades
    triage_coverage_class = get_grade_class(triage_coverage,
                                            {'excellent': 90, 'good': 70, 'warning': 50})
    triage_time_class = get_grade_class(triage_time_avg,
                                        {'excellent': 24, 'good': 72, 'warning': 168},
                                        reverse=True)
    resolution_time_class = get_grade_class(resolution_time_avg,
                                            {'excellent': 168, 'good': 336, 'warning': 720},
                                            reverse=True)

    # Build component rows (sorted by health)
    components = data.get('components', {})
    component_rows = []

    for name, comp_data in sorted(components.items(),
                                  key=lambda x: x[1]['summary']['triage_percentage'],
                                  reverse=True):
        component_rows.append(generate_component_row(name, comp_data))

    # Generate attention sections
    attention_sections = []

    # Zero triage components
    zero_triage = [name for name, comp in components.items()
                   if comp['summary']['triage_percentage'] == 0]
    if zero_triage:
        items = '\n'.join([f'<li><strong>{name}</strong> - {components[name]["summary"]["total"]} regressions</li>'
                           for name in zero_triage])
        attention_sections.append(f'''<div class="alert-box">
            <h3>🚨 Zero Triage Coverage (0% triaged)</h3>
            <ul>
                {items}
            </ul>
        </div>''')

    # Low triage components
    low_triage = [(name, comp) for name, comp in components.items()
                  if 0 < comp['summary']['triage_percentage'] < 25]
    if low_triage:
        items = '\n'.join([f'<li><strong>{name}</strong> - {comp["summary"]["total"]} regressions, only {comp["summary"]["triage_percentage"]:.1f}% triaged</li>'
                           for name, comp in low_triage])
        attention_sections.append(f'''<div class="alert-box">
            <h3>⚠️ Low Triage Coverage (<25%)</h3>
            <ul>
                {items}
            </ul>
        </div>''')

    # High volume components
    high_volume = [(name, comp) for name, comp in components.items()
                   if comp['summary']['total'] >= 20]
    if high_volume:
        high_volume.sort(key=lambda x: x[1]['summary']['total'], reverse=True)
        items = '\n'.join([f'<li><strong>{name}</strong> - {comp["summary"]["total"]} regressions ({comp["summary"]["triage_percentage"]:.1f}% triaged)</li>'
                           for name, comp in high_volume[:5]])
        attention_sections.append(f'''<div class="alert-box">
            <h3>📊 High Regression Volume (Top 5)</h3>
            <ul>
                {items}
            </ul>
        </div>''')

    # Substitutions
    substitutions = {
        'RELEASE': release,
        'RELEASE_PERIOD': release_period,
        'DATE_RANGE': date_range,
        'TRIAGE_COVERAGE': f"{triage_coverage:.1f}",
        'TRIAGE_COVERAGE_CLASS': triage_coverage_class,
        'TRIAGE_COVERAGE_GRADE': get_grade_text(triage_coverage, {}, False),
        'TRIAGE_COVERAGE_GRADE_CLASS': f"grade-{triage_coverage_class}",
        'TOTAL_REGRESSIONS': str(summary['total']),
        'TRIAGED_REGRESSIONS': str(summary['triaged']),
        'UNTRIAGED_REGRESSIONS': str(summary['total'] - summary['triaged']),
        'TRIAGE_TIME_AVG': str(int(triage_time_avg)) if triage_time_avg else 'N/A',
        'TRIAGE_TIME_AVG_DAYS': format_hours_to_days(triage_time_avg),
        'TRIAGE_TIME_MAX': str(int(summary['time_to_triage_hrs_max'])) if summary['time_to_triage_hrs_max'] else 'N/A',
        'TRIAGE_TIME_MAX_DAYS': format_hours_to_days(summary['time_to_triage_hrs_max']),
        'TRIAGE_TIME_CLASS': triage_time_class,
        'TRIAGE_TIME_GRADE': get_grade_text(triage_time_avg, {}, True),
        'TRIAGE_TIME_GRADE_CLASS': f"grade-{triage_time_class}",
        'RESOLUTION_TIME_AVG': str(int(resolution_time_avg)) if resolution_time_avg else 'N/A',
        'RESOLUTION_TIME_AVG_DAYS': format_hours_to_days(resolution_time_avg),
        'RESOLUTION_TIME_MAX': str(int(summary['time_to_close_hrs_max'])) if summary['time_to_close_hrs_max'] else 'N/A',
        'RESOLUTION_TIME_MAX_DAYS': format_hours_to_days(summary['time_to_close_hrs_max']),
        'RESOLUTION_TIME_CLASS': resolution_time_class,
        'RESOLUTION_TIME_GRADE': get_grade_text(resolution_time_avg, {}, True),
        'RESOLUTION_TIME_GRADE_CLASS': f"grade-{resolution_time_class}",
        'OPEN_REGRESSIONS': str(summary['open']['total']),
        'OPEN_TRIAGE_PERCENTAGE': f"{summary['open']['triage_percentage']:.1f}",
        'CLOSED_REGRESSIONS': str(summary['closed']['total']),
        'CLOSED_TRIAGE_PERCENTAGE': f"{summary['closed']['triage_percentage']:.1f}",
        'OPEN_AGE_AVG': str(int(summary['open']['open_hrs_avg'])) if summary['open']['open_hrs_avg'] else 'N/A',
        'OPEN_AGE_AVG_DAYS': format_hours_to_days(summary['open']['open_hrs_avg']),
        'COMPONENT_ROWS': '\n'.join(component_rows),
        'ATTENTION_SECTIONS': '\n'.join(attention_sections),
        'INSIGHTS': '<li>Report generated automatically from regression data</li>',
        'RECOMMENDATIONS': '<li>Review components with zero triage coverage</li><li>Address high-volume components</li>',
        'GENERATED_DATE': datetime.now().strftime("%B %d, %Y"),
    }

    # Apply substitutions
    for key, value in substitutions.items():
        template = template.replace(f'{{{{{key}}}}}', value)

    # Write output
    with open(output_path, 'w') as f:
        f.write(template)

    print(f"HTML report generated: {output_path}", file=sys.stderr)


def main():
    parser = argparse.ArgumentParser(description='Generate HTML component health report')
    parser.add_argument('--release', required=True, help='Release version (e.g., 4.20)')
    parser.add_argument('--data', help='Path to JSON data file (or read from stdin)')
    parser.add_argument('--dates', help='Path to release dates JSON file')
    parser.add_argument('--output', required=True, help='Output HTML file path')

    args = parser.parse_args()

    # Read regression data
    if args.data:
        with open(args.data, 'r') as f:
            data = json.load(f)
    else:
        data = json.load(sys.stdin)

    # Read release dates
    release_dates = {}
    if args.dates:
        with open(args.dates, 'r') as f:
            release_dates = json.load(f)

    generate_html_report(args.release, data, release_dates, args.output)


if __name__ == '__main__':
    main()
486
skills/analyze-regressions/report_template.html
Normal file
@@ -0,0 +1,486 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Component Health Report - OpenShift {{RELEASE}}</title>
    <style>
        * {
            margin: 0;
            padding: 0;
            box-sizing: border-box;
        }

        body {
            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
            padding: 20px;
            line-height: 1.6;
        }

        .container {
            max-width: 1400px;
            margin: 0 auto;
            background: white;
            border-radius: 12px;
            box-shadow: 0 20px 60px rgba(0,0,0,0.3);
            overflow: hidden;
        }

        .header {
            background: linear-gradient(135deg, #2c3e50 0%, #34495e 100%);
            color: white;
            padding: 40px;
            text-align: center;
        }

        .header h1 {
            font-size: 2.5em;
            margin-bottom: 10px;
            font-weight: 600;
        }

        .header .subtitle {
            font-size: 1.1em;
            opacity: 0.9;
            margin-top: 10px;
        }

        .content {
            padding: 40px;
        }

        .section {
            margin-bottom: 50px;
        }

        .section h2 {
            color: #2c3e50;
            font-size: 1.8em;
            margin-bottom: 20px;
            padding-bottom: 10px;
            border-bottom: 3px solid #667eea;
        }

        .metrics-grid {
            display: grid;
            grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
            gap: 20px;
            margin-bottom: 30px;
        }

        .metric-card {
            background: #f8f9fa;
            border-radius: 8px;
            padding: 25px;
            border-left: 5px solid #667eea;
            transition: transform 0.2s, box-shadow 0.2s;
        }

        .metric-card:hover {
            transform: translateY(-2px);
            box-shadow: 0 4px 12px rgba(0,0,0,0.1);
        }

        .metric-card.poor {
            border-left-color: #e74c3c;
            background: #fee;
        }

        .metric-card.warning {
            border-left-color: #f39c12;
            background: #fff8e1;
        }

        .metric-card.good {
            border-left-color: #27ae60;
            background: #e8f5e9;
        }

        .metric-label {
            font-size: 0.9em;
            color: #666;
            text-transform: uppercase;
            letter-spacing: 0.5px;
            margin-bottom: 8px;
        }

        .metric-value {
            font-size: 2.5em;
            font-weight: 700;
            color: #2c3e50;
            margin-bottom: 5px;
        }

        .metric-assessment {
            font-size: 1.1em;
            font-weight: 600;
            margin-top: 10px;
        }

        .metric-details {
            font-size: 0.9em;
            color: #555;
            margin-top: 10px;
        }

        table {
            width: 100%;
            border-collapse: collapse;
            margin-top: 20px;
            background: white;
            border-radius: 8px;
            overflow: hidden;
            box-shadow: 0 2px 8px rgba(0,0,0,0.1);
        }

        thead {
            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
            color: white;
        }

        th {
            padding: 15px;
            text-align: left;
            font-weight: 600;
            font-size: 0.9em;
            text-transform: uppercase;
            letter-spacing: 0.5px;
        }

        td {
            padding: 12px 15px;
            border-bottom: 1px solid #ecf0f1;
        }

        tbody tr:hover {
            background: #f8f9fa;
        }

        tbody tr:last-child td {
            border-bottom: none;
        }

        .grade-excellent {
            color: #27ae60;
            font-weight: 600;
        }

        .grade-good {
            color: #3498db;
            font-weight: 600;
        }

        .grade-warning {
            color: #f39c12;
            font-weight: 600;
        }

        .grade-poor {
            color: #e74c3c;
            font-weight: 600;
        }

        .alert-box {
            background: #fee;
            border-left: 5px solid #e74c3c;
            padding: 20px;
            border-radius: 8px;
            margin-bottom: 20px;
        }

        .alert-box h3 {
            color: #c0392b;
            margin-bottom: 15px;
            font-size: 1.3em;
        }

        .alert-box ul {
            list-style-position: inside;
            color: #555;
        }

        .alert-box li {
            margin-bottom: 8px;
            padding-left: 10px;
        }

        .insight-box {
            background: #e8f5e9;
            border-left: 5px solid #27ae60;
            padding: 20px;
            border-radius: 8px;
            margin-bottom: 20px;
        }

        .insight-box h3 {
            color: #27ae60;
            margin-bottom: 15px;
            font-size: 1.3em;
        }

        .insight-box ul {
            list-style-position: inside;
            color: #555;
        }

        .insight-box li {
            margin-bottom: 8px;
            padding-left: 10px;
        }

        .recommendation-box {
            background: #fff8e1;
            border-left: 5px solid #f39c12;
            padding: 20px;
            border-radius: 8px;
            margin-bottom: 20px;
        }

        .recommendation-box h3 {
            color: #e67e22;
            margin-bottom: 15px;
            font-size: 1.3em;
        }

        .recommendation-box ol {
            list-style-position: inside;
            color: #555;
        }

        .recommendation-box li {
            margin-bottom: 10px;
            padding-left: 10px;
        }

        .stats-row {
            display: flex;
            gap: 15px;
            flex-wrap: wrap;
            margin-top: 15px;
        }

        .stat-pill {
            background: white;
            padding: 8px 15px;
            border-radius: 20px;
            font-size: 0.9em;
            border: 2px solid #ddd;
        }

        .stat-pill strong {
            color: #2c3e50;
        }

        .footer {
            background: #f8f9fa;
            padding: 20px 40px;
            text-align: center;
            color: #666;
            font-size: 0.9em;
            border-top: 1px solid #ddd;
        }

        .component-name {
            font-weight: 600;
            color: #2c3e50;
        }

        .filter-controls {
            margin: 20px 0;
            padding: 20px;
            background: #f8f9fa;
            border-radius: 8px;
        }

        .filter-controls label {
            margin-right: 15px;
            font-weight: 500;
        }

        .filter-controls input[type="text"] {
            padding: 8px 12px;
            border: 2px solid #ddd;
            border-radius: 4px;
            font-size: 1em;
            width: 300px;
        }

        .filter-controls select {
            padding: 8px 12px;
            border: 2px solid #ddd;
            border-radius: 4px;
            font-size: 1em;
            margin-left: 10px;
        }

        .hidden {
            display: none;
        }
    </style>
</head>
<body>
    <div class="container">
        <div class="header">
            <h1>Component Health Report</h1>
            <div class="subtitle">OpenShift {{RELEASE}} Release</div>
            <div class="subtitle">{{RELEASE_PERIOD}}</div>
        </div>

        <div class="content">
            <!-- Overall Health Section -->
            <div class="section">
                <h2>Overall Health Grade</h2>

                <div class="metrics-grid">
                    <div class="metric-card {{TRIAGE_COVERAGE_CLASS}}">
                        <div class="metric-label">Triage Coverage</div>
                        <div class="metric-value">{{TRIAGE_COVERAGE}}%</div>
                        <div class="metric-assessment {{TRIAGE_COVERAGE_GRADE_CLASS}}">{{TRIAGE_COVERAGE_GRADE}}</div>
                        <div class="metric-details">
                            <div>Total: {{TOTAL_REGRESSIONS}} regressions</div>
                            <div>Triaged: {{TRIAGED_REGRESSIONS}}</div>
                            <div>Untriaged: {{UNTRIAGED_REGRESSIONS}}</div>
                        </div>
                    </div>

                    <div class="metric-card {{TRIAGE_TIME_CLASS}}">
                        <div class="metric-label">Triage Timeliness</div>
                        <div class="metric-value">{{TRIAGE_TIME_AVG}} hrs</div>
                        <div class="metric-assessment {{TRIAGE_TIME_GRADE_CLASS}}">{{TRIAGE_TIME_GRADE}}</div>
                        <div class="metric-details">
                            <div>Average: {{TRIAGE_TIME_AVG_DAYS}} days</div>
                            <div>Maximum: {{TRIAGE_TIME_MAX}} hrs ({{TRIAGE_TIME_MAX_DAYS}} days)</div>
                            <div>Target: &lt;72 hours</div>
                        </div>
                    </div>

                    <div class="metric-card {{RESOLUTION_TIME_CLASS}}">
                        <div class="metric-label">Resolution Speed</div>
                        <div class="metric-value">{{RESOLUTION_TIME_AVG}} hrs</div>
                        <div class="metric-assessment {{RESOLUTION_TIME_GRADE_CLASS}}">{{RESOLUTION_TIME_GRADE}}</div>
                        <div class="metric-details">
                            <div>Average: {{RESOLUTION_TIME_AVG_DAYS}} days</div>
                            <div>Maximum: {{RESOLUTION_TIME_MAX}} hrs ({{RESOLUTION_TIME_MAX_DAYS}} days)</div>
                            <div>Target: 1-2 weeks</div>
                        </div>
                    </div>
                </div>

                <div class="stats-row">
                    <div class="stat-pill">
                        <strong>Open:</strong> {{OPEN_REGRESSIONS}} regressions ({{OPEN_TRIAGE_PERCENTAGE}}% triaged)
                    </div>
                    <div class="stat-pill">
                        <strong>Closed:</strong> {{CLOSED_REGRESSIONS}} regressions ({{CLOSED_TRIAGE_PERCENTAGE}}% triaged)
                    </div>
                    <div class="stat-pill">
                        <strong>Avg Age (Open):</strong> {{OPEN_AGE_AVG}} hours ({{OPEN_AGE_AVG_DAYS}} days)
                    </div>
                </div>
            </div>

            <!-- Component Scorecard -->
            <div class="section">
                <h2>Per-Component Health Scorecard</h2>

                <div class="filter-controls">
                    <label for="searchInput">Search Component:</label>
                    <input type="text" id="searchInput" placeholder="Type component name...">

                    <label for="gradeFilter">Filter by Grade:</label>
                    <select id="gradeFilter">
                        <option value="all">All Grades</option>
                        <option value="excellent">Excellent</option>
                        <option value="good">Good</option>
                        <option value="warning">Needs Improvement</option>
                        <option value="poor">Poor</option>
                    </select>
                </div>

                <table id="componentTable">
                    <thead>
                        <tr>
                            <th>Component</th>
                            <th>Total Regressions</th>
                            <th>Triage Coverage</th>
                            <th>Avg Triage Time</th>
                            <th>Avg Resolution Time</th>
                            <th>Open</th>
                            <th>Health Grade</th>
                        </tr>
                    </thead>
                    <tbody id="componentTableBody">
                        {{COMPONENT_ROWS}}
                    </tbody>
                </table>
            </div>

            <!-- Critical Attention Section -->
            <div class="section">
                <h2>Components Needing Critical Attention</h2>
                {{ATTENTION_SECTIONS}}
            </div>

            <!-- Key Insights -->
            <div class="section">
                <h2>Key Insights</h2>
                <div class="insight-box">
                    <h3>📈 Analysis Summary</h3>
                    <ul>
                        {{INSIGHTS}}
                    </ul>
                </div>
            </div>

            <!-- Recommendations -->
            <div class="section">
                <h2>Recommendations</h2>
                <div class="recommendation-box">
                    <h3>🎯 Action Items</h3>
                    <ol>
                        {{RECOMMENDATIONS}}
                    </ol>
                </div>
            </div>
        </div>

        <div class="footer">
            <strong>Data Source:</strong> Sippy Component Readiness API<br>
            <strong>Date Range:</strong> {{DATE_RANGE}}<br>
            <strong>Total Regressions Analyzed:</strong> {{TOTAL_REGRESSIONS}}<br>
            <strong>Generated:</strong> {{GENERATED_DATE}}
        </div>
    </div>

    <script>
        // Search functionality
        const searchInput = document.getElementById('searchInput');
        const gradeFilter = document.getElementById('gradeFilter');
        const tableBody = document.getElementById('componentTableBody');
        const rows = tableBody.getElementsByTagName('tr');

        function filterTable() {
            const searchTerm = searchInput.value.toLowerCase();
            const gradeValue = gradeFilter.value;

            for (let row of rows) {
                const componentName = row.querySelector('.component-name').textContent.toLowerCase();
                const grade = row.getAttribute('data-grade');

                const matchesSearch = componentName.includes(searchTerm);
                const matchesGrade = gradeValue === 'all' || grade === gradeValue;

                if (matchesSearch && matchesGrade) {
                    row.style.display = '';
                } else {
                    row.style.display = 'none';
                }
            }
        }

        searchInput.addEventListener('input', filterTable);
        gradeFilter.addEventListener('change', filterTable);
    </script>
</body>
</html>
135
skills/get-release-dates/README.md
Normal file
@@ -0,0 +1,135 @@
# Get Release Dates
|
||||||
|
|
||||||
|
Fetch OpenShift release dates and metadata from the Sippy API.
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
This skill retrieves release information for OpenShift releases, including:
|
||||||
|
|
||||||
|
- GA (General Availability) dates
|
||||||
|
- Development start dates
|
||||||
|
- Previous release in the sequence
|
||||||
|
- Release status (in development vs GA'd)
|
||||||
|
|
||||||
|
## Usage
|
||||||
|
|
||||||
|
```bash
|
||||||
|
python3 plugins/component-health/skills/get-release-dates/get_release_dates.py \
|
||||||
|
--release <release>
|
||||||
|
```
|
||||||
|
|
||||||
|
## Arguments
|
||||||
|
|
||||||
|
- `--release` (required): Release identifier (e.g., "4.21", "4.20", "4.17")
## Examples

### Get information for release 4.21

```bash
python3 plugins/component-health/skills/get-release-dates/get_release_dates.py \
  --release 4.21
```

### Get information for release 4.17

```bash
python3 plugins/component-health/skills/get-release-dates/get_release_dates.py \
  --release 4.17
```

## Output Format

### Successful Query (Release Found)

```json
{
  "release": "4.21",
  "found": true,
  "ga": "2026-02-17T00:00:00Z",
  "development_start": "2025-09-02T00:00:00Z",
  "previous_release": "4.20"
}
```

### Release Not Found

```json
{
  "release": "99.99",
  "found": false
}
```

Exit code: 1

## Output Fields

- `release`: The release identifier queried
- `found`: Boolean indicating if the release exists in Sippy
- `ga`: GA date. **Null means the release is still in development.**
- `development_start`: When development started
- `previous_release`: Previous release in the sequence

**Note**: If the `ga` field is `null`, the release is still under active development and has not reached General Availability yet.

## Prerequisites

- Python 3.6 or later
- Network access to `sippy.dptools.openshift.org`

## API Endpoint

The script queries: https://sippy.dptools.openshift.org/api/releases

## Use Cases

### Verify Release Exists

Before analyzing a release, verify it exists in Sippy:

```bash
python3 get_release_dates.py --release 4.21
# Check "found": true in output
```

### Get Release Timeline

Understand the development timeline:

```bash
python3 get_release_dates.py --release 4.17
# Check "development_start" and "ga" dates
```

### Determine Release Status

Check if a release is in development or has GA'd:

```bash
python3 get_release_dates.py --release 4.21
# If "ga" is null -> still in development
# If "ga" has a timestamp -> has reached GA
```
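The ga-is-null convention lends itself to a small helper. A minimal Python sketch — the sample payloads mirror the example outputs above, and the function name is illustrative, not part of the shipped script:

```python
import json

def release_status(payload: dict) -> str:
    # Classify a get_release_dates.py result using the ga-is-null convention
    if not payload.get("found"):
        return "not found"
    return "ga" if payload.get("ga") else "in development"

in_dev = json.loads('{"release": "4.21", "found": true, "ga": null}')
released = json.loads('{"release": "4.17", "found": true, "ga": "2024-10-01T00:00:00Z"}')
print(release_status(in_dev))    # in development
print(release_status(released))  # ga
```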

## Error Handling

The script handles:

- Network errors (connection failures)
- HTTP errors (404, 500, etc.)
- Release not found (exit code 1)
- Invalid JSON responses

## Notes

- Uses the Python standard library only (no external dependencies)
- Release identifiers are case-sensitive
- OKD releases use the "-okd" suffix (e.g., "4.21-okd")
- Special releases: "Presubmits", "aro-production", "aro-stage", "aro-integration"

## See Also

- SKILL.md: Detailed implementation guide
- Component Health Plugin: `plugins/component-health/README.md`
- List Regressions Skill: `plugins/component-health/skills/list-regressions/`
258 skills/get-release-dates/SKILL.md Normal file
@@ -0,0 +1,258 @@
---
name: Get Release Dates
description: Fetch OpenShift release dates and metadata from Sippy API
---

# Get Release Dates

This skill provides functionality to fetch OpenShift release information, including GA dates and development start dates, from the Sippy API.

## When to Use This Skill

Use this skill when you need to:

- Get the GA (General Availability) date for a specific OpenShift release
- Find when development started for a release
- Identify the previous release in the sequence
- Validate that a release exists in Sippy
- Determine if a release is in development or has GA'd

## Prerequisites

1. **Python 3 Installation**

   - Check if installed: `which python3`
   - Python 3.6 or later is required
   - Comes pre-installed on most systems

2. **Network Access**

   - The script requires network access to reach the Sippy API
   - Ensure you can make HTTPS requests to `sippy.dptools.openshift.org`

## Implementation Steps

### Step 1: Verify Prerequisites

First, ensure Python 3 is available:

```bash
python3 --version
```

If Python 3 is not installed, guide the user through installation for their platform.

### Step 2: Locate the Script

The script is located at:

```
plugins/component-health/skills/get-release-dates/get_release_dates.py
```

### Step 3: Run the Script

Execute the script with the release parameter:

```bash
# Get dates for release 4.21
python3 plugins/component-health/skills/get-release-dates/get_release_dates.py \
  --release 4.21

# Get dates for release 4.20
python3 plugins/component-health/skills/get-release-dates/get_release_dates.py \
  --release 4.20
```

### Step 4: Process the Output

The script outputs JSON data with the following structure:

```json
{
  "release": "4.21",
  "found": true,
  "ga": "2026-02-17T00:00:00Z",
  "development_start": "2025-09-02T00:00:00Z",
  "previous_release": "4.20"
}
```

**Field Descriptions**:

- `release`: The release identifier that was queried
- `found`: Boolean indicating if the release exists in Sippy
- `ga`: GA (General Availability) date. **If null, the release is still in development.**
- `development_start`: When development started for this release
- `previous_release`: The previous release in the sequence (empty string if none)

**If Release Not Found**:

```json
{
  "release": "99.99",
  "found": false
}
```

**Release Status - Development vs GA'd**:

- **In Development**: If `ga` is `null`, the release is still under active development

  ```json
  {
    "release": "4.21",
    "found": true,
    "development_start": "2025-09-02T00:00:00Z",
    "previous_release": "4.20"
  }
  ```

- **GA'd (Released)**: If `ga` has a timestamp, the release has reached General Availability

  ```json
  {
    "release": "4.17",
    "found": true,
    "ga": "2024-10-01T00:00:00Z",
    "development_start": "2024-05-17T00:00:00Z",
    "previous_release": "4.16"
  }
  ```

### Step 5: Use the Information

Based on the release dates:

1. **Determine release status**: Check if the release is in development or GA'd
   - If `ga` is `null`: Release is still in development
   - If `ga` has a timestamp: Release has reached General Availability
2. **Determine release timeline**: Use the `development_start` and `ga` dates
   - Calculate time in development: `ga` - `development_start`
   - For in-development releases: Calculate time since `development_start`
3. **Find related releases**: Use `previous_release` to navigate the release sequence
4. **Validate release**: Check the `found` field before using the release in other operations
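The timeline arithmetic in step 2 can be sketched as follows. This is a hypothetical helper, not part of the shipped script; for in-development releases the end point falls back to the current time:

```python
from datetime import datetime, timezone
from typing import Optional

def days_in_development(development_start: str, ga: Optional[str]) -> int:
    # Parse the ISO-8601 timestamps the API returns ("Z" suffix -> UTC offset)
    start = datetime.fromisoformat(development_start.replace("Z", "+00:00"))
    end = (datetime.fromisoformat(ga.replace("Z", "+00:00"))
           if ga else datetime.now(timezone.utc))
    return (end - start).days

# 4.17 dates from the example output in Step 4
print(days_in_development("2024-05-17T00:00:00Z", "2024-10-01T00:00:00Z"))  # 137
```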

## Error Handling

The script handles several error scenarios:

1. **Network Errors**: If unable to reach the Sippy API

   ```
   Error: URL Error: [reason]
   ```

2. **HTTP Errors**: If the API returns an error status

   ```
   Error: HTTP Error 404: Not Found
   ```

3. **Invalid Release**: The script returns exit code 1 with `found: false` in the output

4. **Parsing Errors**: If the API response is malformed

   ```
   Error: Failed to fetch release dates: [details]
   ```

## Output Format

The script outputs JSON to stdout:

- **Success**: Exit code 0, JSON with `found: true`
- **Release Not Found**: Exit code 1, JSON with `found: false`
- **Error**: Exit code 1, error message to stderr

## API Details

The script queries the Sippy releases API:

- **URL**: https://sippy.dptools.openshift.org/api/releases
- **Method**: GET
- **Response**: JSON containing all releases and their metadata

The full API response includes:

- `releases`: Array of all available release identifiers
- `ga_dates`: Simple mapping of release to GA date
- `dates`: Detailed mapping with GA and development_start dates
- `release_attrs`: Extended attributes including the previous release
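Pulling a single release out of that payload reduces to a few dictionary lookups. A condensed sketch against a hand-built sample — the script's own `get_release_info` follows the same shape, with more error handling:

```python
def extract_release(data: dict, release: str) -> dict:
    # Combine the "releases", "dates", and "release_attrs" sections for one release
    info = {"release": release, "found": release in data.get("releases", [])}
    if info["found"]:
        info.update(data.get("dates", {}).get(release, {}))
        info["previous_release"] = (
            data.get("release_attrs", {}).get(release, {}).get("previous_release", ""))
    return info

sample = {
    "releases": ["4.20", "4.21"],
    "dates": {"4.21": {"ga": None, "development_start": "2025-09-02T00:00:00Z"}},
    "release_attrs": {"4.21": {"previous_release": "4.20"}},
}
print(extract_release(sample, "4.21")["previous_release"])  # 4.20
```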

## Examples

### Example 1: Get Current Development Release

```bash
python3 plugins/component-health/skills/get-release-dates/get_release_dates.py \
  --release 4.21
```

Output:

```json
{
  "release": "4.21",
  "found": true,
  "development_start": "2025-09-02T00:00:00Z",
  "previous_release": "4.20"
}
```

### Example 2: Get GA'd Release

```bash
python3 plugins/component-health/skills/get-release-dates/get_release_dates.py \
  --release 4.17
```

Output:

```json
{
  "release": "4.17",
  "found": true,
  "ga": "2024-10-01T00:00:00Z",
  "development_start": "2024-05-17T00:00:00Z",
  "previous_release": "4.16"
}
```

### Example 3: Query Non-Existent Release

```bash
python3 plugins/component-health/skills/get-release-dates/get_release_dates.py \
  --release 99.99
```

Output:

```json
{
  "release": "99.99",
  "found": false
}
```

Exit code: 1

## Integration with Other Commands

This skill can be used in conjunction with other component-health skills:

1. **Before analyzing regressions**: Verify the release exists
2. **Timeline context**: Understand how long a release has been in development
3. **Release status**: Determine if a release is in development or has GA'd
4. **Release navigation**: Find previous/next releases in the sequence

## Notes

- The script uses Python's standard library only (no external dependencies)
- API responses are cached by Sippy, so repeated calls are fast
- Release identifiers are case-sensitive (use "4.21", not "4.21.0")
- OKD releases are suffixed with "-okd" (e.g., "4.21-okd")
- ARO releases have special identifiers (e.g., "aro-production")
- "Presubmits" is a special release for pull request data

## See Also

- Skill Documentation: `plugins/component-health/skills/list-regressions/SKILL.md`
- Sippy API: https://sippy.dptools.openshift.org/api/releases
- Component Health Plugin: `plugins/component-health/README.md`
137 skills/get-release-dates/get_release_dates.py Normal file
@@ -0,0 +1,137 @@
#!/usr/bin/env python3
"""
Fetch OpenShift release dates from Sippy API.

This script fetches release information including GA dates and development start dates
for OpenShift releases from the Sippy API.
"""

import argparse
import json
import sys
import urllib.request
import urllib.error


def fetch_release_dates():
    """
    Fetch all release dates from the Sippy API.

    Returns:
        Dictionary containing release information

    Raises:
        Exception: If the API request fails
    """
    url = "https://sippy.dptools.openshift.org/api/releases"

    try:
        with urllib.request.urlopen(url) as response:
            data = json.loads(response.read().decode('utf-8'))
            return data
    except urllib.error.HTTPError as e:
        raise Exception(f"HTTP Error {e.code}: {e.reason}")
    except urllib.error.URLError as e:
        raise Exception(f"URL Error: {e.reason}")
    except Exception as e:
        raise Exception(f"Failed to fetch release dates: {str(e)}")


def get_release_info(data: dict, release: str) -> dict:
    """
    Extract information for a specific release.

    Args:
        data: Full API response containing all release data
        release: Release identifier (e.g., "4.21", "4.20")

    Returns:
        Dictionary containing release-specific information

    Note:
        If the 'ga' and 'ga_date' fields are null/missing, the release is still
        in development and has not reached General Availability yet.
    """
    result = {
        "release": release,
        "found": False
    }

    # Check if the release exists in the releases list
    if release in data.get("releases", []):
        result["found"] = True

        # Get detailed dates (GA and development start)
        dates = data.get("dates", {})
        if release in dates:
            release_dates = dates[release]
            result["ga"] = release_dates.get("ga")
            result["development_start"] = release_dates.get("development_start")

        # Get release attributes if available
        release_attrs = data.get("release_attrs", {})
        if release in release_attrs:
            attrs = release_attrs[release]
            result["previous_release"] = attrs.get("previous_release", "")

    return result


def format_output(data: dict) -> str:
    """
    Format the release data for output.

    Args:
        data: Dictionary containing release information

    Returns:
        Formatted JSON string
    """
    return json.dumps(data, indent=2)


def main():
    parser = argparse.ArgumentParser(
        description='Fetch OpenShift release dates from Sippy',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Get dates for release 4.21
  python3 get_release_dates.py --release 4.21

  # Get dates for release 4.20
  python3 get_release_dates.py --release 4.20
"""
    )

    parser.add_argument(
        '--release',
        type=str,
        required=True,
        help='Release version (e.g., "4.21", "4.20")'
    )

    args = parser.parse_args()

    try:
        # Fetch all release data
        data = fetch_release_dates()

        # Extract info for the specific release
        release_info = get_release_info(data, args.release)

        # Format and print output
        output = format_output(release_info)
        print(output)

        # Return exit code based on whether the release was found
        return 0 if release_info["found"] else 1

    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1


if __name__ == '__main__':
    sys.exit(main())
270 skills/list-components/SKILL.md Normal file
@@ -0,0 +1,270 @@
---
name: List Components
description: Fetch component names from Sippy component readiness API
---

# List Components

This skill provides functionality to fetch a list of all component names tracked in the Sippy component readiness system for a specific OpenShift release.

## When to Use This Skill

Use this skill when you need to:

- Get a complete list of components for a specific release
- Validate component names before querying regression or bug data
- Discover available components for analysis
- Generate component lists for reports or documentation
- Understand which teams/components are tracked in Sippy
- Provide autocomplete suggestions for component names

## Prerequisites

1. **Python 3 Installation**

   - Check if installed: `which python3`
   - Python 3.6 or later is required
   - Comes pre-installed on most systems

2. **Network Access**

   - The script requires network access to reach the Sippy API
   - Ensure you can make HTTPS requests to `sippy.dptools.openshift.org`

## Implementation Steps

### Step 1: Verify Prerequisites

First, ensure Python 3 is available:

```bash
python3 --version
```

If Python 3 is not installed, guide the user through installation for their platform.

### Step 2: Locate the Script

The script is located at:

```
plugins/component-health/skills/list-components/list_components.py
```

### Step 3: Run the Script

Execute the script with the release parameter:

```bash
# Get components for release 4.21
python3 plugins/component-health/skills/list-components/list_components.py \
  --release 4.21

# Get components for release 4.20
python3 plugins/component-health/skills/list-components/list_components.py \
  --release 4.20
```

**Important**: The script automatically appends "-main" to the release version to construct the view parameter (e.g., "4.21" becomes "4.21-main").

### Step 4: Process the Output

The script outputs JSON data with the following structure:

```json
{
  "release": "4.21",
  "view": "4.21-main",
  "component_count": 42,
  "components": [
    "API",
    "Build",
    "Cloud Compute",
    "Cluster Version Operator",
    "Etcd",
    "Image Registry",
    "Installer",
    "Kubernetes",
    "Management Console",
    "Monitoring",
    "Networking",
    "OLM",
    "Storage",
    "etcd",
    "kube-apiserver",
    "..."
  ]
}
```

**Field Descriptions**:

- `release`: The release identifier that was queried
- `view`: The constructed view parameter used in the API call (release + "-main")
- `component_count`: Total number of unique components found
- `components`: Alphabetically sorted array of unique component names

**If View Not Found**:

If the release view doesn't exist, the script will return an HTTP 404 error:

```
HTTP Error 404: Not Found
View '4.99-main' not found. Please check the release version.
```

### Step 5: Use the Component List

Based on the component list, you can:

1. **Validate component names**: Check if a component exists before querying data
2. **Generate documentation**: Create component lists for reports
3. **Filter queries**: Use component names to filter regression or bug queries
4. **Autocomplete**: Provide suggestions when users type component names
5. **Discover teams**: Understand which components/teams are tracked

## Error Handling

The script handles several error scenarios:

1. **Network Errors**: If unable to reach the Sippy API

   ```
   Error: URL Error: [reason]
   ```

2. **HTTP Errors**: If the API returns an error status

   ```
   Error: HTTP Error 404: Not Found
   View '4.99-main' not found. Please check the release version.
   ```

3. **Invalid Release**: The script returns exit code 1 with an error message

4. **Parsing Errors**: If the API response is malformed

   ```
   Error: Failed to fetch components: [details]
   ```

## Output Format

The script outputs JSON to stdout:

- **Success**: Exit code 0, JSON with the component list
- **Error**: Exit code 1, error message to stderr

Diagnostic messages (like "Fetching components from...") are written to stderr, so they don't interfere with JSON parsing.

## API Details

The script queries the Sippy component readiness API:

- **URL**: `https://sippy.dptools.openshift.org/api/component_readiness?view={release}-main`
- **Method**: GET
- **Response**: JSON containing component readiness data with rows

The API response structure includes:

```json
{
  "rows": [
    {
      "component": "Networking",
      ...
    },
    {
      "component": "Monitoring",
      ...
    }
  ],
  ...
}
```

The script:

1. Extracts the `component` field from each row
2. Filters out empty/null component names
3. Returns unique components, sorted alphabetically
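Those three steps amount to a set comprehension plus a sort. A minimal sketch over a hand-built sample payload (the function name is illustrative, not the script's):

```python
def unique_components(payload: dict) -> list:
    # Extract names, drop empty/null entries, dedupe, and sort alphabetically
    names = (row.get("component") for row in payload.get("rows", []))
    return sorted({name for name in names if name})

sample = {"rows": [{"component": "Networking"}, {"component": "Monitoring"},
                   {"component": "Networking"}, {"component": ""}]}
print(unique_components(sample))  # ['Monitoring', 'Networking']
```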

## Examples

### Example 1: Get Components for 4.21

```bash
python3 plugins/component-health/skills/list-components/list_components.py \
  --release 4.21
```

Output:

```json
{
  "release": "4.21",
  "view": "4.21-main",
  "component_count": 42,
  "components": ["API", "Build", "Etcd", "..."]
}
```

### Example 2: Query Non-Existent Release

```bash
python3 plugins/component-health/skills/list-components/list_components.py \
  --release 99.99
```

Output (to stderr):

```
Fetching components from: https://sippy.dptools.openshift.org/api/component_readiness?view=99.99-main
HTTP Error 404: Not Found
View '99.99-main' not found. Please check the release version.
Failed to fetch components: HTTP Error 404: Not Found
```

Exit code: 1

## Integration with Other Commands

This skill can be used in conjunction with other component-health skills:

1. **Before analyzing components**: Validate that component names exist
2. **Component discovery**: Find available components for a release
3. **Autocomplete**: Provide component name suggestions to users
4. **Batch operations**: Iterate over all components for comprehensive analysis

**Example Integration**:

```bash
# Analyze each component for 4.21 (read line by line, since component
# names such as "Management Console" contain spaces)
python3 plugins/component-health/skills/list-components/list_components.py \
  --release 4.21 | jq -r '.components[]' | while IFS= read -r component; do
  echo "Analyzing $component..."
  # Use the component in other commands
done
```

## Notes

- The script uses Python's standard library only (no external dependencies)
- The script automatically appends "-main" to the release version
- Component names are case-sensitive
- Component names are returned in alphabetical order
- Duplicate component names are automatically removed
- Empty or null component names are filtered out
- The script has a 30-second timeout for HTTP requests
- Diagnostic messages go to stderr; JSON output goes to stdout

## See Also

- Related Skill: `plugins/component-health/skills/list-regressions/SKILL.md`
- Related Skill: `plugins/component-health/skills/get-release-dates/SKILL.md`
- Related Command: `/component-health:list-regressions` (for regression data)
- Related Command: `/component-health:analyze` (for health grading)
- Sippy API: https://sippy.dptools.openshift.org/api/component_readiness
- Component Health Plugin: `plugins/component-health/README.md`
109 skills/list-components/list_components.py Executable file
@@ -0,0 +1,109 @@
#!/usr/bin/env python3
"""
Script to fetch component names from Sippy component readiness API.

Usage:
    python3 list_components.py --release <release>

Example:
    python3 list_components.py --release 4.21
    python3 list_components.py --release 4.20
"""

import argparse
import json
import sys
import urllib.request
import urllib.error


def fetch_components(release: str) -> list:
    """
    Fetch component names from the component readiness API.

    Args:
        release: The release version (e.g., "4.21", "4.20")

    Returns:
        List of unique component names

    Raises:
        urllib.error.URLError: If the request fails
    """
    # Construct the view parameter (e.g., "4.21-main")
    view = f"{release}-main"

    # Construct the URL
    url = f"https://sippy.dptools.openshift.org/api/component_readiness?view={view}"

    print(f"Fetching components from: {url}", file=sys.stderr)

    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            if response.status == 200:
                data = json.loads(response.read().decode('utf-8'))

                # Extract component names from rows
                components = []
                if 'rows' in data:
                    for row in data['rows']:
                        if 'component' in row and row['component']:
                            components.append(row['component'])

                # Return unique components, sorted alphabetically
                unique_components = sorted(set(components))

                print(f"Found {len(unique_components)} unique components", file=sys.stderr)

                return unique_components
            else:
                raise Exception(f"HTTP {response.status}: {response.reason}")
    except urllib.error.HTTPError as e:
        print(f"HTTP Error {e.code}: {e.reason}", file=sys.stderr)
        if e.code == 404:
            print(f"View '{view}' not found. Please check the release version.", file=sys.stderr)
        raise
    except urllib.error.URLError as e:
        print(f"URL Error: {e.reason}", file=sys.stderr)
        raise
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        raise


def main():
    """Main entry point for the script."""
    parser = argparse.ArgumentParser(
        description='Fetch component names from Sippy component readiness API'
    )

    parser.add_argument(
        '--release',
        type=str,
        required=True,
        help='Release version (e.g., "4.21", "4.20")'
    )

    args = parser.parse_args()

    try:
        # Fetch components
        components = fetch_components(args.release)

        # Output as a JSON object
        output = {
            "release": args.release,
            "view": f"{args.release}-main",
            "component_count": len(components),
            "components": components
        }

        print(json.dumps(output, indent=2))

    except Exception as e:
        print(f"Failed to fetch components: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    main()
385 skills/list-jiras/SKILL.md Normal file
@@ -0,0 +1,385 @@
|
|||||||
|
---
|
||||||
|
name: List JIRAs
|
||||||
|
description: Query and return raw JIRA bug data for a specific project
|
||||||
|
---
|
||||||
|
|
||||||
|
# List JIRAs
|
||||||
|
|
||||||
|
This skill provides functionality to query JIRA bugs for a specified project and return raw issue data. It uses the JIRA REST API to fetch complete bug information with all fields and metadata, without performing any summarization.
|
||||||
|
|
||||||
|
## When to Use This Skill
|
||||||
|
|
||||||
|
Use this skill when you need to:
|
||||||
|
|
||||||
|
- Fetch raw JIRA issue data for further processing
|
||||||
|
- Access complete issue details including all fields
|
||||||
|
- Build custom analysis workflows
|
||||||
|
- Provide data to other commands (like `summarize-jiras`)
|
||||||
|
- Export JIRA data for offline analysis
|
||||||
|
|
||||||
|
## Prerequisites

1. **Python 3 Installation**

   - Check if installed: `which python3`
   - Python 3.6 or later is required
   - Comes pre-installed on most systems

2. **JIRA Authentication**

   - Requires environment variables to be set:
     - `JIRA_URL`: Base URL for JIRA instance (e.g., "https://issues.redhat.com")
     - `JIRA_PERSONAL_TOKEN`: Your JIRA bearer token or personal access token
   - How to get a JIRA token:
     - Navigate to JIRA → Profile → Personal Access Tokens
     - Generate a new token with appropriate permissions
     - Export it as an environment variable

3. **Network Access**

   - The script requires network access to reach your JIRA instance
   - Ensure you can make HTTPS requests to the JIRA URL

## Implementation Steps

### Step 1: Verify Prerequisites

First, ensure Python 3 is available:

```bash
python3 --version
```

If Python 3 is not installed, guide the user through installation for their platform.

### Step 2: Verify Environment Variables

Check that required environment variables are set:

```bash
# Verify JIRA credentials are configured
echo "JIRA_URL: ${JIRA_URL}"
echo "JIRA_PERSONAL_TOKEN: ${JIRA_PERSONAL_TOKEN:+***set***}"
```

If any are missing, guide the user to set them:

```bash
export JIRA_URL="https://issues.redhat.com"
export JIRA_PERSONAL_TOKEN="your-token-here"
```
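
The check above only confirms that the variables are set; it does not prove the token works. One way to verify the credentials end to end is a quick authenticated request to JIRA's standard `/rest/api/2/myself` endpoint, using the same bearer-token scheme as the script. A minimal stdlib-only sketch (the endpoint is the stock JIRA REST API; adjust if your instance differs):

```python
import os
import urllib.request


def require_env(name):
    """Return the value of a required environment variable, or raise."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Environment variable {name} is not set")
    return value


def build_auth_request(jira_url, token):
    """Build a GET request for JIRA's standard 'myself' endpoint,
    authenticated with a bearer token (same scheme the script uses)."""
    url = f"{jira_url.rstrip('/')}/rest/api/2/myself"
    request = urllib.request.Request(url)
    request.add_header("Authorization", f"Bearer {token}")
    return request


def verify_token():
    """Perform the round trip; returns the HTTP status on success."""
    request = build_auth_request(require_env("JIRA_URL"),
                                 require_env("JIRA_PERSONAL_TOKEN"))
    with urllib.request.urlopen(request, timeout=30) as response:
        return response.status
```

Calling `verify_token()` should return 200 when the credentials are valid; a 401 here means the token, not the query, is the problem.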

### Step 3: Locate the Script

The script is located at:

```
plugins/component-health/skills/list-jiras/list_jiras.py
```

### Step 4: Run the Script

Execute the script with appropriate arguments:

```bash
# Basic usage - all open bugs in a project
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS

# Filter by component
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS \
    --component "kube-apiserver"

# Filter by multiple components
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS \
    --component "kube-apiserver" "Management Console"

# Include closed bugs
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS \
    --include-closed

# Set maximum results limit (default 1000)
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS \
    --limit 500

# Filter by status
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS \
    --status New "In Progress"
```

### Step 5: Process the Output

The script outputs JSON data with the following structure:

```json
{
  "project": "OCPBUGS",
  "total_count": 1500,
  "fetched_count": 100,
  "query": "project = OCPBUGS AND (status != Closed OR (status = Closed AND resolved >= \"2025-10-11\"))",
  "filters": {
    "components": null,
    "statuses": null,
    "include_closed": false,
    "limit": 100
  },
  "issues": [
    {
      "key": "OCPBUGS-12345",
      "fields": {
        "summary": "Bug title here",
        "status": {
          "name": "New",
          "id": "1"
        },
        "priority": {
          "name": "Major",
          "id": "3"
        },
        "components": [
          {"name": "kube-apiserver", "id": "12345"}
        ],
        "assignee": {
          "displayName": "John Doe",
          "emailAddress": "jdoe@example.com"
        },
        "created": "2025-11-01T10:30:00.000+0000",
        "updated": "2025-11-05T14:20:00.000+0000",
        "resolutiondate": null,
        "versions": [
          {"name": "4.21"}
        ],
        "fixVersions": [
          {"name": "4.22"}
        ],
        "customfield_12319940": "4.22.0"
      }
    },
    ...more issues...
  ],
  "note": "Showing first 100 of 1500 total results. Increase --limit for more data."
}
```

**Field Descriptions**:

- `project`: The JIRA project queried
- `total_count`: Total number of matching issues in JIRA (from search results)
- `fetched_count`: Number of issues actually fetched (limited by the --limit parameter)
- `query`: The JQL query executed (includes the filter for recently closed bugs)
- `filters`: Applied filters (components, statuses, include_closed, limit)
- `issues`: Array of raw JIRA issue objects, each containing:
  - `key`: Issue key (e.g., "OCPBUGS-12345")
  - `fields`: Object containing the requested JIRA fields for the issue:
    - `summary`: Issue title/summary
    - `status`: Status object with name and ID
    - `priority`: Priority object with name and ID
    - `components`: Array of component objects
    - `assignee`: Assignee object with user details
    - `created`: Creation timestamp
    - `updated`: Last updated timestamp
    - `resolutiondate`: Resolution timestamp (null if not closed)
    - `versions`: Affects Version/s array
    - `fixVersions`: Fix Version/s array
    - `customfield_12319940`: Target Version (custom field)
- `note`: Informational message shown when results are truncated

**Important Notes**:

- **By default, the query includes**: open bugs plus bugs closed in the last 30 days
  - This allows tracking of recent closure activity alongside current open bugs
- The script fetches a maximum number of issues (default 1000, configurable with `--limit`)
- The `total_count` represents all matching issues in JIRA
- The returned data includes all requested fields for each issue, providing complete information
- For large datasets, increase the `--limit` parameter to fetch more issues
- Issues can have multiple components
- All JIRA field data is preserved in the raw format
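
Because the output is plain JSON on stdout, downstream tooling can load it directly. A small sketch that tallies the fetched issues by status and by component (an issue with multiple components is counted once per component):

```python
import json
from collections import Counter


def tally_issues(data):
    """Count fetched issues by status name and by component name.
    'data' is the parsed JSON emitted by list_jiras.py."""
    by_status = Counter()
    by_component = Counter()
    for issue in data.get("issues", []):
        fields = issue.get("fields", {})
        status = (fields.get("status") or {}).get("name", "Unknown")
        by_status[status] += 1
        for comp in fields.get("components") or []:
            by_component[comp.get("name", "Unknown")] += 1
    return by_status, by_component


def tally_file(path):
    """Convenience wrapper for output saved with '> jiras.json'."""
    with open(path) as f:
        return tally_issues(json.load(f))
```

For example, `tally_file("jiras.json")` returns two `Counter` objects suitable for a quick per-component breakdown before handing the data to `summarize-jiras`.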

### Step 6: Present Results

Based on the raw JIRA data:

1. Inform the user about the total count vs. fetched count
2. Explain that the raw data includes all requested JIRA fields
3. Suggest using `/component-health:summarize-jiras` if they need summary statistics
4. The raw issue data can be passed to other commands for further processing
5. Highlight any truncation and suggest increasing --limit if needed

## Error Handling

### Common Errors

1. **Authentication Errors**

   - **Symptom**: HTTP 401 Unauthorized
   - **Solution**: Verify JIRA_PERSONAL_TOKEN is correct
   - **Check**: Ensure the token has not expired

2. **Network Errors**

   - **Symptom**: `URLError` or connection timeout
   - **Solution**: Check network connectivity and that JIRA_URL is accessible
   - **Retry**: The script has a 30-second timeout; consider retrying

3. **Invalid Project**

   - **Symptom**: HTTP 400 or empty results
   - **Solution**: Verify the project key is correct (e.g., "OCPBUGS", not "ocpbugs")

4. **Missing Environment Variables**

   - **Symptom**: Error message about missing credentials
   - **Solution**: Set the required environment variables (JIRA_URL, JIRA_PERSONAL_TOKEN)

5. **Rate Limiting**

   - **Symptom**: HTTP 429 Too Many Requests
   - **Solution**: Wait before retrying, and reduce query frequency
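
For transient failures such as HTTP 429 or brief network errors, a thin retry wrapper with exponential backoff around the fetch call is often enough. This is a sketch, not part of the shipped script, using only the standard library:

```python
import time
import urllib.error


def fetch_with_retry(fetch, attempts=3, base_delay=2.0):
    """Call fetch() and retry on HTTP 429 or transient URL errors,
    doubling the delay between attempts (simple exponential backoff).
    Any other HTTP error is re-raised immediately."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except urllib.error.HTTPError as e:
            # Only 429 is worth retrying; give up on the last attempt.
            if e.code != 429 or attempt == attempts:
                raise
        except urllib.error.URLError:
            # Connection problems: retry until attempts are exhausted.
            if attempt == attempts:
                raise
        time.sleep(base_delay * (2 ** (attempt - 1)))
```

A caller would wrap the request, e.g. `fetch_with_retry(lambda: fetch_jira_issues(jira_url, token, jql, limit))`, assuming a fetch function shaped like the one in `list_jiras.py`.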

### Debugging

Enable verbose output by examining stderr:

```bash
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS 2>&1 | tee debug.log
```

## Script Arguments

### Required Arguments

- `--project`: JIRA project key to query
  - Format: Project key (e.g., "OCPBUGS", "OCPSTRAT")
  - Must be a valid JIRA project

### Optional Arguments

- `--component`: Filter by component names

  - Values: Space-separated list of component names
  - Default: None (returns all components)
  - Case-sensitive matching
  - Example: `--component "kube-apiserver" "Management Console"`

- `--status`: Filter by status values

  - Values: Space-separated list of status names
  - Default: None (uses the default query: open bugs plus recently closed)
  - Example: `--status New "In Progress" Verified`

- `--include-closed`: Include closed bugs in the results

  - Default: false (only open bugs plus those closed in the last 30 days)
  - When specified, includes all bugs in "Closed" status

- `--limit`: Maximum number of issues to fetch

  - Default: 1000
  - Maximum: 1000 (JIRA API limit per request)
  - Higher values provide more accurate statistics but slower performance
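
Because a single request is capped at 1000 results, fetching a larger backlog requires paging with JIRA's standard `startAt` offset parameter. The shipped script does not paginate; below is a sketch of how paging could be layered on top, assuming a `fetch_page(start_at, page_size)` callable (a hypothetical helper) that returns a search response dict:

```python
def fetch_all_issues(fetch_page, page_size=200):
    """Collect every matching issue by paging with JIRA's startAt offset.
    fetch_page(start_at, page_size) is assumed to return a dict shaped
    like the JIRA search response: {"issues": [...], "total": N}."""
    issues = []
    start_at = 0
    while True:
        page = fetch_page(start_at, page_size)
        issues.extend(page.get("issues", []))
        total = page.get("total", len(issues))
        # Stop when the server returns an empty page or we have everything.
        if not page.get("issues") or len(issues) >= total:
            return issues
        start_at += page_size
```

This keeps each request small while still reaching the full `total_count` reported by the API.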

## Output Format

The script outputs JSON with metadata and raw issue data:

```json
{
  "project": "OCPBUGS",
  "total_count": 5430,
  "fetched_count": 100,
  "query": "project = OCPBUGS AND (status != Closed OR (status = Closed AND resolved >= \"2025-10-11\"))",
  "filters": {
    "components": null,
    "statuses": null,
    "include_closed": false,
    "limit": 100
  },
  "issues": [
    {
      "key": "OCPBUGS-12345",
      "fields": {
        "summary": "Example bug",
        "status": {"name": "New"},
        "priority": {"name": "Major"},
        "components": [{"name": "kube-apiserver"}],
        "created": "2025-11-01T10:30:00.000+0000",
        ...
      }
    },
    ...
  ],
  "note": "Showing first 100 of 5430 total results. Increase --limit for more data."
}
```

## Examples

### Example 1: List All Open Bugs

```bash
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS
```

**Expected Output**: JSON containing raw issue data for all open bugs in the OCPBUGS project

### Example 2: Filter by Component

```bash
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS \
    --component "kube-apiserver"
```

**Expected Output**: JSON containing raw issue data for the kube-apiserver component only

### Example 3: Include Closed Bugs

```bash
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS \
    --include-closed \
    --limit 500
```

**Expected Output**: JSON containing raw issue data for both open and closed bugs (up to 500 issues)

### Example 4: Filter by Multiple Components

```bash
python3 plugins/component-health/skills/list-jiras/list_jiras.py \
    --project OCPBUGS \
    --component "kube-apiserver" "etcd" "Networking"
```

**Expected Output**: JSON containing raw issue data for bugs in the specified components

## Integration with Commands

This skill is designed to:

- Provide raw JIRA data to other commands (like `summarize-jiras`)
- Be used directly for ad-hoc JIRA queries
- Serve as a data source for custom analysis workflows
- Export JIRA data for offline processing

## Related Skills

- `summarize-jiras`: Calculate summary statistics from JIRA data
- `list-regressions`: Fetch regression data for releases
- `analyze-regressions`: Grade component health based on regressions
- `get-release-dates`: Fetch OpenShift release dates

## Notes

- The script uses Python's `urllib` and `json` modules (no external dependencies)
- Output is always JSON format for easy parsing and further processing
- Diagnostic messages are written to stderr, data to stdout
- The script has a 30-second timeout for HTTP requests
- For large projects, consider using component filters to reduce query size
- The returned data includes all requested JIRA fields for each issue
- Use `/component-health:summarize-jiras` if you need summary statistics instead of raw data

314
skills/list-jiras/list_jiras.py
Normal file
@@ -0,0 +1,314 @@
#!/usr/bin/env python3
"""
JIRA Bug Query Script

This script queries JIRA bugs for a specified project and returns raw issue data.
It uses environment variables for authentication and supports filtering by component,
status, and other criteria.

Environment Variables:
    JIRA_URL: Base URL for JIRA instance (e.g., "https://issues.redhat.com")
    JIRA_PERSONAL_TOKEN: Your JIRA API bearer token or personal access token

Usage:
    python3 list_jiras.py --project OCPBUGS
    python3 list_jiras.py --project OCPBUGS --component "kube-apiserver"
    python3 list_jiras.py --project OCPBUGS --status New "In Progress"
    python3 list_jiras.py --project OCPBUGS --include-closed --limit 500
"""

import argparse
import json
import os
import sys
import urllib.request
import urllib.error
import urllib.parse
from typing import Optional, List, Dict, Any
from datetime import datetime, timedelta


def get_env_var(name: str) -> str:
    """Get required environment variable or exit with error."""
    value = os.environ.get(name)
    if not value:
        print(f"Error: Environment variable {name} is not set", file=sys.stderr)
        print(f"Please set {name} before running this script", file=sys.stderr)
        sys.exit(1)
    return value


def build_jql_query(project: str, components: Optional[List[str]] = None,
                    statuses: Optional[List[str]] = None,
                    include_closed: bool = False) -> str:
    """Build JQL query string from parameters."""
    parts = [f'project = {project}']

    # Calculate date for 30 days ago
    thirty_days_ago = (datetime.now() - timedelta(days=30)).strftime('%Y-%m-%d')

    # Add status filter - include recently closed bugs (within last 30 days) or open bugs
    if statuses:
        # If specific statuses are requested, use them
        status_list = ', '.join(f'"{s}"' for s in statuses)
        parts.append(f'status IN ({status_list})')
    elif not include_closed:
        # Default: open bugs OR bugs closed in the last 30 days
        parts.append(f'(status != Closed OR (status = Closed AND resolved >= "{thirty_days_ago}"))')
    # If include_closed is True, get all bugs (no status filter)

    # Add component filter
    if components:
        component_list = ', '.join(f'"{c}"' for c in components)
        parts.append(f'component IN ({component_list})')

    return ' AND '.join(parts)


def fetch_jira_issues(jira_url: str, token: str,
                      jql: str, max_results: int = 100) -> Dict[str, Any]:
    """
    Fetch issues from JIRA using JQL query.

    Args:
        jira_url: Base JIRA URL
        token: JIRA bearer token
        jql: JQL query string
        max_results: Maximum number of results to fetch

    Returns:
        Dictionary containing JIRA API response
    """
    # Build API URL
    api_url = f"{jira_url}/rest/api/2/search"

    # Build query parameters - Note: fields should be comma-separated without URL encoding the commas
    fields_list = [
        'summary', 'status', 'priority', 'components', 'assignee',
        'created', 'updated', 'resolutiondate',
        'versions',  # Affects Version/s
        'fixVersions',  # Fix Version/s
        'customfield_12319940'  # Target Version
    ]

    params = {
        'jql': jql,
        'maxResults': max_results,
        'fields': ','.join(fields_list)
    }

    # Encode parameters - but don't encode commas in fields parameter
    encoded_params = []
    for k, v in params.items():
        if k == 'fields':
            # Don't encode commas in fields list
            encoded_params.append(f'{k}={v}')
        else:
            encoded_params.append(f'{k}={urllib.parse.quote(str(v))}')

    query_string = '&'.join(encoded_params)
    full_url = f"{api_url}?{query_string}"

    # Create request with bearer token authentication
    request = urllib.request.Request(full_url)
    request.add_header('Authorization', f'Bearer {token}')
    # Note: Don't add Content-Type for GET requests

    print("Fetching issues from JIRA...", file=sys.stderr)
    print(f"JQL: {jql}", file=sys.stderr)

    try:
        with urllib.request.urlopen(request, timeout=30) as response:
            data = json.loads(response.read().decode())
            print(f"Fetched {len(data.get('issues', []))} of {data.get('total', 0)} total issues",
                  file=sys.stderr)
            return data
    except urllib.error.HTTPError as e:
        print(f"HTTP Error {e.code}: {e.reason}", file=sys.stderr)
        try:
            error_body = e.read().decode()
            print(f"Response: {error_body}", file=sys.stderr)
        except Exception:
            pass
        sys.exit(1)
    except urllib.error.URLError as e:
        print(f"URL Error: {e.reason}", file=sys.stderr)
        sys.exit(1)
    except Exception as e:
        print(f"Error fetching data: {e}", file=sys.stderr)
        sys.exit(1)


def main():
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description='Query JIRA bugs and return raw issue data',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  %(prog)s --project OCPBUGS
  %(prog)s --project OCPBUGS --component "kube-apiserver"
  %(prog)s --project OCPBUGS --component "kube-apiserver" "etcd"
  %(prog)s --project OCPBUGS --status New "In Progress"
  %(prog)s --project OCPBUGS --include-closed --limit 500
"""
    )

    parser.add_argument(
        '--project',
        required=True,
        help='JIRA project key (e.g., OCPBUGS, OCPSTRAT)'
    )

    parser.add_argument(
        '--component',
        nargs='+',
        help='Filter by component names (space-separated)'
    )

    parser.add_argument(
        '--status',
        nargs='+',
        help='Filter by status values (space-separated)'
    )

    parser.add_argument(
        '--include-closed',
        action='store_true',
        help='Include closed bugs in results (default: only open bugs)'
    )

    parser.add_argument(
        '--limit',
        type=int,
        default=1000,
        help='Maximum number of issues to fetch per component (default: 1000, max: 1000)'
    )

    args = parser.parse_args()

    # Validate limit
    if args.limit < 1 or args.limit > 1000:
        print("Error: --limit must be between 1 and 1000", file=sys.stderr)
        sys.exit(1)

    # Get environment variables
    jira_url = get_env_var('JIRA_URL').rstrip('/')
    token = get_env_var('JIRA_PERSONAL_TOKEN')

    # If multiple components are provided, warn user and iterate through them
    if args.component and len(args.component) > 1:
        print(f"\nQuerying {len(args.component)} components individually...", file=sys.stderr)
        print("This may take a few seconds.", file=sys.stderr)
        print(f"Components: {', '.join(args.component)}\n", file=sys.stderr)

        # Initialize aggregated results
        all_issues = []
        all_total_count = 0
        component_queries = []

        # Iterate through each component
        for idx, component in enumerate(args.component, 1):
            print(f"[{idx}/{len(args.component)}] Querying component: {component}...", file=sys.stderr)

            # Build JQL query for this component
            jql = build_jql_query(
                project=args.project,
                components=[component],
                statuses=args.status,
                include_closed=args.include_closed
            )

            # Fetch issues for this component
            response = fetch_jira_issues(jira_url, token, jql, args.limit)

            # Aggregate results
            component_issues = response.get('issues', [])
            component_total = response.get('total', 0)

            all_issues.extend(component_issues)
            all_total_count += component_total
            component_queries.append(f"{component}: {jql}")

            print(f"  Found {len(component_issues)} of {component_total} total issues for {component}",
                  file=sys.stderr)

        print(f"\nTotal issues fetched: {len(all_issues)} (from {all_total_count} total across all components)\n",
              file=sys.stderr)

        # Build combined JQL query string for output (informational only)
        combined_jql = build_jql_query(
            project=args.project,
            components=args.component,
            statuses=args.status,
            include_closed=args.include_closed
        )

        # Build output with aggregated data
        output = {
            'project': args.project,
            'total_count': all_total_count,
            'fetched_count': len(all_issues),
            'query': combined_jql,
            'component_queries': component_queries,
            'filters': {
                'components': args.component,
                'statuses': args.status,
                'include_closed': args.include_closed,
                'limit': args.limit
            },
            'issues': all_issues
        }

        # Add note if results are truncated
        if len(all_issues) < all_total_count:
            output['note'] = (
                f"Showing {len(all_issues)} of {all_total_count} total results across {len(args.component)} components. "
                f"Increase --limit to fetch more per component."
            )
    else:
        # Single component or no component filter - use original logic
        # Build JQL query
        jql = build_jql_query(
            project=args.project,
            components=args.component,
            statuses=args.status,
            include_closed=args.include_closed
        )

        # Fetch issues
        response = fetch_jira_issues(jira_url, token, jql, args.limit)

        # Extract data
        issues = response.get('issues', [])
        total_count = response.get('total', 0)
        fetched_count = len(issues)

        # Build output with metadata and raw issues
        output = {
            'project': args.project,
            'total_count': total_count,
            'fetched_count': fetched_count,
            'query': jql,
            'filters': {
                'components': args.component,
                'statuses': args.status,
                'include_closed': args.include_closed,
                'limit': args.limit
            },
            'issues': issues
        }

        # Add note if results are truncated
        if fetched_count < total_count:
            output['note'] = (
                f"Showing first {fetched_count} of {total_count} total results. "
                f"Increase --limit for more data."
            )

    # Output JSON to stdout
    print(json.dumps(output, indent=2))


if __name__ == '__main__':
    main()
100
skills/list-regressions/README.md
Normal file
@@ -0,0 +1,100 @@
# List Regressions Skill

Python script for fetching component health regression data for OpenShift releases.

## Overview

This skill provides a Python script that queries a component health API to retrieve regression information for specific OpenShift releases. The data can be filtered by component names.

## Usage

```bash
# List all regressions for a release
python3 list_regressions.py --release 4.17

# Filter by specific components
python3 list_regressions.py --release 4.21 --components Monitoring etcd

# Filter by single component
python3 list_regressions.py --release 4.21 --components "kube-apiserver"

# Filter to development window (GA'd release - both start and end)
python3 list_regressions.py --release 4.17 --start 2024-05-17 --end 2024-10-01

# Filter to development window (in-development release - start only, no GA yet)
python3 list_regressions.py --release 4.21 --start 2025-09-02
```

## Arguments

- `--release` (required): OpenShift release version (e.g., "4.17", "4.16")
- `--components` (optional): Space-separated list of component names to filter by (case-insensitive)
- `--start` (optional): Start date in YYYY-MM-DD format. Excludes regressions closed before this date.
- `--end` (optional): End date in YYYY-MM-DD format. Excludes regressions opened after this date.

## Output

The script writes JSON data with the following structure to stdout:

```json
{
  "summary": {...},
  "components": {
    "ComponentName": {
      "summary": {...},
      "open": [...],
      "closed": [...]
    }
  }
}
```

Diagnostic messages are written to stderr.

**Note**:

- Regressions are grouped by component name (sorted alphabetically)
- Each component maps to an object containing:
  - `summary`: Per-component statistics (total, open, closed, triaged counts, average time to triage)
  - `open`: Array of open regression objects
  - `closed`: Array of closed regression objects
- Time fields are automatically simplified:
  - `closed`: Shows a timestamp string if closed (e.g., `"2025-09-27T12:04:24.966914Z"`), otherwise `null`
  - `last_failure`: Shows a timestamp string if valid (e.g., `"2025-09-25T14:41:17Z"`), otherwise `null`
- Unnecessary fields are removed to reduce response size:
  - `links`: Removed from each regression
  - `test_id`: Removed from each regression
- Optional date filtering to focus on the development window:
  - Use `--start` and `--end` to filter regressions to a specific time period
  - Typical use: Filter to the development window using release dates
  - `--start`: Always applied (development_start date)
  - `--end`: Only for GA'd releases (GA date)
  - For GA'd releases: Both start and end filtering applied
  - For in-development releases: Only start filtering applied (no end date yet)
- Triaged counts: Number of regressions with a non-empty `triages` list (triaged to JIRA bugs)
- Average time to triage: Average hours from regression opened to the earliest triage timestamp (null if no triaged regressions)
- Maximum time to triage: Maximum hours from regression opened to the earliest triage timestamp (null if no triaged regressions)
- Average open duration: Average hours that open regressions have been open (from opened to current time; open regressions only)
- Maximum open duration: Maximum hours that open regressions have been open (from opened to current time; open regressions only)
- Average time to close: Average hours from regression opened to the closed timestamp (null if no valid data; closed regressions only)
- Maximum time to close: Maximum hours from regression opened to the closed timestamp (null if no valid data; closed regressions only)
- Average time triaged to closed: Average hours from first triage to the closed timestamp (null if no valid data; triaged closed regressions only)
- Maximum time triaged to closed: Maximum hours from first triage to the closed timestamp (null if no valid data; triaged closed regressions only)
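
As an illustration of how the triage-latency figures above could be derived, the sketch below computes average and maximum hours from opened to first triage. The field names `opened` and `triages[].created` are assumptions for illustration only; the real payload may use different keys:

```python
from datetime import datetime


def parse_ts(ts):
    """Parse an RFC 3339 timestamp such as '2025-09-25T14:41:17Z'
    (fromisoformat accepts the optional fractional seconds)."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def triage_latency_hours(regressions):
    """Return (average, maximum) hours from 'opened' to the earliest
    triage timestamp, or (None, None) if nothing was triaged.
    NOTE: 'opened' and 'triages[].created' are hypothetical field names."""
    latencies = []
    for reg in regressions:
        triages = reg.get("triages") or []
        if not triages:
            continue  # untriaged regressions do not contribute
        opened = parse_ts(reg["opened"])
        first_triage = min(parse_ts(t["created"]) for t in triages)
        latencies.append((first_triage - opened).total_seconds() / 3600.0)
    if not latencies:
        return None, None
    return sum(latencies) / len(latencies), max(latencies)
```

The other duration metrics (open duration, time to close, triaged to closed) follow the same pattern with different start and end timestamps.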

## Configuration

Before using, update the API endpoint in `list_regressions.py`:

```python
base_url = f"https://your-actual-api.example.com/api/v1/regressions"
```

## Requirements

- Python 3.6 or later
- Network access to the component health API
- No external Python dependencies (uses standard library only)

## See Also

- [SKILL.md](./SKILL.md) - Detailed implementation guide for AI agents

579
skills/list-regressions/SKILL.md
Normal file
@@ -0,0 +1,579 @@
---
|
||||||
|
name: List Regressions
|
||||||
|
description: Fetch and analyze component health regressions for OpenShift releases
|
||||||
|
---
|
||||||
|
|
||||||
|
# List Regressions
|
||||||
|
|
||||||
|
This skill provides functionality to fetch regression data for OpenShift components across different releases. It uses a Python script to query a component health API and retrieve regression information.
|
||||||
|
|
||||||
|
## When to Use This Skill
|
||||||
|
|
||||||
|
Use this skill when you need to:
|
||||||
|
|
||||||
|
- Analyze component health for a specific OpenShift release
|
||||||
|
- Track regressions across releases
|
||||||
|
- Filter regressions by their open/closed status
|
||||||
|
- Generate reports on component stability
|
||||||
|
|
||||||
|
## Prerequisites
|
||||||
|
|
||||||
|
1. **Python 3 Installation**
|
||||||
|
|
||||||
|
- Check if installed: `which python3`
|
||||||
|
- Python 3.6 or later is required
|
||||||
|
- Comes pre-installed on most systems
|
||||||
|
|
||||||
|
2. **Network Access**
|
||||||
|
|
||||||
|
- The script requires network access to reach the component health API
|
||||||
|
- Ensure you can make HTTPS requests
|
||||||
|
|
||||||
|
3. **API Endpoint Configuration**
|
||||||
|
- The script includes a placeholder API endpoint that needs to be updated
|
||||||
|
- Update the `base_url` in `list_regressions.py` with the actual component health API endpoint
|
||||||
|
|
||||||
|
## Implementation Steps

### Step 1: Verify Prerequisites

First, ensure Python 3 is available:

```bash
python3 --version
```

If Python 3 is not installed, guide the user through installation for their platform.

### Step 2: Locate the Script

The script is located at:

```
plugins/component-health/skills/list-regressions/list_regressions.py
```

### Step 3: Run the Script

Execute the script with appropriate arguments:

```bash
# Basic usage - all regressions for a release
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.17

# Filter by specific components
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.21 \
  --components Monitoring "kube-apiserver"

# Filter by multiple components
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.21 \
  --components Monitoring etcd "kube-apiserver"

# Filter by development window (GA'd release - both start and end)
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.17 \
  --start 2024-05-17 \
  --end 2024-10-01

# Filter by development window (in-development release - start only)
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.21 \
  --start 2025-09-02
```

### Step 4: Process the Output

The script outputs JSON data with the following structure:

```json
{
  "summary": {
    "total": <number>,
    "triaged": <number>,
    "triage_percentage": <number>,
    "time_to_triage_hrs_avg": <number or null>,
    "time_to_triage_hrs_max": <number or null>,
    "time_to_close_hrs_avg": <number or null>,
    "time_to_close_hrs_max": <number or null>,
    "open": {
      "total": <number>,
      "triaged": <number>,
      "triage_percentage": <number>,
      "time_to_triage_hrs_avg": <number or null>,
      "time_to_triage_hrs_max": <number or null>,
      "open_hrs_avg": <number or null>,
      "open_hrs_max": <number or null>
    },
    "closed": {
      "total": <number>,
      "triaged": <number>,
      "triage_percentage": <number>,
      "time_to_triage_hrs_avg": <number or null>,
      "time_to_triage_hrs_max": <number or null>,
      "time_to_close_hrs_avg": <number or null>,
      "time_to_close_hrs_max": <number or null>,
      "time_triaged_closed_hrs_avg": <number or null>,
      "time_triaged_closed_hrs_max": <number or null>
    }
  },
  "components": {
    "ComponentName": {
      "summary": {
        "total": <number>,
        "triaged": <number>,
        "triage_percentage": <number>,
        "time_to_triage_hrs_avg": <number or null>,
        "time_to_triage_hrs_max": <number or null>,
        "time_to_close_hrs_avg": <number or null>,
        "time_to_close_hrs_max": <number or null>,
        "open": {
          "total": <number>,
          "triaged": <number>,
          "triage_percentage": <number>,
          "time_to_triage_hrs_avg": <number or null>,
          "time_to_triage_hrs_max": <number or null>,
          "open_hrs_avg": <number or null>,
          "open_hrs_max": <number or null>
        },
        "closed": {
          "total": <number>,
          "triaged": <number>,
          "triage_percentage": <number>,
          "time_to_triage_hrs_avg": <number or null>,
          "time_to_triage_hrs_max": <number or null>,
          "time_to_close_hrs_avg": <number or null>,
          "time_to_close_hrs_max": <number or null>,
          "time_triaged_closed_hrs_avg": <number or null>,
          "time_triaged_closed_hrs_max": <number or null>
        }
      },
      "open": [...],
      "closed": [...]
    }
  }
}
```

**CRITICAL**: The output includes pre-calculated counts and health metrics:

- `summary`: Overall statistics across all components
- `summary.total`: Total number of regressions
- `summary.triaged`: Total number of regressions triaged (open + closed)
- **`summary.triage_percentage`**: Percentage of all regressions that have been triaged (KEY HEALTH METRIC)
- **`summary.time_to_triage_hrs_avg`**: Overall average hours to triage (combining open and closed, KEY HEALTH METRIC)
- `summary.time_to_triage_hrs_max`: Overall maximum hours to triage
- **`summary.time_to_close_hrs_avg`**: Overall average hours to close regressions (closed only, KEY HEALTH METRIC)
- `summary.time_to_close_hrs_max`: Overall maximum hours to close regressions (closed only)
- `summary.open.total`: Number of open regressions (where `closed` is null)
- `summary.open.triaged`: Number of open regressions that have been triaged to a JIRA bug
- `summary.open.triage_percentage`: Percentage of open regressions triaged
- `summary.open.time_to_triage_hrs_avg`: Average hours from opened to first triage (open only)
- `summary.open.time_to_triage_hrs_max`: Maximum hours from opened to first triage (open only)
- `summary.open.open_hrs_avg`: Average hours that open regressions have been open (from opened to current time)
- `summary.open.open_hrs_max`: Maximum hours that open regressions have been open (from opened to current time)
- `summary.closed.total`: Number of closed regressions (where `closed` is not null)
- `summary.closed.triaged`: Number of closed regressions that have been triaged to a JIRA bug
- `summary.closed.triage_percentage`: Percentage of closed regressions triaged
- `summary.closed.time_to_triage_hrs_avg`: Average hours from opened to first triage (closed only)
- `summary.closed.time_to_triage_hrs_max`: Maximum hours from opened to first triage (closed only)
- `summary.closed.time_to_close_hrs_avg`: Average hours from opened to closed timestamp (null if no valid data)
- `summary.closed.time_to_close_hrs_max`: Maximum hours from opened to closed timestamp (null if no valid data)
- `summary.closed.time_triaged_closed_hrs_avg`: Average hours from first triage to closed (null if no triaged closed regressions)
- `summary.closed.time_triaged_closed_hrs_max`: Maximum hours from first triage to closed (null if no triaged closed regressions)
- `components`: Dictionary mapping component names to objects containing:
  - `summary`: Per-component statistics (includes same fields as overall summary)
  - `open`: Array of open regression objects for that component
  - `closed`: Array of closed regression objects for that component

**Time to Triage Calculation**:

The `time_to_triage_hrs_avg` field is calculated as:

1. For each triaged regression, find the earliest `created_at` timestamp in the `triages` array
2. Calculate the time difference between the regression's `opened` timestamp and the earliest triage timestamp
3. Convert the difference to hours and round to the nearest hour
4. Only include positive time differences (zero or negative values are skipped - these occur when triages are reused across regression instances)
5. Average all valid time-to-triage values for open regressions separately from closed regressions
6. Return `null` if no regressions have valid time-to-triage data in that category

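The steps above can be sketched in a few lines of Python (a minimal illustration of the calculation, not the script's actual implementation; it assumes the `opened` and `triages[].created_at` fields shown in the schema):

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # The API uses a trailing "Z" for UTC; fromisoformat needs an explicit offset.
    return datetime.fromisoformat(ts.replace('Z', '+00:00'))

def time_to_triage_hours(regressions: list) -> tuple:
    """Return (avg, max) hours from `opened` to the earliest triage, or (None, None)."""
    values = []
    for reg in regressions:
        triages = reg.get('triages') or []
        if not triages:
            continue  # untriaged regressions contribute nothing
        earliest = min(parse_ts(t['created_at']) for t in triages)
        hours = round((earliest - parse_ts(reg['opened'])).total_seconds() / 3600)
        if hours > 0:  # zero/negative values occur when triages are reused
            values.append(hours)
    if not values:
        return None, None
    return round(sum(values) / len(values)), max(values)
```

The script applies this kind of calculation separately to the open and closed subsets before combining them into the overall summary.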

**Time to Close Calculation**:

The `time_to_close_hrs_avg` and `time_to_close_hrs_max` fields (only for closed regressions) are calculated as:

1. For each closed regression, calculate the time difference between `opened` and `closed` timestamps
2. Convert the difference to hours and round to the nearest hour
3. Only include positive time differences (skip data inconsistencies)
4. Calculate average and maximum of all valid time-to-close values
5. Return `null` if no closed regressions have valid time data

**Open Duration Calculation**:

The `open_hrs_avg` and `open_hrs_max` fields (only for open regressions) are calculated as:

1. For each open regression, calculate the time difference between `opened` timestamp and current time
2. Convert the difference to hours and round to the nearest hour
3. Only include positive time differences
4. Calculate average and maximum of all open duration values
5. Return `null` if no open regressions have valid time data

**Time Triaged to Closed Calculation**:

The `time_triaged_closed_hrs_avg` and `time_triaged_closed_hrs_max` fields (only for triaged closed regressions) are calculated as:

1. For each closed regression that has been triaged, calculate the time difference between the earliest `triages.created_at` timestamp and the `closed` timestamp
2. Convert the difference to hours and round to the nearest hour
3. Only include positive time differences
4. Calculate average and maximum of all triaged-to-closed values
5. Return `null` if no triaged closed regressions have valid time data

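For the closed-regression metrics, a comparable sketch (a minimal illustration only, assuming `closed` has already been simplified to a plain timestamp string as described below):

```python
from datetime import datetime

def _hours(start: str, end: str) -> int:
    to_dt = lambda ts: datetime.fromisoformat(ts.replace('Z', '+00:00'))
    return round((to_dt(end) - to_dt(start)).total_seconds() / 3600)

def closed_duration_stats(closed_regressions: list) -> dict:
    """Sketch of time_to_close and time_triaged_closed metrics for closed regressions."""
    to_close, triaged_to_close = [], []
    for reg in closed_regressions:
        hours = _hours(reg['opened'], reg['closed'])
        if hours > 0:  # skip data inconsistencies
            to_close.append(hours)
        triages = reg.get('triages') or []
        if triages:
            # ISO-8601 UTC strings of the same shape sort chronologically
            earliest = min(t['created_at'] for t in triages)
            triaged_hours = _hours(earliest, reg['closed'])
            if triaged_hours > 0:
                triaged_to_close.append(triaged_hours)

    def stats(vals):
        return (round(sum(vals) / len(vals)), max(vals)) if vals else (None, None)

    avg_close, max_close = stats(to_close)
    avg_triaged, max_triaged = stats(triaged_to_close)
    return {
        'time_to_close_hrs_avg': avg_close,
        'time_to_close_hrs_max': max_close,
        'time_triaged_closed_hrs_avg': avg_triaged,
        'time_triaged_closed_hrs_max': max_triaged,
    }
```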

**ALWAYS use these summary counts** rather than attempting to count the regression arrays yourself. This ensures accuracy even when the output is truncated due to size.

The script automatically simplifies and optimizes the response:

**Time field simplification** (`closed` and `last_failure`):

- Original API format: `{"Time": "2025-09-27T12:04:24.966914Z", "Valid": true}`
- Simplified format: `"closed": "2025-09-27T12:04:24.966914Z"` (if Valid is true)
- Or: `"closed": null` (if Valid is false)
- The same applies to the `last_failure` field

**Field removal for response size optimization**:

- `links`: Removed from each regression (reduces response size significantly)
- `test_id`: Removed from each regression (large field, can be reconstructed from test_name if needed)

**Date filtering (optional)**:

- Use `--start` and `--end` parameters to filter regressions to a specific time window
- `--start YYYY-MM-DD`: Excludes regressions that were closed before this date
- `--end YYYY-MM-DD`: Excludes regressions that were opened after this date
- Typical use case: Filter to the development window
  - `--start`: development_start date from the get-release-dates skill (always applied)
  - `--end`: GA date from the get-release-dates skill (only for GA'd releases)
  - For GA'd releases: Both start and end filtering applied
  - For in-development releases (null GA date): Only start filtering applied (no end date)
- Benefits: Focuses analysis on regressions during active development, excluding:
  - Regressions closed before the release development started (not relevant)
  - Regressions opened after GA (post-release, often not monitored/triaged - GA'd releases only)

Parse this JSON output to extract relevant information for analysis.

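For example, a consumer could parse the script's stdout and pull out just the pre-calculated summaries (a hypothetical helper, not part of the script):

```python
import json

def parse_output(script_stdout: str):
    """Parse the script's JSON stdout into (overall_summary, per_component_summaries)."""
    data = json.loads(script_stdout)
    per_component = {
        name: comp['summary'] for name, comp in data.get('components', {}).items()
    }
    return data['summary'], per_component
```

Remember that diagnostics go to stderr, so only stdout should be fed to the parser.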

### Step 5: Generate Analysis (Optional)

Based on the regression data:

1. **Use the summary counts** from the `summary` and `components.*.summary` objects (do NOT count the arrays)
2. Identify most affected components using `components.*.summary.open.total`
3. Compare with previous releases
4. Analyze trends in open vs closed regressions per component
5. Create visualizations if needed

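Point 2 can be done directly from the per-component summaries, for example (a hypothetical helper; it relies only on the `components.*.summary.open.total` field):

```python
def most_affected_components(components: dict, top_n: int = 5) -> list:
    """Rank components by open regression count using the pre-calculated summaries."""
    ranked = sorted(
        components.items(),
        key=lambda item: item[1]['summary']['open']['total'],
        reverse=True,
    )
    return [(name, comp['summary']['open']['total']) for name, comp in ranked[:top_n]]
```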

## Error Handling

### Common Errors

1. **Network Errors**

   - **Symptom**: `URLError` or connection timeout
   - **Solution**: Check network connectivity and firewall rules
   - **Retry**: The script has a 30-second timeout; consider retrying

2. **HTTP Errors**

   - **Symptom**: HTTP 404, 500, etc.
   - **Solution**: Verify the API endpoint URL is correct
   - **Check**: Ensure the release parameter is valid

3. **Invalid Release**

   - **Symptom**: Empty results or error response
   - **Solution**: Verify the release format (e.g., "4.17", not "v4.17")

4. **Invalid Boolean Value**

   - **Symptom**: `ValueError: Invalid boolean value`
   - **Solution**: Use only "true" or "false" for the `--opened` flag

### Debugging

Enable verbose output by examining stderr:

```bash
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.17 2>&1 | tee debug.log
```

## Script Arguments

### Required Arguments

- `--release`: Release version to query
  - Format: `"X.Y"` (e.g., "4.17", "4.16")
  - Must be a valid OpenShift release number

### Optional Arguments

- `--components`: Filter by component names
  - Values: Space-separated list of component names
  - Default: None (returns all components)
  - Case-insensitive matching
  - Example: `--components Monitoring etcd "kube-apiserver"`
  - Filtering is performed after fetching data from the API
- `--start`: Start of the date window (format: `YYYY-MM-DD`)
  - Excludes regressions that were closed before this date
- `--end`: End of the date window (format: `YYYY-MM-DD`)
  - Excludes regressions that were opened after this date

## Output Format

The script outputs JSON with summaries and regressions grouped by component:

```json
{
  "summary": {
    "total": 62,
    "triaged": 59,
    "triage_percentage": 95.2,
    "time_to_triage_hrs_avg": 68,
    "time_to_triage_hrs_max": 240,
    "time_to_close_hrs_avg": 168,
    "time_to_close_hrs_max": 480,
    "open": {
      "total": 2,
      "triaged": 1,
      "triage_percentage": 50.0,
      "time_to_triage_hrs_avg": 48,
      "time_to_triage_hrs_max": 48,
      "open_hrs_avg": 120,
      "open_hrs_max": 200
    },
    "closed": {
      "total": 60,
      "triaged": 58,
      "triage_percentage": 96.7,
      "time_to_triage_hrs_avg": 72,
      "time_to_triage_hrs_max": 240,
      "time_to_close_hrs_avg": 168,
      "time_to_close_hrs_max": 480,
      "time_triaged_closed_hrs_avg": 96,
      "time_triaged_closed_hrs_max": 240
    }
  },
  "components": {
    "Monitoring": {
      "summary": {
        "total": 15,
        "triaged": 13,
        "triage_percentage": 86.7,
        "time_to_triage_hrs_avg": 68,
        "time_to_triage_hrs_max": 180,
        "time_to_close_hrs_avg": 156,
        "time_to_close_hrs_max": 360,
        "open": {
          "total": 1,
          "triaged": 0,
          "triage_percentage": 0.0,
          "time_to_triage_hrs_avg": null,
          "time_to_triage_hrs_max": null,
          "open_hrs_avg": 72,
          "open_hrs_max": 72
        },
        "closed": {
          "total": 14,
          "triaged": 13,
          "triage_percentage": 92.9,
          "time_to_triage_hrs_avg": 68,
          "time_to_triage_hrs_max": 180,
          "time_to_close_hrs_avg": 156,
          "time_to_close_hrs_max": 360,
          "time_triaged_closed_hrs_avg": 88,
          "time_triaged_closed_hrs_max": 180
        }
      },
      "open": [
        {
          "id": 12894,
          "component": "Monitoring",
          "closed": null,
          ...
        }
      ],
      "closed": [
        {
          "id": 12893,
          "view": "4.21-main",
          "release": "4.21",
          "base_release": "4.18",
          "component": "Monitoring",
          "capability": "operator-conditions",
          "test_name": "...",
          "variants": [...],
          "opened": "2025-09-26T00:02:51.385944Z",
          "closed": "2025-09-27T12:04:24.966914Z",
          "triages": [],
          "last_failure": "2025-09-25T14:41:17Z",
          "max_failures": 9
        }
      ]
    },
    "etcd": {
      "summary": {
        "total": 20,
        "triaged": 19,
        "triage_percentage": 95.0,
        "time_to_triage_hrs_avg": 84,
        "time_to_triage_hrs_max": 220,
        "time_to_close_hrs_avg": 192,
        "time_to_close_hrs_max": 500,
        "open": {
          "total": 0,
          "triaged": 0,
          "triage_percentage": 0.0,
          "time_to_triage_hrs_avg": null,
          "time_to_triage_hrs_max": null,
          "open_hrs_avg": null,
          "open_hrs_max": null
        },
        "closed": {
          "total": 20,
          "triaged": 19,
          "triage_percentage": 95.0,
          "time_to_triage_hrs_avg": 84,
          "time_to_triage_hrs_max": 220,
          "time_to_close_hrs_avg": 192,
          "time_to_close_hrs_max": 500,
          "time_triaged_closed_hrs_avg": 108,
          "time_triaged_closed_hrs_max": 280
        }
      },
      "open": [],
      "closed": [...]
    },
    "kube-apiserver": {
      "summary": {
        "total": 27,
        "triaged": 27,
        "triage_percentage": 100.0,
        "time_to_triage_hrs_avg": 58,
        "time_to_triage_hrs_max": 168,
        "time_to_close_hrs_avg": 144,
        "time_to_close_hrs_max": 400,
        "open": {
          "total": 1,
          "triaged": 1,
          "triage_percentage": 100.0,
          "time_to_triage_hrs_avg": 36,
          "time_to_triage_hrs_max": 36,
          "open_hrs_avg": 96,
          "open_hrs_max": 96
        },
        "closed": {
          "total": 26,
          "triaged": 26,
          "triage_percentage": 100.0,
          "time_to_triage_hrs_avg": 60,
          "time_to_triage_hrs_max": 168,
          "time_to_close_hrs_avg": 144,
          "time_to_close_hrs_max": 400,
          "time_triaged_closed_hrs_avg": 84,
          "time_triaged_closed_hrs_max": 232
        }
      },
      "open": [...],
      "closed": [...]
    }
  }
}
```

**Important - Summary Objects**:

- The `summary` object contains overall pre-calculated counts for accuracy
- Each component in the `components` object has its own `summary` with per-component counts
- The `components` object maps component names (sorted alphabetically) to objects containing:
  - `summary`: Statistics for this component (total, open, closed)
  - `open`: Array of open regression objects (where `closed` is null)
  - `closed`: Array of closed regression objects (where `closed` has a timestamp)
- **ALWAYS use the `summary` and `components.*.summary` fields** for counts (including `total`, `open.total`, `open.triaged`, `closed.total`, `closed.triaged`)
- Do NOT attempt to count the `components.*.open` or `components.*.closed` arrays yourself

**Note**: Time fields are simplified from the API response:

- `closed`: If the regression is closed: `"closed": "2025-09-27T12:04:24.966914Z"` (timestamp string), otherwise `null`
- `last_failure`: If valid: `"last_failure": "2025-09-25T14:41:17Z"` (timestamp string), otherwise `null`

## Examples

### Example 1: List All Regressions

```bash
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.17
```

**Expected Output**: JSON containing all regressions for release 4.17

### Example 2: Filter by Component

```bash
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.21 \
  --components Monitoring etcd
```

**Expected Output**: JSON containing regressions for only the Monitoring and etcd components in release 4.21

### Example 3: Filter by Single Component

```bash
python3 plugins/component-health/skills/list-regressions/list_regressions.py \
  --release 4.21 \
  --components "kube-apiserver"
```

**Expected Output**: JSON containing regressions for the kube-apiserver component in release 4.21

## Customization

### Updating the API Endpoint

The script includes a placeholder API endpoint. Update it in `list_regressions.py`:

```python
# Current placeholder
base_url = "https://component-health-api.example.com/api/v1/regressions"

# Update to actual endpoint
base_url = "https://actual-api.example.com/api/v1/regressions"
```

### Adding Custom Filters

To add additional query parameters, modify the `fetch_regressions` function:

```python
def fetch_regressions(release: str, opened: Optional[bool] = None,
                      component: Optional[str] = None) -> dict:
    params = [f"release={release}"]
    if opened is not None:
        params.append(f"opened={'true' if opened else 'false'}")
    if component is not None:
        params.append(f"component={component}")
    # ... rest of function
```

## Integration with Commands

This skill is designed to be used by the `/component-health:analyze-regressions` command, but can also be invoked directly by other commands or scripts that need regression data.

## Related Skills

- Component health analysis
- Release comparison
- Regression tracking
- Quality metrics reporting

## Notes

- The script uses Python's built-in `urllib` module (no external dependencies)
- Output is always JSON format for easy parsing
- Diagnostic messages are written to stderr, data to stdout
- The script has a 30-second timeout for HTTP requests
670
skills/list-regressions/list_regressions.py
Executable file
@@ -0,0 +1,670 @@
#!/usr/bin/env python3
"""
Script to fetch regression data for OpenShift components.

Usage:
    python3 list_regressions.py --release <release> [--components comp1 comp2 ...] [--short]

Example:
    python3 list_regressions.py --release 4.17
    python3 list_regressions.py --release 4.21 --components Monitoring etcd
    python3 list_regressions.py --release 4.21 --short
"""

import argparse
import os
import json
import sys
import urllib.request
import urllib.error
from datetime import datetime, timezone


def calculate_hours_between(start_timestamp: str, end_timestamp: str) -> int:
    """
    Calculate the number of hours between two timestamps, rounded to the nearest hour.

    Args:
        start_timestamp: ISO format timestamp string (e.g., "2025-09-26T00:02:51.385944Z")
        end_timestamp: ISO format timestamp string (e.g., "2025-09-27T12:04:24.966914Z")

    Returns:
        Number of hours between the timestamps, rounded to the nearest hour

    Raises:
        ValueError: If timestamp parsing fails
    """
    start_time = datetime.fromisoformat(start_timestamp.replace('Z', '+00:00'))
    end_time = datetime.fromisoformat(end_timestamp.replace('Z', '+00:00'))

    time_diff = end_time - start_time
    return round(time_diff.total_seconds() / 3600)


def fetch_regressions(release: str) -> dict:
    """
    Fetch regression data from the component health API.

    Args:
        release: The release version (e.g., "4.17", "4.16")

    Returns:
        Dictionary containing the regression data

    Raises:
        urllib.error.URLError: If the request fails
    """
    # Construct the base URL
    base_url = "https://sippy.dptools.openshift.org/api/component_readiness/regressions"

    # Build query parameters
    params = [f"release={release}"]

    url = f"{base_url}?{'&'.join(params)}"

    print(f"Fetching regressions from: {url}", file=sys.stderr)

    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            if response.status == 200:
                data = json.loads(response.read().decode('utf-8'))
                return data
            else:
                raise Exception(f"HTTP {response.status}: {response.reason}")
    except urllib.error.HTTPError as e:
        print(f"HTTP Error {e.code}: {e.reason}", file=sys.stderr)
        raise
    except urllib.error.URLError as e:
        print(f"URL Error: {e.reason}", file=sys.stderr)
        raise
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        raise


def filter_by_components(data: list, components: list = None) -> list:
    """
    Filter regression data by component names.

    Args:
        data: List of regression dictionaries
        components: Optional list of component names to filter by

    Returns:
        Filtered list of regressions matching the specified components
    """
    # Always filter out regressions with empty component names.
    # These are legacy entries prior to a code change that ensures it is always set.
    filtered = [
        regression for regression in data
        if regression.get('component', '') != ''
    ]

    # If no specific components requested, return all non-empty components
    if not components:
        return filtered

    # Convert components to lowercase for case-insensitive comparison
    components_lower = [c.lower() for c in components]

    # Further filter by specified components
    filtered = [
        regression for regression in filtered
        if regression.get('component', '').lower() in components_lower
    ]

    print(f"Filtered to {len(filtered)} regressions for components: {', '.join(components)}",
          file=sys.stderr)

    return filtered


def simplify_time_fields(data: list) -> list:
    """
    Simplify time fields in regression data.

    Converts time fields from a nested structure like:
        {"Time": "2025-09-27T12:04:24.966914Z", "Valid": true}
    to either:
    - The timestamp string if Valid is true
    - null if Valid is false

    This applies to fields: 'closed', 'last_failure'

    Args:
        data: List of regression dictionaries

    Returns:
        List of regressions with simplified time fields
    """
    time_fields = ['closed', 'last_failure']

    for regression in data:
        for field in time_fields:
            if field in regression:
                value = regression[field]
                # Check if the field is a dict with Valid and Time fields
                if isinstance(value, dict):
                    if value.get('Valid') is True:
                        # Replace with just the timestamp string
                        regression[field] = value.get('Time')
                    else:
                        # Replace with null if not valid
                        regression[field] = None

    return data


def filter_by_date_range(regressions: list, start_date: str = None, end_date: str = None) -> list:
    """
    Filter regressions by date range.

    Args:
        regressions: List of regression dictionaries
        start_date: Start date in YYYY-MM-DD format. Filters out regressions closed before this date.
        end_date: End date in YYYY-MM-DD format. Filters out regressions opened after this date.

    Returns:
        Filtered list of regressions

    Note:
        - If start_date is provided: excludes regressions that were closed before start_date
        - If end_date is provided: excludes regressions that were opened after end_date
        - This allows filtering to a development window (e.g., from development_start to GA)
    """
    if not start_date and not end_date:
        return regressions

    filtered = []

    for regression in regressions:
        # Skip if opened after end_date
        if end_date and regression.get('opened'):
            opened_date = regression['opened'].split('T')[0]  # Extract YYYY-MM-DD
            if opened_date > end_date:
                continue

        # Skip if closed before start_date
        if start_date and regression.get('closed'):
            closed_date = regression['closed'].split('T')[0]  # Extract YYYY-MM-DD
            if closed_date < start_date:
                continue

        filtered.append(regression)

    return filtered


def remove_unnecessary_fields(regressions: list) -> list:
    """
    Remove unnecessary fields from regressions to reduce response size.

    Removes 'links' and 'test_id' fields from each regression object.

    Args:
        regressions: List of regression dictionaries

    Returns:
        List of regression dictionaries with unnecessary fields removed
    """
    for regression in regressions:
        # Remove links and test_id to reduce response size
        regression.pop('links', None)
        regression.pop('test_id', None)

    return regressions

def exclude_suspected_infra_regressions(regressions: list) -> tuple[list, int]:
    """
    Filter out suspected infrastructure-related mass regressions.

    This is an imprecise attempt to filter out mass regressions caused by
    infrastructure issues, which the TRT handles via a separate mechanism. These
    mass incidents typically result in many short-lived regressions being opened and
    closed on the same day.

    Algorithm:
    1. First pass: Count how many short-lived regressions (closed within 96 hours
       of opening) were closed on each date.
    2. Second pass: Filter out regressions that:
       - Were closed within 96 hours of being opened, AND
       - Were closed on a date where >50 short-lived regressions were closed

    Args:
        regressions: List of regression dictionaries

    Returns:
        Tuple of (filtered_regressions, count_of_filtered_regressions)
    """
    # First pass: Track count of short-lived regressions closed on each date
    short_lived_closures_by_date = {}

    for regression in regressions:
        opened = regression.get('opened')
        closed = regression.get('closed')

        # Skip if not closed or missing opened timestamp
        if not closed or not opened:
            continue

        try:
            # Calculate how long the regression was open
            hours_open = calculate_hours_between(opened, closed)

            # If closed within 96 hours, increment counter for the closed date
            if hours_open <= 96:
                closed_date = closed.split('T')[0]  # Extract YYYY-MM-DD
                short_lived_closures_by_date[closed_date] = short_lived_closures_by_date.get(closed_date, 0) + 1
        except (ValueError, KeyError, TypeError):
            # Skip if timestamp parsing fails
            continue

    # Second pass: Filter out suspected infra regressions
    filtered_regressions = []
    filtered_count = 0

    for regression in regressions:
        opened = regression.get('opened')
        closed = regression.get('closed')

        # Keep open regressions
        if not closed or not opened:
            filtered_regressions.append(regression)
            continue

        try:
            # Calculate how long the regression was open
            hours_open = calculate_hours_between(opened, closed)
            closed_date = closed.split('T')[0]  # Extract YYYY-MM-DD

            # Filter out if:
            # 1. Was closed within 96 hours, AND
            # 2. More than 50 short-lived regressions were closed on that date
            if hours_open <= 96 and short_lived_closures_by_date.get(closed_date, 0) > 50:
                filtered_count += 1
                continue

            # Keep this regression
            filtered_regressions.append(regression)
        except (ValueError, KeyError, TypeError):
            # If timestamp parsing fails, keep the regression
            filtered_regressions.append(regression)

    return filtered_regressions, filtered_count


def group_by_component(data: list) -> dict:
    """
    Group regressions by component name and split into open/closed.

    Args:
        data: List of regression dictionaries

    Returns:
        Dictionary mapping component names to objects containing open and closed regression lists
    """
    components = {}

    for regression in data:
        component = regression.get('component', 'Unknown')
        if component not in components:
            components[component] = {
                "open": [],
                "closed": []
            }

        # Split based on whether closed field is null
        if regression.get('closed') is None:
            components[component]["open"].append(regression)
        else:
            components[component]["closed"].append(regression)

    # Sort component names for consistent output
    return dict(sorted(components.items()))


def calculate_summary(regressions: list, filtered_suspected_infra: int = 0) -> dict:
    """
    Calculate summary statistics for a list of regressions.

    Args:
        regressions: List of regression dictionaries
        filtered_suspected_infra: Count of regressions filtered out as suspected infrastructure issues

    Returns:
        Dictionary containing summary statistics with nested open/closed totals, triaged counts,
        and average time to triage
    """
    total = 0
    open_total = 0
    open_triaged = 0
    open_triage_times = []
    open_times = []
    closed_total = 0
    closed_triaged = 0
    closed_triage_times = []
    closed_times = []
    triaged_to_closed_times = []

    # Get current time for calculating open duration
    current_time = datetime.now(timezone.utc)
    current_time_str = current_time.isoformat().replace('+00:00', 'Z')

    # Single pass through all regressions
    for regression in regressions:
        total += 1
        triages = regression.get('triages', [])
        is_triaged = bool(triages)

        # Calculate time to triage if regression is triaged
        time_to_triage_hrs = None
        if is_triaged and regression.get('opened'):
            try:
                # Find earliest triage timestamp
                earliest_triage_time = min(
                    t['created_at'] for t in triages if t.get('created_at')
                )

                # Calculate difference in hours
                time_to_triage_hrs = calculate_hours_between(
                    regression['opened'],
                    earliest_triage_time
                )
            except (ValueError, KeyError, TypeError):
                # Skip if timestamp parsing fails
                pass

        # It is common for a triage to be reused as new regressions appear, which makes
        # time to triage very tricky to calculate. If a first round of regressions was
        # triaged and more were added 24 hours later, the db does not record when the
        # later ones were actually triaged; treating them as immediately triaged would
        # skew results. The best we can do is exclude them from the timing data: they
        # count as triaged, but contribute no time-to-triage measurement.
        if regression.get('closed') is None:
            # Open regression
            open_total += 1
            if is_triaged:
                open_triaged += 1
                if time_to_triage_hrs is not None and time_to_triage_hrs > 0:
                    open_triage_times.append(time_to_triage_hrs)

            # Calculate how long regression has been open
            if regression.get('opened'):
                try:
                    time_open_hrs = calculate_hours_between(
                        regression['opened'],
                        current_time_str
                    )
                    # Only include positive time differences
                    if time_open_hrs > 0:
                        open_times.append(time_open_hrs)
                except (ValueError, KeyError, TypeError):
                    # Skip if timestamp parsing fails
                    pass
        else:
            # Closed regression
            closed_total += 1
            if is_triaged:
                closed_triaged += 1
                if time_to_triage_hrs is not None and time_to_triage_hrs > 0:
                    closed_triage_times.append(time_to_triage_hrs)

            # Calculate time from triage to closed
            if regression.get('closed') and triages:
                try:
                    earliest_triage_time = min(
                        t['created_at'] for t in triages if t.get('created_at')
                    )
                    time_triaged_to_closed_hrs = calculate_hours_between(
                        earliest_triage_time,
                        regression['closed']
                    )
                    # Only include positive time differences
                    if time_triaged_to_closed_hrs > 0:
                        triaged_to_closed_times.append(time_triaged_to_closed_hrs)
                except (ValueError, KeyError, TypeError):
                    # Skip if timestamp parsing fails
                    pass

            # Calculate time to close
            if regression.get('opened') and regression.get('closed'):
                try:
                    time_to_close_hrs = calculate_hours_between(
                        regression['opened'],
                        regression['closed']
                    )
                    # Only include positive time differences
                    if time_to_close_hrs > 0:
                        closed_times.append(time_to_close_hrs)
                except (ValueError, KeyError, TypeError):
                    # Skip if timestamp parsing fails
                    pass

    # Calculate averages and maximums
    open_avg_triage_time = round(sum(open_triage_times) / len(open_triage_times)) if open_triage_times else None
    open_max_triage_time = max(open_triage_times) if open_triage_times else None
    open_avg_time = round(sum(open_times) / len(open_times)) if open_times else None
    open_max_time = max(open_times) if open_times else None
    closed_avg_triage_time = round(sum(closed_triage_times) / len(closed_triage_times)) if closed_triage_times else None
    closed_max_triage_time = max(closed_triage_times) if closed_triage_times else None
    closed_avg_time = round(sum(closed_times) / len(closed_times)) if closed_times else None
    closed_max_time = max(closed_times) if closed_times else None
    triaged_to_closed_avg_time = round(sum(triaged_to_closed_times) / len(triaged_to_closed_times)) if triaged_to_closed_times else None
    triaged_to_closed_max_time = max(triaged_to_closed_times) if triaged_to_closed_times else None

    # Calculate triage percentages
    total_triaged = open_triaged + closed_triaged
    triage_percentage = round((total_triaged / total * 100), 1) if total > 0 else 0
    open_triage_percentage = round((open_triaged / open_total * 100), 1) if open_total > 0 else 0
    closed_triage_percentage = round((closed_triaged / closed_total * 100), 1) if closed_total > 0 else 0

    # Calculate overall time to triage (combining open and closed)
    all_triage_times = open_triage_times + closed_triage_times
    overall_avg_triage_time = round(sum(all_triage_times) / len(all_triage_times)) if all_triage_times else None
    overall_max_triage_time = max(all_triage_times) if all_triage_times else None

    # Time to close is only for closed regressions (already calculated in closed_avg_time/closed_max_time)

    return {
        "total": total,
        "triaged": total_triaged,
        "triage_percentage": triage_percentage,
        "filtered_suspected_infra_regressions": filtered_suspected_infra,
        "time_to_triage_hrs_avg": overall_avg_triage_time,
        "time_to_triage_hrs_max": overall_max_triage_time,
        "time_to_close_hrs_avg": closed_avg_time,
        "time_to_close_hrs_max": closed_max_time,
        "open": {
            "total": open_total,
            "triaged": open_triaged,
            "triage_percentage": open_triage_percentage,
            "time_to_triage_hrs_avg": open_avg_triage_time,
            "time_to_triage_hrs_max": open_max_triage_time,
            "open_hrs_avg": open_avg_time,
            "open_hrs_max": open_max_time
        },
        "closed": {
            "total": closed_total,
            "triaged": closed_triaged,
            "triage_percentage": closed_triage_percentage,
            "time_to_triage_hrs_avg": closed_avg_triage_time,
            "time_to_triage_hrs_max": closed_max_triage_time,
            "time_to_close_hrs_avg": closed_avg_time,
            "time_to_close_hrs_max": closed_max_time,
            "time_triaged_closed_hrs_avg": triaged_to_closed_avg_time,
            "time_triaged_closed_hrs_max": triaged_to_closed_max_time
        }
    }


def add_component_summaries(components: dict) -> dict:
    """
    Add summary statistics to each component object.

    Args:
        components: Dictionary mapping component names to objects containing open and closed regression lists

    Returns:
        Dictionary with summaries added to each component
    """
    for component, component_data in components.items():
        # Combine open and closed to get all regressions for this component
        all_regressions = component_data["open"] + component_data["closed"]
        component_data["summary"] = calculate_summary(all_regressions)

    return components


def format_output(data: dict) -> str:
    """
    Format the regression data for output.

    Args:
        data: Dictionary containing regression data with keys:
            - 'summary': Overall statistics (total, open, closed)
            - 'components': Dictionary mapping component names to objects with:
                - 'summary': Per-component statistics
                - 'open': List of open regression objects
                - 'closed': List of closed regression objects

    Returns:
        Formatted JSON string output
    """
    return json.dumps(data, indent=2)


def main():
    parser = argparse.ArgumentParser(
        description='Fetch regression data for OpenShift components',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # List all regressions for release 4.17
  %(prog)s --release 4.17

  # Filter by specific components
  %(prog)s --release 4.21 --components Monitoring "kube-apiserver"

  # Filter by multiple components
  %(prog)s --release 4.21 --components Monitoring etcd "kube-apiserver"

  # Short output mode (summaries only, no regression data)
  %(prog)s --release 4.17 --short
"""
    )

    parser.add_argument(
        '--release',
        type=str,
        required=True,
        help='Release version (e.g., "4.17", "4.16")'
    )

    parser.add_argument(
        '--components',
        type=str,
        nargs='+',
        default=None,
        help='Filter by component names (space-separated list, case-insensitive)'
    )

    parser.add_argument(
        '--start',
        type=str,
        default=None,
        help='Start date for filtering (YYYY-MM-DD format, e.g., "2022-03-10"). Filters out regressions closed before this date.'
    )

    parser.add_argument(
        '--end',
        type=str,
        default=None,
        help='End date for filtering (YYYY-MM-DD format, e.g., "2022-08-10"). Filters out regressions opened after this date.'
    )

    parser.add_argument(
        '--short',
        action='store_true',
        help='Short output mode: exclude regression data, only include summaries'
    )

    args = parser.parse_args()

    try:
        # Fetch regressions
        regressions = fetch_regressions(args.release)

        # Filter by components (always called to remove empty component names)
        if isinstance(regressions, list):
            regressions = filter_by_components(regressions, args.components)

        # Simplify time field structures (closed, last_failure)
        if isinstance(regressions, list):
            regressions = simplify_time_fields(regressions)

        # Filter by date range (to focus on development window)
        if isinstance(regressions, list):
            regressions = filter_by_date_range(regressions, args.start, args.end)

        # Remove unnecessary fields to reduce response size
        if isinstance(regressions, list):
            regressions = remove_unnecessary_fields(regressions)

        # Filter out suspected infrastructure regressions
        filtered_infra_count = 0
        if isinstance(regressions, list):
            regressions, filtered_infra_count = exclude_suspected_infra_regressions(regressions)
            print(f"Filtered out {filtered_infra_count} suspected infrastructure regressions",
                  file=sys.stderr)

        # Group regressions by component
        if isinstance(regressions, list):
            components = group_by_component(regressions)
        else:
            components = {}

        # Add summaries to each component
        if isinstance(components, dict):
            components = add_component_summaries(components)

        # Calculate overall summary statistics from all regressions
        all_regressions = []
        for comp_data in components.values():
            all_regressions.extend(comp_data["open"])
            all_regressions.extend(comp_data["closed"])

        overall_summary = calculate_summary(all_regressions, filtered_infra_count)

        # Construct output with summary and components
        # If --short flag is specified, remove regression data from components
        if args.short:
            # Create a copy of components with only summaries
            components_short = {}
            for component_name, component_data in components.items():
                components_short[component_name] = {
                    "summary": component_data["summary"]
                }
            output_data = {
                "summary": overall_summary,
                "components": components_short
            }
        else:
            output_data = {
                "summary": overall_summary,
                "components": components
            }

        # Format and print output
        output = format_output(output_data)
        print(output)

        return 0

    except Exception as e:
        print(f"Failed to fetch regressions: {e}", file=sys.stderr)
        return 1


if __name__ == '__main__':
    sys.exit(main())
440
skills/summarize-jiras/SKILL.md
Normal file
@@ -0,0 +1,440 @@
---
name: Summarize JIRAs
description: Query and summarize JIRA bugs for a specific project with counts by component
---

# Summarize JIRAs

This skill provides functionality to query JIRA bugs for a specified project and generate summary statistics. It leverages the `list-jiras` skill to fetch raw JIRA data, then calculates counts by status, priority, and component to provide insights into the bug backlog.

## When to Use This Skill

Use this skill when you need to:

- Get a count of open bugs in a JIRA project
- Analyze bug distribution by status, priority, or component
- Generate summary reports for bug backlog
- Track bug trends and velocity over time (opened vs closed in last 30 days)
- Compare bug counts across different components
- Monitor component health based on bug metrics

## Prerequisites

1. **Python 3 Installation**
   - Check if installed: `which python3`
   - Python 3.6 or later is required
   - Comes pre-installed on most systems

2. **JIRA Authentication**
   - Requires environment variables to be set:
     - `JIRA_URL`: Base URL for JIRA instance (e.g., "https://issues.redhat.com")
     - `JIRA_PERSONAL_TOKEN`: Your JIRA bearer token or personal access token
   - How to get a JIRA token:
     - Navigate to JIRA → Profile → Personal Access Tokens
     - Generate a new token with appropriate permissions
     - Export it as an environment variable

3. **Network Access**
   - The script requires network access to reach your JIRA instance
   - Ensure you can make HTTPS requests to the JIRA URL

## Implementation Steps

### Step 1: Verify Prerequisites

First, ensure Python 3 is available:

```bash
python3 --version
```

If Python 3 is not installed, guide the user through installation for their platform.

### Step 2: Verify Environment Variables

Check that required environment variables are set:

```bash
# Verify JIRA credentials are configured
echo "JIRA_URL: ${JIRA_URL}"
echo "JIRA_PERSONAL_TOKEN: ${JIRA_PERSONAL_TOKEN:+***set***}"
```

If any are missing, guide the user to set them:

```bash
export JIRA_URL="https://issues.redhat.com"
export JIRA_PERSONAL_TOKEN="your-token-here"
```

### Step 3: Locate the Script

The script is located at:

```
plugins/component-health/skills/summarize-jiras/summarize_jiras.py
```

### Step 4: Run the Script

Execute the script with appropriate arguments:

```bash
# Basic usage - summarize all open bugs in a project
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
  --project OCPBUGS

# Filter by component
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
  --project OCPBUGS \
  --component "kube-apiserver"

# Filter by multiple components
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
  --project OCPBUGS \
  --component "kube-apiserver" "Management Console"

# Include closed bugs
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
  --project OCPBUGS \
  --include-closed

# Filter by status
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
  --project OCPBUGS \
  --status New "In Progress"

# Set maximum results limit (default 100)
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
  --project OCPBUGS \
  --limit 500
```

### Step 5: Process the Output

The script outputs JSON data with the following structure:

```json
{
  "project": "OCPBUGS",
  "total_count": 1500,
  "fetched_count": 100,
  "query": "project = OCPBUGS AND (status != Closed OR (status = Closed AND resolved >= \"2025-10-11\"))",
  "filters": {
    "components": null,
    "statuses": null,
    "include_closed": false,
    "limit": 100
  },
  "summary": {
    "total": 100,
    "opened_last_30_days": 15,
    "closed_last_30_days": 8,
    "by_status": {
      "New": 35,
      "In Progress": 25,
      "Verified": 20,
      "Modified": 15,
      "ON_QA": 5,
      "Closed": 8
    },
    "by_priority": {
      "Normal": 50,
      "Major": 30,
      "Minor": 12,
      "Critical": 5,
      "Undefined": 3
    },
    "by_component": {
      "kube-apiserver": 25,
      "Management Console": 30,
      "Networking": 20,
      "etcd": 15,
      "No Component": 10
    }
  },
  "components": {
    "kube-apiserver": {
      "total": 25,
      "opened_last_30_days": 4,
      "closed_last_30_days": 2,
      "by_status": {
        "New": 10,
        "In Progress": 8,
        "Verified": 5,
        "Modified": 2,
        "Closed": 2
      },
      "by_priority": {
        "Major": 12,
        "Normal": 10,
        "Minor": 2,
        "Critical": 1
      }
    },
    "Management Console": {
      "total": 30,
      "opened_last_30_days": 6,
      "closed_last_30_days": 3,
      "by_status": {
        "New": 12,
        "In Progress": 10,
        "Verified": 6,
        "Modified": 2,
        "Closed": 3
      },
      "by_priority": {
        "Normal": 18,
        "Major": 8,
        "Minor": 3,
        "Critical": 1
      }
    },
    "etcd": {
      "total": 15,
      "opened_last_30_days": 3,
      "closed_last_30_days": 2,
      "by_status": {
        "New": 8,
        "In Progress": 4,
        "Verified": 3,
        "Closed": 2
      },
      "by_priority": {
        "Normal": 10,
        "Major": 4,
        "Critical": 1
      }
    }
  },
  "note": "Showing first 100 of 1500 total results. Increase --limit for more accurate statistics."
}
```

**Field Descriptions**:

- `project`: The JIRA project queried
- `total_count`: Total number of matching issues (from JIRA search results)
- `fetched_count`: Number of issues actually fetched (limited by the `--limit` parameter)
- `query`: The JQL query executed (includes filter for recently closed bugs)
- `filters`: Applied filters (components, statuses, include_closed, limit)
- `summary`: Overall statistics across all fetched issues
  - `total`: Count of fetched issues (same as `fetched_count`)
  - `opened_last_30_days`: Number of issues created in the last 30 days
  - `closed_last_30_days`: Number of issues closed/resolved in the last 30 days
  - `by_status`: Count of issues per status (includes recently closed issues)
  - `by_priority`: Count of issues per priority
  - `by_component`: Count of issues per component (note: issues can have multiple components)
- `components`: Per-component breakdown with individual summaries
  - Each component key maps to:
    - `total`: Number of issues assigned to this component
    - `opened_last_30_days`: Number of issues created in the last 30 days for this component
    - `closed_last_30_days`: Number of issues closed in the last 30 days for this component
    - `by_status`: Status distribution for this component
    - `by_priority`: Priority distribution for this component
- `note`: Informational message if results are truncated

**Important Notes**:

- **By default, the query includes**: Open bugs + bugs closed in the last 30 days
  - This allows tracking of recent closure activity alongside current open bugs
- The script fetches a maximum number of issues (default 100, configurable with `--limit`)
- The `total_count` represents all matching issues in JIRA
- Summary statistics are based on the fetched issues only
- For accurate statistics across large datasets, increase the `--limit` parameter
- Issues can have multiple components, so component totals may sum to more than the overall total
- `opened_last_30_days` and `closed_last_30_days` help track recent bug flow and velocity

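As a sketch of how Step 5 post-processing might look (the sample data below is illustrative, mirroring the documented structure, not real query results), the JSON can be loaded and turned into per-component bug-flow metrics:

```python
import json

# Illustrative script output matching the documented structure (not real JIRA data)
raw = '''
{
  "summary": {"total": 100, "opened_last_30_days": 15, "closed_last_30_days": 8},
  "components": {
    "kube-apiserver": {"total": 25, "opened_last_30_days": 4, "closed_last_30_days": 2},
    "etcd": {"total": 15, "opened_last_30_days": 3, "closed_last_30_days": 2}
  }
}
'''

data = json.loads(raw)

# Net bug flow per component over the last 30 days: positive means the backlog is growing
net_flow = {
    name: comp["opened_last_30_days"] - comp["closed_last_30_days"]
    for name, comp in data["components"].items()
}
print(net_flow)  # {'kube-apiserver': 2, 'etcd': 1}
```

The same pattern extends to any of the documented fields, e.g. ranking components by `total` or by Critical-priority counts.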
### Step 6: Present Results

Based on the summary data:

1. Present total bug counts
2. Highlight distribution by status (e.g., how many in "New" vs "In Progress")
3. Identify priority breakdown (Critical, Major, Normal, etc.)
4. Show component distribution
5. Display per-component breakdowns with status and priority counts
6. Calculate actionable metrics (e.g., New + Assigned = bugs needing triage/work)
7. Highlight recent activity (opened/closed in last 30 days) per component

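The actionable-metric calculation in item 6 can be sketched as a small sum over the `by_status` map. The status names and counts below are illustrative; which statuses count as "actionable" is an assumption to adjust for your workflow:

```python
# Statuses assumed to represent work still needing triage or active development
# (illustrative; adjust to match your project's workflow)
ACTIONABLE_STATUSES = {"New", "Assigned", "In Progress"}

# Sample by_status map as produced by the script's summary output
by_status = {"New": 35, "In Progress": 25, "Verified": 20,
             "Modified": 15, "ON_QA": 5, "Closed": 8}

# Total bugs needing triage or active work
actionable = sum(count for status, count in by_status.items()
                 if status in ACTIONABLE_STATUSES)
print(actionable)  # 60
```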
## Error Handling

### Common Errors

1. **Authentication Errors**
   - **Symptom**: HTTP 401 Unauthorized
   - **Solution**: Verify JIRA_URL and JIRA_PERSONAL_TOKEN are correct
   - **Check**: Ensure the token has not expired

2. **Network Errors**
   - **Symptom**: `URLError` or connection timeout
   - **Solution**: Check network connectivity and that JIRA_URL is accessible
   - **Retry**: The script has a 30-second timeout; consider retrying

3. **Invalid Project**
   - **Symptom**: HTTP 400 or empty results
   - **Solution**: Verify the project key is correct (e.g., "OCPBUGS", not "ocpbugs")

4. **Missing Environment Variables**
   - **Symptom**: Error message about missing credentials
   - **Solution**: Set the required environment variables (JIRA_URL, JIRA_PERSONAL_TOKEN)

5. **Rate Limiting**
   - **Symptom**: HTTP 429 Too Many Requests
   - **Solution**: Wait before retrying; reduce query frequency

### Debugging

Enable verbose output by examining stderr:

```bash
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
  --project OCPBUGS 2>&1 | tee debug.log
```

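For transient failures such as rate limiting or network timeouts, a caller could wrap the script in a retry loop with exponential backoff. This is a hedged sketch, not part of the skill itself (the script does not implement retries internally):

```python
import subprocess
import time

def run_with_backoff(cmd, attempts=3, base_delay=2.0):
    """Run a command, retrying with exponential backoff on failure
    (e.g., transient network errors or HTTP 429 responses from JIRA)."""
    for attempt in range(attempts):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        if attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # waits 2s, 4s, 8s, ...
    raise RuntimeError(f"Command failed after {attempts} attempts: {result.stderr}")

# Hypothetical usage (paths/arguments as documented above):
# output = run_with_backoff([
#     "python3", "plugins/component-health/skills/summarize-jiras/summarize_jiras.py",
#     "--project", "OCPBUGS",
# ])
```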
## Script Arguments

### Required Arguments

- `--project`: JIRA project key to query
  - Format: Project key (e.g., "OCPBUGS", "OCPSTRAT")
  - Must be a valid JIRA project

### Optional Arguments

- `--component`: Filter by component names
  - Values: Space-separated list of component names
  - Default: None (returns all components)
  - Case-sensitive matching
  - Example: `--component "kube-apiserver" "Management Console"`

- `--status`: Filter by status values
  - Values: Space-separated list of status names
  - Default: None (returns all statuses except Closed)
  - Example: `--status New "In Progress" Verified`

- `--include-closed`: Include closed bugs in the results
  - Default: false (only open bugs)
  - When specified, includes bugs in "Closed" status

- `--limit`: Maximum number of issues to fetch
  - Default: 100
  - Maximum: 1000 (JIRA API limit per request)
  - Higher values provide more accurate statistics but slower performance

## Output Format

The script outputs JSON with summary statistics and per-component breakdowns:

```json
{
  "project": "OCPBUGS",
  "total_count": 5430,
  "fetched_count": 100,
  "query": "project = OCPBUGS AND (status != Closed OR (status = Closed AND resolved >= \"2025-10-11\"))",
  "filters": {
    "components": null,
    "statuses": null,
    "include_closed": false,
    "limit": 100
  },
  "summary": {
    "total": 100,
    "opened_last_30_days": 15,
    "closed_last_30_days": 8,
    "by_status": {
      "New": 1250,
      "In Progress": 800,
      "Verified": 650
    },
    "by_priority": {
      "Critical": 50,
      "Major": 450,
      "Normal": 2100
    },
    "by_component": {
      "kube-apiserver": 146,
      "Management Console": 392
    }
  },
  "components": {
    "kube-apiserver": {
      "total": 146,
      "opened_last_30_days": 20,
      "closed_last_30_days": 12,
      "by_status": {...},
      "by_priority": {...}
    }
  },
  "note": "Showing first 100 of 5430 total results. Increase --limit for more accurate statistics."
}
```

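The per-component `opened_last_30_days` / `closed_last_30_days` counters make simple trend checks easy. A minimal sketch (the inline JSON is illustrative, not real query output) that flags components whose backlog grew over the last 30 days:

```python
import json

# Illustrative stand-in for `components` as emitted by summarize_jiras.py;
# in practice you would json.load() the script's stdout.
data = json.loads("""
{
  "components": {
    "kube-apiserver": {"total": 146, "opened_last_30_days": 20, "closed_last_30_days": 12},
    "etcd": {"total": 40, "opened_last_30_days": 3, "closed_last_30_days": 7}
  }
}
""")

# A component is "growing" when more bugs were opened than closed in 30 days.
growing = {
    name: comp["opened_last_30_days"] - comp["closed_last_30_days"]
    for name, comp in data["components"].items()
    if comp["opened_last_30_days"] > comp["closed_last_30_days"]
}
print(growing)  # {'kube-apiserver': 8}
```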
## Examples

### Example 1: Summarize All Open Bugs

```bash
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
    --project OCPBUGS
```

**Expected Output**: JSON containing summary statistics of all open bugs in the OCPBUGS project

### Example 2: Filter by Component

```bash
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
    --project OCPBUGS \
    --component "kube-apiserver"
```

**Expected Output**: JSON containing a summary for the kube-apiserver component only

### Example 3: Include Closed Bugs

```bash
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
    --project OCPBUGS \
    --include-closed \
    --limit 500
```

**Expected Output**: JSON containing a summary of both open and closed bugs (up to 500 issues)

### Example 4: Filter by Multiple Components

```bash
python3 plugins/component-health/skills/summarize-jiras/summarize_jiras.py \
    --project OCPBUGS \
    --component "kube-apiserver" "etcd" "Networking"
```

**Expected Output**: JSON containing summaries for the specified components

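Each component summary also carries an `open_bugs_by_age` breakdown (buckets `0-30d`, `30-90d`, `90-180d`, `>180d`, as built in `generate_summary`). A small sketch, using made-up counts, that turns the buckets into a staleness percentage:

```python
import json

# Illustrative per-component age data in the shape summarize_jiras.py emits.
component = json.loads("""
{"open_bugs_by_age": {"0-30d": 10, "30-90d": 15, "90-180d": 5, ">180d": 20}}
""")

ages = component["open_bugs_by_age"]
total_open = sum(ages.values())
# Share of the open backlog that is older than 180 days.
stale_pct = 100 * ages[">180d"] / total_open if total_open else 0.0
print(f"{stale_pct:.0f}% of open bugs are older than 180 days")  # 40%
```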
## Integration with Commands

This skill is designed to:

- Provide summary statistics for JIRA bug analysis
- Be used by component health analysis workflows
- Generate reports for bug triage and planning
- Track component health metrics over time
- Leverage the `list-jiras` skill for raw data fetching

## Related Skills

- `list-jiras`: Fetch raw JIRA issue data
- `list-regressions`: Fetch regression data for releases
- `analyze-regressions`: Grade component health based on regressions
- `get-release-dates`: Fetch OpenShift release dates

## Notes

- The script uses Python's standard library only (no external dependencies)
- Output is always JSON format for easy parsing
- Diagnostic messages are written to stderr, data to stdout
- The script internally calls `list_jiras.py` to fetch raw data
- The script has a 30-second timeout for HTTP requests (inherited from list_jiras.py)
- For large projects, consider using component filters to reduce query size
- Summary statistics are based on fetched issues (controlled by --limit), not total matching issues
- For raw JIRA data without summarization, use `/component-health:list-jiras` instead

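Because statistics are computed from the fetched subset rather than every matching issue, it can help to check coverage before trusting the numbers. A sketch, with illustrative counts, comparing `fetched_count` against `total_count`:

```python
# Illustrative top-level fields from a summarize_jiras.py run.
result = {"total_count": 5430, "fetched_count": 100}

# Fraction of matching issues that actually informed the statistics.
coverage = result["fetched_count"] / result["total_count"]
if coverage < 1.0:
    print(f"Statistics cover {coverage:.1%} of matching issues; "
          "raise --limit or add --component filters for better accuracy.")
```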
362
skills/summarize-jiras/summarize_jiras.py
Executable file
@@ -0,0 +1,362 @@
#!/usr/bin/env python3
"""
JIRA Bug Summarization Script

This script queries JIRA bugs for a specified project and generates summary statistics.
It leverages the list_jiras.py script to fetch raw data, then calculates counts by
status, priority, and component.

Environment Variables:
    JIRA_URL: Base URL for JIRA instance (e.g., "https://issues.redhat.com")
    JIRA_PERSONAL_TOKEN: Your JIRA API bearer token or personal access token

Usage:
    python3 summarize_jiras.py --project OCPBUGS
    python3 summarize_jiras.py --project OCPBUGS --component "kube-apiserver"
    python3 summarize_jiras.py --project OCPBUGS --status New "In Progress"
    python3 summarize_jiras.py --project OCPBUGS --include-closed --limit 500
"""

import argparse
import json
import os
import subprocess
import sys
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Any, Dict, List


def call_list_jiras(project: str, components: List[str] = None,
                    statuses: List[str] = None,
                    include_closed: bool = False,
                    limit: int = 100) -> Dict[str, Any]:
    """
    Call the list_jiras.py script to fetch raw JIRA data.

    Args:
        project: JIRA project key
        components: Optional list of component names to filter by
        statuses: Optional list of status values to filter by
        include_closed: Whether to include closed bugs
        limit: Maximum number of issues to fetch

    Returns:
        Dictionary containing raw JIRA data from list_jiras.py
    """
    # Build command to call list_jiras.py (sibling skill directory)
    script_path = os.path.join(
        os.path.dirname(os.path.dirname(__file__)),
        'list-jiras',
        'list_jiras.py'
    )

    cmd = ['python3', script_path, '--project', project, '--limit', str(limit)]

    if components:
        cmd.append('--component')
        cmd.extend(components)

    if statuses:
        cmd.append('--status')
        cmd.extend(statuses)

    if include_closed:
        cmd.append('--include-closed')

    print("Calling list_jiras.py to fetch raw data...", file=sys.stderr)

    try:
        result = subprocess.run(
            cmd,
            capture_output=True,
            text=True,
            check=True,
            timeout=300  # 5 minutes to allow for multiple component queries
        )
        # Pass through stderr to show progress messages from list_jiras.py
        if result.stderr:
            print(result.stderr, file=sys.stderr, end='')
        return json.loads(result.stdout)
    except subprocess.CalledProcessError as e:
        print(f"Error calling list_jiras.py: {e}", file=sys.stderr)
        if e.stderr:
            print(f"Error output: {e.stderr}", file=sys.stderr)
        sys.exit(1)
    except subprocess.TimeoutExpired:
        print("Timeout calling list_jiras.py (exceeded 5 minutes)", file=sys.stderr)
        sys.exit(1)
    except json.JSONDecodeError as e:
        print(f"Error parsing JSON from list_jiras.py: {e}", file=sys.stderr)
        sys.exit(1)


def generate_summary(issues: List[Dict[str, Any]]) -> Dict[str, Any]:
    """
    Generate summary statistics from issues.

    Args:
        issues: List of JIRA issue objects

    Returns:
        Dictionary containing overall summary and per-component summaries
    """
    # Cutoff for the "last 30 days" counters; age buckets are computed
    # directly from each open issue's age in days.
    now = datetime.now()
    thirty_days_ago = now - timedelta(days=30)

    # Overall summary
    overall_summary = {
        'total': 0,
        'opened_last_30_days': 0,
        'closed_last_30_days': 0,
        'by_status': defaultdict(int),
        'by_priority': defaultdict(int),
        'by_component': defaultdict(int),
        'open_bugs_by_age': {
            '0-30d': 0,
            '30-90d': 0,
            '90-180d': 0,
            '>180d': 0
        }
    }

    # Per-component data
    components_data = defaultdict(lambda: {
        'total': 0,
        'opened_last_30_days': 0,
        'closed_last_30_days': 0,
        'by_status': defaultdict(int),
        'by_priority': defaultdict(int),
        'open_bugs_by_age': {
            '0-30d': 0,
            '30-90d': 0,
            '90-180d': 0,
            '>180d': 0
        }
    })

    for issue in issues:
        fields = issue.get('fields', {})
        overall_summary['total'] += 1

        # Parse created date
        created_str = fields.get('created')
        is_recently_opened = False
        if created_str:
            try:
                # JIRA date format: 2024-01-15T10:30:00.000+0000
                created_date = datetime.strptime(created_str[:19], '%Y-%m-%dT%H:%M:%S')
                if created_date >= thirty_days_ago:
                    overall_summary['opened_last_30_days'] += 1
                    is_recently_opened = True
            except (ValueError, TypeError):
                pass

        # Parse resolution date (when the issue was closed)
        resolution_date_str = fields.get('resolutiondate')
        is_recently_closed = False
        if resolution_date_str:
            try:
                resolution_date = datetime.strptime(resolution_date_str[:19], '%Y-%m-%dT%H:%M:%S')
                if resolution_date >= thirty_days_ago:
                    overall_summary['closed_last_30_days'] += 1
                    is_recently_closed = True
            except (ValueError, TypeError):
                pass

        # Count by status
        status = fields.get('status', {}).get('name', 'Unknown')
        overall_summary['by_status'][status] += 1

        # Count by priority
        priority = fields.get('priority')
        priority_name = priority.get('name', 'Undefined') if priority else 'Undefined'
        overall_summary['by_priority'][priority_name] += 1

        # Calculate age bucket for open bugs
        is_open = status != 'Closed'
        age_bucket = None
        if is_open and created_str:
            try:
                created_date = datetime.strptime(created_str[:19], '%Y-%m-%dT%H:%M:%S')
                age_days = (now - created_date).days

                if age_days <= 30:
                    age_bucket = '0-30d'
                elif age_days <= 90:
                    age_bucket = '30-90d'
                elif age_days <= 180:
                    age_bucket = '90-180d'
                else:
                    age_bucket = '>180d'

                overall_summary['open_bugs_by_age'][age_bucket] += 1
            except (ValueError, TypeError):
                pass

        # Process components (issues can have multiple components)
        components = fields.get('components', [])
        component_names = []

        if components:
            for component in components:
                component_name = component.get('name', 'Unknown')
                component_names.append(component_name)
                overall_summary['by_component'][component_name] += 1
        else:
            component_names = ['No Component']
            overall_summary['by_component']['No Component'] += 1

        # Update per-component statistics
        for component_name in component_names:
            components_data[component_name]['total'] += 1
            components_data[component_name]['by_status'][status] += 1
            components_data[component_name]['by_priority'][priority_name] += 1
            if is_recently_opened:
                components_data[component_name]['opened_last_30_days'] += 1
            if is_recently_closed:
                components_data[component_name]['closed_last_30_days'] += 1
            if age_bucket:
                components_data[component_name]['open_bugs_by_age'][age_bucket] += 1

    # Convert defaultdicts to regular dicts, sorted by count (descending)
    overall_summary['by_status'] = dict(sorted(
        overall_summary['by_status'].items(),
        key=lambda x: x[1], reverse=True
    ))
    overall_summary['by_priority'] = dict(sorted(
        overall_summary['by_priority'].items(),
        key=lambda x: x[1], reverse=True
    ))
    overall_summary['by_component'] = dict(sorted(
        overall_summary['by_component'].items(),
        key=lambda x: x[1], reverse=True
    ))

    # Convert per-component data to regular dicts, sorted by component name
    components = {}
    for comp_name, comp_data in sorted(components_data.items()):
        components[comp_name] = {
            'total': comp_data['total'],
            'opened_last_30_days': comp_data['opened_last_30_days'],
            'closed_last_30_days': comp_data['closed_last_30_days'],
            'by_status': dict(sorted(
                comp_data['by_status'].items(),
                key=lambda x: x[1], reverse=True
            )),
            'by_priority': dict(sorted(
                comp_data['by_priority'].items(),
                key=lambda x: x[1], reverse=True
            )),
            'open_bugs_by_age': comp_data['open_bugs_by_age']
        }

    return {
        'summary': overall_summary,
        'components': components
    }


def main():
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description='Query JIRA bugs and generate summary statistics',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  %(prog)s --project OCPBUGS
  %(prog)s --project OCPBUGS --component "kube-apiserver"
  %(prog)s --project OCPBUGS --component "kube-apiserver" "etcd"
  %(prog)s --project OCPBUGS --status New "In Progress"
  %(prog)s --project OCPBUGS --include-closed --limit 500
"""
    )

    parser.add_argument(
        '--project',
        required=True,
        help='JIRA project key (e.g., OCPBUGS, OCPSTRAT)'
    )

    parser.add_argument(
        '--component',
        nargs='+',
        help='Filter by component names (space-separated)'
    )

    parser.add_argument(
        '--status',
        nargs='+',
        help='Filter by status values (space-separated)'
    )

    parser.add_argument(
        '--include-closed',
        action='store_true',
        help='Include closed bugs in results (default: only open bugs)'
    )

    parser.add_argument(
        '--limit',
        type=int,
        default=1000,
        help='Maximum number of issues to fetch per component (default: 1000, max: 1000)'
    )

    args = parser.parse_args()

    # Validate limit
    if args.limit < 1 or args.limit > 1000:
        print("Error: --limit must be between 1 and 1000", file=sys.stderr)
        sys.exit(1)

    # Fetch raw JIRA data using list_jiras.py
    print(f"Fetching JIRA data for project {args.project}...", file=sys.stderr)
    raw_data = call_list_jiras(
        project=args.project,
        components=args.component,
        statuses=args.status,
        include_closed=args.include_closed,
        limit=args.limit
    )

    # Extract issues from raw data
    issues = raw_data.get('issues', [])
    print(f"Generating summary statistics from {len(issues)} issues...", file=sys.stderr)

    # Generate summary statistics
    summary_data = generate_summary(issues)

    # Build output with metadata and summaries
    output = {
        'project': raw_data.get('project'),
        'total_count': raw_data.get('total_count'),
        'fetched_count': raw_data.get('fetched_count'),
        'query': raw_data.get('query'),
        'filters': raw_data.get('filters'),
        'summary': summary_data['summary'],
        'components': summary_data['components']
    }

    # Add note if present in raw data
    if 'note' in raw_data:
        output['note'] = raw_data['note']

    # Output JSON to stdout
    print(json.dumps(output, indent=2))


if __name__ == '__main__':
    main()