---
name: osdu
description: "GitLab CI/CD test job reliability analysis for OSDU projects. Tracks test job (unit/integration/acceptance) pass/fail status across pipeline runs. Use for test job status, flaky test job detection, test reliability/quality metrics, and cloud provider analytics. Wraps the osdu-quality CLI."
version: 2.0.0
brief_description: "OSDU GitLab CI/CD test reliability analysis"
triggers:
  keywords: [osdu, gitlab, quality, ci, cd, pipeline, test, job, reliability, flaky, acceptance, integration, unit, azure, aws, gcp, cloud, provider]
  verbs: [analyze, track, monitor, test, check]
  patterns:
    - "test.*(?:reliability|status|job)"
    - "pipeline.*(?:analysis|status)"
    - "flaky.*test"
    - "ci.*cd"
    - "gitlab.*(?:pipeline|job)"
allowed-tools: Bash
---

Analyze GitLab CI/CD test job reliability for OSDU platform projects, tracking test job pass/fail status across pipeline runs to identify flaky tests and provide quality metrics.

## When to Use

- OSDU project test status queries ("how is {project} looking", "partition test quality")
- Flaky test detection ("are there flaky tests in {project}")
- Pipeline health monitoring ("recent pipeline failures")
- Cloud provider comparison ("azure vs aws test reliability")
- Stage-specific analysis ("unit test status", "integration test failures")

## When NOT to Use

- Individual test case tracking (we track job-level, not test-level)
- Non-test jobs (build, deploy, lint, security scans)
- Non-OSDU projects or non-GitLab CI systems
- Real-time monitoring (data is from completed pipelines only)

## Data Model

Pipeline Run → Test Stage (unit/integration/acceptance) → Test Job → Test Suite (many tests)

Tracked:
- Test job pass/fail status across multiple pipeline runs
- Flaky test job detection (jobs that intermittently fail)
- Stage-level metrics (unit/integration/acceptance)
- Cloud provider breakdown (azure, aws, gcp, ibm, cimpl)

Not tracked:
- Individual test results
- Non-test jobs such as build, deploy, and lint

## Flakiness Example

- Pipeline #1: job "unit-tests-azure" → PASS (100/100 tests passed)
- Pipeline #2: job "unit-tests-azure" → FAIL (99/100 tests passed)
- Pipeline #3: job "unit-tests-azure" → PASS (100/100 tests passed)

Result: this job is FLAKY (unreliable across runs).

## Query Strategy

Use status.py for a quick overview (lightweight, fast, safe token usage):

```shell
script_run osdu status.py --format json --pipelines 3 --project {name}
```

Use analyze.py only with strict filters:

```shell
script_run osdu analyze.py --format json --pipelines 5 --project {name} --stage unit
```

analyze.py is a heavy query; use it only when status.py is insufficient. ALWAYS specify `--project` to avoid a 30-project scan and the token-limit-exceeded errors it causes.

## Output Format Selection

Use `--format json` when:
- Extracting specific metrics or calculating statistics
- Building summaries or comparisons
- Parsing structured data programmatically
- Running status.py (ALWAYS — it is lightweight and parseable)

Use `--format markdown` when:
- Running analyze.py queries (10x smaller than JSON, still readable)
- Creating reports for sharing
- You need human-readable tables without parsing
- Token budget is tight

Use `--format terminal` only for direct human terminal viewing — it includes ANSI codes and colors and is hard to parse.

## Prerequisites

- osdu-quality CLI installed:
  ```shell
  uv tool install git+https://community.opengroup.org/danielscholl/osdu-quality.git
  ```
- GitLab authentication (choose one):
  - GITLAB_TOKEN environment variable, OR
  - glab CLI authenticated (`glab auth login`)
- Access to OSDU GitLab projects

## Examples

Best approach — start with status.py, then deep dive only if issues appear:

```shell
# Check status
script_run osdu status.py --format json --pipelines 5 --project partition
# If issues found, deep dive with analyze.py
script_run osdu analyze.py --format markdown --pipelines 5 --project partition --stage unit
```

Compare Azure vs AWS for a specific project/stage:

```shell
script_run osdu analyze.py --format markdown --pipelines 5 --project storage --stage integration --provider azure
script_run osdu analyze.py --format markdown --pipelines 5 --project storage --stage integration --provider aws
```

Focus on unit tests only:

```shell
script_run osdu analyze.py --format markdown --pipelines 5 --project entitlements --stage unit
```

## Anti-patterns (avoid)

```shell
script_run osdu analyze.py --format json --pipelines 10                      # No --project: will exceed the 200K token limit!
script_run osdu analyze.py --format json --pipelines 20                      # Takes 3+ minutes, huge output
script_run osdu status.py --format terminal --project partition              # ANSI codes, hard to parse
script_run osdu analyze.py --format json --pipelines 10 --project partition  # Heavy query when status.py would suffice
```

## Best Practices

- Always start with status.py; only use analyze.py if needed
- Always specify `--format json` or `--format markdown`
- Always include `--project {name}` to avoid scanning all 30 projects
- Start with `--pipelines` 3-5; increase only if necessary
- Use `--stage` or `--provider` to narrow scope
- Prefer `--format markdown` for analyze.py (10x token savings)

Format summary:
- json: structured parsing, metrics extraction
- markdown: reports, sharing, analyze.py queries
- terminal: human viewing in terminal with colors

## Error Handling

- CLI not installed: clear message with installation command
- Not authenticated: message about authentication requirements
- API failure: API error details
- Invalid filter value: list of valid options
- Output too large: suggestion to reduce `--pipelines` or add filters

## Rules

- Always follow the progressive query approach
- Never query without a `--project` filter
- Start with minimal pipeline counts
- Use markdown format for analyze.py to save tokens
- Apply stage or provider filters when possible
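The flakiness rule above — a job is flaky when it both passes and fails across recent pipeline runs — can be sketched in Python. This is a minimal illustration, not the osdu-quality implementation: the tuple structure and job names here are hypothetical, and the real CLI derives this from GitLab pipeline data.

```python
from collections import defaultdict

def flaky_jobs(runs):
    """Given (pipeline_id, job_name, status) tuples, return the names of
    jobs that both passed and failed across the runs (i.e. flaky jobs)."""
    seen = defaultdict(set)
    for _pipeline, job, status in runs:
        seen[job].add(status)
    return sorted(job for job, statuses in seen.items()
                  if {"PASS", "FAIL"} <= statuses)

# Hypothetical history matching the flakiness example above:
runs = [
    (1, "unit-tests-azure", "PASS"),
    (2, "unit-tests-azure", "FAIL"),
    (3, "unit-tests-azure", "PASS"),
    (1, "unit-tests-aws", "PASS"),
    (2, "unit-tests-aws", "PASS"),
]
print(flaky_jobs(runs))  # ['unit-tests-azure']
```

A consistently failing job is deliberately excluded: it is broken, not flaky, and the two call for different fixes.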