Initial commit
14 .claude-plugin/plugin.json Normal file
@@ -0,0 +1,14 @@
{
  "name": "ci",
  "description": "Miscellaneous tools for working with OpenShift CI",
  "version": "0.0.1",
  "author": {
    "name": "openshift"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}
3 README.md Normal file
@@ -0,0 +1,3 @@
# ci

Miscellaneous tools for working with OpenShift CI
623 commands/add-debug-wait.md Normal file
@@ -0,0 +1,623 @@
---
description: Add a wait step to a CI workflow for debugging test failures
argument-hint: <workflow-or-job-name> [timeout]
---

## Name
ci:add-debug-wait

## Synopsis
```
/ci:add-debug-wait <workflow-or-job-name> [timeout]
```

## Description

The `ci:add-debug-wait` command adds a `wait` step to a CI job/workflow for debugging test failures.

**What it does:**
1. Takes a job name, OCP version, and optional timeout as input
2. Finds and edits the job config or workflow file
3. Adds `- ref: wait` before the last test step (with optional timeout configuration)
4. Commits and pushes the change
5. Gives you a GitHub link to create the PR

**That's it!** Simple, fast, and automated.

## Implementation

The command performs the following steps:

### Step 1: Gather Required Information

**Prompt user for** (in this order):

1. **Workflow/Job Name** (from command argument $1 or prompt):
   ```
   Workflow or job name: <user-input>
   Example: aws-c2s-ipi-disc-priv-fips-f7
   Example: baremetalds-two-node-arbiter-e2e-openshift-test-private-tests
   ```

2. **Timeout** (optional, from command argument $2):
   ```
   Wait timeout in hours (optional, default: 3h):
   Examples: "1h", "2h", "8h", "24h", "72h"
   Valid range: 1h to 72h
   ```
   - If not provided, uses the wait step's default behavior (3 hours)
   - Format: Integer followed by 'h' (e.g., "1h", "2h", "8h")
   - Valid range: 1h to 72h (maximum enforced by the wait step's timeout setting)
   - Will be normalized to Go duration format (e.g., "8h" → "8h0m0s")
   - This will be set as the `timeout:` property on the wait step in the workflow/job YAML

3. **OCP Version** (prompt - REQUIRED for searching job configs):
   ```
   OCP version for debugging (e.g., 4.18, 4.19, 4.20, 4.21, 4.22):
   ```
   This is used to:
   - Search the correct job config file (e.g., release-4.21)
   - Document which version needs debugging
   - Add context to the PR

4. **OpenShift Release Repo Path** (prompt if not in current directory):
   ```
   Path to openshift/release repository:
   Default: ~/repos/openshift-release
   ```
### Step 2: Validate Environment

**Silently validate** (no user prompts):

```bash
cd <repo-path>

# Check 1: Repository exists and is correct
git remote -v | grep "openshift/release" || exit 1

# Skip repo update - work with current state
# User can manually update their repo if needed
```
### Step 3: Search for Job/Test Configuration

**Priority 1: Search job configs first** (more specific and targeted):

```bash
cd <repo-path>

# Search for job config files matching the OCP version
# The job name could be in various config files, so search broadly
grep -r "as: ${job_name}" ci-operator/config/ --include="*release-${ocp_version}*.yaml" -l
```

**Example search**:
- For `aws-c2s-ipi-disc-priv-fips-f7` and OCP 4.21:
  ```bash
  grep -r "as: aws-c2s-ipi-disc-priv-fips-f7" ci-operator/config/ --include="*release-4.21*.yaml" -l
  ```

**Handle job config search results**:

- **1 file found**:
  ```
  ✅ Found job configuration:
  ${file_path}

  Type: Job configuration file

  Proceeding with job config modification...
  ```
  → Continue to **Step 4a: Analyze Job Configuration**

- **Multiple files found**:
  ```
  Found ${count} matching job config files:

  1. ci-operator/config/.../release-4.21__amd64-nightly.yaml
  2. ci-operator/config/.../release-4.21__arm64-nightly.yaml
  3. ci-operator/config/.../release-4.21__ppc64le-nightly.yaml

  Select file (1-${count}) or 'q' to quit:
  ```
  **Prompt the user to select** which file to modify, then continue to **Step 4a: Analyze Job Configuration**

- **0 files found**:
  ```
  ℹ️ No job config found for: ${job_name} (OCP ${ocp_version})

  Searching for workflow files instead...
  ```
  → Continue to **Priority 2** below
**Priority 2: Search workflow files** (if no job config is found):

```bash
cd <repo-path>

# Search for workflow files
find ci-operator/step-registry -type f -name "*${workflow_name}*workflow*.yaml"
```

**Handle workflow search results**:

- **0 files found**:
  ```
  ❌ No job config or workflow file found for: ${job_name}

  Suggestions:
  1. Check spelling of the job/workflow name
  2. Verify the OCP version (${ocp_version})
  3. Try with a partial name
  4. Search manually:
     - Job configs: grep -r "as: ${job_name}" ci-operator/config/
     - Workflows: find ci-operator/step-registry -name "*workflow*.yaml" | grep <partial-name>
  ```

- **1 file found**:
  ```
  ✅ Found workflow file:
  ${file_path}

  Type: Workflow file

  Proceeding with workflow modification...
  ```
  → Continue to **Step 4b: Analyze Workflow File**

- **Multiple files found**:
  ```
  Found ${count} matching workflow files:

  1. ci-operator/step-registry/.../workflow1.yaml
  2. ci-operator/step-registry/.../workflow2.yaml
  3. ci-operator/step-registry/.../workflow3.yaml

  Select file (1-${count}) or 'q' to quit:
  ```
  **Prompt the user to select** which file to modify, then continue to **Step 4b: Analyze Workflow File**
### Step 4a: Analyze Job Configuration

**Read and parse the job config YAML**:

```bash
# Find the specific test definition
grep -A 30 "as: ${job_name}" <job-config-file>
```

**Check for**:
1. ✅ Has `steps:` section
2. ✅ Has `test:` section inside steps
3. ❌ Does NOT already have `- ref: wait`

**Example current structure**:
```yaml
- as: aws-c2s-ipi-disc-priv-fips-f7
  cron: 36 16 3,12,19,26 * *
  steps:
    cluster_profile: aws-c2s-qe
    env:
      BASE_DOMAIN: qe.devcluster.openshift.com
      FIPS_ENABLED: "true"
    test:
    - chain: openshift-e2e-test-qe
    workflow: cucushift-installer-rehearse-aws-c2s-ipi-disconnected-private
```

**If wait already exists**:
```
ℹ️ Wait step already configured in job config

Current test section:
  test:
  - ref: wait
  - chain: openshift-e2e-test-qe

No changes needed. The job is already set up for debugging.
```

**If no test section found**:
```
ℹ️ Job config found but no test: section

This job uses only the workflow's test steps.
Searching for the workflow: ${workflow_name}
```
→ Fall back to searching for the workflow (Priority 2 in Step 3)

→ Otherwise, continue to **Step 5a: Modify Job Config File**
### Step 4b: Analyze Workflow File

**Read and parse the workflow YAML**:

```bash
cat <workflow-file>
```

**Check for**:
1. ✅ Has `workflow:` section
2. ✅ Has `test:` section
3. ❌ Does NOT already have `- ref: wait`

**Example current structure**:
```yaml
workflow:
  as: baremetalds-two-node-arbiter-upgrade
  steps:
    pre:
    - chain: baremetalds-ipi-pre
    test:
    - chain: baremetalds-ipi-test
    post:
    - chain: baremetalds-ipi-post
```

**If wait already exists**:
```
ℹ️ Wait step already configured in workflow

Current test section:
  test:
  - ref: wait
  - chain: baremetalds-ipi-test

No changes needed. The workflow is already set up for debugging.
```

**If no test section exists**:
```
ℹ️ Workflow has no test: section

This workflow is provision/deprovision only.
The test steps must be defined in the job config.

Please provide the full job name to modify the job config instead.
```
→ Exit or prompt for the job name

→ Otherwise, continue to **Step 5b: Modify Workflow File**
### Step 5a: Modify Job Config File

**Edit the job config file directly** - no confirmation needed:

```bash
# Add wait step before the last test step
# If timeout is provided, add it as a step property
# See Step 6 for the YAML modification algorithm
```

**Two scenarios**:

1. **Without custom timeout** (uses the wait step's built-in default of 3h):
   ```yaml
   test:
   - ref: wait
   - chain: openshift-e2e-test-qe
   ```
   Note: No timeout or best_effort needed - the wait step will use its default TIMEOUT env var (3 hours)

2. **With custom timeout** (user provided the timeout parameter):
   ```yaml
   test:
   - ref: wait
     timeout: 8h0m0s
     best_effort: true
   - chain: openshift-e2e-test-qe
   ```
   Note: `best_effort: true` is required when the timeout is customized, to prevent the wait step from failing the job if it times out

**Show brief confirmation**:
```
✅ Modified: ${job_name} (OCP ${ocp_version})
File: <job-config-file-path>
Added: - ref: wait${timeout:+ (timeout: ${timeout})}
```
### Step 5b: Modify Workflow File

**Edit the workflow file directly** - no confirmation needed:

```bash
# Add wait step before the last test step
# If timeout is provided, add it as a step property
# See Step 6 for the YAML modification algorithm
```

**Two scenarios**:

1. **Without custom timeout** (uses the wait step's built-in default of 3h):
   ```yaml
   test:
   - ref: wait
   - chain: baremetalds-ipi-test
   ```
   Note: No timeout or best_effort needed - the wait step will use its default TIMEOUT env var (3 hours)

2. **With custom timeout** (user provided the timeout parameter):
   ```yaml
   test:
   - ref: wait
     timeout: 8h0m0s
     best_effort: true
   - chain: baremetalds-ipi-test
   ```
   Note: `best_effort: true` is required when the timeout is customized, to prevent the wait step from failing the job if it times out

**Show brief confirmation**:
```
✅ Modified: ${workflow_name} workflow
File: <workflow-file-path>
Added: - ref: wait${timeout:+ (timeout: ${timeout})}
⚠️ Impact: Affects ALL jobs using this workflow
```
### Step 6: Create Branch and Commit

**Branch naming**:
```
debug-${workflow_name}-${ocp_version}-$(date +%Y%m%d)
```

Example: `debug-baremetalds-two-node-arbiter-4.21-20250131`

**Git operations**:
```bash
# Create branch
git checkout -b "${branch_name}"

# Modify the file (add wait step using the implementation below)
# Add '- ref: wait' before the last step in the test: section

# Stage change
git add <workflow-file>

# Commit
git commit -m "[Debug] Add wait step to ${workflow_name} for OCP ${ocp_version}

This adds a wait step to enable debugging of test failures in OCP ${ocp_version}.

The wait step pauses the workflow before tests run, allowing QE to:
- SSH into the test environment
- Inspect system state and logs
- Debug configuration issues
- Investigate test failures

OCP Version: ${ocp_version}
Workflow: ${workflow_name}"
```
**YAML Modification Algorithm**:

The modification process for both job configs and workflow files follows the same pattern:

1. **Locate the target**: Find the `test:` section
   - For job configs: within the specific job definition (`- as: ${job_name}`)
   - For workflows: at the workflow level

2. **Find test steps**: Identify all steps (lines with `- ref:` or `- chain:`)

3. **Check for duplicates**: Ensure `- ref: wait` doesn't already exist

4. **Insert wait step**: Add before the **last** test step with matching indentation

5. **Handle timeout**:
   - Without timeout: Add a simple `- ref: wait`
   - With timeout: Add as multi-line with `timeout` and `best_effort` properties

**Example transformation:**

Before:
```yaml
test:
- chain: openshift-e2e-test-qe
```

After (without timeout):
```yaml
test:
- ref: wait
- chain: openshift-e2e-test-qe
```

After (with timeout=8h):
```yaml
test:
- ref: wait
  timeout: 8h0m0s
  best_effort: true
- chain: openshift-e2e-test-qe
```

**Critical constraints:**
- Preserve exact YAML indentation (typically 2 spaces per level)
- Insert BEFORE the last step, not after
- When a timeout is set, `best_effort: true` is required to prevent job failure
- Normalize the timeout format to Go duration (e.g., "8h" → "8h0m0s")
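As a concrete illustration, the insertion can be sketched in Python as a plain-text transformation. This is only a sketch: it assumes the `- ref:`/`- chain:` items sit at or below the indentation of the `test:` key (as in the examples above) and does not handle every job-config layout (e.g., multiple jobs each with their own `test:` section).

```python
def add_wait_step(lines, timeout=None):
    """Insert '- ref: wait' before the last step of a test: section.

    lines: the YAML file as a list of strings (no trailing newlines).
    timeout: a normalized Go duration such as "8h0m0s", or None to rely
    on the wait step's default. Operates on raw text so the file's
    existing formatting is preserved.
    """
    if any(l.strip() == "- ref: wait" for l in lines):
        return lines  # duplicate check: wait step already present

    test_idx = next(i for i, l in enumerate(lines) if l.strip() == "test:")
    test_indent = len(lines[test_idx]) - len(lines[test_idx].lstrip())

    # Collect '- ref:' / '- chain:' items until the test: block ends
    step_idxs = []
    for i in range(test_idx + 1, len(lines)):
        stripped = lines[i].lstrip()
        if not stripped:
            continue
        indent = len(lines[i]) - len(stripped)
        if stripped.startswith("- ") and indent >= test_indent:
            if stripped.startswith(("- ref:", "- chain:")):
                step_idxs.append(i)
        elif indent <= test_indent:
            break  # a sibling key (workflow:, post:, ...) ends the section

    insert_at = step_idxs[-1]  # insert BEFORE the last test step
    pad = " " * (len(lines[insert_at]) - len(lines[insert_at].lstrip()))
    new = [pad + "- ref: wait"]
    if timeout:
        new += [pad + "  timeout: " + timeout, pad + "  best_effort: true"]
    return lines[:insert_at] + new + lines[insert_at:]
```

Working on raw lines rather than a YAML parser is deliberate here: round-tripping through a parser can reorder keys and change quoting, which would bloat the debug PR's diff.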
### Step 7: Push and Show GitHub Link

**Auto-push the branch**:
```bash
git push origin "${branch_name}"
```

**Display GitHub PR creation link**:
```
✅ Changes pushed successfully!

Create PR here:
https://github.com/openshift/release/compare/master...${branch_name}

Branch: ${branch_name}
Job: ${job_name}
OCP: ${ocp_version}

⚠️ Remember to close the PR after debugging (DO NOT MERGE)
```

That's it! Simple and clean.
### Error Handling

**Error: Repository Not Found**
```
❌ Error: Repository not found at ${repo_path}

Please provide the correct path to the openshift/release repository.

To clone:
git clone https://github.com/openshift/release.git
```

**Error: Not in openshift/release Repo**
```
❌ Error: This doesn't appear to be the openshift/release repository

Remote URL: ${current_remote}
Expected: github.com/openshift/release

Please navigate to the correct repository.
```

**Error: Workflow File Not Found**
```
❌ Error: Workflow file not found

Searched for: *${workflow_name}*workflow*.yaml
Location: ci-operator/step-registry/

Suggestions:
1. Verify the workflow name
2. Try a partial match
3. Search manually: find ci-operator/step-registry -name "*workflow*.yaml"
```

**Error: Wait Step Already Exists**
```
ℹ️ Wait step already configured in this workflow

No action needed - you can proceed with debugging using the existing wait step.
```

**Error: Invalid OCP Version**
```
❌ Invalid OCP version: ${version}

Valid versions: 4.18, 4.19, 4.20, 4.21, 4.22, master

Please provide a valid version.
```

**Error: Invalid Timeout Format**
```
❌ Invalid timeout format: ${timeout}

Valid format: Integer followed by 'h' (e.g., "1h", "2h", "8h", "24h", "72h")
Valid range: 1h to 72h

Examples:
- "1h" (1 hour)
- "8h" (8 hours)
- "24h" (24 hours)
- "72h" (72 hours, maximum)

Please provide a valid timeout in hours.
```

### Note: Timeout Normalization

When a user provides a timeout like "8h", the implementation should normalize it to the standard Go duration format "8h0m0s" for consistency with existing configurations in the codebase.
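A minimal sketch of the validation and normalization described above, assuming the integer-hours input format:

```python
import re


def normalize_timeout(value):
    """Validate '<N>h' (1-72 hours) and normalize to Go duration format.

    Returns e.g. "8h0m0s" for "8h"; raises ValueError on invalid input.
    """
    m = re.fullmatch(r"(\d+)h", value.strip())
    if not m or not 1 <= int(m.group(1)) <= 72:
        raise ValueError(f"Invalid timeout format: {value} (valid range: 1h to 72h)")
    return f"{int(m.group(1))}h0m0s"
```

For example, `normalize_timeout("8h")` yields `"8h0m0s"`, while `"73h"` or `"8"` raise the formatted error above.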
## Return Value

- **Success**: PR URL and debugging instructions
- **Error**: Error message with suggestions for resolution
- **Format**: Text output with emoji indicators for status
## Examples

### Example 1: Without Timeout (Default 3h)

```bash
/ci:add-debug-wait aws-ipi-f7-longduration-workload
```

Prompts for: OCP version (4.21), repo path

Result:
```yaml
test:
- ref: wait
- chain: openshift-e2e-test-qe
```

Returns: PR creation link

### Example 2: With Custom Timeout

```bash
/ci:add-debug-wait aws-ipi-f7-longduration-workload 8h
```

Prompts for: OCP version (4.21), repo path

Result:
```yaml
test:
- ref: wait
  timeout: 8h0m0s
  best_effort: true
- chain: openshift-e2e-test-qe
```

Returns: PR creation link with timeout info

### Example 3: Workflow File

```bash
/ci:add-debug-wait baremetalds-two-node-arbiter-upgrade 24h
```

Behavior: Searches the job config first, falls back to the workflow if not found. Warns that workflow changes affect ALL jobs using it.

Returns: PR creation link
## Arguments

- **$1** (workflow-or-job-name): The name of the CI workflow or job to add the wait step to (required)
- **$2** (timeout): Optional timeout in hours (1h-72h). Examples: "1h", "8h", "24h", "72h". If not provided, uses the wait step's default (3h)
## Notes

### Best Practices for QE

**Before Running the Command**:
- ✅ Confirm the test is actually failing
- ✅ Check existing debug PRs
- ✅ Know which OCP version is affected

**During Debugging**:
- 📝 Take detailed notes
- 💾 Save logs and screenshots
- 🔍 Document the root cause
- 📊 Record all findings

**After Debugging**:
- ✅ Document findings
- ✅ Close the debug PR
- ✅ Delete the branch
- ✅ Share learnings with the team
- ✅ Create a fix PR if needed

### Future Enhancements

Consider adding companion commands:
- `/ci:close-debug-pr` - Lists open debug PRs, prompts for findings, closes the PR
- `/ci:list-debug-prs` - Show all open debug PRs
- `/ci:revert-debug-pr` - Revert a debug PR that was merged by mistake
125 commands/ask-sippy.md Normal file
@@ -0,0 +1,125 @@
---
description: Ask the Sippy AI agent questions about OpenShift CI payloads, jobs, and test results
argument-hint: [question]
---

## Name
ci:ask-sippy

## Synopsis
```
/ci:ask-sippy [question]
```

## Description

The `ci:ask-sippy` command allows you to query the Sippy AI agent, which has deep knowledge about OpenShift CI infrastructure, including:
- CI payload status and rejection reasons
- Job failures and patterns
- Test results and trends
- Release quality metrics
- Historical CI data analysis

The command sends your question to the Sippy API and returns the agent's response. Complex queries may take some time to process while the agent analyzes CI data, so inform the user of this before sending the query.
## Security

**IMPORTANT SECURITY REQUIREMENTS:**

Claude is granted LIMITED and SPECIFIC access to the DPCR cluster token for the following AUTHORIZED operations ONLY:
- **READ operations**: Querying the Sippy API for CI data analysis

Claude is EXPLICITLY PROHIBITED from:
- Modifying cluster resources (deployments, pods, services, etc.)
- Deleting or altering any data
- Accessing secrets, configmaps, or sensitive data beyond Sippy API responses
- Making any cluster modifications
- Using the token for any purpose other than the specific operations listed above

**Token Usage:**
The DPCR cluster token is used solely for authentication with the Sippy API. This token grants the same permissions as the authenticated user and must be handled with appropriate care. The `curl_with_token.sh` wrapper handles all authentication automatically.
## Implementation

1. **Validate Arguments**: Check that a question was provided
2. **Notify User**: Inform the user that the query is being processed (may take time)
3. **API Request**: Send a POST request to the Sippy API using the `oc-auth` skill's curl wrapper:
   ```bash
   # Use curl_with_token.sh from the oc-auth skill - it automatically adds the OAuth token
   # DPCR cluster API: https://api.cr.j7t7.p1.openshiftapps.com:6443
   # Build the payload with jq so quotes/newlines in the question are JSON-escaped;
   # a quoted heredoc (<<'EOF') would send the literal string "$1" instead of the question
   payload=$(jq -n --arg q "$1" \
     '{message: $q, chat_history: [], show_thinking: false, persona: "default"}')

   curl_with_token.sh https://api.cr.j7t7.p1.openshiftapps.com:6443 -s -X POST "https://sippy-auth.dptools.openshift.org/api/chat" \
     -H "Content-Type: application/json" \
     -d "$payload"
   ```
4. **Return JSON**: Return the full JSON response for Claude to parse
## Return Value
- **Success**: JSON response from the Sippy API with the following structure:
  - `response`: Markdown-formatted answer from the agent (this is what should be displayed to the user)
  - `visualizations`: Optional field containing Plotly JSON for interactive charts and graphs
  - `error`: null if successful
- **Error**: JSON with the `error` field populated if the request fails

**Important for Claude**:
1. **REQUIRED**: Before executing this command, you MUST ensure the `ci:oc-auth` skill is loaded by invoking it with the Skill tool. The curl_with_token.sh script depends on this skill being active.
2. Locate and verify the path to curl_with_token.sh before running it; Claude Code has a known bug that resolves the script from the wrong directory.
3. **Before invoking this command**, inform the user that querying Sippy may take 10-60 seconds for complex queries
4. Extract the `response` field from the JSON and render it as markdown to the user
5. If the response includes a `visualizations` field, it contains Plotly JSON. Render the visualization(s) in an interactive, user-friendly way by creating an HTML file with the Plotly chart(s) embedded, and open it in the user's browser.
6. If there's an `error` field, display that instead
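The rendering rules above can be sketched as follows. The function name is hypothetical and the sample payloads are illustrative, not real API output; only the field names (`response`, `visualizations`, `error`) come from the Return Value section:

```python
import json


def render_sippy_reply(raw):
    """Decide what to show the user from a Sippy /api/chat JSON reply."""
    reply = json.loads(raw)
    if reply.get("error"):              # error field populated -> show it instead
        return f"Sippy error: {reply['error']}"
    text = reply["response"]            # markdown answer to render for the user
    if reply.get("visualizations"):     # Plotly JSON, rendered as HTML separately
        text += "\n\n(interactive Plotly charts available - rendered separately)"
    return text
```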
## Examples

1. **Query about payload rejection**:
   ```
   /ci:ask-sippy Why was the last 4.21 payload rejected?
   ```
   The response will include analysis of the latest 4.21 payload rejection with specific job failures and reasons.

2. **Ask about job failures**:
   ```
   /ci:ask-sippy What are the most common test failures in the e2e-aws job this week?
   ```
   The response will analyze recent test failure patterns in the specified job.

3. **Investigate CI trends**:
   ```
   /ci:ask-sippy How is the overall CI health for 4.20 compared to last week?
   ```
   The response will provide a comparative analysis of CI metrics.

4. **Specific test inquiry**:
   ```
   /ci:ask-sippy Why is the test "sig-network Feature:SCTP should create a Pod with SCTP HostPort" failing?
   ```
   The response will analyze failure patterns and potential causes for the specific test.
## Notes

- **Response Time**: Complex queries analyzing large datasets may take 30-60 seconds
- **Chat History**: Each query is independent; no conversation context is maintained between calls
- **Response Format**: The API returns JSON with a `response` field containing markdown-formatted text
- **Markdown Rendering**: Claude will automatically render the markdown response nicely with proper formatting
- **Visualizations**: When available, the `visualizations` field contains Plotly JSON for interactive charts and graphs. Claude should render these as HTML files for the user to view
- **Error Handling**: If the API returns an error, it will be displayed in the `error` field of the JSON response
## Data Sources Available

Sippy can query and analyze:
- **Release Payloads**: Status, rejections, promotions for all 4.x versions
- **CI Jobs**: Failure rates, patterns, infrastructure issues (aws, gcp, azure, metal, vsphere, etc.)
- **Test Results**: Pass/fail rates, flakes, regressions, execution times
- **Historical Analysis**: Week-over-week and release-to-release comparisons
- **Infrastructure Metrics**: Provisioning issues, platform problems, resource patterns
## Arguments
- **$1** (question): The question to ask the Sippy AI agent. Should be a clear, specific question about OpenShift CI infrastructure, payloads, jobs, or test results.
190 commands/list-unstable-tests.md Normal file
@@ -0,0 +1,190 @@
---
description: List unstable tests with pass rate below 95%
argument-hint: <version> <keywords> [sippy-url]
---

## Name
ci:list-unstable-tests

## Synopsis
```
/ci:list-unstable-tests <version> <keywords> [sippy-url]
```

## Description
The `ci:list-unstable-tests` command queries OpenShift CI test results from Sippy and lists all tests matching the keywords that have a pass rate below 95%. This is useful for quickly identifying unstable tests that need attention.

By default, it queries the production Sippy instance at `sippy.dptools.openshift.org`. You can optionally specify a different Sippy instance URL to query alternative environments (e.g., QE component readiness).

This command is useful for:
- Identifying unstable tests with inconsistent pass rates
- Finding regression candidates for investigation
- Generating reports of unstable test cases
- Prioritizing test stabilization efforts
- Quality gate checks before releases

## Arguments
- `$1` (version): OpenShift version to query (e.g., "4.21", "4.20", "4.19")
- `$2` (keywords): Keywords to search for in test names (e.g., "olmv1", "sig-storage", "operator")
- `$3` (sippy-url) [optional]: Sippy instance base URL. Defaults to "sippy.dptools.openshift.org" if not provided. Example: "qe-component-readiness.dptools.openshift.org"
## Implementation

1. **Parse Arguments**
   - Extract the version from `$1` (e.g., "4.20")
   - Extract the keywords from `$2` (e.g., "olmv1")
   - Extract the Sippy URL from `$3` if provided, otherwise use the default "sippy.dptools.openshift.org"
   - Normalize the URL to extract the base domain for the API endpoint (strip the "/sippy-ng/" suffix if present)
   - Add the "https://" prefix if not already present

2. **Build Sippy API Request**
   - Construct the filter JSON for the `/api/tests` endpoint:
     ```python
     filters = {
         "items": [
             {
                 "columnField": "name",
                 "not": False,
                 "operatorValue": "contains",
                 "value": keywords
             }
         ],
         "linkOperator": "and"
     }
     ```
   - Set query parameters:
     - `release`: The OpenShift version
     - `filter`: JSON-encoded filter object
     - `sort`: "asc"
     - `sortField`: "current_pass_percentage"

3. **Query Test Statistics**
   - Construct the API endpoint from the provided URL: `https://{base_url}/api/tests`
   - Make a GET request to the constructed endpoint
   - Parse the response to extract test data

4. **Filter Tests Below 95% Pass Rate**
   - Iterate through all returned tests
   - Keep tests where `current_pass_percentage < 95`
   - Sort filtered results by pass percentage (ascending, worst first)
   - Collect the following data for each unstable test:
     - Test name
     - Current pass percentage
     - Total runs
     - Passes
     - Failures
     - Net improvement (trend indicator)

5. **Format and Display Results**
   - Display a summary header with:
     - Total number of tests matching the keywords
     - Number of tests below 95% pass rate
     - Percentage of unstable tests
   - List each unstable test with:
     - Test name
     - Pass rate percentage
     - Run statistics (runs/passes/failures)
     - Trend indicator (net improvement)
   - Sort by pass percentage (worst tests first)
   - If no tests are below 95%, display a success message indicating all tests are stable
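Steps 1-4 can be sketched as below. The URL normalization and filter shape follow the description above; the HTTP call itself is omitted so this stays a self-contained sketch (a real implementation would GET the returned URL and pass the decoded `tests` list to `unstable_tests`):

```python
import json
from urllib.parse import urlencode


def build_tests_url(version, keywords, base="sippy.dptools.openshift.org"):
    """Steps 1-2: normalize the Sippy URL and build the /api/tests query."""
    base = base.removeprefix("https://").rstrip("/").removesuffix("/sippy-ng")
    filters = {"items": [{"columnField": "name", "not": False,
                          "operatorValue": "contains", "value": keywords}],
               "linkOperator": "and"}
    query = urlencode({"release": version, "filter": json.dumps(filters),
                       "sort": "asc", "sortField": "current_pass_percentage"})
    return f"https://{base}/api/tests?{query}"


def unstable_tests(tests, threshold=95.0):
    """Step 4: keep tests below the threshold, sorted worst first."""
    bad = [t for t in tests if t["current_pass_percentage"] < threshold]
    return sorted(bad, key=lambda t: t["current_pass_percentage"])
```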
## Return Value

**Format**: Formatted text output with:

**Summary Section:**
- Total tests matching the keywords
- Tests below 95% pass rate (unstable tests)
- Overall stability percentage

**Unstable Tests List:**
For each test with pass rate < 95%:
- Test name
- Pass rate percentage
- Total runs
- Passes
- Failures
- Net improvement

If all tests pass at 95% or above, a success message is displayed indicating all tests are stable.
## Examples

1. **List unstable OLMv1 tests from QE Sippy**:
   ```
   /ci:list-unstable-tests 4.20 olmv1 qe-component-readiness.dptools.openshift.org
   ```
   Lists all OLMv1-related tests in version 4.20 from QE Sippy that have a pass rate below 95%.

2. **List unstable storage tests (using the default Sippy)**:
   ```
   /ci:list-unstable-tests 4.21 sig-storage
   ```
   Lists all storage-related tests in version 4.21 from production Sippy with a pass rate below 95%.

3. **List unstable operator tests**:
   ```
   /ci:list-unstable-tests 4.19 operator
   ```
   Lists all operator-related tests in version 4.19 with a pass rate below 95%.

4. **Check specific component stability**:
   ```
   /ci:list-unstable-tests 4.20 sig-network qe-component-readiness.dptools.openshift.org
   ```
   Lists all network-related unstable tests from QE Sippy.
## Notes

- **Pass Rate Threshold**: Fixed at 95% - tests with a pass rate >= 95% are considered stable
- **Default Sippy URL**: If no Sippy URL is provided, the command uses `sippy.dptools.openshift.org` by default
- The command queries data from the last 7 days by default
- Ensure you can access the Sippy API endpoints
- Results are sorted by pass percentage (ascending) to show the most unstable tests first
- The net improvement metric shows whether the test is getting worse (negative) or better (positive)
- If no tests match the keywords, an appropriate message is displayed
- If all matching tests have a pass rate >= 95%, a success message is shown indicating all tests are stable
## Output Example

```
================================================================================
Unstable Tests Report - 4.20 olmv1
================================================================================
Sippy Instance: qe-component-readiness.dptools.openshift.org
Pass Rate Threshold: < 95%

Summary:
  Total Tests Matching 'olmv1': 45
  Unstable Tests (< 95%): 8 (17.8%)
  Stable Tests (>= 95%): 37 (82.2%)

================================================================================
Tests Below 95% Pass Rate (sorted by worst first):
================================================================================

1. Test: [sig-olmv1] clusterextension install should fail validation
   Pass Rate: 23.5%
   Runs: 17 | Passes: 4 | Failures: 13
   Net Improvement: -45.2

2. Test: [sig-olmv1] clusterextension upgrade from v1 to v2
   Pass Rate: 67.8%
   Runs: 28 | Passes: 19 | Failures: 9
   Net Improvement: -12.3

[... additional tests ...]

================================================================================
```
## See Also

- `/ci:query-test-result` - Query detailed results for a specific test
- Sippy UI (Production): https://sippy.dptools.openshift.org/sippy-ng/
- Sippy UI (QE): https://qe-component-readiness.dptools.openshift.org
- Sippy API Documentation: https://github.com/openshift/sippy
102
commands/query-job-status.md
Normal file
@@ -0,0 +1,102 @@
---
description: Query the status of a gangway job execution by ID
argument-hint: <execution-id>
---

## Name
ci:query-job-status

## Synopsis
```
/query-job-status <execution-id>
```

## Description

The `query-job-status` command queries the status of a gangway job execution via the REST API, using the execution ID returned when a job is triggered.

The command accepts:
- Execution ID (required, UUID returned when triggering a job)

It makes a GET request to the gangway API and returns the current status of the job, including its name, type, status, and the GCS path to artifacts if available. The `curl_with_token.sh` wrapper handles all authentication automatically.
## Implementation

The command performs the following steps:

1. **Parse Arguments**:
   - $1: execution ID (required, UUID format)

2. **Execute API Request**: Make a GET request to query the job status using the `oc-auth` skill's curl wrapper:
   ```bash
   # Use curl_with_token.sh from the oc-auth skill - it automatically adds the OAuth token
   # app.ci cluster API: https://api.ci.l2s4.p1.openshiftapps.com:6443
   curl_with_token.sh https://api.ci.l2s4.p1.openshiftapps.com:6443 -X GET \
     https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions/<EXECUTION_ID>
   ```
   The `curl_with_token.sh` wrapper retrieves the OAuth token from the app.ci cluster and adds it as an Authorization header automatically, without exposing the token.

3. **Display Results**: Parse and present the JSON response with:
   - `id`: The execution ID
   - `job_name`: The name of the job
   - `job_type`: The type of job execution (PERIODIC, POSTSUBMIT, PRESUBMIT)
   - `job_status`: Current status (SUCCESS, FAILURE, PENDING, RUNNING, ABORTED)
   - `gcs_path`: Path to job artifacts in GCS (if available)

4. **Offer Follow-up Actions**:
   - If status is PENDING or RUNNING: Offer to check again after a delay
   - If status is SUCCESS or FAILURE with gcs_path: Offer to help access logs/artifacts
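Parsing and summarizing the response can be sketched in Python. The JSON here is illustrative, mirroring the example response shown in this document; it is a sketch, not the command's actual implementation.

```python
import json

# Illustrative gangway response body; field values are made up.
raw = """{
  "id": "ca249d50-dee8-4424-a0a7-6dd9d5605267",
  "job_name": "periodic-ci-openshift-release-master-ci-4.14-e2e-aws-ovn",
  "job_type": "PERIODIC",
  "job_status": "SUCCESS",
  "gcs_path": "gs://origin-ci-test/logs/example/1234567890"
}"""

execution = json.loads(raw)
status = execution["job_status"]
# A job is finished once it reaches a terminal status.
done = status in ("SUCCESS", "FAILURE", "ABORTED")
summary = f"{execution['job_name']}: {status}"
if done and execution.get("gcs_path"):
    summary += f" (artifacts: {execution['gcs_path']})"
print(summary)
```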
## Return Value
- **Success**: JSON response with job status details
- **Error**: HTTP error, authentication failure, or invalid execution ID

**Important for Claude**:
1. **REQUIRED**: Before executing this command, you MUST ensure the `ci:oc-auth` skill is loaded by invoking it with the Skill tool. The curl_with_token.sh script depends on this skill being active.
2. You must locate and verify curl_with_token.sh before running it; you (Claude Code) have a bug that tries to use the script from the wrong directory!
3. Parse the JSON response and present it in a readable format
4. Highlight the job status prominently
5. If PENDING/RUNNING, mention the job is still in progress
6. If SUCCESS/FAILURE, indicate completion status
7. If gcs_path is available, provide the path to artifacts
## Examples

1. **Query status of a triggered job**:
```
/query-job-status ca249d50-dee8-4424-a0a7-6dd9d5605267
```
Returns:
```json
{
  "id": "ca249d50-dee8-4424-a0a7-6dd9d5605267",
  "job_name": "periodic-ci-openshift-release-master-ci-4.14-e2e-aws-ovn",
  "job_type": "PERIODIC",
  "job_status": "SUCCESS",
  "gcs_path": "gs://origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.14-e2e-aws-ovn/1234567890"
}
```

2. **Check running job**:
```
/query-job-status 8f3a9b2c-1234-5678-9abc-def012345678
```
Status shows "RUNNING" - Claude offers to check again later.

3. **Check failed job**:
```
/query-job-status 5a6b7c8d-9e0f-1a2b-3c4d-5e6f7a8b9c0d
```
Status shows "FAILURE" - Claude displays the gcs_path for log analysis.
## Notes

- **Execution ID Format**: UUID format (e.g., `ca249d50-dee8-4424-a0a7-6dd9d5605267`)
- **Job Status Values**: SUCCESS, FAILURE, PENDING, RUNNING, ABORTED
- **Rate Limits**: The REST API has rate limits
- **Authentication**: Tokens expire and may need to be refreshed via browser login
- **GCS Path**: Provides access to job logs and artifacts when available
- **Polling**: For long-running jobs, you may need to query multiple times

## Arguments
- **$1** (execution-id): The UUID execution ID returned when a job was triggered (required)
247
commands/query-test-result.md
Normal file
@@ -0,0 +1,247 @@
---
description: Query test results from Sippy by version and test keywords
argument-hint: <version> <keywords> [sippy-url]
---

## Name
ci:query-test-result

## Synopsis
```
/ci:query-test-result <version> <keywords> [sippy-url]
```

## Description
The `ci:query-test-result` command queries OpenShift CI test results from Sippy based on the OpenShift version and test name keywords. It retrieves test statistics including pass rate, number of runs, failures, and links to failed job runs.

By default, it queries the production Sippy instance at `sippy.dptools.openshift.org`. You can optionally specify a different Sippy instance URL to query alternative environments (e.g., QE component readiness).

This command is useful for:
- Checking the health of specific test cases
- Finding test failure patterns
- Getting links to failed Prow job runs for debugging
- Monitoring test regression trends
- Querying different Sippy instances (production, QE, etc.)

## Arguments
- `$1` (version): OpenShift version to query (e.g., "4.21", "4.20", "4.19")
- `$2` (keywords): Keywords to search for in test names (e.g., "PolarionID:81664", "olmv1", "sig-storage")
- `$3` (sippy-url) [optional]: Sippy instance base URL. Defaults to "sippy.dptools.openshift.org" if not provided. Example: "qe-component-readiness.dptools.openshift.org"
## Implementation

1. **Parse Arguments**
   - Extract version from `$1` (e.g., "4.21")
   - Extract keywords from `$2` (e.g., "81664" or "olmv1")
   - Extract the Sippy URL from `$3` if provided, otherwise use the default "sippy.dptools.openshift.org"
   - Normalize the URL to extract the base domain for the API endpoint (strip the "/sippy-ng/" suffix if present)
   - Add an "https://" prefix if not already present

2. **Build Sippy API Request**
   - Construct the filter JSON for the `/api/tests` endpoint:
   ```python
   filters = {
       "items": [
           {
               "columnField": "name",
               "not": False,
               "operatorValue": "contains",
               "value": keywords
           }
       ],
       "linkOperator": "and"
   }
   ```
   - Set query parameters:
     - `release`: The OpenShift version
     - `filter`: JSON-encoded filter object
     - `sort`: "asc"
     - `sortField`: "net_improvement"
3. **Query Test Statistics**
   - Construct the API endpoint from the provided URL: `https://{base_url}/api/tests`
     - Example: If the input is "sippy.dptools.openshift.org", use `https://sippy.dptools.openshift.org/api/tests`
     - Example: If the input is "qe-component-readiness.dptools.openshift.org", use `https://qe-component-readiness.dptools.openshift.org/api/tests`
   - Make a GET request to the constructed endpoint
   - Parse the response to extract:
     - Test name
     - Current pass percentage
     - Total runs, passes, failures
     - Net improvement (trend indicator)

4. **Query Failed Job Runs** (for each matching test)
   - Calculate the timestamp for 7 days ago: `start_time = int((datetime.now() - timedelta(days=7)).timestamp() * 1000)`
   - Build the filter for failed runs:
   ```python
   filters = {
       "items": [
           {
               "columnField": "failed_test_names",
               "operatorValue": "contains",
               "value": test_name
           },
           {
               "columnField": "timestamp",
               "operatorValue": ">",
               "value": str(start_time)
           }
       ],
       "linkOperator": "and"
   }
   ```
   - Make a GET request to `https://{base_url}/api/jobs/runs` with parameters:
     - `release`: version
     - `filter`: JSON-encoded filter
     - `limit`: "20"
     - `sortField`: "timestamp"
     - `sort`: "desc"
   - **Parse Response Structure**:
     - The response is a dict with the structure: `{"rows": [...], "page": N, "page_size": N, "total_rows": N}`
     - Extract job runs from `response["rows"]`
     - Each run contains:
       - `timestamp`: Unix timestamp in milliseconds
       - `brief_name` or `job`: Job name
       - `test_grid_url`: Link to Prow job details
5. **Format and Display Results**
   - Show summary statistics for each matching test
   - List failed job runs with:
     - Timestamp of the failure
     - Job name
     - Clickable Prow URL for each failed run
   - **After listing individual runs, provide a summary section:**
     - Create a "Failed Prow URLs (for easy copying)" section
     - List all Prow URLs from the failed runs in plain text format (one per line)
     - This allows users to easily copy all URLs at once for further analysis
   - Format output in a clear, readable structure with proper spacing
   - Present URLs as markdown links for easy clicking
## Return Value

**Format**: Formatted text output with:
- Test name(s) matching the keywords
- Statistics section showing:
  - Pass Rate (percentage)
  - Total Runs
  - Passes
  - Failures
  - Net Improvement
- Failed Job Runs section listing (for the last 7 days):
  - Sequential numbering (1, 2, 3...)
  - Timestamp (formatted as YYYY-MM-DD HH:MM:SS)
  - Job name (brief name)
  - Clickable Prow URL (as markdown link or plain URL)
- Failed Prow URLs summary section:
  - Plain text list of all Prow URLs (one per line)
  - Allows easy copying of all URLs for batch analysis

**Output Format Example:**
```
Failed Job Runs (Last 7 Days):
1. 2025-11-03 12:12:31 - periodic-ci-openshift-operator-framework-...
   https://prow.ci.openshift.org/view/gs/test-platform-results/logs/...

2. 2025-11-02 12:12:29 - periodic-ci-openshift-operator-framework-...
   https://prow.ci.openshift.org/view/gs/test-platform-results/logs/...
```

If no tests match the keywords, inform the user that no results were found.
If a test has no failed runs in the last 7 days, display a success message.
## Examples

1. **Query by Polarion ID (using the default Sippy instance)**:
```
/ci:query-test-result 4.21 81664
```

Returns test results for tests containing "81664" in version 4.21 from the default production Sippy instance (sippy.dptools.openshift.org).

2. **Query by test signature (using the default Sippy instance)**:
```
/ci:query-test-result 4.20 olmv1
```

Returns all OLMv1-related test results for version 4.20 from the default Sippy instance.

3. **Query from the QE Sippy instance (custom URL)**:
```
/ci:query-test-result 4.20 olmv1 qe-component-readiness.dptools.openshift.org
```

Returns all OLMv1-related test results for version 4.20 from the QE component readiness Sippy instance.

4. **Query by component with a custom Sippy URL**:
```
/ci:query-test-result 4.19 sig-storage sippy.dptools.openshift.org
```

Returns all storage-related test results for version 4.19 from the specified Sippy instance.

5. **Custom URL variations (all valid formats)**:
```
/ci:query-test-result 4.21 olmv1 sippy.dptools.openshift.org
/ci:query-test-result 4.21 olmv1 https://sippy.dptools.openshift.org
```

Both URL formats are accepted and will query the same Sippy instance.
## Output Example

```
====================================================================================================
Test Results for PolarionID: 81664 (Version 4.21)
====================================================================================================

Test Name:
  [sig-olmv1][Jira:OLM] clusterextension PolarionID:81664-[Skipped:Disconnected]preflight check

Statistics (Last 7 Days):
  • Pass Rate: 0.00%
  • Total Runs: 6
  • Passes: 0
  • Failures: 6
  • Net Improvement: -100.00

Failed Job Runs (Last 7 Days):
----------------------------------------------------------------------------------------------------

1. 2025-11-03 12:12:31
   Job: periodic-ci-openshift-operator-framework-operator-controller-release-4.21-periodics-e2e-aws-ovn-techpreview-extended-f1
   https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-operator-framework-operator-controller-release-4.21-periodics-e2e-aws-ovn-techpreview-extended-f1/1985198377557561344

2. 2025-11-02 12:12:29
   Job: periodic-ci-openshift-operator-framework-operator-controller-release-4.21-periodics-e2e-aws-ovn-techpreview-extended-f1
   https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-operator-framework-operator-controller-release-4.21-periodics-e2e-aws-ovn-techpreview-extended-f1/1984835985292136448

[... additional failures ...]

----------------------------------------------------------------------------------------------------
Failed Prow URLs (for easy copying):
----------------------------------------------------------------------------------------------------
https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-operator-framework-operator-controller-release-4.21-periodics-e2e-aws-ovn-techpreview-extended-f1/1985198377557561344
https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-operator-framework-operator-controller-release-4.21-periodics-e2e-aws-ovn-techpreview-extended-f1/1984835985292136448
[... additional URLs ...]

====================================================================================================
```
## Notes

- **Default Sippy URL**: If no Sippy URL is provided, the command uses `sippy.dptools.openshift.org` by default
- The command queries data from the last 7 days by default
- Ensure you can access the Sippy API endpoints
- Results are sorted by net improvement to show regressed tests first
- Failed job runs are limited to the most recent 20 occurrences
- **URLs are displayed as clickable links** for easy access to Prow job details
- If multiple tests match the keywords, results for all matches will be displayed
- URL normalization:
  - The command automatically strips common suffixes like "/sippy-ng/" from the URL
  - It adds an "https://" prefix if not provided
  - Both domain-only and full path formats are supported
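The URL normalization described in the Notes might look like this in Python. This is a sketch only; the exact suffix handling in the real command may differ.

```python
def normalize_sippy_url(url):
    """Normalize a Sippy base URL: strip '/sippy-ng/' and ensure https://."""
    url = url.strip().rstrip("/")
    if url.endswith("/sippy-ng"):
        url = url[: -len("/sippy-ng")]
    if not url.startswith(("http://", "https://")):
        url = "https://" + url
    return url

# Both forms normalize to the same base URL:
print(normalize_sippy_url("sippy.dptools.openshift.org"))
print(normalize_sippy_url("https://sippy.dptools.openshift.org/sippy-ng/"))
```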
## See Also

- Sippy UI (Production): https://sippy.dptools.openshift.org/sippy-ng/
- Sippy UI (QE): https://qe-component-readiness.dptools.openshift.org
- Sippy API Documentation: https://github.com/openshift/sippy
136
commands/trigger-periodic.md
Normal file
@@ -0,0 +1,136 @@
---
description: Trigger a periodic gangway job with optional environment variable overrides
argument-hint: <job-name> [ENV_VAR=value ...]
---

## Name
ci:trigger-periodic

## Synopsis
```
/trigger-periodic <job-name> [ENV_VAR=value ...]
```

## Description

The `trigger-periodic` command triggers a periodic gangway job via the REST API. Periodic jobs run on a schedule but can be manually triggered for testing or urgent runs.

The command accepts:
- Job name (required, first argument)
- Environment variable overrides (optional, additional arguments in KEY=VALUE format)

It then constructs and executes the appropriate curl command to trigger the job via the gangway REST API.
## Security

**IMPORTANT SECURITY REQUIREMENTS:**

Claude is granted LIMITED and SPECIFIC access to the app.ci cluster token for the following AUTHORIZED operations ONLY:
- **READ operations**: Checking authentication status (`oc whoami`)
- **TRIGGERING jobs**: POST requests to the gangway API to trigger jobs

Claude is EXPLICITLY PROHIBITED from:
- Modifying cluster resources (deployments, pods, services, etc.)
- Deleting or altering existing jobs or executions
- Accessing secrets, configmaps, or sensitive data
- Making any cluster modifications beyond job triggering
- Using the token for any purpose other than the specific operations listed above

**MANDATORY USER CONFIRMATION:**
Before executing ANY POST operation (job trigger), Claude MUST:
1. Display the complete payload that will be sent
2. Show the exact curl command that will be executed
3. Request explicit user confirmation with a clear "yes/no" prompt
4. Only proceed after receiving affirmative confirmation

**Token Usage:**
The app.ci cluster token is used solely for authentication with the gangway REST API. This token grants the same permissions as the authenticated user and must be handled with appropriate care. The `curl_with_token.sh` wrapper handles all authentication automatically.
## Implementation

The command performs the following steps:

1. **Parse Arguments**:
   - The first argument is the job name (required)
   - Remaining arguments are environment variable overrides in KEY=VALUE format
   - Note: Variables that need to override multistage parameters should be prefixed with `MULTISTAGE_PARAM_OVERRIDE_`

2. **Construct API Request**: Build the appropriate curl command using the `oc-auth` skill's curl wrapper:

   **Without overrides:**
   ```bash
   # Use curl_with_token.sh from the oc-auth skill - it automatically adds the OAuth token
   # app.ci cluster API: https://api.ci.l2s4.p1.openshiftapps.com:6443
   curl_with_token.sh https://api.ci.l2s4.p1.openshiftapps.com:6443 -v -X POST \
     -d '{"job_name": "<JOB_NAME>", "job_execution_type": "1"}' \
     https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions
   ```

   **With overrides:**
   ```bash
   curl_with_token.sh https://api.ci.l2s4.p1.openshiftapps.com:6443 -v -X POST \
     -d '{"job_name": "<JOB_NAME>", "job_execution_type": "1", "pod_spec_options": {"envs": {"ENV_VAR": "value"}}}' \
     https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions
   ```

   **With a multistage parameter override:**
   ```bash
   curl_with_token.sh https://api.ci.l2s4.p1.openshiftapps.com:6443 -v -X POST \
     -d '{"job_name": "periodic-to-trigger", "job_execution_type": "1", "pod_spec_options": {"envs": {"MULTISTAGE_PARAM_OVERRIDE_FOO": "bar"}}}' \
     https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions
   ```
The `curl_with_token.sh` wrapper retrieves the OAuth token from the app.ci cluster and adds it as an Authorization header automatically, without exposing the token.

3. **Request User Confirmation**: Display the complete JSON payload and curl command to the user, then explicitly ask for confirmation before proceeding. Wait for an affirmative user response.

4. **Execute Request**: Only after receiving user confirmation, run the constructed curl command

5. **Display Results**: Show the API response, including the execution ID

6. **Offer Follow-up**: Optionally offer to query the job status using `/query-job-status`
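Assembling the JSON payload from the command's arguments can be sketched in Python. The helper below is hypothetical, not part of the actual command; it only illustrates how the KEY=VALUE overrides map into `pod_spec_options.envs`.

```python
import json

def build_periodic_payload(job_name, *overrides):
    """Assemble the gangway payload for a periodic job (execution type "1").

    `overrides` are KEY=VALUE strings; splitting on the first '=' keeps
    values that themselves contain '=' intact.
    """
    payload = {"job_name": job_name, "job_execution_type": "1"}
    envs = dict(arg.split("=", 1) for arg in overrides)
    if envs:
        payload["pod_spec_options"] = {"envs": envs}
    return json.dumps(payload)

print(build_periodic_payload("periodic-to-trigger",
                             "MULTISTAGE_PARAM_OVERRIDE_FOO=bar"))
```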
## Return Value
- **Success**: JSON response with execution ID and job details
- **Error**: HTTP error, authentication failure, or missing job name

**Important for Claude**:
1. **REQUIRED**: Before executing this command, you MUST ensure the `ci:oc-auth` skill is loaded by invoking it with the Skill tool. The curl_with_token.sh script depends on this skill being active.
2. You must locate and verify curl_with_token.sh before running it; you (Claude Code) have a bug that tries to use the script from the wrong directory!
3. Parse the JSON response and extract the execution ID
4. Display the execution ID to the user
5. Offer to check job status with `/query-job-status`
## Examples

1. **Trigger a periodic job without overrides**:
```
/trigger-periodic periodic-ci-openshift-release-master-ci-4.14-e2e-aws-ovn
```

2. **Trigger a periodic job with a payload override**:
```
/trigger-periodic periodic-ci-openshift-release-master-ci-4.14-e2e-aws-ovn RELEASE_IMAGE_LATEST=quay.io/openshift-release-dev/ocp-release:4.18.8-x86_64
```

3. **Trigger with a multistage parameter override**:
```
/trigger-periodic periodic-to-trigger MULTISTAGE_PARAM_OVERRIDE_FOO=bar
```

4. **Trigger with multiple environment overrides**:
```
/trigger-periodic periodic-ci-job RELEASE_IMAGE_LATEST=quay.io/image:4.18.8 MULTISTAGE_PARAM_OVERRIDE_TIMEOUT=3600
```
## Notes

- **Job Execution Type**: For periodic jobs, always use `"1"`
- **Rate Limits**: The REST API has rate limits; the username is recorded in annotations
- **Authentication**: Tokens expire and may need to be refreshed via browser login
- **Multistage Overrides**: Prefix variables with `MULTISTAGE_PARAM_OVERRIDE_` to override multistage job parameters
- **Execution ID**: Save the execution ID from the response to query job status later
## Arguments
- **$1** (job-name): The name of the periodic job to trigger (required)
- **$2-$N** (ENV_VAR=value): Optional environment variable overrides in KEY=VALUE format
164
commands/trigger-postsubmit.md
Normal file
@@ -0,0 +1,164 @@
---
description: Trigger a postsubmit gangway job with repository refs
argument-hint: <job-name> <org> <repo> <base-ref> <base-sha> [ENV_VAR=value ...]
---

## Name
ci:trigger-postsubmit

## Synopsis
```
/trigger-postsubmit <job-name> <org> <repo> <base-ref> <base-sha> [ENV_VAR=value ...]
```

## Description

The `trigger-postsubmit` command triggers a postsubmit gangway job via the REST API. Postsubmit jobs run after code is merged and require repository reference information.

The command accepts:
- Job name (required)
- Organization (required, e.g., "openshift")
- Repository name (required, e.g., "assisted-installer")
- Base ref/branch (required, e.g., "release-4.12")
- Base SHA/commit hash (required)
- Environment variable overrides (optional, additional arguments in KEY=VALUE format)

It constructs the necessary JSON payload with the refs structure and executes the curl command to trigger the job via the gangway REST API.
## Security

**IMPORTANT SECURITY REQUIREMENTS:**

Claude is granted LIMITED and SPECIFIC access to the app.ci cluster token for the following AUTHORIZED operations ONLY:
- **READ operations**: Checking authentication status (`oc whoami`)
- **TRIGGERING jobs**: POST requests to the gangway API to trigger jobs

Claude is EXPLICITLY PROHIBITED from:
- Modifying cluster resources (deployments, pods, services, etc.)
- Deleting or altering existing jobs or executions
- Accessing secrets, configmaps, or sensitive data
- Making any cluster modifications beyond job triggering
- Using the token for any purpose other than the specific operations listed above

**MANDATORY USER CONFIRMATION:**
Before executing ANY POST operation (job trigger), Claude MUST:
1. Display the complete payload that will be sent
2. Show the exact curl command that will be executed
3. Request explicit user confirmation with a clear "yes/no" prompt
4. Only proceed after receiving affirmative confirmation

**Token Usage:**
The app.ci cluster token is used solely for authentication with the gangway REST API. This token grants the same permissions as the authenticated user and must be handled with appropriate care. The `curl_with_token.sh` wrapper handles all authentication automatically.
## Implementation

The command performs the following steps:

1. **Parse Arguments**:
   - $1: job name (required)
   - $2: organization (required)
   - $3: repository name (required)
   - $4: base ref/branch (required)
   - $5: base SHA (required)
   - $6-$N: environment variable overrides in KEY=VALUE format (optional)

2. **Construct JSON Payload**: Build the payload with the refs structure:

   **Without overrides:**
   ```json
   {
     "job_name": "<JOB_NAME>",
     "job_execution_type": "2",
     "refs": {
       "org": "<ORG>",
       "repo": "<REPO>",
       "base_ref": "<BASE_REF>",
       "base_sha": "<BASE_SHA>",
       "repo_link": "https://github.com/<ORG>/<REPO>"
     }
   }
   ```

   **With overrides:**
   ```json
   {
     "job_name": "<JOB_NAME>",
     "job_execution_type": "2",
     "refs": {
       "org": "<ORG>",
       "repo": "<REPO>",
       "base_ref": "<BASE_REF>",
       "base_sha": "<BASE_SHA>",
       "repo_link": "https://github.com/<ORG>/<REPO>"
     },
     "pod_spec_options": {
       "envs": {"ENV_VAR": "value"}
     }
   }
   ```

3. **Save JSON to a Temporary File**: Write the payload to a temp file (e.g., `/tmp/postsubmit-spec.json`)

4. **Request User Confirmation**: Display the complete JSON payload and curl command to the user, then explicitly ask for confirmation before proceeding. Wait for an affirmative user response.

5. **Execute Request**: Only after receiving user confirmation, run the curl command using the `oc-auth` skill's curl wrapper:
   ```bash
   # Use curl_with_token.sh from the oc-auth skill - it automatically adds the OAuth token
   # app.ci cluster API: https://api.ci.l2s4.p1.openshiftapps.com:6443
   curl_with_token.sh https://api.ci.l2s4.p1.openshiftapps.com:6443 -v -X POST \
     -d @/tmp/postsubmit-spec.json \
     https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions
   ```
   The `curl_with_token.sh` wrapper retrieves the OAuth token from the app.ci cluster and adds it as an Authorization header automatically, without exposing the token.

6. **Clean Up**: Remove the temporary JSON file

7. **Display Results**: Show the API response, including the execution ID

8. **Offer Follow-up**: Optionally offer to query the job status using `/query-job-status`
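The payload construction above can be sketched in Python. The helper below is hypothetical, not part of the actual command; it only illustrates the refs structure and how `repo_link` is derived from the org and repo.

```python
def build_postsubmit_payload(job_name, org, repo, base_ref, base_sha, envs=None):
    """Assemble the postsubmit payload (execution type "2") with refs."""
    payload = {
        "job_name": job_name,
        "job_execution_type": "2",
        "refs": {
            "org": org,
            "repo": repo,
            "base_ref": base_ref,
            "base_sha": base_sha,
            # repo_link is derived from org and repo
            "repo_link": f"https://github.com/{org}/{repo}",
        },
    }
    if envs:
        payload["pod_spec_options"] = {"envs": envs}
    return payload

payload = build_postsubmit_payload(
    "branch-ci-openshift-origin-master-images",
    "openshift", "origin", "master", "abc123def456",
)
```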
## Return Value
- **Success**: JSON response with execution ID and job details
- **Error**: HTTP error, authentication failure, or missing required arguments

**Important for Claude**:
1. **REQUIRED**: Before executing this command, you MUST ensure the `ci:oc-auth` skill is loaded by invoking it with the Skill tool. The curl_with_token.sh script depends on this skill being active.
2. You must locate and verify curl_with_token.sh before running it; you (Claude Code) have a bug that tries to use the script from the wrong directory!
3. Validate all required arguments are provided
4. Parse the JSON response and extract the execution ID
5. Display the execution ID to the user
6. Offer to check job status with `/query-job-status`
## Examples

1. **Trigger a postsubmit job without overrides**:
```
/trigger-postsubmit branch-ci-openshift-assisted-installer-release-4.12-images openshift assisted-installer release-4.12 7336f38f75f91a876313daacbfw97f25dfe21bbf
```

2. **Trigger a postsubmit job with an environment override**:
```
/trigger-postsubmit branch-ci-openshift-origin-master-images openshift origin master abc123def456 RELEASE_IMAGE_LATEST=quay.io/image:latest
```

3. **Trigger with multiple environment overrides**:
```
/trigger-postsubmit my-postsubmit-job openshift cluster-api-provider-aws master def789ghi012 MULTISTAGE_PARAM_OVERRIDE_TIMEOUT=7200 BUILD_ID=custom-123
```
## Notes

- **Job Execution Type**: For postsubmit jobs, always use `"2"`
- **Rate Limits**: The REST API has rate limits; username is recorded in annotations
- **Authentication**: Tokens expire and may need to be refreshed via browser login
- **Refs Structure**: The refs object is required for postsubmit jobs to identify the repository and commit
- **Repo Link**: Automatically constructed as `https://github.com/<org>/<repo>`
- **Execution ID**: Save the execution ID from the response to query job status later

## Arguments
- **$1** (job-name): The name of the postsubmit job to trigger (required)
- **$2** (org): GitHub organization (e.g., "openshift") (required)
- **$3** (repo): Repository name (e.g., "assisted-installer") (required)
- **$4** (base-ref): Base branch/ref (e.g., "release-4.12", "master") (required)
- **$5** (base-sha): Base commit SHA hash (required)
- **$6-$N** (ENV_VAR=value): Optional environment variable overrides in KEY=VALUE format
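The `$6-$N` overrides end up in the payload's `pod_spec_options.envs` object. A minimal shell sketch of that KEY=VALUE-to-JSON step (the `build_envs_json` function name is illustrative, not part of the plugin):

```shell
# Hedged sketch: convert trailing KEY=VALUE arguments into the JSON
# "envs" object used by pod_spec_options.
build_envs_json() {
  local first=1 out="{"
  for arg in "$@"; do
    key="${arg%%=*}"      # text before the first '='
    value="${arg#*=}"     # text after the first '='
    [ "$first" -eq 1 ] || out="${out}, "
    out="${out}\"${key}\": \"${value}\""
    first=0
  done
  echo "${out}}"
}

build_envs_json MULTISTAGE_PARAM_OVERRIDE_TIMEOUT=7200 BUILD_ID=custom-123
```

Splitting on the first `=` only (via `%%=*` and `#*=`) keeps values like `IMAGE=quay.io/x:tag=1` intact.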
184 commands/trigger-presubmit.md Normal file
@@ -0,0 +1,184 @@
---
description: Trigger a presubmit gangway job (typically use GitHub Prow commands instead)
argument-hint: <job-name> <org> <repo> <base-ref> <base-sha> <pr-number> <pr-sha> [ENV_VAR=value ...]
---

## Name
ci:trigger-presubmit

## Synopsis
```
/trigger-presubmit <job-name> <org> <repo> <base-ref> <base-sha> <pr-number> <pr-sha> [ENV_VAR=value ...]
```

## Description

The `trigger-presubmit` command triggers a presubmit gangway job via the REST API.

**WARNING:** Triggering presubmit jobs via REST is generally unnecessary and not recommended. Presubmit jobs should typically be triggered using Prow commands like `/test` and `/retest` via GitHub interactions. Only use this command if you have a specific reason to trigger via REST API.

The command accepts:
- Job name (required)
- Organization (required, e.g., "openshift")
- Repository name (required, e.g., "origin")
- Base ref/branch (required, e.g., "master")
- Base SHA/commit hash (required)
- Pull request number (required)
- Pull request SHA/head commit (required)
- Environment variable overrides (optional, additional arguments in KEY=VALUE format)

It constructs the necessary JSON payload with refs and pulls structure and executes the curl command to trigger the job via the gangway REST API.
## Security

**IMPORTANT SECURITY REQUIREMENTS:**

Claude is granted LIMITED and SPECIFIC access to the app.ci cluster token for the following AUTHORIZED operations ONLY:
- **READ operations**: Checking authentication status (`oc whoami`)
- **TRIGGERING jobs**: POST requests to the gangway API to trigger jobs

Claude is EXPLICITLY PROHIBITED from:
- Modifying cluster resources (deployments, pods, services, etc.)
- Deleting or altering existing jobs or executions
- Accessing secrets, configmaps, or sensitive data
- Making any cluster modifications beyond job triggering
- Using the token for any purpose other than the specific operations listed above

**MANDATORY USER CONFIRMATION:**
Before executing ANY POST operation (job trigger), Claude MUST:
1. Display the complete payload that will be sent
2. Show the exact curl command that will be executed
3. Request explicit user confirmation with a clear "yes/no" prompt
4. Only proceed after receiving affirmative confirmation

**Token Usage:**
The app.ci cluster token is used solely for authentication with the gangway REST API. This token grants the same permissions as the authenticated user and must be handled with appropriate care. The `curl_with_token.sh` wrapper handles all authentication automatically.
## Implementation

The command performs the following steps:

1. **Warn User**: Display a warning that presubmit jobs should typically use GitHub Prow commands (`/test`, `/retest`)

2. **Parse Arguments**:
   - $1: job name (required)
   - $2: organization (required)
   - $3: repository name (required)
   - $4: base ref/branch (required)
   - $5: base SHA (required)
   - $6: pull request number (required)
   - $7: pull request SHA (required)
   - $8-$N: environment variable overrides in KEY=VALUE format (optional)

3. **Construct JSON Payload**: Build the payload with refs and pulls structure:

**Without overrides:**
```json
{
  "job_name": "<JOB_NAME>",
  "job_execution_type": "3",
  "refs": {
    "org": "<ORG>",
    "repo": "<REPO>",
    "base_ref": "<BASE_REF>",
    "base_sha": "<BASE_SHA>",
    "pulls": [{
      "number": <PR_NUMBER>,
      "sha": "<PR_SHA>",
      "link": "https://github.com/<ORG>/<REPO>/pull/<PR_NUMBER>"
    }]
  }
}
```

**With overrides:**
```json
{
  "job_name": "<JOB_NAME>",
  "job_execution_type": "3",
  "refs": {
    "org": "<ORG>",
    "repo": "<REPO>",
    "base_ref": "<BASE_REF>",
    "base_sha": "<BASE_SHA>",
    "pulls": [{
      "number": <PR_NUMBER>,
      "sha": "<PR_SHA>",
      "link": "https://github.com/<ORG>/<REPO>/pull/<PR_NUMBER>"
    }]
  },
  "pod_spec_options": {
    "envs": {"ENV_VAR": "value"}
  }
}
```

4. **Save JSON to Temporary File**: Write the payload to a temp file (e.g., `/tmp/presubmit-spec.json`)

5. **Request User Confirmation**: Display the complete JSON payload and curl command to the user, then explicitly ask for confirmation before proceeding. Wait for an affirmative user response.

6. **Execute Request**: Only after receiving user confirmation, run the curl command using the `oc-auth` skill's curl wrapper:
```bash
# Use curl_with_token.sh from the oc-auth skill - it automatically adds the OAuth token
# app.ci cluster API: https://api.ci.l2s4.p1.openshiftapps.com:6443
curl_with_token.sh https://api.ci.l2s4.p1.openshiftapps.com:6443 -v -X POST \
  -d @/tmp/presubmit-spec.json \
  https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions
```
The `curl_with_token.sh` wrapper retrieves the OAuth token from the app.ci cluster and adds it as an Authorization header automatically, without exposing the token.

7. **Clean Up**: Remove the temporary JSON file

8. **Display Results**: Show the API response including the execution ID

9. **Offer Follow-up**: Optionally offer to query the job status using `/query-job-status`
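As a rough sketch, the payload described above can be assembled in shell with a heredoc before the POST (all variable values here are example placeholders, not real refs):

```shell
# Hedged sketch: write the presubmit payload to the temp file used in the
# curl step. Values are illustrative examples only.
JOB_NAME="pull-ci-openshift-origin-master-e2e-aws"
ORG="openshift"
REPO="origin"
BASE_REF="master"
BASE_SHA="abc123def456"
PR_NUMBER=1234
PR_SHA="def456ghi789"

cat > /tmp/presubmit-spec.json <<EOF
{
  "job_name": "${JOB_NAME}",
  "job_execution_type": "3",
  "refs": {
    "org": "${ORG}",
    "repo": "${REPO}",
    "base_ref": "${BASE_REF}",
    "base_sha": "${BASE_SHA}",
    "pulls": [{
      "number": ${PR_NUMBER},
      "sha": "${PR_SHA}",
      "link": "https://github.com/${ORG}/${REPO}/pull/${PR_NUMBER}"
    }]
  }
}
EOF
```

Note that `number` is emitted unquoted (a JSON number) while the SHAs and link are quoted strings, matching the payload shape shown above.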
## Return Value
- **Success**: JSON response with execution ID and job details
- **Error**: HTTP error, authentication failure, or missing required arguments

**Important for Claude**:
1. **REQUIRED**: Before executing this command, you MUST ensure the `ci:oc-auth` skill is loaded by invoking it with the Skill tool. The curl_with_token.sh script depends on this skill being active.
2. You must locate and verify curl_with_token.sh before running it; you (Claude Code) have a known bug that resolves the script from the wrong directory!
3. Display the warning about using GitHub Prow commands instead
4. Validate that all required arguments are provided
5. Parse the JSON response and extract the execution ID
6. Display the execution ID to the user
7. Offer to check job status with `/query-job-status`
## Examples

1. **Trigger a presubmit job without overrides**:
```
/trigger-presubmit pull-ci-openshift-origin-master-e2e-aws openshift origin master abc123def456 1234 def456ghi789
```

2. **Trigger a presubmit job with environment override**:
```
/trigger-presubmit my-presubmit-job openshift installer master 1a2b3c4d 5678 4d5e6f7g RELEASE_IMAGE_INITIAL=quay.io/image:test
```

3. **Trigger with multiple environment overrides**:
```
/trigger-presubmit pull-ci-test openshift cluster-version-operator master abcdef12 999 fedcba98 MULTISTAGE_PARAM_OVERRIDE_TIMEOUT=5400 TEST_SUITE=custom
```
## Notes

- **Recommended Approach**: Use GitHub Prow commands (`/test <job-name>`, `/retest`) instead of this REST API
- **Job Execution Type**: For presubmit jobs, always use `"3"`
- **Rate Limits**: The REST API has rate limits; username is recorded in annotations
- **Authentication**: Tokens expire and may need to be refreshed via browser login
- **Refs Structure**: The refs object with pulls array is required for presubmit jobs
- **Pull Link**: Automatically constructed as `https://github.com/<org>/<repo>/pull/<pr-number>`
- **Execution ID**: Save the execution ID from the response to query job status later
## Arguments
- **$1** (job-name): The name of the presubmit job to trigger (required)
- **$2** (org): GitHub organization (e.g., "openshift") (required)
- **$3** (repo): Repository name (e.g., "origin") (required)
- **$4** (base-ref): Base branch/ref (e.g., "master") (required)
- **$5** (base-sha): Base commit SHA hash (required)
- **$6** (pr-number): Pull request number (required)
- **$7** (pr-sha): Pull request head commit SHA (required)
- **$8-$N** (ENV_VAR=value): Optional environment variable overrides in KEY=VALUE format
85 plugin.lock.json Normal file
@@ -0,0 +1,85 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:openshift-eng/ai-helpers:plugins/ci",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "7737e85c5ca27e52a796bc2d0b061c97f2f1f32f",
    "treeHash": "97270a1ffc783dd9b58bd88323d1dc1dffa2e7844bc96a183b5cbf05ae22e2cb",
    "generatedAt": "2025-11-28T10:27:28.213344Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "ci",
    "description": "Miscellaenous tools for working with OpenShift CI",
    "version": "0.0.1"
  },
  "content": {
    "files": [
      { "path": "README.md", "sha256": "9e8d65f1a25fc2a2494a8a9ae268203eb142d072178cb949459be1ce6255e507" },
      { "path": ".claude-plugin/plugin.json", "sha256": "2ce26c04db90ab593505074cd44a83ef67274a6a3d28138df1d92df57c975d4f" },
      { "path": "commands/trigger-postsubmit.md", "sha256": "b02dec028407ad4af528c2d8ab6364a37a8572cdd8ea91fc86f2101ff795a36c" },
      { "path": "commands/query-job-status.md", "sha256": "0797d6ddb0ac200516676e34a4a55306bd412936308c671ab1a0fa715e07fa96" },
      { "path": "commands/ask-sippy.md", "sha256": "d1c3f878196867c853998a26efe84ad5d7d5dd175b14370c1e1b22053d9823b1" },
      { "path": "commands/query-test-result.md", "sha256": "7729cbaa76b2c7a711532c29582395205dcc499f370c4fd1c676bbb47b701256" },
      { "path": "commands/trigger-presubmit.md", "sha256": "469a713ff6fe5c5f0f8ccc2d73bab8f628fd943216cbffdcf293d473841f673d" },
      { "path": "commands/list-unstable-tests.md", "sha256": "727ccec592ec93250d15b59be317b7390d51c9565427948cceb41f116cc49bfe" },
      { "path": "commands/trigger-periodic.md", "sha256": "a0de6d72dfd2e8741555e4caa356975e8c317accec4f73aa360c08e7b1e6cc07" },
      { "path": "commands/add-debug-wait.md", "sha256": "840456e36504909392d430af799424840244eb2ede7449c8627087224680d543" },
      { "path": "skills/oc-auth/README.md", "sha256": "16f5abfcf85cf25a8984b3811ab750c638196e717ab6a9b26514386222bd9428" },
      { "path": "skills/oc-auth/curl_with_token.sh", "sha256": "9042e0cd037b04d461659604ace5423e0902c258a588de3c37ea807830cbbe7d" },
      { "path": "skills/oc-auth/SKILL.md", "sha256": "47d6cdf674b515c0fb85d383e2645d01f011bb9a1856026fc76cde92595ad7b1" }
    ],
    "dirSha256": "97270a1ffc783dd9b58bd88323d1dc1dffa2e7844bc96a183b5cbf05ae22e2cb"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}
93 skills/oc-auth/README.md Normal file
@@ -0,0 +1,93 @@
# OC Authentication Helper Skill

A centralized skill for authenticated curl requests to OpenShift cluster APIs using OAuth tokens from multiple cluster contexts.

## Overview

This skill provides a curl wrapper that automatically handles OAuth token retrieval and injection, eliminating code duplication and preventing accidental token exposure.

## Components

### `curl_with_token.sh`

Curl wrapper that automatically retrieves OAuth tokens and adds them to requests.

**Usage:**
```bash
curl_with_token.sh <cluster_api_url> [curl arguments...]
```

**Parameters:**
- `cluster_api_url`: Full cluster API server URL (e.g., `https://api.ci.l2s4.p1.openshiftapps.com:6443`)
- `[curl arguments...]`: All standard curl arguments

**How it works:**
- Finds the correct oc context matching the specified cluster API URL
- Retrieves the OAuth token using `oc whoami -t --context=<context>`
- Adds the `Authorization: Bearer <token>` header automatically
- Executes curl with all provided arguments
- The token never appears in output

**Exit Codes:**
- `0`: Success
- `1`: Missing or invalid arguments
- `2`: No context found for cluster
- `3`: Failed to retrieve token
## Common Clusters

### app.ci - OpenShift CI Cluster
- **Console**: https://console-openshift-console.apps.ci.l2s4.p1.openshiftapps.com/
- **API Server**: https://api.ci.l2s4.p1.openshiftapps.com:6443
- **Used by**: trigger-periodic, trigger-postsubmit, trigger-presubmit, query-job-status

### dpcr - DPCR Cluster
- **Console**: https://console-openshift-console.apps.cr.j7t7.p1.openshiftapps.com/
- **API Server**: https://api.cr.j7t7.p1.openshiftapps.com:6443
- **Used by**: ask-sippy

**Note**: This skill supports any OpenShift cluster - simply provide the cluster's API server URL.

## Example Usage

```bash
#!/bin/bash

# Make authenticated API call to app.ci cluster
curl_with_token.sh https://api.ci.l2s4.p1.openshiftapps.com:6443 -X POST \
  -d '{"job_name": "my-job"}' \
  https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions

# Make authenticated API call to DPCR cluster
curl_with_token.sh https://api.cr.j7t7.p1.openshiftapps.com:6443 -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"message": "question"}' \
  https://sippy-auth.dptools.openshift.org/api/chat

# Make authenticated API call to any other OpenShift cluster
curl_with_token.sh https://api.your-cluster.example.com:6443 -X GET \
  https://your-api.example.com/endpoint
```

## How It Works

1. **Context Discovery**: Lists all `oc` contexts and finds the one matching the cluster API server URL
2. **Token Retrieval**: Uses `oc whoami -t --context=<context>` to get the token from the correct cluster
3. **Token Injection**: Automatically adds the `Authorization: Bearer <token>` header to curl
4. **Execution**: Runs curl with all provided arguments
5. **Token Protection**: The token never appears in output or logs

## Benefits

- **No Token Exposure**: Token never shown in command output or logs
- **No Duplication**: Single source of truth for authentication logic
- **Simple Usage**: Just prefix curl commands with `curl_with_token.sh <cluster>`
- **Consistent Errors**: All commands show the same error messages
- **Easy Maintenance**: Update cluster patterns in one place
- **Multi-Cluster**: Supports multiple simultaneous cluster authentications

## See Also

- [SKILL.md](./SKILL.md) - Detailed skill documentation
- [CI Plugin README](../../README.md) - Parent plugin documentation
207 skills/oc-auth/SKILL.md Normal file
@@ -0,0 +1,207 @@
---
name: OC Authentication Helper
description: Helper skill to retrieve OAuth tokens from the correct OpenShift cluster context when multiple clusters are configured
---

# OC Authentication Helper

This skill provides a centralized way to retrieve OAuth tokens from specific OpenShift clusters when multiple cluster contexts are configured in the user's kubeconfig.

## When to Use This Skill

Use this skill whenever you need to:
- Get an OAuth token for API authentication from a specific OpenShift cluster
- Verify authentication to a specific cluster
- Work with multiple OpenShift cluster contexts simultaneously

This skill is used by all commands that need to authenticate with OpenShift clusters:
- `ask-sippy` command (DPCR cluster)
- `trigger-periodic`, `trigger-postsubmit`, `trigger-presubmit` commands (app.ci cluster)
- `query-job-status` command (app.ci cluster)

The skill provides a single `curl_with_token.sh` script that wraps curl and automatically handles OAuth token retrieval and injection, preventing accidental token exposure.

**Due to a known Claude Code bug with git-installed marketplace plugins:**

When referencing files from this skill (scripts, configuration files, etc.), you MUST:

1. **Always use the "Base directory" path** provided at the top of this skill prompt
2. **Never assume** skills are located in `~/.claude/plugins/`
3. **Construct full absolute paths** by combining the base directory with the relative file path

**Example:**
- ❌ WRONG: `~/.claude/plugins/ci/skills/oc-auth/curl_with_token.sh`
- ✅ CORRECT: Use the base directory shown above + `/curl_with_token.sh`

If you see "no such file or directory" errors, verify you're using the base directory path, not the assumed marketplace cache location.
## Prerequisites

1. **oc CLI Installation**
   - Check if installed: `which oc`
   - If not installed, provide instructions for the user's platform
   - Installation guide: https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/getting-started-cli.html

2. **User Authentication**
   - User must be logged in to the target cluster via browser-based authentication
   - Each `oc login` creates a new context in the kubeconfig

## How It Works

The `oc` CLI maintains multiple cluster contexts in `~/.kube/config`. When a user runs `oc login` to different clusters, each login creates a separate context. This skill:

1. Lists all available contexts
2. Searches for the context matching the target cluster by API server URL
3. Retrieves the OAuth token from that specific context
4. Returns the token for use in API calls

## Common Clusters

Here are commonly used OpenShift clusters:

### 1. `app.ci` - OpenShift CI Cluster
- **Console URL**: https://console-openshift-console.apps.ci.l2s4.p1.openshiftapps.com/
- **API Server**: https://api.ci.l2s4.p1.openshiftapps.com:6443
- **Used by**: trigger-periodic, trigger-postsubmit, trigger-presubmit, query-job-status

### 2. `dpcr` - DPCR Cluster
- **Console URL**: https://console-openshift-console.apps.cr.j7t7.p1.openshiftapps.com/
- **API Server**: https://api.cr.j7t7.p1.openshiftapps.com:6443
- **Used by**: ask-sippy

**Note**: The skill supports any OpenShift cluster - simply provide the cluster's API server URL.

## Usage

### Script: `curl_with_token.sh`

A curl wrapper that automatically retrieves the OAuth token and adds it to the request, preventing token exposure.

```bash
curl_with_token.sh <cluster_api_url> [curl arguments...]
```

**Parameters:**
- `<cluster_api_url>`: Full cluster API server URL (e.g., `https://api.ci.l2s4.p1.openshiftapps.com:6443`)
- `[curl arguments...]`: All standard curl arguments (URL, headers, data, etc.)

**How it works:**
1. Finds the oc context matching the specified cluster API URL
2. Retrieves the OAuth token from that cluster context
3. Adds the `Authorization: Bearer <token>` header automatically
4. Executes curl with all provided arguments
5. The token never appears in output or command history

**Exit Codes:**
- `0`: Success
- `1`: Missing or invalid arguments
- `2`: No context found for the specified cluster
- `3`: Failed to retrieve token from context
- Other: curl exit codes
### Example Usage in Commands

Use the curl wrapper instead of regular curl for authenticated requests:

```bash
# Query app.ci API
curl_with_token.sh https://api.ci.l2s4.p1.openshiftapps.com:6443 -X POST \
  -d '{"job_name": "my-job", "job_execution_type": "1"}' \
  https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions

# Query Sippy API (DPCR cluster)
curl_with_token.sh https://api.cr.j7t7.p1.openshiftapps.com:6443 -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"message": "question", "chat_history": []}' \
  https://sippy-auth.dptools.openshift.org/api/chat

# Query any other OpenShift cluster API
curl_with_token.sh https://api.your-cluster.example.com:6443 -X GET \
  https://your-api.example.com/endpoint
```

**Benefits:**
- Token never exposed in logs or output
- Automatic authentication error handling
- Same curl arguments you're already familiar with
- Works with any curl flags (-v, -s, -X, -H, -d, etc.)

## Error Handling

The script provides clear error messages for common scenarios:

1. **Missing or invalid arguments**
   - Error: "Usage: curl_with_token.sh <cluster_api_url> [curl arguments...]"
   - Shows example usage

2. **No context found**
   - Error: "No oc context found for cluster with API server: {url}"
   - Provides authentication instructions

3. **Token retrieval failed**
   - Error: "Failed to retrieve token from context {context}"
   - Suggests re-authenticating to the cluster
## Authentication Instructions

### General Authentication Process:
```
Please authenticate first:
1. Visit the cluster's console URL in your browser
2. Log in through the browser with your credentials
3. Click on username → 'Copy login command'
4. Paste and execute the 'oc login' command in terminal

Verify authentication with:
  oc config get-contexts
  oc cluster-info
```

### Examples:

**For app.ci cluster:**
1. Visit https://console-openshift-console.apps.ci.l2s4.p1.openshiftapps.com/
2. Follow the authentication process above
3. Verify with `oc cluster-info` - it should show API server: https://api.ci.l2s4.p1.openshiftapps.com:6443

**For DPCR cluster:**
1. Visit https://console-openshift-console.apps.cr.j7t7.p1.openshiftapps.com/
2. Follow the authentication process above
3. Verify with `oc cluster-info` - it should show API server: https://api.cr.j7t7.p1.openshiftapps.com:6443

## Benefits

1. **Single Source of Truth**: All context discovery logic is in one place
2. **Consistency**: All commands use the same authentication method
3. **Maintainability**: Changes to cluster names or patterns only need to be updated in one place
4. **Error Handling**: Centralized error messages and authentication instructions
5. **Multi-Cluster Support**: Users can be authenticated to multiple clusters simultaneously

## Implementation Details

The script uses the following approach:

1. **Get all context names**
   ```bash
   oc config get-contexts -o name
   ```

2. **Find matching context by API server URL**
   ```bash
   for ctx in $contexts; do
     cluster_name=$(oc config view -o jsonpath="{.contexts[?(@.name=='$ctx')].context.cluster}")
     server=$(oc config view -o jsonpath="{.clusters[?(@.name=='$cluster_name')].cluster.server}")
     if [ "$server" = "$target_url" ]; then
       echo "$ctx"
       break
     fi
   done
   ```

3. **Retrieve token from context**
   ```bash
   oc whoami -t --context=$context_name
   ```

This ensures we get the token from the correct cluster by matching the exact API server URL, even when multiple cluster contexts exist.
68 skills/oc-auth/curl_with_token.sh Executable file
@@ -0,0 +1,68 @@
#!/bin/bash
# Curl wrapper that automatically adds OAuth token from specified cluster
# Usage: curl_with_token.sh <cluster_api_url> [curl arguments...]
# cluster_api_url: Full API server URL (e.g., https://api.ci.l2s4.p1.openshiftapps.com:6443)
#
# The token is retrieved and added as "Authorization: Bearer <token>" header
# automatically, so it never appears in output or command history.

set -euo pipefail

if [ $# -lt 2 ]; then
    echo "Usage: $0 <cluster_api_url> [curl arguments...]" >&2
    echo "cluster_api_url: Full API server URL" >&2
    echo "Example: $0 https://api.ci.l2s4.p1.openshiftapps.com:6443 -X GET https://api.example.com/endpoint" >&2
    exit 1
fi

CLUSTER_API_URL="$1"
shift # Remove cluster_api_url from arguments

# Extract the cluster API server without protocol for matching
CLUSTER_SERVER=$(echo "$CLUSTER_API_URL" | sed -E 's|^https?://||')

# Find the context for the specified cluster by matching the server URL
CONTEXT=$(oc config get-contexts -o name 2>/dev/null | while read -r ctx; do
    cluster_name=$(oc config view -o jsonpath="{.contexts[?(@.name=='$ctx')].context.cluster}" 2>/dev/null || echo "")
    server=$(oc config view -o jsonpath="{.clusters[?(@.name=='$cluster_name')].cluster.server}" 2>/dev/null || echo "")
    # Extract server without protocol for comparison
    server_clean=$(echo "$server" | sed -E 's|^https?://||')
    if [ "$server_clean" = "$CLUSTER_SERVER" ]; then
        echo "$ctx"
        break
    fi
done)

if [ -z "$CONTEXT" ]; then
    # Generate console URL from API URL
    # Transform: https://api.{subdomain}.{domain}:6443 -> https://console-openshift-console.apps.{subdomain}.{domain}/
    CONSOLE_URL=$(echo "$CLUSTER_API_URL" | sed -E 's|https://api\.(.*):6443|https://console-openshift-console.apps.\1/|')

    echo "Error: No oc context found for cluster with API server: $CLUSTER_API_URL" >&2
    echo "" >&2
    echo "Please authenticate first:" >&2
    echo "1. Visit the cluster's console URL in your browser:" >&2
    echo "   $CONSOLE_URL" >&2
    echo "2. Log in through the browser with your credentials" >&2
    echo "3. Click on username → 'Copy login command'" >&2
    echo "4. Paste and execute the 'oc login' command in terminal" >&2
    echo "" >&2
    echo "Verify authentication with:" >&2
    echo "  oc config get-contexts" >&2
    echo "  oc cluster-info" >&2
    echo "" >&2
    echo "The oc login command should connect to: $CLUSTER_API_URL" >&2
    exit 2
fi

# Get token from the context
TOKEN=$(oc whoami -t --context="$CONTEXT" 2>/dev/null || echo "")

if [ -z "$TOKEN" ]; then
    echo "Error: Failed to retrieve token from context $CONTEXT" >&2
    echo "Please re-authenticate to the cluster: $CLUSTER_API_URL" >&2
    exit 3
fi

# Execute curl with the Authorization header and all provided arguments
exec curl -H "Authorization: Bearer $TOKEN" "$@"
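The console-URL derivation in the script's error path can be exercised on its own; a quick standalone check of the sed transform:

```shell
# Standalone check of the API-to-console URL transform used in the script.
api="https://api.ci.l2s4.p1.openshiftapps.com:6443"
console=$(echo "$api" | sed -E 's|https://api\.(.*):6443|https://console-openshift-console.apps.\1/|')
echo "$console"
```

The capture group grabs everything between `api.` and `:6443`, so the transform holds for any cluster that follows the standard `api.<subdomain>.<domain>:6443` naming.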