Initial commit

skills/scripts/SKILL.md
---
name: Node Tuning Helper Scripts
description: Generate tuned manifests and evaluate node tuning snapshots
---

# Node Tuning Helper Scripts

Detailed instructions for invoking the helper utilities that back `/node-tuning` commands:

- `generate_tuned_profile.py` renders Tuned manifests (`tuned.openshift.io/v1`).
- `analyze_node_tuning.py` inspects live nodes or sosreports for tuning gaps.

## When to Use These Scripts

- Translate structured command inputs into Tuned manifests for the Node Tuning Operator.
- Iterate on generated YAML outside the assistant, or integrate the generator into automation.
- Analyze CPU isolation, IRQ affinity, huge pages, sysctl values, and networking counters from live clusters or archived sosreports.

## Prerequisites

- Python 3.8 or newer (`python3 --version`).
- A repository checkout so the scripts under `plugins/node-tuning/skills/scripts/` are accessible.
- Optional: the `oc` CLI when validating or applying manifests.
- Optional: an extracted sosreport directory when running the analysis script offline.
- Optional (remote analysis): `oc` CLI access plus a valid `KUBECONFIG` when capturing `/proc`/`/sys` or a sosreport via `oc debug node/<name>`. The sosreport workflow pulls the `registry.redhat.io/rhel9/support-tools` image (override with `--toolbox-image` or `TOOLBOX_IMAGE`) and requires registry access. HTTP(S) proxy environment variables from the host are forwarded automatically when present, but using a proxy is optional.
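A quick sanity check for these prerequisites; `oc` is only needed for the optional validation and remote-analysis workflows:

```shell
# Verify the interpreter meets the 3.8+ floor and report whether the
# optional oc CLI is on PATH.
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version' \
  && echo "python3 OK"
if command -v oc >/dev/null 2>&1; then
  echo "oc CLI available"
else
  echo "oc CLI not found (only needed for apply/remote analysis)"
fi
```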
---

## Script: `generate_tuned_profile.py`

### Implementation Steps

1. **Collect Inputs**
   - `--profile-name`: Tuned resource name.
   - `--summary`: `[main]` section summary.
   - Repeatable options: `--include`, `--main-option`, `--variable`, `--sysctl`, `--section` (`SECTION:KEY=VALUE`).
   - Target selectors: `--machine-config-label key=value`, `--match-label key[=value]`.
   - Optional: `--priority` (default 20), `--namespace`, `--output`, `--dry-run`.
   - Use `--list-nodes`/`--node-selector` to inspect nodes and `--label-node NODE:KEY[=VALUE]` (plus `--overwrite-labels`) to tag machines.

2. **Inspect or Label Nodes (optional)**

   ```bash
   # List all worker nodes
   python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
     --list-nodes \
     --node-selector "node-role.kubernetes.io/worker" \
     --skip-manifest

   # Label a specific node for the worker-hp pool
   python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
     --label-node ip-10-0-1-23.ec2.internal:node-role.kubernetes.io/worker-hp= \
     --overwrite-labels \
     --skip-manifest
   ```

3. **Render the Manifest**

   ```bash
   python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
     --profile-name "$PROFILE" \
     --summary "$SUMMARY" \
     --sysctl net.core.netdev_max_backlog=16384 \
     --match-label tuned.openshift.io/custom-net \
     --output .work/node-tuning/$PROFILE/tuned.yaml
   ```

   - Omit `--output` to write `<profile-name>.yaml` in the current directory.
   - Add `--dry-run` to print the manifest to stdout instead.
4. **Review Output**
   - Inspect the generated YAML for accuracy.
   - Optionally format with `yq` or open in an editor for readability.

5. **Validate and Apply**
   - Dry-run: `oc apply --dry-run=client -f <manifest>` (or `--dry-run=server` to validate against the cluster).
   - Apply: `oc apply -f <manifest>`.

### Error Handling

- Missing required options raise `ValueError` with descriptive messages.
- The script exits non-zero when no target selectors (`--machine-config-label` or `--match-label`) are supplied.
- Invalid key/value or section inputs identify the failing argument explicitly.
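The key/value validation driving these errors can be sketched as follows; the helper below mirrors the generator's `_parse_key_value_pairs` logic (the standalone function name is ours):

```python
# Mirrors the validation in the generator's _parse_key_value_pairs helper
# (source included later in this document) to illustrate the error messages.
def parse_pairs(raw_values, parameter, allow_empty_value=False):
    parsed = []
    for raw in raw_values:
        if "=" in raw:
            key, value = raw.split("=", 1)
        elif allow_empty_value:
            key, value = raw, ""
        else:
            raise ValueError(f"{parameter} expects KEY=VALUE entries, got '{raw}'")
        key, value = key.strip(), value.strip()
        if not key:
            raise ValueError(f"{parameter} entries must include a non-empty key (got '{raw}')")
        parsed.append((key, value))
    return parsed

print(parse_pairs(["vm.swappiness=10"], "--sysctl"))
# [('vm.swappiness', '10')]
print(parse_pairs(["tuned.openshift.io/custom-net"], "--match-label", allow_empty_value=True))
# [('tuned.openshift.io/custom-net', '')]
try:
    parse_pairs(["broken"], "--sysctl")
except ValueError as exc:
    print(exc)  # --sysctl expects KEY=VALUE entries, got 'broken'
```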
### Examples

```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
  --profile-name realtime-worker \
  --summary "Realtime tuned profile" \
  --include openshift-node --include realtime \
  --variable isolated_cores=1 \
  --section 'bootloader:cmdline_ocp_realtime=+systemd.cpu_affinity=${not_isolated_cores_expanded}' \
  --machine-config-label machineconfiguration.openshift.io/role=worker-rt \
  --priority 25 \
  --output .work/node-tuning/realtime-worker/tuned.yaml
```

Note the single quotes around the `--section` value: they keep `${not_isolated_cores_expanded}` from being expanded by the shell so it reaches the profile literally.

```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
  --profile-name openshift-node-hugepages \
  --summary "Boot time configuration for hugepages" \
  --include openshift-node \
  --section bootloader:cmdline_openshift_node_hugepages="hugepagesz=2M hugepages=50" \
  --machine-config-label machineconfiguration.openshift.io/role=worker-hp \
  --priority 30 \
  --output .work/node-tuning/openshift-node-hugepages/hugepages-tuned-boottime.yaml
```
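For orientation, the first example renders to roughly the manifest below (shape taken from the renderer in `generate_tuned_profile.py`, reproduced later in this document; it assumes the `${not_isolated_cores_expanded}` expression reaches the script unexpanded, i.e. single-quoted in the shell):

```yaml
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: realtime-worker
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Realtime tuned profile
      include=openshift-node,realtime

      [variables]
      isolated_cores=1

      [bootloader]
      cmdline_ocp_realtime=+systemd.cpu_affinity=${not_isolated_cores_expanded}
    name: realtime-worker
  recommend:
  - machineConfigLabels:
      machineconfiguration.openshift.io/role: "worker-rt"
    priority: 25
    profile: "realtime-worker"
```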
---

## Script: `analyze_node_tuning.py`

### Purpose

Inspect either a live node (`/proc`, `/sys`) or an extracted sosreport snapshot for tuning signals (CPU isolation, IRQ affinity, huge pages, sysctl state, networking counters) and emit actionable recommendations.

### Usage Patterns

- **Live node analysis**

  ```bash
  python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py --format markdown
  ```

- **Remote analysis via oc debug**

  ```bash
  python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
    --node worker-rt-0 \
    --kubeconfig ~/.kube/prod \
    --format markdown
  ```

- **Collect a sosreport via oc debug and analyze it locally**

  ```bash
  python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
    --node worker-rt-0 \
    --toolbox-image registry.example.com/support-tools:latest \
    --sosreport-arg "--case-id=01234567" \
    --sosreport-output .work/node-tuning/sosreports \
    --format json
  ```

- **Offline sosreport analysis**

  ```bash
  python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
    --sosreport /path/to/sosreport-2025-10-20
  ```

- **Automation-friendly JSON**

  ```bash
  python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
    --sosreport /path/to/sosreport \
    --format json --output .work/node-tuning/node-analysis.json
  ```
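The JSON emitted by the automation-friendly invocation above can be consumed by downstream tooling. A minimal sketch, assuming the report exposes a top-level `recommendations` list (verify against the schema the script actually emits):

```python
import json
from pathlib import Path


def print_recommendations(path: str) -> int:
    """Load an analyzer JSON report and print its recommendations.

    The "recommendations" key is an assumption about the report schema;
    adjust it to match the fields analyze_node_tuning.py actually emits.
    """
    report = json.loads(Path(path).read_text(encoding="utf-8"))
    recs = report.get("recommendations", [])
    for rec in recs:
        print(f"- {rec}")
    return len(recs)
```

Usage: `print_recommendations(".work/node-tuning/node-analysis.json")` returns the number of recommendations, which a CI job could use as a gating signal.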
### Implementation Steps

1. **Select data source**
   - Provide `--node <name>` (with optional `--kubeconfig` / `--oc-binary`). By default the helper runs `sosreport` remotely from inside the RHCOS toolbox container (`registry.redhat.io/rhel9/support-tools`). Override the image with `--toolbox-image`, extend the sosreport command with `--sosreport-arg`, or disable the curated OpenShift flags via `--skip-default-sosreport-flags`. Pass `--no-collect-sosreport` to fall back to the direct `/proc` snapshot mode.
   - Provide `--sosreport <dir>` for archived diagnostics; detection finds the embedded `proc/` and `sys/` trees.
   - Omit both switches to query the live filesystem (defaults to `/proc` and `/sys`).
   - Override paths with `--proc-root` or `--sys-root` when the layout differs.
2. **Run analysis**
   - The script parses `cpuinfo`, kernel cmdline parameters (`isolcpus`, `nohz_full`, `tuned.non_isolcpus`), default IRQ affinities, huge page counters, sysctl values (net, vm, kernel), transparent hugepage settings, `netstat`/`sockstat` counters, and `ps` snapshots (when available in the sosreport).
3. **Review the report**
   - Markdown output groups findings by section (System Overview, CPU & Isolation, Huge Pages, Sysctl Highlights, Network Signals, IRQ Affinity, Process Snapshot) and lists recommendations.
   - JSON output contains the same information in structured form for pipelines or dashboards.
4. **Act on recommendations**
   - Apply Tuned profiles, MachineConfig updates, or manual sysctl/irqbalance adjustments.
   - Feed actionable items back into `/node-tuning:generate-tuned-profile` to codify desired state.

### Error Handling

- Missing `proc/` or `sys/` directories trigger descriptive errors.
- Unreadable files are skipped gracefully and noted in observations where relevant.
- Non-numeric sysctl values are flagged for manual investigation.

### Example Output (Markdown excerpt)

```
# Node Tuning Analysis

## System Overview
- Hostname: worker-rt-1
- Kernel: 4.18.0-477.el8
- NUMA nodes: 2
- Kernel cmdline: `BOOT_IMAGE=... isolcpus=2-15 tuned.non_isolcpus=0-1`

## CPU & Isolation
- Logical CPUs: 32
- Physical cores: 16 across 2 socket(s)
- SMT detected: yes
- Isolated CPUs: 2-15
...

## Recommended Actions
- Configure net.core.netdev_max_backlog (>=32768) to accommodate bursty NIC traffic.
- Transparent Hugepages are not disabled (`[never]` not selected). Consider setting to `never` for latency-sensitive workloads.
- 4 IRQs overlap isolated CPUs. Relocate interrupt affinities using tuned profiles or irqbalance.
```

### Follow-up Automation Ideas

- Persist JSON results in `.work/node-tuning/<host>/analysis.json` for historical tracing.
- Gate upgrades by comparing recommendations across nodes.
- Integrate with CI jobs that validate cluster tuning post-change.
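The first two ideas can be combined into a small wrapper; the node names here are illustrative:

```shell
# Analyze a set of nodes and persist one JSON report per host for later
# comparison. The trailing `|| echo` keeps the loop going when a single
# node fails to answer.
for node in worker-rt-0 worker-rt-1; do
  mkdir -p ".work/node-tuning/${node}"
  python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
    --node "${node}" \
    --format json \
    --output ".work/node-tuning/${node}/analysis.json" \
    || echo "analysis failed for ${node}"
done
```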
skills/scripts/analyze_node_tuning.py (1292 lines; contents not shown here)

skills/scripts/generate_tuned_profile.py
"""
Utility script to generate tuned.openshift.io/v1 Tuned manifests.

The script is intentionally dependency-free so it can run anywhere Python 3.8+
is available (CI, developer workstations, or automation pipelines).
"""

from __future__ import annotations

import argparse
import json
import os
import subprocess
import sys
from collections import OrderedDict
from typing import Iterable, List, Optional, Sequence, Tuple


def _parse_key_value_pairs(
    raw_values: Sequence[str],
    *,
    parameter: str,
    allow_empty_value: bool = False,
) -> List[Tuple[str, str]]:
    """Split KEY=VALUE (or KEY when allow_empty_value=True) pairs."""
    parsed: List[Tuple[str, str]] = []
    for raw in raw_values:
        if "=" in raw:
            key, value = raw.split("=", 1)
        elif allow_empty_value:
            key, value = raw, ""
        else:
            raise ValueError(f"{parameter} expects KEY=VALUE entries, got '{raw}'")
        key = key.strip()
        value = value.strip()
        if not key:
            raise ValueError(f"{parameter} entries must include a non-empty key (got '{raw}')")
        parsed.append((key, value))
    return parsed


def _parse_section_entries(raw_values: Sequence[str]) -> List[Tuple[str, str, str]]:
    """
    Parse SECTION:KEY=VALUE entries for arbitrary tuned.ini sections.

    Examples:
        bootloader:cmdline_ocp_realtime=+nohz_full=1-3
        service:service.stalld=start,enable
    """
    parsed: List[Tuple[str, str, str]] = []
    for raw in raw_values:
        if ":" not in raw:
            raise ValueError(
                f"--section expects SECTION:KEY=VALUE entries, got '{raw}'"
            )
        section, remainder = raw.split(":", 1)
        section = section.strip()
        if not section:
            raise ValueError(f"--section requires a section name before ':', got '{raw}'")
        key_value = _parse_key_value_pairs([remainder], parameter="--section")
        parsed.append((section, key_value[0][0], key_value[0][1]))
    return parsed


def _build_profile_ini(
    *,
    summary: str,
    includes: Sequence[str],
    main_options: Sequence[Tuple[str, str]],
    variables: Sequence[Tuple[str, str]],
    sysctls: Sequence[Tuple[str, str]],
    extra_sections: Sequence[Tuple[str, str, str]],
) -> str:
    sections: "OrderedDict[str, List[str]]" = OrderedDict()
    sections["main"] = [f"summary={summary}"]
    if includes:
        sections["main"].append(f"include={','.join(includes)}")
    for key, value in main_options:
        sections["main"].append(f"{key}={value}")

    if variables:
        sections["variables"] = [f"{key}={value}" for key, value in variables]
    if sysctls:
        sections["sysctl"] = [f"{key}={value}" for key, value in sysctls]

    for section, key, value in extra_sections:
        section = section.strip()
        if not section:
            continue
        if section not in sections:
            sections[section] = []
        sections[section].append(f"{key}={value}")

    rendered_sections: List[str] = []
    non_empty_sections = [(name, lines) for name, lines in sections.items() if lines]
    for idx, (name, lines) in enumerate(non_empty_sections):
        rendered_sections.append(f"[{name}]")
        rendered_sections.extend(lines)
        if idx != len(non_empty_sections) - 1:
            rendered_sections.append("")
    return "\n".join(rendered_sections)


def _json_string(value: str) -> str:
    """Return a JSON-encoded string (adds surrounding quotes, escapes)."""
    return json.dumps(value)


def _render_manifest(
    *,
    profile_name: str,
    namespace: str,
    profile_ini: str,
    machine_config_labels: Sequence[Tuple[str, str]],
    match_labels: Sequence[Tuple[str, str]],
    priority: int,
) -> str:
    lines: List[str] = [
        "apiVersion: tuned.openshift.io/v1",
        "kind: Tuned",
        "metadata:",
        f"  name: {profile_name}",
    ]
    if namespace:
        lines.append(f"  namespace: {namespace}")
    lines.extend(
        [
            "spec:",
            "  profile:",
            "  - data: |",
        ]
    )
    profile_lines = profile_ini.splitlines()
    if not profile_lines:
        raise ValueError("Profile contents may not be empty")
    for entry in profile_lines:
        # Preserve blank lines for readability inside the literal block.
        if entry:
            lines.append(f"      {entry}")
        else:
            lines.append("      ")
    lines.append(f"    name: {profile_name}")

    if not machine_config_labels and not match_labels:
        raise ValueError("At least one --machine-config-label or --match-label must be provided")

    lines.append("  recommend:")

    if machine_config_labels:
        lines.append("  - machineConfigLabels:")
        for key, value in machine_config_labels:
            lines.append(f"      {key}: {_json_string(value)}")
        start_written = True
    else:
        start_written = False

    if match_labels:
        prefix = "    match:" if start_written else "  - match:"
        lines.append(prefix)
        # List items under match: sit at the same indent whether or not
        # machineConfigLabels already opened the recommend entry.
        item_indent = "    - "
        value_indent = "      "
        for label, value in match_labels:
            lines.append(f"{item_indent}label: {_json_string(label)}")
            if value != "":
                lines.append(f"{value_indent}value: {_json_string(value)}")
        start_written = True

    priority_prefix = "    priority" if start_written else "  - priority"
    lines.append(f"{priority_prefix}: {priority}")

    profile_prefix = "    profile" if start_written else "  - profile"
    lines.append(f"{profile_prefix}: {_json_string(profile_name)}")

    return "\n".join(lines) + "\n"


def _run_oc_command(command: Sequence[str]) -> subprocess.CompletedProcess:
    """Execute an oc command and return the completed process."""
    try:
        result = subprocess.run(
            command,
            check=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
        )
    except FileNotFoundError as exc:
        raise RuntimeError(
            "Unable to locate the 'oc' binary. Install the OpenShift CLI or set --oc-binary."
        ) from exc
    except subprocess.CalledProcessError as exc:
        message = exc.stderr.strip() or exc.stdout.strip() or str(exc)
        raise RuntimeError(f"Command '{' '.join(command)}' failed: {message}") from exc
    return result


def list_nodes(*, oc_binary: str, selector: Optional[str]) -> List[str]:
    """List nodes using the oc CLI and return their names."""
    command: List[str] = [oc_binary, "get", "nodes", "-o", "name"]
    if selector:
        command.extend(["-l", selector])
    result = _run_oc_command(command)
    nodes = [line.strip() for line in result.stdout.splitlines() if line.strip()]
    if nodes:
        for node in nodes:
            print(node)
    else:
        print("No nodes matched the provided selector.")
    return nodes


def label_nodes(
    *,
    oc_binary: str,
    entries: Sequence[str],
    overwrite: bool,
) -> None:
    """Label nodes via oc CLI using NODE:label entries."""
    if not entries:
        return
    for raw in entries:
        if ":" not in raw:
            raise ValueError(
                f"--label-node expects NODE:KEY[=VALUE] format (e.g. node1:node-role.kubernetes.io/worker-hp=) - got '{raw}'"
            )
        node_name, label = raw.split(":", 1)
        node_name = node_name.strip()
        label = label.strip()
        if not node_name or not label:
            raise ValueError(f"--label-node entry must include both node name and label (got '{raw}')")
        command: List[str] = [oc_binary, "label", "node", node_name, label]
        if overwrite:
            command.append("--overwrite")
        _run_oc_command(command)
        print(f"Labeled {node_name} with {label}")


def generate_manifest(args: argparse.Namespace) -> str:
    includes = [value.strip() for value in args.include or [] if value.strip()]

    main_options = _parse_key_value_pairs(args.main_option or [], parameter="--main-option")
    variables = _parse_key_value_pairs(args.variable or [], parameter="--variable")
    sysctls = _parse_key_value_pairs(args.sysctl or [], parameter="--sysctl")
    extra_sections = _parse_section_entries(args.section or [])

    match_labels = _parse_key_value_pairs(
        args.match_label or [],
        parameter="--match-label",
        allow_empty_value=True,
    )
    machine_config_labels = _parse_key_value_pairs(
        args.machine_config_label or [],
        parameter="--machine-config-label",
        allow_empty_value=True,
    )

    profile_ini = _build_profile_ini(
        summary=args.summary,
        includes=includes,
        main_options=main_options,
        variables=variables,
        sysctls=sysctls,
        extra_sections=extra_sections,
    )

    manifest = _render_manifest(
        profile_name=args.profile_name,
        namespace=args.namespace,
        profile_ini=profile_ini,
        machine_config_labels=machine_config_labels,
        match_labels=match_labels,
        priority=args.priority,
    )
    return manifest


def parse_arguments(argv: Iterable[str]) -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Generate tuned.openshift.io/v1 Tuned manifests for the Node Tuning Operator.",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument("--profile-name", help="Name of the Tuned profile and resource")
    parser.add_argument("--summary", help="Summary placed inside the [main] section")
    parser.add_argument(
        "--namespace",
        default="openshift-cluster-node-tuning-operator",
        help="Namespace to place in metadata.namespace",
    )
    parser.add_argument(
        "--include",
        action="append",
        help="Append an entry to the 'include=' list (multiple flags allowed)",
    )
    parser.add_argument(
        "--main-option",
        action="append",
        help="Add KEY=VALUE to the [main] section beyond summary/include",
    )
    parser.add_argument(
        "--variable",
        action="append",
        help="Add KEY=VALUE to the [variables] section",
    )
    parser.add_argument(
        "--sysctl",
        action="append",
        help="Add KEY=VALUE to the [sysctl] section",
    )
    parser.add_argument(
        "--section",
        action="append",
        help="Add arbitrary SECTION:KEY=VALUE lines (e.g. bootloader:cmdline=...)",
    )
    parser.add_argument(
        "--machine-config-label",
        action="append",
        help="Add a MachineConfigPool selector (key=value) under machineConfigLabels",
    )
    parser.add_argument(
        "--match-label",
        action="append",
        help="Add a node label entry (key[=value]) under recommend[].match[]",
    )
    parser.add_argument(
        "--priority",
        type=int,
        default=20,
        help="Recommendation priority",
    )
    parser.add_argument(
        "--output",
        help="Output file path; defaults to <profile-name>.yaml in the current directory",
    )
    parser.add_argument(
        "--dry-run",
        action="store_true",
        help="Print manifest to stdout instead of writing to disk",
    )
    parser.add_argument(
        "--skip-manifest",
        action="store_true",
        help="Skip manifest generation; useful when only listing or labeling nodes",
    )
    parser.add_argument(
        "--list-nodes",
        action="store_true",
        help="List nodes via 'oc get nodes' before other actions",
    )
    parser.add_argument(
        "--node-selector",
        help="Label selector to filter nodes when using --list-nodes",
    )
    parser.add_argument(
        "--label-node",
        action="append",
        help="Label nodes using NODE:KEY[=VALUE] entries (repeat for multiple nodes)",
    )
    parser.add_argument(
        "--overwrite-labels",
        action="store_true",
        help="Allow overwriting existing labels when using --label-node",
    )
    parser.add_argument(
        "--oc-binary",
        default=os.environ.get("OC_BIN", "oc"),
        help="Path to the oc binary to execute",
    )
    return parser.parse_args(argv)


def main(argv: Sequence[str]) -> int:
    args = parse_arguments(argv)
    try:
        if args.list_nodes:
            list_nodes(oc_binary=args.oc_binary, selector=args.node_selector)

        if args.label_node:
            label_nodes(
                oc_binary=args.oc_binary,
                entries=args.label_node,
                overwrite=args.overwrite_labels,
            )

        if args.skip_manifest:
            return 0

        if not args.profile_name:
            raise ValueError("--profile-name is required unless --skip-manifest is set")
        if not args.summary:
            raise ValueError("--summary is required unless --skip-manifest is set")

        manifest = generate_manifest(args)
    except (ValueError, RuntimeError) as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1

    if args.dry_run:
        sys.stdout.write(manifest)
        return 0

    output_path = args.output or f"{args.profile_name}.yaml"
    output_dir = os.path.dirname(output_path)
    if output_dir and not os.path.exists(output_dir):
        os.makedirs(output_dir, exist_ok=True)

    with open(output_path, "w", encoding="utf-8") as handle:
        handle.write(manifest)
    print(f"Wrote Tuned manifest to {output_path}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main(sys.argv[1:]))