Initial commit

Zhongwei Li
2025-11-30 08:46:08 +08:00
commit b9da7b3a23
8 changed files with 2283 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,14 @@
{
  "name": "node-tuning",
  "description": "Automatically create and apply tuned profile",
  "version": "1.0.0",
  "author": {
    "name": "github.com/openshift-eng"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}

README.md Normal file

@@ -0,0 +1,3 @@
# node-tuning
Automatically create and apply tuned profile

commands/analyze-node-tuning.md Normal file

@@ -0,0 +1,116 @@
---
description: Analyze kernel/sysctl tuning from a live node or sosreport snapshot and propose NTO recommendations
argument-hint: "[--sosreport PATH] [--format json|markdown] [--max-irq-samples N]"
---
## Name
node-tuning:analyze-node-tuning
## Synopsis
```text
/node-tuning:analyze-node-tuning [--sosreport PATH] [--collect-sosreport|--no-collect-sosreport] [--sosreport-output PATH] [--node NODE] [--kubeconfig PATH] [--oc-binary PATH] [--format json|markdown] [--max-irq-samples N] [--keep-snapshot]
```
## Description
The `node-tuning:analyze-node-tuning` command inspects kernel tuning signals gathered from either a live OpenShift node (`/proc`, `/sys`), an `oc debug node/<name>` snapshot captured via KUBECONFIG, or an extracted sosreport directory. It parses CPU isolation parameters, IRQ affinity, huge page allocation, critical sysctl settings, and networking counters before compiling actionable recommendations that can be enforced through Tuned profiles or MachineConfig updates.
Use this command when you need to:
- Audit a node for tuning regressions after upgrades or configuration changes.
- Translate findings into remediation steps for the Node Tuning Operator.
- Produce JSON or Markdown reports suitable for incident response, CI gates, or documentation.
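The findings draw on standard kernel interfaces; the sketch below (paths shown for a live system, illustrative rather than the helper's exact read list) shows the kind of raw signals involved:
```bash
# Illustrative only: the kinds of kernel interfaces the analysis draws on.
cat /proc/cmdline                                    # isolcpus, nohz_full, tuned.non_isolcpus
cat /sys/devices/system/cpu/isolated                 # isolated CPU list
cat /proc/irq/default_smp_affinity                   # default IRQ affinity mask
grep -i hugepages /proc/meminfo                      # global huge page pool
cat /sys/kernel/mm/transparent_hugepage/enabled      # THP state
sysctl net.core.netdev_max_backlog vm.swappiness     # tuning-critical sysctls
grep -E 'TCPBacklogDrop|SyncookiesFailed' /proc/net/netstat || true   # TcpExt counters
```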
## Implementation
1. **Establish data source**
- Live (local) analysis: the helper script defaults to `/proc` and `/sys`. Ensure the command runs on the target node (or within an SSH session / debug pod).
- Remote analysis via `oc debug`: provide `--node <name>` (plus optional `--kubeconfig` and `--oc-binary`). The helper defaults to entering the RHCOS `toolbox` (backed by the `registry.redhat.io/rhel9/support-tools` image) via `oc debug node/<name>`, running `sosreport --batch --quiet -e openshift -e openshift_ovn -e openvswitch -e podman -e crio -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on -k networking.ethtool-namespaces=off --all-logs --plugin-timeout=600`, streaming the archive locally (respecting `--sosreport-output` when set), and analyzing the extracted data.
  - Use `--toolbox-image` (or `TOOLBOX_IMAGE`) to point at a mirrored support-tools image, `--sosreport-arg` to append extra flags (repeat per flag), or `--skip-default-sosreport-flags` to take full control of the sosreport invocation.
  - Host HTTP(S) proxy variables are forwarded when present, but a proxy is entirely optional.
  - Add `--no-collect-sosreport` to skip sosreport generation entirely, and `--keep-snapshot` to retain the downloaded files.
- Offline analysis: provide `--sosreport /path/to/sosreport-<timestamp>` pointing to an extracted sosreport directory; the script auto-discovers embedded `proc/` and `sys/` trees.
- Override non-standard layouts with `--proc-root` or `--sys-root` as needed.
2. **Prepare workspace**
- Create `.work/node-tuning/<hostname>/` to store generated reports (remote snapshots and sosreport captures may reuse this path or default to a temporary directory).
- Decide whether you want Markdown (human-readable) or JSON (automation-ready) output. Set `--format json` and `--output` for machine consumption.
3. **Invoke the analysis helper**
```bash
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
--sosreport "$SOS_DIR" \
--format markdown \
--max-irq-samples 10 \
--output ".work/node-tuning/${HOSTNAME}/analysis.md"
```
- Omit `--sosreport` and `--node` to evaluate the local environment.
- Lower `--max-irq-samples` to cap the number of IRQ affinity overlaps listed in the report.
4. **Interpret results**
- **System Overview**: Validates kernel release, NUMA nodes, and kernel cmdline flags (isolcpus, nohz_full, tuned.non_isolcpus).
- **CPU & Isolation**: Highlights SMT detection, isolated CPU masks, and mismatches between default IRQ affinity and isolated cores.
- **Huge Pages**: Summarizes global and per-NUMA huge page pools, reserved counts, and sysctl targets.
- **Sysctl Highlights**: Surfaces values for tuning-critical keys (e.g., `net.core.netdev_max_backlog`, `vm.swappiness`, THP state) with recommendations when thresholds are missed.
- **Network Signals**: Examines `TcpExt` counters and sockstat data for backlog drops, syncookie failures, or orphaned sockets.
- **IRQ Affinity**: Lists IRQs overlapping isolated CPUs so you can adjust tuned profiles or irqbalance policies.
- **Process Snapshot**: When available in sosreport snapshots, shows top CPU consumers and flags irqbalance presence.
5. **Apply remediation**
- Feed the recommendations into `/node-tuning:generate-tuned-profile` or MachineConfig workflows.
- For immediate live tuning, adjust sysctls or interrupt affinities manually, then rerun the analysis to confirm remediation.
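A minimal before/after loop for step 5 (node name and paths are illustrative):
```bash
OUT=.work/node-tuning/worker-rt-0
mkdir -p "$OUT"
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
  --node worker-rt-0 --format markdown --output "$OUT/before.md"
# ... apply the Tuned profile / MachineConfig remediation here ...
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
  --node worker-rt-0 --format markdown --output "$OUT/after.md"
diff -u "$OUT/before.md" "$OUT/after.md" || true   # recommendations should shrink or disappear
```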
## Return Value
- **Success**: Returns a Markdown or JSON report summarizing findings and recommended actions.
- **Failure**: Reports descriptive errors (e.g., missing `proc/` or `sys/` directories, unreadable sosreport path) and exits non-zero.
## Examples
1. **Analyze a live node and print Markdown**
```text
/node-tuning:analyze-node-tuning --format markdown
```
2. **Analyze a remote node via `oc debug` (collects a sosreport by default)**
```text
/node-tuning:analyze-node-tuning \
--node worker-rt-0 \
--kubeconfig ~/.kube/prod \
--format markdown
```
3. **Collect a sosreport via `oc debug` (custom image + flags) and analyze it locally**
```text
/node-tuning:analyze-node-tuning \
--node worker-rt-0 \
--toolbox-image registry.example.com/support-tools:latest \
--sosreport-arg "--case-id=01234567" \
--sosreport-output .work/node-tuning/sosreports \
--format json
```
4. **Inspect an extracted sosreport and save JSON to disk**
```text
/node-tuning:analyze-node-tuning \
--sosreport ~/Downloads/sosreport-worker-001 \
--format json \
--max-irq-samples 20
```
5. **Limit the recommendation set to a handful of IRQ overlaps**
```text
/node-tuning:analyze-node-tuning --sosreport /tmp/sosreport --max-irq-samples 5
```
## Arguments:
- **--sosreport**: Path to an extracted sosreport directory to analyze instead of the live filesystem.
- **--format**: Output format (`markdown` default or `json` for structured data).
- **--output**: Optional file path where the helper writes the report.
- **--max-irq-samples**: Maximum number of IRQ affinity overlaps to include in the output (default 15).
- **--proc-root**: Override path to the procfs tree when auto-detection is insufficient.
- **--sys-root**: Override path to the sysfs tree when auto-detection is insufficient.
- **--node**: OpenShift node name to analyze via `oc debug node/<name>` when direct access is not possible.
- **--kubeconfig**: Path to the kubeconfig file used for `oc debug`; relies on the current oc context when omitted.
- **--oc-binary**: Path to the `oc` binary (defaults to `$OC_BIN` or `oc`).
- **--keep-snapshot**: Preserve the temporary directory produced from `oc debug` (snapshots or sosreports) for later inspection.
- **--collect-sosreport**: Trigger `sosreport` via `oc debug node/<name>`, download the archive, and analyze the extracted contents automatically (default behavior whenever `--node` is supplied and no other source is chosen).
- **--no-collect-sosreport**: Disable the default sosreport workflow when `--node` is supplied, falling back to the raw `/proc`/`/sys` snapshot.
- **--sosreport-output**: Directory where downloaded sosreport archives and their extraction should be placed (defaults to a temporary directory).
- **--toolbox-image**: Override the container image that toolbox pulls when collecting sosreport (defaults to `registry.redhat.io/rhel9/support-tools:latest` or `TOOLBOX_IMAGE` env).
- **--sosreport-arg**: Append an additional argument to the sosreport command (repeatable).
- **--skip-default-sosreport-flags**: Do not include the default OpenShift-focused sosreport plugins/collectors; only use values supplied via `--sosreport-arg`.
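None of the examples above exercises `--proc-root`/`--sys-root`; a sketch for a non-standard snapshot layout (directory names are illustrative):
```bash
SNAP=/srv/snapshots/worker-rt-0        # hypothetical hand-rolled snapshot
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
  --proc-root "$SNAP/proc" \
  --sys-root "$SNAP/sys" \
  --format markdown \
  --output .work/node-tuning/worker-rt-0/analysis.md
```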

commands/generate-tuned-profile.md Normal file

@@ -0,0 +1,200 @@
---
description: Generate a Tuned (tuned.openshift.io/v1) profile manifest for the Node Tuning Operator
argument-hint: "[profile-name] [--summary ...] [--sysctl ...] [options]"
---
## Name
node-tuning:generate-tuned-profile
## Synopsis
```text
/node-tuning:generate-tuned-profile [profile-name] [--summary TEXT] [--include VALUE ...] [--sysctl KEY=VALUE ...] [--match-label KEY[=VALUE] ...] [options]
```
## Description
The `node-tuning:generate-tuned-profile` command streamlines creation of `tuned.openshift.io/v1` manifests for the OpenShift Node Tuning Operator. It captures the desired Tuned profile metadata, tuned daemon configuration blocks (e.g. `[sysctl]`, `[variables]`, `[bootloader]`), and recommendation rules, then invokes the helper script at `plugins/node-tuning/skills/scripts/generate_tuned_profile.py` to render a ready-to-apply YAML file.
Use this command whenever you need to:
- Bootstrap a new Tuned custom profile targeting selected nodes or machine config pools
- Generate manifests that can be version-controlled alongside other automation
- Iterate on sysctl, bootloader, or service parameters without hand-editing multi-line YAML
The generated manifest follows the structure expected by the cluster Node Tuning Operator:
```yaml
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: <profile-name>
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=...
      include=...
      ...
    name: <profile-name>
  recommend:
  - machineConfigLabels: {...}
    match:
    - label: ...
      value: ...
    priority: <priority>
    profile: <profile-name>
```
## Implementation
1. **Collect inputs**
- Confirm Python 3.8+ is available (`python3 --version`).
- Gather the Tuned profile name, summary, optional include chain, sysctl values, variables, and any additional section lines (e.g. `[bootloader]`, `[service]`).
- Determine targeting rules: either `--match-label` entries (node labels) or `--machine-config-label` entries (MachineConfigPool selectors).
- Decide whether an accompanying MachineConfigPool (MCP) workflow is required for kernel boot arguments (see **Advanced Workflow** below).
- Use the helper's `--list-nodes` and `--label-node` flags when you need to inspect or label nodes prior to manifest generation.
2. **Build execution workspace**
- Create or reuse `.work/node-tuning/<profile-name>/`.
- Decide on the manifest filename (default `tuned.yaml` inside the workspace) or provide `--output` to override.
3. **Invoke the generator script**
- Run the helper with the collected switches:
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
--profile-name "$PROFILE_NAME" \
--summary "$SUMMARY" \
--include openshift-node \
--sysctl net.core.netdev_max_backlog=16384 \
--variable isolated_cores=1 \
--section bootloader:cmdline_ocp_realtime=+systemd.cpu_affinity=${not_isolated_cores_expanded} \
--machine-config-label machineconfiguration.openshift.io/role=worker-rt \
--match-label tuned.openshift.io/elasticsearch="" \
--priority 25 \
--output ".work/node-tuning/$PROFILE_NAME/tuned.yaml"
```
- Use `--dry-run` to print the manifest to stdout before writing, if desired.
4. **Validate output**
- Inspect the generated YAML (`yq e . .work/node-tuning/$PROFILE_NAME/tuned.yaml` or open in an editor).
- Optionally run `oc apply --dry-run=server -f .work/node-tuning/$PROFILE_NAME/tuned.yaml` to confirm schema compatibility against the cluster API.
5. **Apply or distribute**
- Apply to a cluster with `oc apply -f .work/node-tuning/$PROFILE_NAME/tuned.yaml`.
- Commit the manifest to Git or attach to automated pipelines as needed.
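Steps 3-5 condensed into one hedged sketch (profile values are illustrative and mirror Example 2 below):
```bash
PROFILE_NAME=custom-net-tuned
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
  --profile-name "$PROFILE_NAME" \
  --summary "Increase conntrack table" \
  --sysctl net.netfilter.nf_conntrack_max=262144 \
  --match-label tuned.openshift.io/custom-net \
  --output ".work/node-tuning/$PROFILE_NAME/tuned.yaml"
oc apply --dry-run=server -f ".work/node-tuning/$PROFILE_NAME/tuned.yaml"   # schema check only
oc apply -f ".work/node-tuning/$PROFILE_NAME/tuned.yaml"
oc get tuned -n openshift-cluster-node-tuning-operator "$PROFILE_NAME"
```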
## Advanced Workflow: Huge Pages with a Dedicated MachineConfigPool
Use this workflow when enabling huge pages or other kernel boot parameters that require coordinating the Node Tuning Operator with the Machine Config Operator while minimizing reboots.
1. **Label target nodes**
- Preview candidates: `python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py --list-nodes --node-selector "node-role.kubernetes.io/worker" --skip-manifest`.
- Label workers with the helper (repeat per node):
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
--label-node ip-10-0-1-23.ec2.internal:node-role.kubernetes.io/worker-hp= \
--overwrite-labels \
--skip-manifest
```
- Alternatively run `oc label node <node> node-role.kubernetes.io/worker-hp=` directly if you prefer the CLI.
2. **Generate the Tuned manifest**
- Include bootloader arguments via the helper script:
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
--profile-name "openshift-node-hugepages" \
--summary "Boot time configuration for hugepages" \
--include openshift-node \
--section bootloader:cmdline_openshift_node_hugepages="hugepagesz=2M hugepages=50" \
--machine-config-label machineconfiguration.openshift.io/role=worker-hp \
--priority 30 \
--output .work/node-tuning/openshift-node-hugepages/hugepages-tuned-boottime.yaml
```
- Review the `[bootloader]` section to ensure the kernel arguments match the desired configuration (e.g. `kernel-rt`, huge pages, additional sysctls).
3. **Author the MachineConfigPool manifest**
- Create `.work/node-tuning/openshift-node-hugepages/hugepages-mcp.yaml` with:
```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-hp
  labels:
    worker-hp: ""
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values:
      - worker
      - worker-hp
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-hp: ""
```
4. **Apply manifests (optionally validate first with `oc apply --dry-run=server`)**
- `oc apply -f .work/node-tuning/openshift-node-hugepages/hugepages-tuned-boottime.yaml`
- `oc apply -f .work/node-tuning/openshift-node-hugepages/hugepages-mcp.yaml`
- Watch progress: `oc get mcp worker-hp -w`
5. **Verify results**
- Confirm huge page allocation after the reboot: `oc get node <node> -o jsonpath="{.status.allocatable.hugepages-2Mi}"`
- Inspect kernel arguments: `oc debug node/<node> -q -- chroot /host cat /proc/cmdline`
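A hedged verification sketch for steps 4-5, assuming the pool reports the standard `Updated` condition:
```bash
oc wait mcp/worker-hp --for=condition=Updated --timeout=30m
for node in $(oc get nodes -l node-role.kubernetes.io/worker-hp= -o name); do
  echo "== $node =="
  oc get "$node" -o jsonpath='{.status.allocatable.hugepages-2Mi}{"\n"}'
  oc debug "$node" -q -- chroot /host cat /proc/cmdline | tr ' ' '\n' | grep '^hugepages'
done
```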
## Return Value
- **Success**: Path to the generated manifest and the profile name are returned to the caller.
- **Failure**: Script exits non-zero with stderr diagnostics (e.g. invalid `KEY=VALUE` pair, missing labels, unwritable output path).
## Examples
1. **Realtime worker profile targeting worker-rt MCP**
```text
/node-tuning:generate-tuned-profile openshift-realtime \
--summary "Custom realtime tuned profile" \
--include openshift-node --include realtime \
--variable isolated_cores=1 \
--section bootloader:cmdline_ocp_realtime=+systemd.cpu_affinity=${not_isolated_cores_expanded} \
--machine-config-label machineconfiguration.openshift.io/role=worker-rt \
--output .work/node-tuning/openshift-realtime/realtime.yaml
```
2. **Sysctl-only profile matched by node label**
```text
/node-tuning:generate-tuned-profile custom-net-tuned \
--summary "Increase conntrack table" \
--sysctl net.netfilter.nf_conntrack_max=262144 \
--match-label tuned.openshift.io/custom-net \
--priority 18
```
3. **Preview manifest without writing to disk**
```text
/node-tuning:generate-tuned-profile pidmax-test \
--summary "Raise pid max" \
--sysctl kernel.pid_max=131072 \
--match-label tuned.openshift.io/pidmax="" \
--dry-run
```
## Arguments:
- **$1** (`profile-name`): Name for the Tuned profile and manifest resource.
- **--summary**: Required summary string placed in the `[main]` section.
- **--include**: Optional include chain entries (multiple allowed).
- **--main-option**: Additional `[main]` section key/value pairs (`KEY=VALUE`).
- **--variable**: Add entries to the `[variables]` section (`KEY=VALUE`).
- **--sysctl**: Add sysctl settings to the `[sysctl]` section (`KEY=VALUE`).
- **--section**: Add lines to arbitrary sections using `SECTION:KEY=VALUE`.
- **--machine-config-label**: MachineConfigPool selector labels (`key=value`) applied under `machineConfigLabels`.
- **--match-label**: Node selector labels for the `recommend[].match[]` block; omit `=value` to match existence only.
- **--priority**: Recommendation priority (integer, default 20).
- **--namespace**: Override the manifest namespace (default `openshift-cluster-node-tuning-operator`).
- **--output**: Destination file path; defaults to `<profile-name>.yaml` in the current directory.
- **--dry-run**: Print manifest to stdout instead of writing to a file.
- **--skip-manifest**: Skip manifest generation; useful when only listing or labeling nodes.
- **--list-nodes**: List nodes via `oc get nodes` (works with `--node-selector`).
- **--node-selector**: Label selector applied when `--list-nodes` is used.
- **--label-node**: Apply labels to nodes using `NODE:KEY[=VALUE]` notation; repeatable.
- **--overwrite-labels**: Allow overwriting existing labels when labeling nodes.
- **--oc-binary**: Path to the `oc` executable (defaults to `$OC_BIN` or `oc`).
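Because `--dry-run` prints the manifest to stdout, it can be piped straight to `oc` for server-side validation without writing a file (a sketch; values are illustrative):
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
  --profile-name pidmax-test \
  --summary "Raise pid max" \
  --sysctl kernel.pid_max=131072 \
  --match-label tuned.openshift.io/pidmax= \
  --dry-run | oc apply --dry-run=server -f -
```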

plugin.lock.json Normal file

@@ -0,0 +1,61 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:openshift-eng/ai-helpers:plugins/node-tuning",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "fef29390caa6cbea53442961962a2c708c15ed47",
    "treeHash": "7845a42d4cc3371458e0d08eeb3238859cdd7ca540e0715e933622281dbb1735",
    "generatedAt": "2025-11-28T10:27:31.563422Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "node-tuning",
    "description": "Automatically create and apply tuned profile",
    "version": "1.0.0"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "26494cca3deb591b796ec73b9a3f6d378789efcbbc047fc85bf5a1f512d88bf0"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "2b8d7a8e0767f06b1146b67ba4aa6ba83e0936588cdd692c0ae6c443d672d39c"
      },
      {
        "path": "commands/analyze-node-tuning.md",
        "sha256": "6c3fc4379ffaca1f44fdfbe6232d03289c50f522dd97287f365d9e4aabd30351"
      },
      {
        "path": "commands/generate-tuned-profile.md",
        "sha256": "5656e219f18ed3ddcd95b386f1f7e147ba1cb69e1878114aa62d76e4c39fa2c5"
      },
      {
        "path": "skills/scripts/generate_tuned_profile.py",
        "sha256": "64ff03b8b2c05b08bbff33e2c601cf7c53c02c55d54f39a9e9ffb7bb822cc72f"
      },
      {
        "path": "skills/scripts/analyze_node_tuning.py",
        "sha256": "3769a2edcf48cc2113214f47e2a91c3d6a8b0494b3b955bf94ac3adcc34d613a"
      },
      {
        "path": "skills/scripts/SKILL.md",
        "sha256": "d1afc55c0edeb5d8272ca792f13b86148c66bc00192fbbd561b067f5c7448fd3"
      }
    ],
    "dirSha256": "7845a42d4cc3371458e0d08eeb3238859cdd7ca540e0715e933622281dbb1735"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}

skills/scripts/SKILL.md Normal file

@@ -0,0 +1,183 @@
---
name: Node Tuning Helper Scripts
description: Generate tuned manifests and evaluate node tuning snapshots
---
# Node Tuning Helper Scripts
Detailed instructions for invoking the helper utilities that back `/node-tuning` commands:
- `generate_tuned_profile.py` renders Tuned manifests (`tuned.openshift.io/v1`).
- `analyze_node_tuning.py` inspects live nodes or sosreports for tuning gaps.
## When to Use These Scripts
- Translate structured command inputs into Tuned manifests for the Node Tuning Operator.
- Iterate on generated YAML outside the assistant or integrate the generator into automation.
- Analyze CPU isolation, IRQ affinity, huge pages, sysctl values, and networking counters from live clusters or archived sosreports.
## Prerequisites
- Python 3.8 or newer (`python3 --version`).
- Repository checkout so the scripts under `plugins/node-tuning/skills/scripts/` are accessible.
- Optional: `oc` CLI when validating or applying manifests.
- Optional: Extracted sosreport directory when running the analysis script offline.
- Optional (remote analysis): `oc` CLI access plus a valid `KUBECONFIG` when capturing `/proc`/`/sys` or sosreport via `oc debug node/<name>`. The sosreport workflow pulls the `registry.redhat.io/rhel9/support-tools` image (override with `--toolbox-image` or `TOOLBOX_IMAGE`) and requires registry access. HTTP(S) proxy env vars from the host are forwarded automatically when present, but using a proxy is optional.
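A quick prerequisite check (everything beyond `python3` is optional, as noted above):
```bash
python3 --version                                   # expect Python 3.8+
ls plugins/node-tuning/skills/scripts/*.py          # scripts present in the checkout
command -v oc >/dev/null && oc version --client     # optional: validate/apply and oc debug flows
echo "${KUBECONFIG:-using current oc context}"      # optional: remote analysis via oc debug
```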
---
## Script: `generate_tuned_profile.py`
### Implementation Steps
1. **Collect Inputs**
- `--profile-name`: Tuned resource name.
- `--summary`: `[main]` section summary.
- Repeatable options: `--include`, `--main-option`, `--variable`, `--sysctl`, `--section` (`SECTION:KEY=VALUE`).
- Target selectors: `--machine-config-label key=value`, `--match-label key[=value]`.
- Optional: `--priority` (default 20), `--namespace`, `--output`, `--dry-run`.
- Use `--list-nodes`/`--node-selector` to inspect nodes and `--label-node NODE:KEY[=VALUE]` (plus `--overwrite-labels`) to tag machines.
2. **Inspect or Label Nodes (optional)**
```bash
# List all worker nodes
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py --list-nodes --node-selector "node-role.kubernetes.io/worker" --skip-manifest
# Label a specific node for the worker-hp pool
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
--label-node ip-10-0-1-23.ec2.internal:node-role.kubernetes.io/worker-hp= \
--overwrite-labels \
--skip-manifest
```
3. **Render the Manifest**
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
--profile-name "$PROFILE" \
--summary "$SUMMARY" \
--sysctl net.core.netdev_max_backlog=16384 \
--match-label tuned.openshift.io/custom-net \
--output .work/node-tuning/$PROFILE/tuned.yaml
```
- Omit `--output` to write `<profile-name>.yaml` in the current directory.
- Add `--dry-run` to print the manifest to stdout.
4. **Review Output**
- Inspect the generated YAML for accuracy.
- Optionally format with `yq` or open in an editor for readability.
5. **Validate and Apply**
- Dry-run: `oc apply --dry-run=server -f <manifest>`.
- Apply: `oc apply -f <manifest>`.
### Error Handling
- Missing required options raise `ValueError` with descriptive messages.
- The script exits non-zero when no target selectors (`--machine-config-label` or `--match-label`) are supplied.
- Invalid key/value or section inputs identify the failing argument explicitly.
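For example, omitting both selector flags reproduces the second bullet; expected output is shown as comments:
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
  --profile-name demo \
  --summary "demo" \
  --sysctl vm.swappiness=10 \
  --dry-run
# stderr: error: At least one --machine-config-label or --match-label must be provided
# exit status: 1
```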
### Examples
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
--profile-name realtime-worker \
--summary "Realtime tuned profile" \
--include openshift-node --include realtime \
--variable isolated_cores=1 \
--section bootloader:cmdline_ocp_realtime=+systemd.cpu_affinity=${not_isolated_cores_expanded} \
--machine-config-label machineconfiguration.openshift.io/role=worker-rt \
--priority 25 \
--output .work/node-tuning/realtime-worker/tuned.yaml
```
```bash
python3 plugins/node-tuning/skills/scripts/generate_tuned_profile.py \
--profile-name openshift-node-hugepages \
--summary "Boot time configuration for hugepages" \
--include openshift-node \
--section bootloader:cmdline_openshift_node_hugepages="hugepagesz=2M hugepages=50" \
--machine-config-label machineconfiguration.openshift.io/role=worker-hp \
--priority 30 \
--output .work/node-tuning/openshift-node-hugepages/hugepages-tuned-boottime.yaml
```
---
## Script: `analyze_node_tuning.py`
### Purpose
Inspect either a live node (`/proc`, `/sys`) or an extracted sosreport snapshot for tuning signals (CPU isolation, IRQ affinity, huge pages, sysctl state, networking counters) and emit actionable recommendations.
### Usage Patterns
- **Live node analysis**
```bash
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py --format markdown
```
- **Remote analysis via oc debug**
```bash
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
--node worker-rt-0 \
--kubeconfig ~/.kube/prod \
--format markdown
```
- **Collect sosreport via oc debug and analyze locally**
```bash
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
--node worker-rt-0 \
--toolbox-image registry.example.com/support-tools:latest \
--sosreport-arg "--case-id=01234567" \
--sosreport-output .work/node-tuning/sosreports \
--format json
```
- **Offline sosreport analysis**
```bash
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
--sosreport /path/to/sosreport-2025-10-20
```
- **Automation-friendly JSON**
```bash
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
--sosreport /path/to/sosreport \
--format json --output .work/node-tuning/node-analysis.json
```
### Implementation Steps
1. **Select data source**
- Provide `--node <name>` (with optional `--kubeconfig` / `--oc-binary`). By default the helper runs `sosreport` remotely from inside the RHCOS toolbox container (`registry.redhat.io/rhel9/support-tools`). Override the image with `--toolbox-image`, extend the sosreport command with `--sosreport-arg`, or disable the curated OpenShift flags via `--skip-default-sosreport-flags`. Pass `--no-collect-sosreport` to fall back to the direct `/proc` snapshot mode.
- Provide `--sosreport <dir>` for archived diagnostics; detection finds embedded `proc/` and `sys/`.
- Omit both switches to query the live filesystem (defaults to `/proc` and `/sys`).
- Override paths with `--proc-root` or `--sys-root` when the layout differs.
2. **Run analysis**
- The script parses `cpuinfo`, kernel cmdline parameters (`isolcpus`, `nohz_full`, `tuned.non_isolcpus`), default IRQ affinities, huge page counters, sysctl values (net, vm, kernel), transparent hugepage settings, `netstat`/`sockstat` counters, and `ps` snapshots (when available in sosreport).
3. **Review the report**
- Markdown output groups findings by section (System Overview, CPU & Isolation, Huge Pages, Sysctl Highlights, Network Signals, IRQ Affinity, Process Snapshot) and lists recommendations.
- JSON output contains the same information in structured form for pipelines or dashboards.
4. **Act on recommendations**
- Apply Tuned profiles, MachineConfig updates, or manual sysctl/irqbalance adjustments.
- Feed actionable items back into `/node-tuning:generate-tuned-profile` to codify desired state.
### Error Handling
- Missing `proc/` or `sys/` directories trigger descriptive errors.
- Unreadable files are skipped gracefully and noted in observations where relevant.
- Non-numeric sysctl values are flagged for manual investigation.
### Example Output (Markdown excerpt)
```
# Node Tuning Analysis
## System Overview
- Hostname: worker-rt-1
- Kernel: 4.18.0-477.el8
- NUMA nodes: 2
- Kernel cmdline: `BOOT_IMAGE=... isolcpus=2-15 tuned.non_isolcpus=0-1`
## CPU & Isolation
- Logical CPUs: 32
- Physical cores: 16 across 2 socket(s)
- SMT detected: yes
- Isolated CPUs: 2-15
...
## Recommended Actions
- Configure net.core.netdev_max_backlog (>=32768) to accommodate bursty NIC traffic.
- Transparent Hugepages are not disabled (`[never]` not selected). Consider setting to `never` for latency-sensitive workloads.
- 4 IRQs overlap isolated CPUs. Relocate interrupt affinities using tuned profiles or irqbalance.
```
### Follow-up Automation Ideas
- Persist JSON results in `.work/node-tuning/<host>/analysis.json` for historical tracing.
- Gate upgrades by comparing recommendations across nodes.
- Integrate with CI jobs that validate cluster tuning post-change.
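A hedged CI-gate sketch for the last idea, assuming the Markdown report titles its recommendation list `## Recommended Actions` as in the excerpt above:
```bash
python3 plugins/node-tuning/skills/scripts/analyze_node_tuning.py \
  --sosreport "$SOS_DIR" --format markdown --output analysis.md
if grep -q '^## Recommended Actions' analysis.md; then
  echo "Tuning recommendations remain; review analysis.md" >&2
  exit 1
fi
```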

skills/scripts/analyze_node_tuning.py: file diff suppressed because it is too large

skills/scripts/generate_tuned_profile.py Normal file

@@ -0,0 +1,414 @@
"""
Utility script to generate tuned.openshift.io/v1 Tuned manifests.
The script is intentionally dependency-free so it can run anywhere Python 3.8+
is available (CI, developer workstations, or automation pipelines).
"""
from __future__ import annotations
import argparse
import json
import os
import subprocess
import sys
from collections import OrderedDict
from typing import Iterable, List, Optional, Sequence, Tuple


def _parse_key_value_pairs(
    raw_values: Sequence[str],
    *,
    parameter: str,
    allow_empty_value: bool = False,
) -> List[Tuple[str, str]]:
    """Split KEY=VALUE (or KEY when allow_empty_value=True) pairs."""
    parsed: List[Tuple[str, str]] = []
    for raw in raw_values:
        if "=" in raw:
            key, value = raw.split("=", 1)
        elif allow_empty_value:
            key, value = raw, ""
        else:
            raise ValueError(f"{parameter} expects KEY=VALUE entries, got '{raw}'")
        key = key.strip()
        value = value.strip()
        if not key:
            raise ValueError(f"{parameter} entries must include a non-empty key (got '{raw}')")
        parsed.append((key, value))
    return parsed


def _parse_section_entries(raw_values: Sequence[str]) -> List[Tuple[str, str, str]]:
    """
    Parse SECTION:KEY=VALUE entries for arbitrary tuned.ini sections.

    Examples:
        bootloader:cmdline_ocp_realtime=+nohz_full=1-3
        service:service.stalld=start,enable
    """
    parsed: List[Tuple[str, str, str]] = []
    for raw in raw_values:
        if ":" not in raw:
            raise ValueError(
                f"--section expects SECTION:KEY=VALUE entries, got '{raw}'"
            )
        section, remainder = raw.split(":", 1)
        section = section.strip()
        if not section:
            raise ValueError(f"--section requires a section name before ':', got '{raw}'")
        key_value = _parse_key_value_pairs([remainder], parameter="--section")
        parsed.append((section, key_value[0][0], key_value[0][1]))
    return parsed


def _build_profile_ini(
    *,
    summary: str,
    includes: Sequence[str],
    main_options: Sequence[Tuple[str, str]],
    variables: Sequence[Tuple[str, str]],
    sysctls: Sequence[Tuple[str, str]],
    extra_sections: Sequence[Tuple[str, str, str]],
) -> str:
    sections: "OrderedDict[str, List[str]]" = OrderedDict()
    sections["main"] = [f"summary={summary}"]
    if includes:
        sections["main"].append(f"include={','.join(includes)}")
    for key, value in main_options:
        sections["main"].append(f"{key}={value}")
    if variables:
        sections["variables"] = [f"{key}={value}" for key, value in variables]
    if sysctls:
        sections["sysctl"] = [f"{key}={value}" for key, value in sysctls]
    for section, key, value in extra_sections:
        section = section.strip()
        if not section:
            continue
        if section not in sections:
            sections[section] = []
        sections[section].append(f"{key}={value}")
    rendered_sections: List[str] = []
    non_empty_sections = [(name, lines) for name, lines in sections.items() if lines]
    for idx, (name, lines) in enumerate(non_empty_sections):
        rendered_sections.append(f"[{name}]")
        rendered_sections.extend(lines)
        if idx != len(non_empty_sections) - 1:
            rendered_sections.append("")
    return "\n".join(rendered_sections)


def _json_string(value: str) -> str:
    """Return a JSON-encoded string (adds surrounding quotes, escapes)."""
    return json.dumps(value)


def _render_manifest(
    *,
    profile_name: str,
    namespace: str,
    profile_ini: str,
    machine_config_labels: Sequence[Tuple[str, str]],
    match_labels: Sequence[Tuple[str, str]],
    priority: int,
) -> str:
    lines: List[str] = [
        "apiVersion: tuned.openshift.io/v1",
        "kind: Tuned",
        "metadata:",
        f"  name: {profile_name}",
    ]
    if namespace:
        lines.append(f"  namespace: {namespace}")
    lines.extend(
        [
            "spec:",
            "  profile:",
            "  - data: |",
        ]
    )
    profile_lines = profile_ini.splitlines()
    if not profile_lines:
        raise ValueError("Profile contents may not be empty")
    for entry in profile_lines:
        # Preserve blank lines for readability inside the literal block.
        if entry:
            lines.append(f"      {entry}")
        else:
            lines.append("      ")
    lines.append(f"    name: {profile_name}")
    if not machine_config_labels and not match_labels:
        raise ValueError("At least one --machine-config-label or --match-label must be provided")
    # recommend[] entries use two-space YAML nesting: list items start at "  - ",
    # their sibling keys sit at four spaces, and nested values at six or more.
    lines.append("  recommend:")
    if machine_config_labels:
        lines.append("  - machineConfigLabels:")
        for key, value in machine_config_labels:
            lines.append(f"      {key}: {_json_string(value)}")
        start_written = True
    else:
        start_written = False
    if match_labels:
        prefix = "    match:" if start_written else "  - match:"
        lines.append(prefix)
        item_indent = "    - " if start_written else "      - "
        value_indent = "      " if start_written else "        "
        for label, value in match_labels:
            lines.append(f"{item_indent}label: {_json_string(label)}")
            if value != "":
                lines.append(f"{value_indent}value: {_json_string(value)}")
        start_written = True
    priority_prefix = "    priority" if start_written else "  - priority"
    lines.append(f"{priority_prefix}: {priority}")
    profile_prefix = "    profile" if start_written else "  - profile"
    lines.append(f"{profile_prefix}: {_json_string(profile_name)}")
    return "\n".join(lines) + "\n"


def _run_oc_command(command: Sequence[str]) -> subprocess.CompletedProcess:
    """Execute an oc command and return the completed process."""
    try:
        result = subprocess.run(
            command,
            check=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
        )
    except FileNotFoundError as exc:
        raise RuntimeError(
            "Unable to locate the 'oc' binary. Install the OpenShift CLI or set --oc-binary."
        ) from exc
    except subprocess.CalledProcessError as exc:
        message = exc.stderr.strip() or exc.stdout.strip() or str(exc)
        raise RuntimeError(f"Command '{' '.join(command)}' failed: {message}") from exc
    return result


def list_nodes(*, oc_binary: str, selector: Optional[str]) -> List[str]:
    """List nodes using the oc CLI and return their names."""
    command: List[str] = [oc_binary, "get", "nodes", "-o", "name"]
    if selector:
        command.extend(["-l", selector])
    result = _run_oc_command(command)
    nodes = [line.strip() for line in result.stdout.splitlines() if line.strip()]
    if nodes:
        for node in nodes:
            print(node)
    else:
        print("No nodes matched the provided selector.")
    return nodes


def label_nodes(
    *,
    oc_binary: str,
    entries: Sequence[str],
    overwrite: bool,
) -> None:
    """Label nodes via oc CLI using NODE:label entries."""
    if not entries:
        return
    for raw in entries:
        if ":" not in raw:
            raise ValueError(
                f"--label-node expects NODE:KEY[=VALUE] format (e.g. node1:node-role.kubernetes.io/worker-hp=) - got '{raw}'"
            )
        node_name, label = raw.split(":", 1)
        node_name = node_name.strip()
        label = label.strip()
        if not node_name or not label:
            raise ValueError(f"--label-node entry must include both node name and label (got '{raw}')")
        command: List[str] = [oc_binary, "label", "node", node_name, label]
        if overwrite:
            command.append("--overwrite")
        _run_oc_command(command)
        print(f"Labeled {node_name} with {label}")


def generate_manifest(args: argparse.Namespace) -> str:
    includes = [value.strip() for value in args.include or [] if value.strip()]
    main_options = _parse_key_value_pairs(args.main_option or [], parameter="--main-option")
    variables = _parse_key_value_pairs(args.variable or [], parameter="--variable")
    sysctls = _parse_key_value_pairs(args.sysctl or [], parameter="--sysctl")
    extra_sections = _parse_section_entries(args.section or [])
    match_labels = _parse_key_value_pairs(
        args.match_label or [],
        parameter="--match-label",
        allow_empty_value=True,
    )
    machine_config_labels = _parse_key_value_pairs(
        args.machine_config_label or [],
        parameter="--machine-config-label",
        allow_empty_value=True,
    )
    profile_ini = _build_profile_ini(
        summary=args.summary,
        includes=includes,
        main_options=main_options,
        variables=variables,
        sysctls=sysctls,
        extra_sections=extra_sections,
    )
    manifest = _render_manifest(
        profile_name=args.profile_name,
        namespace=args.namespace,
        profile_ini=profile_ini,
        machine_config_labels=machine_config_labels,
        match_labels=match_labels,
        priority=args.priority,
    )
    return manifest


def parse_arguments(argv: Iterable[str]) -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Generate tuned.openshift.io/v1 Tuned manifests for the Node Tuning Operator.",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument("--profile-name", help="Name of the Tuned profile and resource")
    parser.add_argument("--summary", help="Summary placed inside the [main] section")
    parser.add_argument(
        "--namespace",
        default="openshift-cluster-node-tuning-operator",
        help="Namespace to place in metadata.namespace",
    )
    parser.add_argument(
        "--include",
        action="append",
        help="Append an entry to the 'include=' list (multiple flags allowed)",
    )
    parser.add_argument(
        "--main-option",
        action="append",
        help="Add KEY=VALUE to the [main] section beyond summary/include",
    )
    parser.add_argument(
        "--variable",
        action="append",
        help="Add KEY=VALUE to the [variables] section",
    )
    parser.add_argument(
        "--sysctl",
        action="append",
        help="Add KEY=VALUE to the [sysctl] section",
    )
    parser.add_argument(
        "--section",
        action="append",
        help="Add arbitrary SECTION:KEY=VALUE lines (e.g. bootloader:cmdline=...)",
    )
    parser.add_argument(
        "--machine-config-label",
        action="append",
        help="Add a MachineConfigPool selector (key=value) under machineConfigLabels",
    )
    parser.add_argument(
        "--match-label",
        action="append",
        help="Add a node label entry (key[=value]) under recommend[].match[]",
    )
    parser.add_argument(
        "--priority",
        type=int,
        default=20,
        help="Recommendation priority",
    )
    parser.add_argument(
        "--output",
        help="Output file path; defaults to <profile-name>.yaml in the current directory",
    )
    parser.add_argument(
        "--dry-run",
        action="store_true",
        help="Print manifest to stdout instead of writing to disk",
    )
    parser.add_argument(
        "--skip-manifest",
        action="store_true",
        help="Skip manifest generation; useful when only listing or labeling nodes",
    )
    parser.add_argument(
        "--list-nodes",
        action="store_true",
        help="List nodes via 'oc get nodes' before other actions",
    )
    parser.add_argument(
        "--node-selector",
        help="Label selector to filter nodes when using --list-nodes",
    )
    parser.add_argument(
        "--label-node",
        action="append",
        help="Label nodes using NODE:KEY[=VALUE] entries (repeat for multiple nodes)",
    )
    parser.add_argument(
        "--overwrite-labels",
        action="store_true",
        help="Allow overwriting existing labels when using --label-node",
    )
    parser.add_argument(
        "--oc-binary",
        default=os.environ.get("OC_BIN", "oc"),
        help="Path to the oc binary to execute",
    )
    return parser.parse_args(argv)


def main(argv: Sequence[str]) -> int:
    args = parse_arguments(argv)
    try:
        if args.list_nodes:
            list_nodes(oc_binary=args.oc_binary, selector=args.node_selector)
        if args.label_node:
            label_nodes(
                oc_binary=args.oc_binary,
                entries=args.label_node,
                overwrite=args.overwrite_labels,
            )
        if args.skip_manifest:
            return 0
        if not args.profile_name:
            raise ValueError("--profile-name is required unless --skip-manifest is set")
        if not args.summary:
            raise ValueError("--summary is required unless --skip-manifest is set")
        manifest = generate_manifest(args)
    except (ValueError, RuntimeError) as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1
    if args.dry_run:
        sys.stdout.write(manifest)
        return 0
    output_path = args.output or f"{args.profile_name}.yaml"
    output_dir = os.path.dirname(output_path)
    if output_dir and not os.path.exists(output_dir):
        os.makedirs(output_dir, exist_ok=True)
    with open(output_path, "w", encoding="utf-8") as handle:
        handle.write(manifest)
    print(f"Wrote Tuned manifest to {output_path}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main(sys.argv[1:]))