action #176061 (closed): [Containers] test fails in kubectl

Added by ggardet_arm 8 days ago. Updated 1 day ago.

Status: Resolved
Priority: High
Assignee: rbranco
Target version: -
Start date: 2025-01-23
Due date: -
% Done: 0%
Estimated time: -

Description

Observation

openQA test in scenario opensuse-Tumbleweed-DVD-aarch64-container_host_kubectl@aarch64 fails in kubectl.

The kernel 6.13 update removed some CGROUP config options, which the test now reports as missing (see the sketch after the list):

- CONFIG_CGROUP_CPUACCT: missing (fail)
- CONFIG_CGROUP_DEVICE: missing (fail)
- CONFIG_CGROUP_FREEZER: missing (fail)
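
For local reproduction, a minimal sketch of checking these options against the running kernel's config. Assumes standard locations only: /proc/config.gz when the kernel was built with CONFIG_IKCONFIG_PROC, otherwise /boot/config-$(uname -r).

    # Check whether the cgroup options dropped in kernel 6.13 are present.
    cfg=/boot/config-$(uname -r)
    for opt in CONFIG_CGROUP_CPUACCT CONFIG_CGROUP_DEVICE CONFIG_CGROUP_FREEZER; do
        if [ -r /proc/config.gz ]; then
            zgrep -q "^$opt=" /proc/config.gz && echo "$opt: set" || echo "$opt: missing"
        else
            grep -q "^$opt=" "$cfg" && echo "$opt: set" || echo "$opt: missing"
        fi
    done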

Test suite description

Maintainer: dheidler. Extra tests for CLI software in the container module.
2023-08-10/dimstar: added QEMURAM=2048 (boo#1212824)

Reproducible

Fails since (at least) Build 20250122

Expected result

Last good: 20250121 (or more recent)

Further details

Always latest result in this scenario: latest

Actions #2

Updated by ggardet_arm 7 days ago

  • Description updated (diff)
Actions #3

Updated by ggardet_arm 7 days ago

Maybe @mkoutny would have more information?

Actions #4

Updated by ph03nix 7 days ago

  • Tags set to containers, tumbleweed
  • Subject changed from test fails in kubectl to [Containers] test fails in kubectl
  • Status changed from New to Workable
  • Priority changed from Normal to High
Actions #5

Updated by mkoutny 7 days ago

Good, good, this is exactly what I wanted to figure out with the disablement: I disabled controllers that don't exist on the v2 hierarchy.

Given:

- cgroup hierarchy: cgroups V2 mounted, cpu|cpuset|memory controllers status: good

kubectl actually runs on the v2 hierarchy, so it likely won't use any of those controllers; this is only a stale kernel config check.

Is this check part of kubectl itself or some of our distro install scripts? I think the check should be either moved to optional features (for legacy v1 users) or not checked at all. Maybe @RBrownSUSE knows more?
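
A minimal sketch of how to confirm this on a host, using only the standard cgroup v2 paths (nothing k3s-specific assumed):

    # "cgroup2fs" here means the unified (v2-only) hierarchy is mounted.
    stat -fc %T /sys/fs/cgroup
    # Controllers actually usable on v2, e.g. "cpuset cpu io memory pids".
    cat /sys/fs/cgroup/cgroup.controllers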

Actions #6

Updated by dimstar 7 days ago

Considering k3s is installed using timeout 180 curl -sfL https://get.k3s.io -o install_k3s.sh; echo vtznr-$?-, this looks like upstream code.
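
For context, a sketch of what that one-liner does when unpacked (the vtznr-$?- suffix looks like openQA's random marker pattern for reading an exit status back over the serial console; running the downloaded script afterwards is an assumed later step, not quoted here):

    # Fetch the upstream installer, capped at 3 minutes; -f fails on HTTP
    # errors, -s silences progress output, -L follows redirects.
    timeout 180 curl -sfL https://get.k3s.io -o install_k3s.sh
    # openQA-style marker: the random tag plus $? lets the test harness
    # read the exit status of the pipeline above.
    echo "vtznr-$?-"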

Actions #8

Updated by mloviska 7 days ago

mkoutny wrote in #note-5:

kubectl actually runs on the v2 hierarchy, so it likely won't use any of those controllers; this is only a stale kernel config check.

Is this check part of kubectl itself or some of our distro install scripts? I think the check should be either moved to optional features (for legacy v1 users) or not checked at all. Maybe @RBrownSUSE knows more?

It should be a shell script provided by k3s -> /var/lib/rancher/k3s/data/.../bin/check-config, so should we file an issue for k3s?
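
A sketch for locating and running that script on an affected host; the elided directory component under data/ varies per k3s release, hence the find rather than a hard-coded path:

    # The installer unpacks k3s under a content-addressed directory.
    script=$(find /var/lib/rancher/k3s/data -name check-config -type f 2>/dev/null | head -n1)
    if [ -n "$script" ]; then
        sh "$script"    # prints the "missing (fail)" lines quoted in the description
    else
        echo "check-config not found (is k3s installed?)"
    fi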

Actions #9

Updated by ph03nix 2 days ago

  • Project changed from openQA Tests (public) to Containers and images
  • Category deleted (Bugs in existing tests)
Actions #10

Updated by rbranco 2 days ago

  • Status changed from Workable to In Progress
  • Assignee set to rbranco
Actions #11

Updated by rbranco 2 days ago

Actions #12

Updated by rbranco 2 days ago · Edited

I cloned the failing jobs without the assertion on the k3s check-config, and both passed:

helm: https://openqa.opensuse.org/tests/4813743
kubectl: https://openqa.opensuse.org/tests/4813744
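
The actual fix presumably lives in the Perl test code (not quoted in this ticket); as a shell-level sketch of the same idea, the check is demoted from a hard assertion to an informational step:

    # Run check-config but do not fail the test on its exit status: the
    # missing CONFIG_CGROUP_* options are not needed on a cgroup v2-only host.
    script=$(find /var/lib/rancher/k3s/data -name check-config -type f 2>/dev/null | head -n1)
    sh "$script" || echo "WARNING: k3s check-config reported failures (non-fatal)"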

Actions #13

Updated by rbranco 1 day ago

Actions #14

Updated by rbranco 1 day ago

  • Status changed from In Progress to Resolved