action #176061
[Containers] test fails in kubectl
Status: closed
Description
Observation¶
openQA test in scenario opensuse-Tumbleweed-DVD-aarch64-container_host_kubectl@aarch64 fails in
kubectl
kernel 6.13 update removed some CGROUP config options:
- CONFIG_CGROUP_CPUACCT: https://github.com/openSUSE/kernel-source/commit/8de3c73f3f3caa8a0a4e4befe7b614047804ba62
- CONFIG_CGROUP_DEVICE: https://github.com/openSUSE/kernel-source/commit/b0a82b685647702eefc77a08aa16c2983a4092c2
- CONFIG_CGROUP_FREEZER: https://github.com/openSUSE/kernel-source/commit/4d0d89470a94dec67199b24c65f26cac972581c6
The test seems to fail because of:
- CONFIG_CGROUP_CPUACCT: missing (fail)
- CONFIG_CGROUP_DEVICE: missing (fail)
- CONFIG_CGROUP_FREEZER: missing (fail)
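For reference, a minimal sketch (not part of the openQA test) of how to confirm on the host that these options are really absent from the running kernel configuration; it assumes either /proc/config.gz (needs CONFIG_IKCONFIG_PROC) or an installed /boot/config-$(uname -r):

    for opt in CONFIG_CGROUP_CPUACCT CONFIG_CGROUP_DEVICE CONFIG_CGROUP_FREEZER; do
        # check the in-kernel config first, then the on-disk copy for the running kernel
        if zgrep -q "^${opt}=y" /proc/config.gz 2>/dev/null \
           || grep -q "^${opt}=y" "/boot/config-$(uname -r)" 2>/dev/null; then
            echo "${opt}: present"
        else
            echo "${opt}: missing"
        fi
    done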
Test suite description¶
Maintainer: dheidler. Extra tests for CLI software in the container module.
2023-08-10/dimstar: added QEMURAM=2048 (boo#1212824)
Reproducible¶
Fails since (at least) Build 20250122
Expected result¶
Last good: 20250121 (or more recent)
Further details¶
Always latest result in this scenario: latest
Updated by ggardet_arm 8 days ago
helm_K3S fails the same way: https://openqa.opensuse.org/tests/4801130#step/helm_K3S/90
Updated by mkoutny 7 days ago
Good, this is exactly what I wanted to figure out with the disablement: I disabled controllers that don't exist on the v2 hierarchy.
Given:
- cgroup hierarchy: cgroups V2 mounted, cpu|cpuset|memory controllers status: good
kubectl actually runs on the v2 hierarchy, so it is unlikely to use any of those controllers; this is only a stale kernel config check.
Is this check part of kubectl itself or some of our distro install scripts? I think the check should be either moved to optional features (for legacy v1 users) or not checked at all. Maybe @RBrownSUSE knows more?
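(For reference, a quick host-side check, not taken from the test, that makes the v2 point concrete: on a unified-hierarchy host, cpuacct, devices and freezer do not appear as controllers at all.)

    # "cgroup2fs" here means a pure v2 (unified) setup
    stat -fc %T /sys/fs/cgroup
    # controllers actually available on the unified hierarchy; typically
    # something like "cpuset cpu io memory hugetlb pids", with no separate
    # cpuacct, devices or freezer controller
    cat /sys/fs/cgroup/cgroup.controllers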
Updated by mloviska 7 days ago
mkoutny wrote in #note-5:
kubectl actually runs on the v2 hierarchy, so it is unlikely to use any of those controllers; this is only a stale kernel config check.
Is this check part of kubectl itself or some of our distro install scripts? I think the check should be either moved to optional features (for legacy v1 users) or not checked at all. Maybe @RBrownSUSE knows more?
It should be a shell script provided by k3s -> /var/lib/rancher/k3s/data/.../bin/check-config, so should we file an issue for k3s?
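A hypothetical sketch of the kind of change being suggested for that script (illustrative only, not the actual k3s check-config): downgrade the v1-only controller options from a failure to an optional note when the host runs a unified (v2) hierarchy.

    if [ "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)" = "cgroup2fs" ]; then
        # cgroup v2 host: these controllers are not used here
        echo "CONFIG_CGROUP_DEVICE: missing (optional on cgroup v2)"
    else
        echo "CONFIG_CGROUP_DEVICE: missing (fail)"
    fi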
Updated by rbranco 2 days ago
Opened bug upstream: https://github.com/k3s-io/k3s/issues/11676
Updated by rbranco 2 days ago · Edited
I cloned the failing jobs without the assertion on k3s check-config, and both passed:
helm: https://openqa.opensuse.org/tests/4813743
kubectl: https://openqa.opensuse.org/tests/4813744
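(Illustrative only; the exact variables used for these clones are not stated in the ticket. openQA jobs are commonly re-run with openqa-clone-job, and CASEDIR pointing at a modified test branch is one common way to drop an assertion; the repository user and branch below are placeholders.)

    openqa-clone-job --from https://openqa.opensuse.org 4801130 \
        CASEDIR=https://github.com/<user>/os-autoinst-distri-opensuse.git#<branch>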
Updated by rbranco 1 day ago
Added a soft-failure referencing https://github.com/k3s-io/k3s/issues/11676