action #155908

closed

coordination #152769: [epic] Reduction of yaml files in YaST installations

Reduce yaml files for guided_btrfs

Added by JERiveraMoya 4 months ago. Updated 4 days ago.

Status:
Resolved
Priority:
Normal
Assignee:
zoecao
Target version:
-
Start date:
2024-01-04
Due date:
% Done:

0%

Estimated time:

Description

Motivation

See epic and tickets in the epic for best practices.

Some general summary as a hint (after reading previous tickets):
The main goals are to reduce each test suite to a single yaml file (comparing the ones previously created per architecture) and to drive the schedule with different default yaml files per architecture. The final yaml should be stored in the folder where we keep the representation of each test case: schedule/yam/test_cases.
In order to do that you have to consider whether it is just a matter of yaml or whether it requires some code changes to make things more homogeneous; there are multiple approaches, ranging from simply dropping small pieces of tested functionality on a specific architecture to fancier code strategies.
At the same time we should take care of variables, basically moving all of them to the job group yaml.
Leave out ppc64le for now, because most of our test coverage there is on PowerVM and it is not working, so verification is not possible at the moment. In the future we will most likely have to reduce the test coverage for ppc64le considerably anyway, so we should take that into account instead of putting much effort into the code: PowerVM requires special treatment in the console or installation that only makes sense in textmode, breaking the homogeneity we are aiming for.
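As a rough sketch of how this could be wired together, assuming the YAML_SCHEDULE / YAML_SCHEDULE_DEFAULT settings used for declarative schedules in the repo (the default file names below are purely illustrative):

# hypothetical job group settings for one architecture (plain key/value form)
YAML_SCHEDULE: schedule/yam/guided_btrfs.yaml
YAML_SCHEDULE_DEFAULT: schedule/yam/default_x86_64.yaml    # per-architecture default file

Each architecture would point to its own default file while the test suite keeps a single schedule file.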

Specific from this test suite:
For ipmi we might need to create another default file due to the disk selection required there (not exclusive to ipmi); basically the only difference is that guided_hard_disks is not empty in this new default file.
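As a hedged sketch (using the value that shows up later in this ticket), the ipmi default could differ from the other defaults only in this key:

# hypothetical ipmi default: the only non-empty difference
guided_hard_disks:
  - installation/partitioning/guided_setup/accept_default_hard_disks_selection
# in the default files for the other architectures this key would stay empty:
# guided_hard_disks: []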
Validation is different for each architecture. In general it is not a good idea for validation to have a granularity that creates those schedule differences, because the installed systems are quite different; the installer is a different case, because the unified installer is largely the same for all architectures, with only occasional particular differences.
In this ticket we can attempt to do the following:

  • Unify hibernation_enabled and hibernation_disabled into a single data-driven test module (see the sketch after this list).
  • Deal with the two modules console/validate_partition_table_via_blkid and console/validate_blockdevices, in the sense that we could also unify the way of checking the partition table when blkid is not available, or check whether the blockdevice validation can be done for all. In summary, it is not about removing those two modules but about making them work for all architectures.
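For the first point, a minimal sketch of the data-driven approach, assuming the usual test_data mechanism of declarative schedules (the key names are illustrative only):

# hypothetical test_data in the schedule yaml; a single hibernation module would read
# this flag (e.g. via the get_test_suite_data helper) and validate that hibernation
# is enabled or disabled accordingly
test_data:
  hibernation:
    enabled: true    # set to false in the schedules that expect hibernation disabled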

Schedules for this test suite exist in schedule/yast/sle/guided_btrfs/.

Acceptance criteria

AC1: Reduce the yaml files for the corresponding test suite across all architectures.
AC2: Apply additional refactoring to those 3 modules mentioned above and make them data-driven.
AC3: Clean up unused files.

Actions #1

Updated by JERiveraMoya 4 months ago

  • Description updated (diff)
Actions #2

Updated by JERiveraMoya 3 months ago

  • Tags changed from qe-yam-mar-sprint to qe-yam-apr-sprint
Actions #3

Updated by JERiveraMoya 2 months ago

  • Tags changed from qe-yam-apr-sprint to qe-yam-may-sprint
Actions #4

Updated by JERiveraMoya about 2 months ago

  • Priority changed from Normal to Low
Actions #5

Updated by JERiveraMoya about 1 month ago

  • Priority changed from Low to Normal
Actions #6

Updated by JERiveraMoya about 1 month ago

  • Tags changed from qe-yam-may-sprint to qe-yam-jan-sprint
Actions #7

Updated by JERiveraMoya about 1 month ago

  • Tags changed from qe-yam-jan-sprint to qe-yam-jun-sprint
Actions #8

Updated by zoecao about 1 month ago

  • Status changed from Workable to In Progress
  • Assignee set to zoecao
Actions #9

Updated by zoecao 17 days ago

I'll work on this ticket tomorrow. Right now I'm working on an old ticket that was blocked by a bug; the bug was fixed recently, so I'll finish that one and then start working on this one.

Actions #10

Updated by zoecao 13 days ago

Checked all the guided_btrfs yaml files and created one under schedule/yam/; I will verify whether it works for all arches tomorrow (openQA is down today).
BTW, I still need to add a default yaml file for ipmi.

Actions #11

Updated by JERiveraMoya 13 days ago

Feel free to send PR whenever you have the time.

Actions #12

Updated by zoecao 11 days ago

PR for this ticket, still not ready; I need to think more about how to deal with the difference on ipmi:
For an unknown reason, it loads
installation/partitioning/guided_setup/select_disks
instead of loading
installation/partitioning/guided_setup/accept_default_hard_disks_selection
https://openqa.suse.de/tests/14677702#

PR: https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/19553

Actions #13

Updated by JERiveraMoya 11 days ago

zoecao wrote in #note-12:

[...]

We need to exercise this part of selecting the disk in the test coverage, and I think the workers for ipmi are configured like that and we cannot change it, so I suggested a possible solution in the ticket description:

For ipmi we might need to create another default file due to the disk selection required there (not exclusive to ipmi); basically the only difference is that guided_hard_disks is not empty in this new default file.
Actions #14

Updated by zoecao 11 days ago

JERiveraMoya wrote in #note-13:

[...]

Yes, I created a new default yaml file for ipmi; I had forgotten to replace the old one with it. This is the latest VR with the new ipmi default yaml file, but the result is no different:
https://openqa.suse.de/tests/14677742#details

Actions #15

Updated by JERiveraMoya 11 days ago

zoecao wrote in #note-14:

[...]

Because we should use installation/partitioning/guided_setup/accept_default_hard_disks_selection there.

Actions #16

Updated by zoecao 11 days ago

In schedule/yam/guided_btrfs.yaml, it sets:

guided_partitioning:
  - installation/partitioning/select_guided_setup
  - installation/partitioning/guided_setup/accept_default_part_scheme
  - installation/partitioning/guided_setup/accept_default_fs_options

And in the ipmi default yaml schedule, it sets:

guided_open: []
guided_hard_disks:
  - installation/partitioning/guided_setup/accept_default_hard_disks_selection
guided_scheme: []
guided_filesystem: []

And in the other default yaml schedule files, it sets:

guided_partitioning: []

I guess this is the problem. Is there a way to insert installation/partitioning/guided_setup/accept_default_hard_disks_selection into guided_partitioning?
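For context, a minimal sketch of how I understand the mismatch, assuming the default schedules reference these keys as '{{...}}' placeholders (the key names are taken from the snippets above; the placeholder mechanism itself is an assumption):

# hypothetical excerpt of the ipmi default schedule
schedule:
  - '{{guided_open}}'
  - '{{guided_hard_disks}}'
  - '{{guided_scheme}}'
  - '{{guided_filesystem}}'
# a key named guided_partitioning in guided_btrfs.yaml would match none of these
# placeholders, so the override never takes effect on ipmi

If that is indeed the problem, the matching override would be to set the finer-grained keys (guided_open, guided_scheme, guided_filesystem) directly in guided_btrfs.yaml.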

Actions #17

Updated by zoecao 10 days ago

Updated the PR: https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/19553 - CI failed, I will check it.
And the latest [VR links]

It is really weird that the ipmi job always loads select_disks; I have already set the ipmi default yaml as

guided_open: []
guided_hard_disks:
  - installation/partitioning/guided_setup/accept_default_hard_disks_selection
guided_scheme: []
guided_filesystem: []

And schedule/yam/guided_btrfs.yaml as

  guided_open:
    - installation/partitioning/select_guided_setup
  guided_scheme:
    - installation/partitioning/guided_setup/accept_default_part_scheme
  guided_filesystem:
    - installation/partitioning/guided_setup/accept_default_fs_options

But it still loads installation/partitioning/guided_setup/select_disks.

Actions #19

Updated by zoecao 4 days ago

PR and MR are updated based on comments.

Actions #20

Updated by JERiveraMoya 4 days ago

  • Status changed from In Progress to Resolved