action #96155
closed: Improve YAML schedule: reducing content and considering reducing the number of files
Description
Once we finish redesigning most of the test modules in the interactive installation, we should have a good picture to tackle this task. We have seen during this phase how much repetition there is in our schedules. Currently we have 87 files under schedule/yast/ alone.
Option 1
One idea to explore is to build the design on some base schedules and allow replacements on them.
For example, we could have the following:
(1) yast/default_installation.yaml (default installation without checking defaults, just next, next).
(2) yast/check_defaults_installation.yaml (default installation checking defaults/preselected values).
(1) is the simplest case to use as a base: it probably will not have test data, it does not check any defaults, so it is the least complex and covers each stage of the interactive installation. It might look as follows:
---
name: interactive_installation
description: >
  Interactive installation without checking defaults, only navigating
  to the next screen and boot after install.
vars:
  YUI_REST_API: 1
schedule:
  - installation/bootloader_start
  - installation/setup_libyui
  - installation/validate_beta_popup
  - installation/product_selection/select_product
  - installation/licensing/accept_license
  - installation/scc_registration
  - installation/addon_products_sle
  - installation/system_role
  - installation/partitioning/accept_proposed_layout
  - installation/clock_and_timezone/accept_timezone_configuration
  - installation/authentication/use_same_password_for_root
  - installation/authentication/default_user_simple_pwd
  ...
  - installation/first_boot
The path of interactive_installation.yaml could be /schedule/yast/interactive_installation.yaml.
Now, if we want exactly the same schedule but, for example, installing ext4 with guided partitioning, we could do something like this:
---
name: ext4
description: >
  Test for ext4 filesystem.
base_on: interactive_installation
override:
  accept_proposed_layout:
    - installation/partitioning/select_guided_setup
    - installation/partitioning/guided_setup
    - installation/partitioning/accept_proposed_layout
We don't need the full name of the test module, just accept_proposed_layout, as every test in openQA needs to have a unique name.
We need some sort of syntax that indicates which schedule we use as a base and how we manipulate it; in this case we override only the partitioning part.
We will not need the full path in base_on, because the base schedule is located in the same directory.
This has the advantage of separating out the functionality that is important for the scenario.
We need to consider how many overrides are acceptable; sometimes it would be better to create a new base test in the hierarchy, i.e. a nested directory following the same idea.
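The base-plus-override mechanism described above could be sketched roughly as follows. This is only an illustration under assumptions, not the actual scheduler code: the function name, the use of already-parsed Python lists/dicts instead of YAML files, and the matching-by-basename rule are all hypothetical.

```python
# Hypothetical sketch of resolving a derived schedule against its base.
# An override key matches a base entry by its basename, e.g.
# "accept_proposed_layout" matches
# "installation/partitioning/accept_proposed_layout".

def apply_overrides(base_schedule, overrides):
    """Return a new module list with each overridden entry replaced
    by its list of replacement modules."""
    result = []
    for module in base_schedule:
        short_name = module.rsplit('/', 1)[-1]
        if short_name in overrides:
            result.extend(overrides[short_name])
        else:
            result.append(module)
    return result


# Abbreviated stand-in for interactive_installation.yaml:
base = [
    "installation/bootloader_start",
    "installation/partitioning/accept_proposed_layout",
    "installation/first_boot",
]
# The "override:" section of the ext4 schedule above:
overrides = {
    "accept_proposed_layout": [
        "installation/partitioning/select_guided_setup",
        "installation/partitioning/guided_setup",
        "installation/partitioning/accept_proposed_layout",
    ],
}

print(apply_overrides(base, overrides))
```

Matching by unique test name (rather than full path) is what lets the derived YAML stay short, at the cost of requiring that names stay unique across the whole base schedule.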
We must maintain the previous functionality, so both solutions can work in parallel.
The goal of this would not be to reduce the number of files but their content; we still have to separate by architecture. For example, when we do some special partitioning, we need a schedule for x86_64, which schedules a few modules for that, and a schedule for s390x, which does the same plus adds some replacements, e.g. for the DASD disks. If we generalized per architecture we would spread the file out, so it is not a good idea; it is better to keep multiple replacements in this case.
Going a step further, to solve that we could gather those architecture-specific replacements, with their corresponding lists of test modules, in one place and apply them automatically whenever they correspond, as in the DASD example. This sounds like conditional scheduling, but it would in fact be an alternative: folders named after ARCH or BACKEND (or other settings that conditional scheduling currently uses) containing fixed replacements for all schedules, so that you do not have to provide them in each schedule and can just focus on testing one specific functionality: RAID, ext4, etc.
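Such always-applied, per-setting replacements could be sketched like this. Everything here is an assumption for illustration: the replacement table, the module names, and the function are placeholders, not existing openQA code.

```python
# Hypothetical sketch: fixed per-architecture replacements applied to
# every schedule, so individual schedules need not repeat them.
# Module names below are placeholders, not real test modules.

ARCH_REPLACEMENTS = {
    's390x': {
        # e.g. swap a generic disk step for DASD-specific modules
        'installation/disk_activation': [
            'installation/dasd_activation',
            'installation/dasd_format',
        ],
    },
}

def apply_arch_replacements(schedule, arch):
    """Replace each module that has an entry for this arch;
    keep all other modules unchanged."""
    replacements = ARCH_REPLACEMENTS.get(arch, {})
    result = []
    for module in schedule:
        result.extend(replacements.get(module, [module]))
    return result


schedule = ['installation/disk_activation', 'installation/first_boot']
print(apply_arch_replacements(schedule, 's390x'))
print(apply_arch_replacements(schedule, 'x86_64'))
```

The point of the design is visible in the sketch: a schedule for a RAID or ext4 scenario never mentions DASD handling, yet still gets it on s390x.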
Option 2
Another option, which has the advantage of not adding artificial keys, is to use an approach similar to the one used by Configuration Management tools. For example, we can have a list of schedules corresponding to our test suites in some root directory, e.g. schedules/yast/. Those test suites represent everything we test; some job groups in openQA will include one subset of them and other groups another subset.
Those test suites will contain some keys, for example "{{raid}}", which will be resolved automatically by finding a folder with the same name as the test suite. Such a folder will contain conditional schedules depending on architecture, virtualization or ANYTHING (this one is important, because there are no strict requirements and things vary even after features are validated). If something is related to ANY raid, it would go in that folder; if some test is to be scheduled only for raid1, we will need to overwrite/write that key in a folder /raid/raid1. Test data for each level will be set in the main test suite (applying to all) and along each conditional schedule (applying to the respective folder). The advantage: only one place to change things. Common changes to schedules which do not depend on a test suite could go in a folder like /schedule/yast/default/.
This also removes the artificial practice of setting test data at the level of architecture in job groups, since differences are not only at the level of architecture. There are no magic keys in the YAML, just some keys whose values are resolved to a list of test modules to be scheduled from the corresponding files.
raid1.yaml:

name: raid1
schedule:
  - installation/bootloader_start
  - installation/setup_libyui
  - installation/product_selection/access_beta_distribution
  - "{{raid1}}"
  - installation/product_selection/something
  - "{{validate_general_raid}}"
  - "{{validate_raid1}}"
test_data:
  a: x
raid/raid.yaml:

validate_general_raid:
  ARCH:
    'x86_64':
      - installation/general_raid_validation
raid/raid1.yaml:

raid1:
  ARCH:
    's390x':
      - installation/blaba1
    'x86_64':
      - installation/blaba2
      - installation/blaba3
validate_raid1:
  ARCH:
    's390x':
      - console/validate_raid1
    'x86_64':
      - console/validate_bla1
      - console/validate_bla2
test_data:
  b: c
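The resolution of those "{{key}}" placeholders could be sketched as below. This is a minimal illustration under assumptions, not real scheduler code: file loading is omitted, the dicts stand in for raid/raid.yaml and raid/raid1.yaml, and the function and regex are hypothetical.

```python
import re

# Hypothetical sketch: expand "{{key}}" placeholders in a schedule using
# conditional definitions keyed by a setting such as ARCH.

PLACEHOLDER = re.compile(r'^\{\{(\w+)\}\}$')

def expand(schedule, definitions, settings):
    """Replace each "{{key}}" entry with the module list defined for
    the current value of the conditioning setting (e.g. ARCH)."""
    expanded = []
    for entry in schedule:
        match = PLACEHOLDER.match(entry)
        if not match:
            expanded.append(entry)
            continue
        conditions = definitions[match.group(1)]   # e.g. {'ARCH': {...}}
        for setting, by_value in conditions.items():
            expanded.extend(by_value.get(settings[setting], []))
    return expanded


# Stand-in for the contents of raid/raid1.yaml above:
definitions = {
    'raid1': {'ARCH': {
        's390x': ['installation/blaba1'],
        'x86_64': ['installation/blaba2', 'installation/blaba3'],
    }},
    'validate_raid1': {'ARCH': {
        's390x': ['console/validate_raid1'],
        'x86_64': ['console/validate_bla1', 'console/validate_bla2'],
    }},
}

schedule = ['installation/bootloader_start', '{{raid1}}', '{{validate_raid1}}']
print(expand(schedule, definitions, {'ARCH': 'x86_64'}))
```

In this model a test suite stays readable as one flat list, and all architecture- or backend-specific variation lives next to the feature it belongs to, in one place.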