action #42578
closed
coordination #40475: [functional][y][saga] Establish YaST team split
coordination #42191: [functional][y][epic] Have separate job group for YaST subteam
[functional][y][spike] Rethink what is the purpose of create_hdd* scenarios
Added by riafarov about 6 years ago.
Updated almost 6 years ago.
Category:
Enhancement to existing tests
Target version:
SUSE QA (private) - Milestone 22
Description
See motivation in the parent ticket. After the discussion regarding #42227, we recognized that we are not clear about the purpose of the test suite, as some parts are covered in other scenarios.
So there are two options:
a) The purpose is to create an image for further tests.
In that case a full installation is not the most efficient way to produce the image either, and we keep these test suites in the current job group.
b) The purpose is to test the installation and use the resulting image for further testing.
In that case the Y-subteam should take over, to keep the test suites in good shape and to produce the images.
Acceptance criteria
- AC1: Decision is defined, accepted by all stakeholders and documented for create_hdd* scenarios
Suggestions
- Discuss within the QSF-y team, then QSF, then ask some stakeholders outside QSF for their opinion and requirements, e.g. RMs; or just go with any decision if you consider the impact small enough and assume "silent consent" based on this ticket's content
- Document the results, e.g. in this ticket
- Ensure the test suite descriptions reflect the common understanding
- If trivial, adjust test suites on mismatch or create follow-up tasks
- Category set to Enhancement to existing tests
- Target version set to Milestone 22
- Due date set to 2019-02-12
pre-fill last sprint in M22 with all tickets within milestone not yet assigned to sprints
- Description updated (diff)
- Status changed from New to Workable
I would describe it as the goal being to just provide the image, nothing more, but the purpose includes the implicit testing of installation.
We can reduce the risk of the installation failing by not scheduling any specific "installation validations" in these scenarios and e.g. moving those checks to "installer_extended". The installation could still fail at various points; we could prevent that by scheduling the corresponding create_hdd scenarios only after "installer-only" tests have finished successfully. However, that would increase the overall test run time, because the create_hdd scenarios would "restart" an installation that predecessor jobs had already driven at least up to e.g. EXIT_AFTER_START_INSTALL=1.
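A minimal sketch of such a chain using openQA test suite settings. `START_AFTER_TEST` and `PUBLISH_HDD_1` are standard openQA variables and `EXIT_AFTER_START_INSTALL` comes from the os-autoinst-distri-opensuse test distribution; the test suite names and the image file name are made up for illustration:

```ini
; hypothetical installer-only test suite: abort once the installation has started,
; so it only verifies that the installer comes up and starts installing
[installer_smoke]
EXIT_AFTER_START_INSTALL=1

; hypothetical image-creating test suite: scheduled only after the
; installer-only job has passed, then publishes the resulting disk image
[create_hdd_gnome]
START_AFTER_TEST=installer_smoke
PUBLISH_HDD_1=sle-%VERSION%-%ARCH%-gnome.qcow2
```

The trade-off discussed above is visible here: the chained variant catches early installer breakage cheaply, but create_hdd_gnome still has to run a full installation of its own.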
I recommend going with option b), but being very open to changing the create_hdd scenarios to make them more robust and simple. We could also consider AutoYaST installations to create the images.
- Subject changed from [functional][y] Rethink what is the purpose of create_hdd* scenarios to [functional][y][spike] Rethink what is the purpose of create_hdd* scenarios
We need follow-up ticket depending on the outcome.
- Status changed from Workable to In Progress
- Assignee set to JERiveraMoya
I'm going to try to collect what we are doing in the openQA job group for YaST to establish some dependencies (you have probably done this exercise before, but I haven't had the opportunity yet):
- Check installation media
- Add additional add-on product via ftp and http:
- HA module: It has a first boot but does not check that the module is added. Nevertheless, what about not finishing the installation in those scenarios and combining them? For example, we could focus on the remaining options besides http and ftp and not care about the installation, as the other products already cover it.
- Development Tools module (dud?): via ftp; it looks like much the same thing we are doing with HA in the previous item.
- Patterns:
- Expert Partitioner:
- Set up different RAID configurations, but we don't verify them after installation.
- Warnings for btrfs.
- LVM + RAID
- Guided Setup:
- Explicit btrfs for each arch.
- Explicit ext4 for each arch.
- Explicit xfs for several archs (I think the two previous ones are the most used; perhaps we could just have an x86_64 AutoYaST test for xfs)
- Enabling crypt partition.
- Enabling crypt partition in LVM.
- Select the right disk
- USB install. Does it make sense to install with btrfs in this test?
- Test yast2_clone_system module: here we have a dependency on create_hdd_gnome. Besides that, are we really using it to reinstall from that profile?
- Detect YaST2 failures in a simple installation.
- gnome installation (http, self-signed https and smb)
- GPT: GPT is the default, so I guess we could fold this one into some other default scenario we have.
- installer extended: multiple extra checks.
- iSCSI client and server: here we have a dependency on create_hdd_gnome
- role minimal
- multipath hw
- nvme hw
- NIS client & server: here we have a dependency on create_hdd_gnome
- release notes: perhaps we could put all together in extended_tests.
- remote controllers (ssh, vnc)
- repo_inst: Aren't we covering this in gnome_http?
- rt product
- Skip registration: it also contains some post-installation tests that I don't think are related.
- Switch keyboard: probably another candidate to unify into extended_tests, or not even that; after changing the language back to en_US we could stop execution.
- Yast2 gui: here we have a dependency with create_hdd_gnome
- Yast2 ncurses: here we have a dependency with create_hdd_gnome
- Yast2 ui devel: I don't get the description, but here we have a dependency on create_hdd_minimal_base+sdk, which I think we could avoid somehow.
- Hostname checks
- Check disable self-update
- ZFCP storage.
- AutoYaST tests, multiple tests.
Based on the previous analysis, I would try to think about the questions raised in this ticket, taking into account that we should "cook for ourselves" and reduce these dependencies so that we are not a bottleneck for anybody, or vice versa.
- Status changed from In Progress to Feedback
- What I would do is create our own create_hdd_gnome test suite for our own YaST tests. However, I'm wondering why this ticket was not in the [u] backlog, as most of the images are there; I might be thinking in a too narrow way about the [y] sub-team getting focus by avoiding dependencies.
- In my opinion every group (Functional, Migration, Desktop, YaST, HPC, ...) should take control of their own images (probably with a publishing job, as I didn't see AutoYaST running for archs other than x86_64), because for a basic installation it should be really obvious what is failing, and we could always help out to make it more robust. Therefore we could move all those create_hdd jobs to the test suites that depend on them. Does that make sense? I don't know; I guess the [u] sub-team knows the dependencies better.
I set the ticket in feedback for others to post opinion.
Well, on one hand it might be good for each job group, with the team backing it, to have its own "image creation job" to give each team more flexibility. However, in reality it is quite likely that either a regression introduced by QSF-y or a YaST regression is causing the installation to fail, and having every team investigate in parallel what is wrong is quite an effort. Why is this ticket not in the QSF-u backlog? Because it is the QSF-y team that wanted the "split out", see the parent ticket.
What hasn't been mentioned yet as an alternative but might reduce more dependency on the image creating jobs within openQA would be to create "appliance images" with kiwi inside OBS directly.
Just to clarify, it doesn't matter where these test suites currently are, as that depends on what their purpose is. I agree with @okurz's statement about using them to publish images, which is the purpose of the test suite rather than testing the installation, so that no one is blocked in their testing. This would also allow us to completely change the approach there, e.g. by not using the installer at all and using kiwi instead.
I agree. I can see that we are already doing a lot of installation testing in the YaST subgroup, so creating the images with kiwi sounds good to me. So if I understand correctly, kiwi is able to create all those different images with different modules selected, etc., via customization.
https://build.opensuse.org/project/show/devel:kanku:images# is an example where Frank Schreiner (M0ses) is already creating quite a few images. We could use these images, collaborate with him to move them to a more standard place, do something similar on our own, or ask the RMs whether they are interested in shipping them as products, or at least building them in sync with their products, and we use them for testing.
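For context, a minimal sketch of what a kiwi image description looks like; every name, version, pattern and repository below is an illustrative placeholder, not taken from the project linked above:

```xml
<!-- Hypothetical kiwi appliance description sketch; element names follow
     the kiwi schema, but all values here are made-up examples -->
<image schemaversion="7.4" name="qsf-test-image-gnome">
  <description type="system">
    <author>QSF-y</author>
    <contact>qa-team@example.com</contact>
    <specification>GNOME base image for openQA tests</specification>
  </description>
  <preferences>
    <version>1.0.0</version>
    <packagemanager>zypper</packagemanager>
    <!-- build a disk image with btrfs, analogous to a default installation -->
    <type image="oem" filesystem="btrfs"/>
  </preferences>
  <repository type="rpm-md">
    <!-- when built in OBS, repositories come from the project setup -->
    <source path="obsrepositories:/"/>
  </repository>
  <packages type="image">
    <package name="kernel-default"/>
    <package name="gnome-session"/>
  </packages>
</image>
```

Building such descriptions in OBS would produce the qcow2/raw images directly, without running the installer at all, which is exactly the decoupling discussed above.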
- Status changed from Feedback to Resolved
Ideally the RM responsible for the product should produce these images. Then we can get rid of those scenarios completely.
As for the purpose, we all agreed that those scenarios should not test the installation and ideally should not rely on the installer at all.