action #91509

openQA Project - coordination #39719: [saga][epic] Detection of "known failures" for stable tests, easy test results review and easy tracking of known issues

openQA Project - coordination #88229: [epic] Prevent unintended test coverage decrease

[tools] Easy way to check and compare coverage in multiple openQA instances

Added by hurhaj about 2 months ago. Updated 23 days ago.

Status: New
Priority: Normal
Assignee: -
Category: Enhancement to existing tests
Target version: -
Start date: 2021-04-23
Due date: -
% Done: 0%
Estimated time: (Total: 0.00 h)
Difficulty: -

Description

We have (AFAIK) three official instances of openQA:

  1. openqa.suse.de
  2. openqa.qam.suse.cz
  3. openqa.opensuse.org

To get a full picture of our openQA efforts, we need data from all of them; otherwise we will be missing information about some of the products (e.g. QEM has runs on both o.s.d and o.q.s.c).

The tool should be able to answer two main questions:

  1. What is the coverage / What tests do we run for product X? (e.g. What tests are running on SLE 15 SP2?)
  2. What is the difference in coverage for products X and Y? (e.g. What tests are running on openSUSE Tumbleweed but not on SLE 15 SP2, and vice versa?)
    • both of these questions have already been asked, but it's hard to give a quick and precise answer at the moment
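A minimal sketch of how answering question 2 could look, using the documented openQA REST API (`GET /api/v1/jobs` returns `{"jobs": [...]}` and accepts filters such as `distri`, `version` and `latest`); the instance URLs and filter values in the usage comment are illustrative assumptions only:

```python
# Hedged sketch: compare which test scenarios run on two openQA instances.
# Assumes the standard openQA REST API: GET /api/v1/jobs returns
# {"jobs": [...]} and accepts filters such as distri, version, latest.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def fetch_test_names(base_url, **filters):
    """Return the set of test names for the latest matching jobs."""
    query = urlencode({**filters, "latest": 1})
    with urlopen(f"{base_url}/api/v1/jobs?{query}") as resp:
        jobs = json.load(resp)["jobs"]
    return {job["test"] for job in jobs}

def coverage_diff(tests_a, tests_b):
    """Tests that run only on instance A, and tests that run only on B."""
    return sorted(tests_a - tests_b), sorted(tests_b - tests_a)

# Usage (requires network access; values are examples only):
# tw  = fetch_test_names("https://openqa.opensuse.org",
#                        distri="opensuse", version="Tumbleweed")
# sle = fetch_test_names("https://openqa.suse.de",
#                        distri="sle", version="15-SP2")
# only_tw, only_sle = coverage_diff(tw, sle)
```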

Subtasks

action #91656: [qe-core] os-autoinst-distri-opensuse YAML schedule file comparison (New)

History

#1 Updated by hurhaj about 2 months ago

  • Description updated (diff)

#2 Updated by VANASTASIADIS about 2 months ago

  • Category set to Feature requests
  • Target version set to future

#3 Updated by okurz about 2 months ago

  • Parent task set to #88229

#4 Updated by okurz about 2 months ago

I have linked this ticket to the already existing #88229. I have the feeling that we did not see this need as severely before most SUSE openQA contributors chose to use YAML schedule files per openQA scenario, compared to previously using a shared schedule definition where all differences were noted down explicitly. IMHO the whole problem of differing coverage was made worse by #54839, which of course helped to mitigate short-term pain because teams "felt" as if they would step less on each other's toes.
Would we even need a "tool" to compare coverage if we simply used the same schedule definitions by default?

#5 Updated by okurz about 2 months ago

  • Subject changed from [tools] Create tool for checking and comparing coverage in openQAs to Easy way to check and compare coverage in multiple openQA instances

As discussed in chat

To be able to proceed we need actual use cases. For example I wonder: what are the actual goals you want to achieve? The template https://progress.opensuse.org/projects/openqav3/wiki/#Feature-requests should help to fill in the necessary details.

#6 Updated by hurhaj about 2 months ago

okurz wrote:

To be able to proceed we need actual use cases. For example I wonder: What are the actual goals you want to achieve?

From the description:

The tool should be able to answer two main questions:

  1. What is the coverage / What tests do we run for product X? (e.g. What tests are running on SLE 15 SP2?)
  2. What is the difference in coverage for products X and Y? (e.g. What tests are running on openSUSE Tumbleweed but not on SLE 15 SP2, and vice versa?)
    • both of these questions have already been asked, but it's hard to give a quick and precise answer at the moment

#7 Updated by okurz about 2 months ago

I have read the description. But why do you need the coverage? What would you do with this information if you have it?

#8 Updated by okurz about 2 months ago

  • Project changed from openQA Project to openQA Tests
  • Category changed from Feature requests to Enhancement to existing tests

#9 Updated by tjyrinki_suse about 2 months ago

  • Subject changed from Easy way to check and compare coverage in multiple openQA instances to [tools] Easy way to check and compare coverage in multiple openQA instances

#10 Updated by hurhaj 29 days ago

okurz wrote:

I have read the description. But why do you need the coverage? What would you do with this information if you have it?

Mostly for filling the gaps, and checking that everything is OK during the release of a new service pack or even a whole new SLES. There is also the possibility that someone will want to use it for whatever statistics they need.

It seems to be very interesting for product owners, but any team in QE could find it useful.

#11 Updated by okurz 23 days ago

Just to get expectations aligned: SUSE QE Tools does not have much experience with the test distribution os-autoinst-distri-opensuse itself, nor with expectations regarding something like test coverage data. AFAIK reading data from multiple instances and comparing them against each other has never been done so far. What I could think of being possible here is an external script accessing the databases of each instance directly, reading test modules and sorting them by DISTRI, FLAVOR, VERSION, ARCH, MACHINE. I assume what we would end up with is a very big document that can be used for reference and for searching for individual test modules. This can help to answer a question like "Is module X tested on Y at all?". But I think it will not be usable to effectively compare test coverage to find gaps, assuming that the actual difference will be very big. This is the reason why I proposed #91656, assuming that it's easier to implement, easier to use and more helpful in the long run.
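The grouping step described above could be sketched as follows; the input row format (dicts with the scenario keys plus a module name) is an assumption, standing in for whatever the actual database query would return:

```python
# Hedged sketch of the grouping described above: sort test modules into
# scenarios keyed by DISTRI, FLAVOR, VERSION, ARCH, MACHINE.
# The row format is a placeholder for the actual database query result.
from collections import defaultdict

SCENARIO_KEYS = ("DISTRI", "FLAVOR", "VERSION", "ARCH", "MACHINE")

def group_modules(rows):
    """Map each (DISTRI, FLAVOR, VERSION, ARCH, MACHINE) tuple to its modules."""
    grouped = defaultdict(set)
    for row in rows:
        grouped[tuple(row[k] for k in SCENARIO_KEYS)].add(row["module"])
    return dict(grouped)

def is_module_tested(grouped, module):
    """Answer "Is module X tested at all?" and list the scenarios that run it."""
    return sorted(s for s, mods in grouped.items() if module in mods)
```

Searching such a grouping answers the "Is module X tested on Y at all" question directly, while (as noted above) a raw diff of the groupings would likely be too large to find coverage gaps effectively.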

In the meantime maybe also https://github.com/okurz/scripts/blob/master/openqa-db_query_last_use_of_module can be helpful to find out in which scenarios specified modules are used.

Regarding a time expectation for when we could get to implementing the current ticket, my current estimate is in the range of months. See the complete current SUSE QE Tools team backlog under https://progress.opensuse.org/issues?query_id=230

#12 Updated by hurhaj 23 days ago

I'm fully aware of how difficult this issue is. And my personal expectations don't matter, really, as this is coming more from product owners; I'm just a middleman who created the ticket. As I mentioned in chat, the people most interested in this kind of functionality were Marita, Heiko and Timo. I suggest you talk to each other and align expectations, without me introducing needless noise into the discussion.

#13 Updated by okurz 23 days ago

I understand. Yes, that makes sense. I guess it's best to do that then in #72877
