action #109358

closed

[qe-core] Describe desktop maintenance testing

Added by yfjiang over 2 years ago. Updated over 1 year ago.

Status:
Resolved
Priority:
Normal
Assignee:
Target version:
Start date:
2022-04-01
Due date:
% Done:

0%

Estimated time:
Sprint:
QE-Core: June Sprint (Jun 08 - Jul 06)

Description

The current QE LSG setup mixes the legacy product and maintenance QA into functional squads. While this is not yet the case for desktop-related testing, it may be a good idea to think about how to bring desktop testing closer to the current QE map. To move this forward, I am currently looking into what knowledge and workload desktop maintenance testing requires, but I have little knowledge about maintenance-related testing myself. So learning more about the general maintenance tasks, taking desktop as an example, would be a good start. Here is the information I would like to gather:

  - What is the current process for reviewing the distributed tasks in the desktop maintenance testing area?
  - What is the current process for executing/implementing the testing tasks in the desktop maintenance testing area?
  - What are the expected KPIs, and the timing and scope of the relevant workload?
  - Where (i.e. with which toolchains) can people work on concrete items, tasks, and workload in the desktop maintenance testing area?

If there is anything more worth sharing on the topic, please do not hesitate to add it. Thanks!

To provide more context, there is this comment on QE Core's list of maintained tests, and #93351.

Actions #1

Updated by okurz over 2 years ago

  • Project changed from QA to openQA Tests
  • Subject changed from [qe-core] Describe desktop maintenance testing. to [qe-core] Describe desktop maintenance testing
  • Category set to Enhancement to existing tests
Actions #2

Updated by szarate over 2 years ago

  • Project changed from openQA Tests to QA
  • Category deleted (Enhancement to existing tests)

Moving back to the original project.

Actions #3

Updated by okurz over 2 years ago

  • Target version set to future
Actions #4

Updated by szarate over 2 years ago

  • Description updated (diff)

@yfjiang before jumping into the topic, do you have a timeframe for when you would like to start giving it a go? I still need to answer a couple of questions before I can provide you with meaningful answers, especially:

  • What do we mean when we say desktop testing? What specific product are we talking about? (I'm thinking of https://www.suse.com/products/desktop/)
  • Are SLES systems with the system role Desktop GNOME considered the responsibility of the Desktop team? (Personally I don't think so.)
  • What happens with things like Firefox on SLES? (As far as I understand, there's a customer using Firefox on a server for something.)

We don't need to find answers to all of these questions; the first one is the most important, as it affects the weight of the others.

When I first raised this almost a year ago, it was about a more or less similar idea; I'm glad we're finally talking about it!

Actions #5

Updated by yfjiang over 2 years ago

Hi Santi,

In the context of this topic, what I mean by desktop testing is the SUSE enterprise (mostly L3) supported desktop technologies (GNOME-related packages, icewm, rdp, ibus, PackageKit, NetworkManager, and applications like Firefox, LibreOffice, etc.) used by our products.

SLED (https://www.suse.com/products/desktop/) is indeed the main target product, while "SLES (GNOME desktop) + Workstation Extension" is also an important combination. Specifically, taking the SLE-15-SP4 ISO as an example, these technologies roughly map to the packages in the "Desktop Applications" module and the "Product-WE" product. So let's count Firefox on SLES in.

Actions #6

Updated by szarate about 2 years ago

  • Sprint set to QE-Core: May Sprint (May 11 - Jun 08)
  • Tags set to qe-core-may-sprint
  • Status changed from New to In Progress
  • Target version changed from future to QE-Core: Ready
Actions #8

Updated by szarate about 2 years ago

  • Sprint changed from QE-Core: May Sprint (May 11 - Jun 08) to QE-Core: June Sprint (Jun 08 - Jul 06)
Actions #10

Updated by szarate about 2 years ago

  • What is the current process for reviewing the distributed tasks in the desktop maintenance testing area?

At the moment there is a two-step process (thinking specifically of desktop), and it doesn't differ much from the current state for SLED in development or for openSUSE; while the description below says desktop, this is what we do for QE-Core:

  • openQA review:
    1. Review the maintenance aggregate tests for desktop and tag failing jobs accordingly [1]
    2. Review the single-incident tests for desktop and tag failing jobs accordingly [1]
  • Make a decision when something is wrong in the test environment (usually a bug in an existing test):
    1. If the test can be fixed within a working day, fix it.
    2. The PO/team evaluates whether the loss of coverage would affect the result of the tests, and decides to unschedule the test if it would block updates for anything longer than a few days (we'd like to avoid manually approved updates).

[1]: Tagging accordingly can mean either creating a bug, if the failure is product related, or, if it's a problem with the test environment, opening a ticket in the issue tracker (we normally use Progress, so I would suggest keeping it on the same platform, either by having your own subproject, by using [desktop] in the subject line, or by using a desktop tag directly in the tag field).

Product bugs that don't have a resolution within a day could be handled with a soft failure (you can refer to the openQA review process in Confluence).
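The review-and-triage rules above can be sketched as a small decision helper. This is only an illustration of the described process; the names (`Failure`, `triage`) are hypothetical and do not correspond to any real openQA or os-autoinst API.

```python
from dataclasses import dataclass

@dataclass
class Failure:
    """A failing openQA job under review (illustrative only)."""
    product_related: bool       # product bug vs. test-environment issue
    fixable_within_a_day: bool  # can a test fix land within one working day?

def triage(failure: Failure) -> str:
    """Return the action the review process above prescribes."""
    if failure.product_related:
        # Product-related failure: file a bug; if there is no resolution
        # within a day, a soft failure can be recorded in the test.
        return "create bug"
    if failure.fixable_within_a_day:
        # Test-environment issue with a quick fix: just fix the test.
        return "fix test"
    # Otherwise file a Progress ticket and let the PO/team decide whether
    # to unschedule the test so it does not block updates for days.
    return "open ticket, consider unscheduling"

print(triage(Failure(product_related=True, fixable_within_a_day=False)))
# → create bug
```

The point of the sketch is that "tag accordingly" is a decision, not a single action: the destination (Bugzilla bug vs. Progress ticket) depends on whether the product or the test environment is at fault.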

  • What is the current process for executing/implementing the testing tasks in the desktop maintenance testing area?

There is no process in place; the desktop area in the maintenance context has not had updates in a very long time. From QE-Core, we can help (and are helping) you with the initial setup and the split of the job groups in openQA, so that the work is more focused once your team is ready to take over.

  • What are the expected KPIs, and the timing and scope of the relevant workload?

Let's start with the premise that a job failing in any of the maintenance areas will block updates from being auto-approved, so the more critical the update, the lower the Defect Resolution Time should be; still, there are cases where an update can be manually approved.

With this in mind, per the last coordination call (15.06.2022), automated notifications will be sent after a test has been failing for one day. (I might need to cross-check the information here, but this is my current understanding.)

That said, the KPIs are:

  1. 0 unreviewed jobs, daily
  2. When it's a bug in the test environment, Resolution Time is less than 24h [2]

[2]: Resolution time can mean: the test is fixed, a soft failure is added to circumvent a product bug, or the test is unscheduled while it gets fixed.
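As a rough illustration of KPI 2, resolution time is just the delta between when an issue is noticed and when one of the three resolution actions lands. This is a generic timestamp sketch, not existing openQA tooling.

```python
from datetime import datetime, timedelta

def within_resolution_kpi(opened: datetime, resolved: datetime,
                          limit: timedelta = timedelta(hours=24)) -> bool:
    """True if a test-environment issue met the 24-hour resolution KPI."""
    return resolved - opened <= limit

# Example: an issue opened at 09:00 and resolved 20h later meets the KPI;
# one resolved 30h later does not.
opened = datetime(2022, 6, 15, 9, 0)
print(within_resolution_kpi(opened, opened + timedelta(hours=20)))  # → True
print(within_resolution_kpi(opened, opened + timedelta(hours=30)))  # → False
```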

  • Where (i.e. with which toolchains) can people work on concrete items, tasks, and workload in the desktop maintenance testing area?

They are the same as the ones you would use for working with https://github.com/os-autoinst/os-autoinst-distri-opensuse, with the addition that there is https://gitlab.suse.de/qa-maintenance/qam-openqa-yml for managing some of the job groups and some configuration for test suite maintenance.

PS: Sorry for the huge delay.

Actions #11

Updated by szarate about 2 years ago

Following Heiko's comment on the main efforts, especially Firefox: one of the problems I've noticed is the use of openQA to test web things instead of the UI (the firefox_html5 test being a perfect example). From QE-Core we're looking for a generic approach to testing web applications, and I have it on the roadmap to find something that would also help to better control the UI of some applications.

The main effort often comes when there's a change in a font, or changes in the icons or the look and feel of applications like Firefox, Thunderbird, or LibreOffice.

One question that comes to my mind is whether your team would also be working with openSUSE (Leap maintenance), not only fixing tests but also growing/improving the test matrix.

Actions #12

Updated by okurz about 2 years ago

szarate wrote:

[1]: Tagging accordingly can mean either creating a bug, if the failure is product related, or, if it's a problem with the test environment, opening a ticket in the issue tracker (we normally use Progress, so I would suggest keeping it on the same platform, either by having your own subproject, by using [desktop] in the subject line, or by using a desktop tag directly in the tag field).

Just to make sure you are aware of the correct nomenclature, so that you can find more information in the openQA docs (http://open.qa/docs/): the concept is called "labelling". We use "tagging" only for tagging complete builds, e.g. on group overview pages.
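For reference, both mechanisms are driven by openQA comments; roughly (the bug number and build name below are made up, see the openQA documentation for the exact syntax):

```
# Labelling a job: a comment on the job containing a bug reference
# labels it and enables carry-over to later failures of the same kind:
bsc#1190000

# Tagging a build: a comment on the group overview page:
tag:20220615-1:important:desktop-regression
```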

Product bugs that don't have a resolution within a day could be handled with a soft failure (you can refer to the openQA review process in Confluence).

That should be https://confluence.suse.com/display/openqa/QAM+openQA+review+guide

With this in mind, per the last coordination call (15.06.2022), automated notifications will be sent after a test has been failing for one day. (I might need to cross-check the information here, but this is my current understanding.)

I assume you mean automatic notifications based on http://open.qa/docs/#_enable_custom_hook_scripts_on_job_done_based_on_result . Those hooks are executed synchronously as soon as jobs finish; exceptions apply based on individual squads' preferences, see https://github.com/os-autoinst/salt-states-openqa/blob/master/openqa/server.sls#L81 . Notifications are sent if jobs fail with new, unknown issues (no carry-over, no auto-review matches) and notification addresses are configured in the corresponding job groups; otherwise, notifications are still sent but end up in a "fall-back" Slack room: #discuss-openqa-auto-review-unknown-issues-osd
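For completeness, such hooks are configured on the server in `openqa.ini`, along the lines of the linked documentation; the script path below is an example, not necessarily what is deployed on OSD:

```ini
# /etc/openqa/openqa.ini (server side)
[hooks]
# Run a script whenever a job finishes with result "failed";
# the finished job's ID is passed to the script.
job_done_hook_failed = /opt/os-autoinst-scripts/openqa-label-known-issues-hook
```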

  • Where (i.e. with which toolchains) can people work on concrete items, tasks, and workload in the desktop maintenance testing area?

They are the same as the ones you would use for working with https://github.com/os-autoinst/os-autoinst-distri-opensuse, with the addition that there is https://gitlab.suse.de/qa-maintenance/qam-openqa-yml for managing some of the job groups and some configuration for test suite maintenance.

It sounds like yfjiang may have meant which issue tracker to use? That would be primarily https://progress.opensuse.org/projects/openqatests/issues/ with the corresponding subprojects.

Actions #13

Updated by szarate about 2 years ago

  • Status changed from In Progress to Feedback

Waiting for Yifan's comments

Actions #14

Updated by yfjiang about 2 years ago

  • Status changed from Feedback to In Progress

Hi Santi, Heiko, Oliver, thanks for the comments. I kept looking at the effort required and the commitment to time-aware delivery (i.e. 0 unreviewed jobs daily, 24-hour resolution time for issues), and found that we could only fully take it over with at least one extra QA hand to handle everything related to maintenance.

The current desktop QA setup is primarily designed for new product testing; especially in this ALP-prioritized time window, we can hardly handle full engagement in maintenance openQA testing. For example, we are currently fully devoted to the gdm container and openSUSE GNOME testing (and in the future there will be more new things to test). However, as promised, the current desktop QA team will keep helping with the relatively complicated openQA test fixes and maintaining a good connection with developers.

One question that comes to my mind is whether your team would also be working with openSUSE (Leap maintenance), not only fixing tests but also growing/improving the test matrix.

My team does not have a plan to work on Leap maintenance testing.

As for the openSUSE openQA effort, our contributions are driven by SLE's needs. That is to say, when we have something in mind to add for SLE, we add the test to Tumbleweed first. Otherwise, contributions to openSUSE test cases are really made on demand. We also spend time analyzing and fixing tricky openSUSE test failures when upstream does not have an immediate clue or free hands to investigate more deeply.

On the other hand, I still see the opportunity to integrate the desktop QAM and QA sides if we can hire; I will talk to upper management to explain that we definitely need more people to do the integration (with ALP and SLED LTSS in mind)!

Again, thank you for the details; they help me to understand the situation.

Actions #15

Updated by szarate almost 2 years ago

  • Status changed from In Progress to Feedback

Yifan, do you need more info here, or is there something else we can help with?

Actions #16

Updated by yfjiang almost 2 years ago

szarate wrote:

Yifan, do you need more info here, or is there something else we can help with?

Hi Santi,

With the QAM process training Vit gave, it is now very clear to me how the whole thing works and what's expected. We can safely close this issue. Thank you for the effort you put into describing it.

Actions #17

Updated by szarate over 1 year ago

  • Status changed from Feedback to Resolved

Anytime Yifan! :)
