
action #109358

[qe-core] Describe desktop maintenance testing

Added by yfjiang 3 months ago. Updated 16 days ago.

Status: In Progress
Priority: Normal
Assignee:
Target version:
Start date: 2022-04-01
Due date:
% Done: 0%
Estimated time:
Description

The current QE LSG setup mixes the legacy product and maintenance QA into functional squads. While this is not yet the case for desktop-related testing, it may be a good idea to think about the possibility of bringing desktop testing closer to the current QE map. To move this forward a bit, I am currently looking into what knowledge and workload are needed for desktop maintenance testing, but I have little knowledge about maintenance-related testing. So learning more about the general maintenance tasks, taking desktop as an example, could be a good start. This is the information I am thinking of gathering:

  - what is the current process to review the distributed tasks in the desktop maintenance testing area?
  - what is the current process to execute/implement the testing tasks in the desktop maintenance testing area?
  - what are the expected KPIs, and the timing/scope of finishing the relevant workload?
  - where (i.e. with which toolchains) can people work on concrete work items, tasks, and workload in the desktop maintenance testing area?

If there is anything more we should share about the topic, please do not hesitate to add. Thanks!

To provide more context, there's this comment on QE Core's list of maintained tests and #93351.

History

#1 Updated by okurz 3 months ago

  • Project changed from QA to openQA Tests
  • Subject changed from [qe-core] Describe desktop maintenance testing. to [qe-core] Describe desktop maintenance testing
  • Category set to Enhancement to existing tests

#2 Updated by szarate 3 months ago

  • Project changed from openQA Tests to QA
  • Category deleted (Enhancement to existing tests)

Moving back to the original project.

#3 Updated by okurz 3 months ago

  • Target version set to future

#4 Updated by szarate 3 months ago

  • Description updated (diff)

yfjiang, before jumping into the topic, do you have a timeframe for when you would like to start giving it a go? I still need to answer a couple of questions before I can provide you with meaningful answers, especially:

  • What do we mean when we say Desktop testing? What specific product are we talking about? (I'm thinking of https://www.suse.com/products/desktop/)
  • Are SLES systems with the system role "Desktop GNOME" considered to be the responsibility of the Desktop team? (Personally, I don't think so.)
  • What happens with things like Firefox on SLES (since, as far as I understand, there's a customer using Firefox on a server for something)?

We don't need to find the answers to all of my questions either; the first one is the most important, as it impacts the weight of the others.

When I first raised this almost a year ago, it was about a more or less similar idea; I'm glad we're finally talking about it!

#5 Updated by yfjiang 3 months ago

Hi Santi,

In the context of this topic, my idea of the mentioned desktop testing is the SUSE enterprise-supported (mostly L3) desktop technologies (GNOME-related packages, icewm, rdp, ibus, PackageKit, NetworkManager, applications like Firefox, LibreOffice, etc.) used by our products.

SLED (https://www.suse.com/products/desktop/) is indeed the major target product, while "SLES (GNOME desktop) + Workstation Extension" is also an important combination. Specifically, taking the SLE-15-SP4 ISO as an example, the technologies roughly map to packages in the "Desktop Applications" module and the "Product-WE" product. So let's count Firefox on SLES in.

#6 Updated by szarate about 2 months ago

  • Sprint set to QE-Core: May Sprint (May 11 - Jun 08)
  • Tags set to qe-core-may-sprint
  • Status changed from New to In Progress
  • Target version changed from future to QE-Core: Ready

#8 Updated by szarate 23 days ago

  • Sprint changed from QE-Core: May Sprint (May 11 - Jun 08) to QE-Core: June Sprint (Jun 08 - Jul 06)

#10 Updated by szarate 17 days ago

  • what is the current process to review the distributed tasks in the desktop maintenance testing area?

At the moment there is a two-fold process (thinking specifically of desktop), and it doesn't differ much from the current state for SLED in development or for openSUSE; while what I describe below says desktop, this is what we do for QE-Core:

  • openQA review:
    1. Review the maintenance aggregate tests for desktop and tag failing jobs accordingly [1]
    2. Review the single-incident tests for desktop and tag failing jobs accordingly [1]
  • Make a decision when something is wrong in the test environment (usually a bug in an existing test):
    1. If the test has a fix that can be achieved within a working day, fix it.
    2. The PO/team evaluates whether the loss of coverage would impact the result of the tests, and decides to unschedule the test if it would block updates for longer than a few days (we'd like to avoid manually approved updates).

[1]: Tagging accordingly can mean either creating a bug, if the failure is product related, or, if it's a problem with the test environment, opening a ticket in the issue tracker (we normally use progress, so I would suggest keeping it on the same platform, either by having your own subproject, by using a [desktop] prefix in the subject line, or by using a desktop tag directly in the tag field).

Product bugs that don't have a resolution within a day could be handled with a soft failure (you can refer to the openQA review process in Confluence).
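To make this more concrete, here is a rough sketch of the daily review step in Python, using only the public openQA REST API; the instance URL and job group id below are placeholders, not real values, so treat this as an illustration rather than a ready-made script:

    #!/usr/bin/env python3
    # Rough sketch: list failed jobs of a (placeholder) desktop maintenance job
    # group so they can be labelled, e.g. with a comment containing "poo#<ticket>"
    # or "bsc#<bug>" that openQA's carry-over then picks up.
    import requests

    OPENQA = "https://openqa.suse.de"   # placeholder: your openQA instance
    GROUP_ID = 123                      # placeholder: desktop maintenance job group

    resp = requests.get(f"{OPENQA}/api/v1/jobs",
                        params={"groupid": GROUP_ID, "state": "done",
                                "result": "failed", "latest": 1},
                        timeout=30)
    for job in resp.json()["jobs"]:
        print(f"{OPENQA}/tests/{job['id']}  {job['test']}  ({job['result']})")
        # Labelling itself is a comment on the job (via the web UI, or a POST to
        # /api/v1/jobs/<id>/comments with an API key/secret).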

  • what is the current process to execute/implement the testing tasks in the desktop maintenance testing area?

There is no process in place; the desktop area in the maintenance context has not had updates in a very long time. From QE-Core, we can help (and are helping) you with the initial setup and the split of the job groups in openQA, so that the work is more focused once your team is ready to take over.

  • what are the expected KPIs, and the timing/scope of finishing the relevant workload?

Let's start with the premise that a job that fails in any of the maintenance areas will block updates from being auto-approved, so the more critical the update, the lower the Defect Resolution Time should be; still, there are cases where an update can be manually approved.

With this in mind, as discussed during the last coordination call (15.06.2022), automated notifications will be sent after a test has been failing for one day. (I might need to cross-check the information here, but this is my current understanding.)

That said, the KPIs are:

  1. 0 unreviewed jobs, daily
  2. When it's a bug in the test environment, Resolution Time is less than 24h [2]

[2]: Resolution time can mean: the test is fixed, a soft failure is added to circumvent a product bug, or the test is unscheduled while it gets fixed.
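As an illustration of KPI 1 (again only a sketch: the instance URL and group id are placeholders, and "unreviewed" is approximated here as "failed and without any comment"):

    #!/usr/bin/env python3
    # Sketch of a "0 unreviewed jobs" check: count failed jobs in a job group
    # that have no comment (i.e. no label or bug reference) yet.
    import requests

    OPENQA = "https://openqa.suse.de"   # placeholder
    GROUP_ID = 123                      # placeholder

    jobs = requests.get(f"{OPENQA}/api/v1/jobs",
                        params={"groupid": GROUP_ID, "state": "done",
                                "result": "failed", "latest": 1},
                        timeout=30).json()["jobs"]
    unreviewed = [j for j in jobs
                  if not requests.get(f"{OPENQA}/api/v1/jobs/{j['id']}/comments",
                                      timeout=30).json()]
    print(f"unreviewed failed jobs: {len(unreviewed)} (target: 0)")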

  • where (i.e. with which toolchains) can people work on concrete work items, tasks, and workload in the desktop maintenance testing area?

They are the same as one would use for working with https://github.com/os-autoinst/os-autoinst-distri-opensuse, with the possible addition of https://gitlab.suse.de/qa-maintenance/qam-openqa-yml for managing some of the job groups and some configuration for test suite maintenance.

PS: Sorry for the huge delay.

#11 Updated by szarate 17 days ago

Following Heko's comment on the main efforts, especially Firefox: one of the problems I've noticed is the usage of openQA to test web content instead of the UI (the firefox_html5 test being a perfect example). From QE-Core we're looking for a generic approach to testing web applications, and I have it on the roadmap to find something that would also help to better control the UI of some applications.

The main effort often comes when there's a change in a font, or changes in the icons or the look and feel of applications like Firefox, Thunderbird or LibreOffice.

One question that comes to my mind is whether your team would also be working with openSUSE (Leap maintenance), not only fixing tests but also growing/improving the test matrix.

#12 Updated by okurz 16 days ago

szarate wrote:

> [1]: Tagging accordingly can mean either creating a bug, if the failure is product related, or, if it's a problem with the test environment, opening a ticket in the issue tracker (we normally use progress, so I would suggest keeping it on the same platform, either by having your own subproject, by using a [desktop] prefix in the subject line, or by using a desktop tag directly in the tag field).

Just to make sure you are aware of the correct nomenclature so that you can find more information in the openQA docs http://open.qa/docs/: The concept is called "labelling". We use "tagging" only for tagging complete builds, e.g. on group overview pages.

> Product bugs that don't have a resolution within a day could be handled with a soft failure (you can refer to the openQA review process in Confluence).

That should be https://confluence.suse.com/display/openqa/QAM+openQA+review+guide

> With this in mind, as discussed during the last coordination call (15.06.2022), automated notifications will be sent after a test has been failing for one day. (I might need to cross-check the information here, but this is my current understanding.)

I assume you mean automatic notifications based on http://open.qa/docs/#_enable_custom_hook_scripts_on_job_done_based_on_result . Those hooks are executed synchronously as soon as jobs finish; exceptions apply based on individual squads' preferences, see https://github.com/os-autoinst/salt-states-openqa/blob/master/openqa/server.sls#L81 . Notifications are sent if the jobs fail with new unknown issues (no carry-over, no auto-review matches). If notification addresses are configured in the corresponding job groups they go there; otherwise notifications are still sent and end up in a "fall-back" Slack room: #discuss-openqa-auto-review-unknown-issues-osd
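For illustration only, a minimal sketch of what such a hook script could look like, assuming (as described in the docs linked above) that the configured script is called with the job id as its argument; the notification step is just a placeholder print and the instance URL is an assumption:

    #!/usr/bin/env python3
    # Minimal "job done" hook sketch: openQA calls the script configured in the
    # [hooks] section of openqa.ini (e.g. job_done_hook_failed) with the job id.
    import sys
    import requests

    OPENQA = "https://openqa.suse.de"   # assumption: the instance running the hook

    job_id = sys.argv[1]
    job = requests.get(f"{OPENQA}/api/v1/jobs/{job_id}", timeout=30).json()["job"]

    # A real hook would first check for carry-over / auto-review matches and then
    # notify the addresses configured for the job group (or the fall-back room).
    print(f"new unknown failure: {job['test']} -> {OPENQA}/tests/{job_id}")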

> • where (i.e. with which toolchains) can people work on concrete work items, tasks, and workload in the desktop maintenance testing area?
>
> They are the same as one would use for working with https://github.com/os-autoinst/os-autoinst-distri-opensuse, with the possible addition of https://gitlab.suse.de/qa-maintenance/qam-openqa-yml for managing some of the job groups and some configuration for test suite maintenance.

Sounds like maybe yfjiang meant which issue tracker to use? That would be primarily https://progress.opensuse.org/projects/openqatests/issues/ with the corresponding subprojects.
