
# Test results overview
* Latest report based on openQA test results, SLE12: , SLE15:
* only "blocker" or "shipstopper" bugs on "interesting products" for SLE: , SLE15: , SLE12:

# QE tools - Team description

"The easiest way to provide complete quality for your software"

We provide the most complete free-software system-level testing solution to ensure high quality of operating systems, complete software stacks and multi-machine services for software distribution builders, system integration engineers and release teams. We continuously develop, maintain and release our software to be readily used by anyone while we offer a friendly community to support you in your needs. We maintain the public openQA server and the SUSE internal openQA server as well as supporting tools in the surrounding ecosystem.

## Team responsibilities

* Develop and maintain upstream openQA
* Administration of and workers (but not physical hardware, as it belongs to the departments that purchased it; we merely facilitate)
* Help administrating and maintaining , including coordinating efforts to solve problems affecting o3
* Develop and maintain internal maintenance QA tools (SMELT, template generator, MTUI, openQA QAM bot, etc.), e.g. from
* Support colleagues, team members and open source community

## Out of scope

* Maintenance of individual tests
* Maintenance of physical hardware
* Maintenance of special worker addendums needed for tests, e.g. external hypervisor hosts for s390x, powerVM
* Ticket triaging of
* Feature development within the backend for single teams (commonly provided by teams themselves)

## Our common userbase

Known users of our products: most SUSE QA engineers, SUSE SLE release managers and release engineers, every SLE developer submitting "submit requests" in OBS/IBS where product changes are tested as part of the "staging" process before changes are accepted in either SLE or openSUSE (staging tests must be green before packages are accepted), likewise all openSUSE contributors submitting to either openSUSE:Factory (for Tumbleweed, SLE, future Leap versions) or Leap, other GNU/Linux distributions like Fedora and Debian (unavailable at time of writing), openSUSE KDE contributors (with their own workflows), openSUSE GNOME contributors, OBS developers, wicked developers, and of course our team itself for the "openQA-in-openQA Tests" :)
Keep in mind: "users of openQA", e.g. "openSUSE release managers and engineers", are not only SUSE employees but also employees of other companies and development partners of SUSE.
In summary, our products, for example openQA, are a critical part of many development processes, so outages and regressions are disruptive and costly. We therefore need to ensure high quality in production, which is why we practice DevOps with a slight tendency towards a conservative approach for introducing changes while still ensuring a high development velocity.

## How we work

The QE Tools team follows the DevOps approach using lightweight Agile practices. We plan and track our work using tickets on . We pick tickets based on priority and planning decisions. We use weekly meetings as checkpoints for progress and also track cycle and lead times to crosscheck progress against expectations.

* [tools team - backlog]( The complete backlog of the team
* [tools team - backlog, high-level view]( A high-level view of the backlog, all epics and higher (an "epic" includes multiple stories)
* [tools team - backlog, top-level view]( A top-level view of the backlog, only sagas and higher (a "saga" is bigger than an epic and can include multiple epics, i.e. "epic of epics")
* [tools team - what members of the team are working on]( To check progress and know what the team is currently occupied with

Be aware: Custom queries in the right-hand sidebar of individual projects, e.g. , show queries with the same name but are limited to the scope of the specific project and can therefore show only a subset of all relevant tickets.

### Common tasks for team members

This is a list of common tasks that we follow, e.g. reviewing daily based on individual steps in the DevOps Process ![DevOps Process](devops-process_25p.png)

* **Plan**:
* State daily learning and planned tasks in internal chat room
* Review the backlog for time-critical tickets, triage new tickets, pick tickets from the backlog; see
* **Code**:
* See project specific contribution instructions
* Provide peer review for projects within the scope of , with the exception of test code repositories, as well as other projects like
* **Build**:
* See project specific contribution instructions
* **Test**:
* Monitor failures on relying on for os-autoinst (email notifications)
* Monitor failures on relying on for openQA (email notifications)
* **Release**:
* By default we use the rolling-release model for all projects unless specified otherwise
* Monitor [devel:openQA on OBS]( (all packages and all subprojects) for failures, ensure packages are published on (members need to be added individually; you can ask existing team members, e.g. the SM); see the first sketch after this list
* Monitor for the openQA-in-openQA Tests and automatic submissions of os-autoinst and openQA to openSUSE:Factory through
* **Deploy**:
* o3 is automatically deployed (daily), see
* osd is automatically deployed (weekly), monitor and watch for notification email to
* **Operate**:
* Apply infrastructure changes from (osd) or manually over ssh (o3)
* Monitor backups, see : config changes in salt (osd), backups, job group configuration changes
* Ensure old unused/non-matching needles are cleaned up (osd+o3), see #73387
* **Monitor**:
* React on alerts from (emails on . You need to be logged in to reach the alert list by the provided URL)
* Look for incomplete jobs or scheduled jobs not being worked on, on o3 and osd (API or webUI); see the second sketch after this list
* React on alerts from , , (subscribe to the projects for notifications)
* Be responsive on #opensuse-factory (irc:// for help, support and collaboration (unless you have a better solution it is suggested to use []( for a sustainable presence; you also need a [registered IRC account](
* Be responsive on [#qa-tools]( for internal coordination and alarm handling; fall back to #opensuse-factory (irc:// if [#qa-tools]( is not available, e.g. if is down
* Be responsive on [#testing]( for help, support and collaboration
* Be responsive on mailing lists and (see
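
For the **Release** monitoring step above, the build status on OBS can also be spot-checked from the command line. A minimal sketch, assuming the `osc` CLI is installed and configured; the package name `openQA` in the second command is just an illustrative example:

```sh
# Overall build state of the whole devel:openQA project
osc prjresults devel:openQA

# Build results of a single package within it
osc results devel:openQA openQA
```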
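
Likewise, for the **Monitor** step, scheduled and incomplete jobs can be queried via the openQA REST API. A minimal sketch, assuming `curl` and `jq` are available; reads on o3 are public, and the same queries work on osd with authentication:

```sh
# Number of currently scheduled jobs on o3
curl -s "https://openqa.opensuse.org/api/v1/jobs?state=scheduled" | jq '.jobs | length'

# IDs of the latest incomplete jobs
curl -s "https://openqa.opensuse.org/api/v1/jobs?result=incomplete&latest=1" | jq '.jobs[].id'
```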

### How we work on our backlog

* "due dates" are only used as exception or reminders
* every team member can pick up tickets themselves
* everybody can set priority, PO can help to resolve conflicts
* consider the [ready, not assigned/blocked/low]( query as preferred
* ask questions in tickets, even potentially "stupid" questions; oftentimes descriptions are unclear and should be improved
* There are "low-level infrastructure tasks" only conducted by some team members; the "DevOps" aspect does not include those but focuses on the joint development and operation of our main products
* Consider tickets with the subject keyword or tag "learning" as good learning opportunities for people new to a certain area. Experts in the specific area should prefer helping others over working on the ticket themselves
* For tickets out of the scope of the team, remove them from the backlog and delegate to the corresponding teams or persons, but be nice and supportive, e.g. [SUSE-IT](, [EngInfra]( also see [SLA](, [test maintainer](, QE-LSG PrjMgr/mgmt
* Refactoring and general improvements are conducted while we work on features or regression fixes
* For every regression or bigger issue that we encounter try to come up with at least two improvements, e.g. the actual issue is fixed and similar cases are prevented in the future with better tests and optionally also monitoring is improved
* okurz proposes to use "#NoEstimates". That topic is controversial and often misunderstood; describes it nicely :)

#### Definition of DONE

Also see and

* Code changes are made available via a pull request on a version control repository, e.g. github for openQA
* [Guidelines for git commits]( have been followed
* Code has been reviewed (e.g. in the github PR)
* Depending on criticality/complexity/size/feature: A local verification test has been run, e.g. post link to a local openQA machine or screenshot or logfile
* Potentially impacted package builds have been considered, e.g. openSUSE Tumbleweed and Leap, Fedora, etc.
* Code has been merged (either by reviewer or "mergify" bot or reviewee after 'LGTM' from others)
* Code has been deployed to osd and o3 (monitor automatic deployment, apply necessary config or infrastructure changes)

#### Definition of READY for new features

The following points should be considered before a new feature ticket is READY to be implemented:

* Follow the ticket template from
* A clear motivation or user expressing a wish is available
* Acceptance criteria are stated (see ticket template)
* Add tasks as a hint where to start

#### WIP-limits (reference "Kanban development")

* global limit of 10 tickets, and 3 tickets per person, in [In Progress](
* limit of 20 tickets per person in [Feedback](

#### Target numbers or "guideline", "should be", in priorities

1. *New, untriaged openQA:* [0 (daily)]( . Every ticket should have a target version, e.g. "Ready" for QE tools team, "future" if unplanned, others for other teams
1. *Untriaged "tools" tagged:* [0 (daily)]( . Every ticket should have a target version, e.g. "Ready" for QE tools team, "future" if unplanned, others for other teams
1. *Workable (properly defined):* [~40 (20-50)]( . Enough tickets to reflect a proper plan but not too many, to limit unfinished work (see "waste")
1. *Overall backlog length:* [ideally less than 100]( . Similar to "Workable": enough tickets to reflect a proper roadmap and give enough flexibility for all unfinished work, but limited to a feasible number that the team can still oversee without losing the overview. One more reason for a maximum of 100 is that pagination in the redmine UI can show only up to 100 issues on one page at a time, same for redmine API access (see the sketch after this list)
1. *Within due-date:* [0 (daily/weekly)]( . We should take due-dates seriously, finish tickets fast and at the very least update tickets with an explanation why the due-date could not be held, updating it to a reasonable time in the future based on usual cycle time expectations
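
As an illustration of the 100-ticket guideline above: both the redmine web UI and the REST API paginate at 100 issues, so a backlog at or below that size can be reviewed in one request. A minimal sketch, with `PROJECT` as a placeholder for the actual project identifier:

```sh
# Count open tickets; a backlog larger than 100 requires paginating with the offset parameter
curl -s "https://progress.opensuse.org/issues.json?project_id=PROJECT&status_id=open&limit=100&offset=0" \
  | jq '.total_count'
```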

#### SLOs (service level objectives)

* for picking up tickets based on priority, first goal is "urgency removal":
* **immediate openQA**: [<1 day](
* **urgent openQA**: [<1 week](
* **high openQA**: [<1 month](
* **normal openQA**: [<1 year](
* **low openQA**: undefined

* aim for a cycle time of 1h-2w for individual tickets (not epics or sagas)

#### Backlog prioritization

When we prioritize tickets we assess:
1. What the main use cases of openQA are among all users, be it SUSE QA engineers, other SUSE employees, openSUSE contributors as well as any other outside user of openQA
2. We try to understand how many persons and products are affected by feature requests as well as regressions (or "concrete bugs" as the ticket category is called within the openQA Project) and prioritize issues affecting more persons and products and use cases over limited issues
3. We prioritize regressions higher than work on (new) feature requests
4. If a workaround or alternative exists then this lowers priority. We prioritize tasks that need deep understanding of the architecture and an efficient low-level implementation over convenience additions that other contributors are more likely to be able to implement themselves.

### Team meetings

* **Daily:** Use the (internal) chat actively, e.g. formulate your findings or achievements and plans for the day, "think out loud" while working on individual problems. Optionally join the call every day at 1030 CET/CEST
* *Goal*: Quick support on problems, feedback on plans, collaboration and self-reflection (compare to [Daily Scrum](
* **Weekly coordination:** Every Friday 1115-1145 CET/CEST in [m.o.o/suse_qa_tools]( ([fallback]( Community members and guests are particularly welcome to join this meeting.
* *Goal*: Team backlog coordination and design decisions of bigger topics (compare to [Sprint Planning](
* **Fortnightly Retrospective:** Friday 1145-1215 CET/CEST every even week, same room as the weekly meeting. On these days the weekly has a hard time limit of 1115-1145. At the start of the week a game on is started which can be filled in all week. Specific actions are recorded as tickets at the end of the week.
* *Goal*: Inspect and adapt, learn and improve (compare to [Sprint Retrospective](
* **Virtual coffee talk:** Weekly every Thursday 1100-1120 CET/CEST, same room as the weekly.
* *Goal*: Connect and bond as a team, understand each other (compare to [Informal Communication in an all-remote environment](
* **extension on-demand:** Optional meeting on invitation in the suggested time slot Thursday 1000-1200 CET/CEST, in the same room as the weekly, on-demand or replacing the *Virtual coffee talk*.
* *Goal*: Introduce, research and discuss bigger topics, e.g. backlog overview, processes and workflows

Note: Meetings concerning the whole team are moderated by the scrum master by default, who should join the call early and verify that the meeting itself and any tools used are working or e.g. advise the use of the fallback option.

### Team

The team consists of engineers from different teams, some only partially available:
* Xiaojing Liu (Jane, [QA APAC 1](
* Marius Kittler
* Nick Singer
* Sebastian Riedel (Part time contributions)
* Oliver Kurz (acting Product Owner)
* Tina Müller (Part time, now mainly working for OBS, [QAM3](
* Christian Dywan (Scrum Master, [QEM1](
* Ivan Lausuch (QEM3)
* Ondřej Súkup
* Jan Baier (Part time)
* Vasileios Anastasiadis (Bill)

### Alert handling

#### Best practices

* "if it hurts, do it more often":
* Reduce [Mean-time-to-Detect (MTTD)]( and [Mean-time-to-Recovery](

#### Process

* React on any alert
* For each failing grafana alert
* Create a ticket for the issue (with a tag "alert"; create ticket unless the alert is trivial to resolve and needs no improvement)
* Link the corresponding grafana panel in the ticket
* Respond to the notification email with a link to the ticket
* Optional: Inform in chat
* Optional: Add "annotation" in corresponding grafana panel with a link to the corresponding ticket
* Pause the alert if you think further alerting the team does not help (e.g. you can work on fixing the problem, alert is non-critical but problem can not be fixed within minutes)
* If you consider an alert non-actionable then change it accordingly
* If you do not know how to handle an alert ask the team for help
* After resolving the issue add an explanation in the ticket, unpause the alert and verify that it goes to "ok" again, then resolve the ticket
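
The optional annotation step can also be performed via Grafana's HTTP annotation API instead of the web UI. A minimal sketch, assuming an API token with editor permissions; host, token, IDs, timestamp and ticket number are placeholders:

```sh
# Attach an annotation with a link to the progress ticket to the affected panel
curl -s -X POST "https://GRAFANA_HOST/api/annotations" \
  -H "Authorization: Bearer $GRAFANA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"dashboardId": 1, "panelId": 2, "time": 1606471200000,
       "tags": ["alert"], "text": "see https://progress.opensuse.org/issues/XXXXX"}'
```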


### Historical

Previously the former QA tools team used the target versions "Ready" (to be planned into individual milestone periods or sprints), "Current Sprint" and "Done". However, the team never really used proper time-limited sprints, so the distinction was rather vague. Some time after tickets were "Resolved", the PO or someone else would also update the target version to "Done" to signal that the result had been reviewed. This caused a lot of ticket update noise for little value, considering that the [Definition-of-Done](, when properly followed, already has rather strict requirements on when something can be considered really "Resolved"; hence the team eventually decided to stop using the "Done" target version and closed it.

Since about 2019-05 (and since okurz is doing more backlog management) the team uses priorities more, as well as the status "Workable" together with an explicit team member list for "What the team is working on", to better visualize what is keeping team members busy regardless of what was "officially" planned to be part of the team's work. On 2020-07-03 okurz subsequently closed "Current Sprint" as well, as in most cases it was equivalent to just picking an assignee for a ticket or setting it to "In Progress". We now just distinguish between "(no version)" meaning untriaged, "Ready" meaning the tools team should consider picking up these issues, and "future" meaning that there is no plan for this to be picked up. Everything else is defined by status and priority.

On 2020-10-27 we discussed together to trace the history of the team. We clarified that the team started out as a not well-defined "Dev+Ops" team. The "team responsibilities" have been mainly unchanged since at least the beginning of 2019. We agreed that learning from users and production about our "Dev" contributions is good, so this part of "Ops" is the responsibility of everyone.

# QE Core and QE Yast - Team descriptions

(this chapter has seen changes in 2020-11 regarding QSF -> QE Core / QE Yast change)

**QE Core** (formerly QSF, QA SLE Functional) and **QE Yast** are squads focusing on Quality Engineering of the core and YaST functionality of the SUSE SLE products. The squads consist of members of QE Integration - [SUSE QA SLE Nbg](, including [SUSE QA SLE Prg]( - and QE Maintenance people (formerly "QAM"). The [SLE Department]( page describes our QA responsibilities. We focus on our automatic tests running in [openQA]( under the job groups "Functional" as well as "Autoyast" for the respective products, for example [SLE 15 / Functional]( and [SLE 15 / Autoyast]( We back our automatic tests with exploratory manual tests, especially for the product milestone builds. Additionally we care about the corresponding openSUSE openQA tests (see ) as well.

* long-term roadmap:
* overview of current openQA SLE12SP5 tests with progress ticket references:
* fate tickets for SLE12SP5 feature testing: based on ; new report based on all tickets with milestone before SLE12SP5 GM; for SLE15SP1:
* only "blocker" or "shipstopper" bugs on "interesting products" for SLE15, for SLE12
* Better organization of planned work can be seen at the [SUSE QA]( project (which is not public).

## Test plan

When looking for coverage of certain components or use cases keep the [openQA glossary]( in mind. It is important to understand that "tests in openQA" could be a scenario, for example a "textmode installation run", a combined multi-machine scenario, for example "a remote ssh based installation using X-forwarding", or a test module, for example "vim", which checks if the vim editor is correctly installed, provides correct rendering and basic functionality. You are welcome to contact any member of the team to ask for more clarification about this.

In detail the following areas are tested as part of "SLE functional":

* different hardware setups (UEFI, ACPI)
* support for localization
* openSUSE: virtualization - some "virtualization" tests are active on o3 with a reduced set compared to SLE coverage (on behalf of QA SLE virtualization due to team capacity constraints, clarified in the QA SLE coordination meeting 2018-03-28)
* openSUSE: migration - comparable to "virtualization", a reduced set compared to SLE coverage is active on o3 (on behalf of QA SLE migration due to team capacity constraints, clarified in the QA SLE coordination meeting 2018-04)

### QE Yast

Squad focuses on testing YaST components, including installer and snapper.

Detailed test plan for SLES can be found here: [](

* Latest report based on openQA test results SLE12: , SLE15:

### QE Core

"Testing is the future, and the future starts with you"

* basic operations (firefox, zypper, logout/reboot/shutdown)
* boot_to_snapshot
* functional application tests (kdump, gpg, ipv6, java, git, openssl, openvswitch, VNC)
* NIS (server, client)
* toolchain (development module)
* systemd
* "transactional-updates" as part of the corresponding SLE server role, not CaaSP

* Latest report based on openQA test results SLE12: , SLE15:

## In the new organization also covered by QE Core and others

* quarterly updated media: former QA Maintenance (QAM) is now part of the various QE squads. However, QU media work happens together with Maintenance Coordination, which is not part of these squads.

## What we do

We collected opinions, personal experiences and preferences starting with the following four topics: What are fun-tasks ("new tests", "collaborate", "do it right"), what parts are annoying ("old & sporadic issues"), what do we think is expected from qsf-u ("be quick", "keep stuff running", "assess quality") and what we should definitely keep doing to prevent stakeholders becoming disappointed ("build validation", "communication & support").

### How we work on our backlog

* no "due date"
* we pick up tickets that have not been previously discussed
* more flexible choice
* WIP-limits:
* global limit of 10 tickets "In Progress"

* target numbers or "guideline", "should be", in priorities:
1. New, untriaged: 0
2. Workable: 40
3. New, assigned to [qe-core] or [qe-yast]: ideally less than 200 (should not stop you from triaging)

* SLAs for priority tickets - how do we ensure that more urgent tickets are worked on first?
* "taken": <1d for immediate (-> looking daily), 2-3d for urgent
* first goal is "urgency removal": <1d for immediate, 1w for urgent

* our current "cycle time" is 1h - 1y (maximum, with interruptions)

* everybody should set priority + milestone in obvious cases, e.g. for new reproducible test failures in multiple critical scenarios; in the general case the PO decides

### How we like to choose our battles

We self-assessed our tasks on a scale from "administrative" to "creative" and found in the following descending order: daily test review (very "administrative"), ticket triaging, milestone validation, code review, create needles, infrastructure issues, fix and cleanup tests, find bugs while fixing failing tests, find bugs while designing new tests, new automated tests (very "creative"). Then we found we appreciate if our work has a fair share of both sides. Probably a good ratio is 60% creative plus 40% administrative tasks. Both types have their advantages and we should try to keep the healthy balance.

### What "product(s)" do we (really) *care* about?

Brainstorming results:

* openSUSE Krypton -> a good example of something that we only remotely care about or not at all, even though we see the connection points, e.g. testing Plasma changes early before they reach TW or Leap as operating systems we rely on, or SLE+PackageHub, from which SUSE does not receive direct revenue but an indirect benefit. Should be "community only", which includes members from QSF though
* openQA -> (like OBS), helps to provide ROI for SUSE
* SLE(S) (in development versions)
* Tumbleweed
* Leap, because we use it
* SLE migration
* os-autoinst-distri-opensuse+backend+needles

From this list strictly no "product" gives us direct revenue; however, most likely SLE(S) (as well as SLES HA and SLE migration) are good examples of a direct connection to revenue (based on SLE subscriptions). Conducting a poll in the team revealed that 3 persons see "SLE(S)" as our main product and 3 see "os-autoinst-distri-opensuse+backend+needles" as the main product. We mainly agreed, however, that we can not *own* a product like "SLE" because that product is mainly not under our control.

Visualizing "cost of testing" vs. "risk of business impact" showed that both metrics have an inverse dependency, e.g. on a range from "upstream source code" over "package self-tests", "openSUSE Factory staging", "Tumbleweed", "SLE" we consider SLE to have the highest business risk attached and therefore defines our priority however testing at upstream source level is considered most effective to prevent higher cost of bugs or issues. Our conclusion is that we must ensure that the high-risk SLE base has its quality assured while supporting a quality assurance process as early as possible in the development process. package self-tests as well as the openQA staging tests are seen as a useful approach in that direction as well as "domain specfic specialist QA engineers" working closely together with according in-house development parties.

## Documentation

This documentation should only be interesting for the team QA SLE functional. If you find that some of the following topics are interesting for other people, please extract those topics to another wiki section.

### QA SLE functional Dashboards

In room 3.2.15 of the Nuremberg office there are two dedicated laptops, each with a monitor attached, showing a selected overview of openQA test results with important builds from SLE and openSUSE.
The laptops are configured with a root account with the default password for production machines. First point of contact: [](

* ``: Showing the current view of filtered for some job group results, e.g. "Functional"
* ``: Showing the current view of filtered for the job group results which we took responsibility to review and are mostly interested in

### dashboard-osd-3215

* OS: openSUSE Tumbleweed
* Services: ssh, mosh, vnc, x2x
* Users:
  * root
  * dashboard
* VNC: `vncviewer dashboard-osd-3215`
* X2X: `ssh -XC dashboard@dashboard-osd-3215 x2x -west -to :0.0` (attaches the dashboard monitor as an extra display to the left of your screens; then move the mouse over and the attached X11 server will capture mouse and keyboard)

#### Content of /home/dashboard/.xinitrc

```sh
# Source common code shared between the
# X session and X init scripts
. /etc/X11/xinit/xinitrc.common

# Disable display power management and screen blanking
xset -dpms
xset s off
xset s noblank
# Add your own lines here...
$HOME/bin/osd_dashboard &
```

#### Content of /home/dashboard/bin/osd_dashboard


```sh
#!/bin/sh
# Note: $url must be set to the dashboard overview URL (its definition was not preserved on this page)

# Hide the mouse cursor when idle
DISPLAY=:0 unclutter &

# Disable display power management and screen blanking
DISPLAY=:0 xset -dpms
DISPLAY=:0 xset s off
DISPLAY=:0 xset s noblank

# Start the browser in kiosk mode showing the dashboard
DISPLAY=:0 chromium --kiosk "$url"
```

#### Cron job:

The following crontab entry reloads the browser every minute:

```
# Min Hour DoM Month DoW Command
* * * * * /home/dashboard/bin/reload_chromium
```

#### Content of /home/dashboard/bin/reload_chromium


```sh
#!/bin/sh
# Disable display power management and screen blanking
DISPLAY=:0 xset -dpms
DISPLAY=:0 xset s off
DISPLAY=:0 xset s noblank

# Focus the Chromium window and send F5 to reload the dashboard page
DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool search --class Chromium)
DISPLAY=:0 xdotool key F5
DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool getactivewindow)
```

#### Issues:

* *When the screen shows a different part of the web page*
  * A simple mouse scroll through VNC or x2x may suffice.
* *When the builds displayed freeze without showing a new build, it usually means that the browser displaying the info on the screen (chromium, see the scripts above; formerly midori) crashed.*
  * You can try to restart it this way: `ps aux | grep chromium`, `kill $pid`, `/home/dashboard/bin/osd_dashboard`
  * If this also doesn't work, restart the machine.

### dashboard-o3

* Raspberry Pi 3B+
* IP: ``

#### Content of /home/tux/.xinitrc

```sh
# Hide the mouse cursor when idle and start a window manager
unclutter &
openbox &
# Disable screen blanking and display power management
xset s off
xset -dpms
sleep 5
url=" Tumbleweed\$|openSUSE Leap [0-9]{2}.?[0-9]*\$|openSUSE Leap.\*JeOS\$|openSUSE Krypton|openQA|GNOME Next&limit_builds=2&time_limit_days=14&&show_tags=1&fullscreen=1#build-results"
chromium --kiosk "$url" &

# Reload the dashboard every 5 minutes
while sleep 300 ; do
    xdotool windowactivate $(xdotool search --class Chromium)
    xdotool key F5
    xdotool windowactivate $(xdotool getactivewindow)
done
```

#### Content of /usr/share/lightdm/lightdm.conf.d/50-suse-defaults.conf
```
pam-service = lightdm
pam-autologin-service = lightdm-autologin
pam-greeter-service = lightdm-greeter
```