Test results overview

QA tools - Team description

Team responsibilities

  • Develop and maintain upstream openQA
  • Administration of openqa.suse.de and its workers (but not the physical hardware, which belongs to the departments that purchased it; we merely facilitate)
  • Help administrate and maintain openqa.opensuse.org, including coordinating efforts to solve problems affecting o3
  • Support colleagues, team members and the open source community

Out of scope

  • Maintenance of individual tests
  • Maintenance of physical hardware
  • Maintenance of special systems attached to workers that are needed for tests, e.g. external hypervisor hosts for s390x, PowerVM
  • Ticket triaging of http://progress.opensuse.org/projects/openqatests/
  • Feature development within the backend for single teams (commonly provided by teams themselves)

How we work

The QA Tools team follows the DevOps approach using lightweight Agile practices. We plan and track our work using tickets on https://progress.opensuse.org . We pick tickets based on priority and planning decisions. We use weekly meetings as checkpoints for progress and also track cycle and lead times to cross-check progress against expectations.

Also see the custom queries in the right-hand sidebar of https://progress.opensuse.org/projects/openqav3/issues for tickets and their plans.
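
As a minimal sketch of how such a backlog query can also be scripted, the following uses the standard Redmine REST API of progress.opensuse.org together with curl and jq; the project identifier "openqav3" and the query parameters are illustrative assumptions, not an official team tool:

#!/bin/bash
# Hypothetical helper: list open tickets of the openqav3 project sorted by
# priority via the Redmine REST API (assumes curl and jq are installed)
curl -s "https://progress.opensuse.org/issues.json?project_id=openqav3&status_id=open&sort=priority:desc&limit=100" \
  | jq -r '.issues[] | "\(.priority.name)\t#\(.id)\t\(.subject)"'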

Common tasks for team members

This is a list of common tasks that we follow, e.g. in the daily review, based on the individual steps in the DevOps process.

How we work on our backlog

  • "due dates" are only used as exception or reminders
  • every team member can pick up tickets themselves
  • everybody can set priority, PO can help to resolve conflicts

Definition of DONE

Also see http://www.allaboutagile.com/definition-of-done-10-point-checklist/ and https://www.scrumalliance.org/community/articles/2008/september/what-is-definition-of-done-%28dod%29

  • Code changes are made available via a pull request on a version control repository, e.g. GitHub for openQA
  • Guidelines for git commits have been followed
  • Code has been reviewed (e.g. in the GitHub PR)
  • Depending on criticality/complexity/size/feature: A local verification test has been run, e.g. post link to a local openQA machine or screenshot or logfile
  • Potentially impacted package builds have been considered, e.g. openSUSE Tumbleweed and Leap, Fedora, etc.
  • Code has been merged (either by reviewer or "mergify" bot or reviewee after 'LGTM' from others)
  • Code has been deployed to osd and o3 (monitor automatic deployment, apply necessary config or infrastructure changes)

Definition of READY for new features

The following points should be considered before a new feature ticket is READY to be implemented:

WIP-limits (reference "Kanban development")

  • global limit of 10 tickets "In Progress"
  • personal limit of 3 tickets "In Progress"

To check: Open the query and look for the total number of tickets as well as tickets per person (see the sketch below).
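
A minimal sketch of such a check, again assuming the standard Redmine REST API plus curl and jq; the project identifier and the status name "In Progress" are assumptions based on the queries referenced above:

#!/bin/bash
# Hypothetical WIP-limit check: count "In Progress" tickets in total and per
# assignee; the status is filtered client-side to avoid hard-coding its id
json=$(curl -s "https://progress.opensuse.org/issues.json?project_id=openqav3&status_id=open&limit=100")
echo "total In Progress: $(echo "$json" | jq '[.issues[] | select(.status.name == "In Progress")] | length')"
echo "$json" | jq -r '[.issues[] | select(.status.name == "In Progress")]
  | group_by(.assigned_to.name) | .[]
  | "\(.[0].assigned_to.name // "unassigned"): \(length)"'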

Target numbers ("guideline", "should be") for priorities

  1. New, untriaged: 0 (daily). Every ticket should have a target version, e.g. "Ready" for the QA tools team, "future" if unplanned, others for other teams
  2. Untriaged "tools" tagged: 0 (daily). Every ticket should have a target version, e.g. "Ready" for the QA tools team, "future" if unplanned, others for other teams
  3. Workable (properly defined): ~40 (20-50). Enough tickets to reflect a proper plan but not so many that unfinished work piles up (see "waste")
  4. Overall backlog length: ideally less than 100. Similar reasoning as for "Workable"

SLOs (service level objectives)

  • for picking up tickets based on priority the first goal is "urgency removal"
  • aim for a cycle time of individual tickets (not epics or sagas) of 1h-2w
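
A rough sketch of how the time side of these SLOs could be cross-checked, with the caveat that creation-to-close actually measures lead time (computing true cycle time would require reading the status-change journal entries); the Redmine API parameters are again assumptions:

#!/bin/bash
# Hypothetical lead-time report over the 20 most recently closed tickets
curl -s "https://progress.opensuse.org/issues.json?project_id=openqav3&status_id=closed&sort=closed_on:desc&limit=20" \
  | jq -r '.issues[] | select(.closed_on != null)
      | "#\(.id): \((((.closed_on | fromdate) - (.created_on | fromdate)) / 86400) | floor) days"'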

Backlog prioritization

When we prioritize tickets we assess:

  1. What the main use cases of openQA are among all users, be they SUSE QA engineers, other SUSE employees, openSUSE contributors or any other outside users of openQA
  2. How many persons, products and use cases are affected by feature requests as well as regressions (or "concrete bugs", as the ticket category is called within the openQA project); we prioritize issues affecting more persons, products and use cases over more limited issues
  3. We prioritize regressions higher than work on (new) feature requests
  4. If a workaround or alternative exists then this lowers the priority. We prioritize tasks that need a deep understanding of the architecture and an efficient low-level implementation over convenience additions that other contributors are more likely to be able to implement themselves.

Team meetings

Alert handling

Best practices

Process

  • React on any alert
  • For each failing grafana alert
    • Create a ticket for the issue with the tag "alert" (unless the alert is trivial to resolve and needs no improvement)
    • Link the corresponding grafana panel in the ticket
    • Respond to the notification email with a link to the ticket
    • Optional: Inform in chat
    • Optional: Add an "annotation" in the corresponding grafana panel with a link to the corresponding ticket
    • Pause the alert if you think further alerting the team does not help, e.g. you can work on fixing the problem, or the alert is non-critical but the problem can not be fixed within minutes (see the sketch after this list)
  • If you consider an alert non-actionable then change it accordingly
  • If you do not know how to handle an alert, ask the team for help
  • After resolving the issue add an explanation in the ticket, unpause the alert, verify that it goes to "ok" again, and resolve the ticket
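
Pausing is done from the grafana UI; as a sketch, the same should be possible via the HTTP API of grafana's legacy alerting, assuming an API token with editor rights (the host name and alert id below are placeholders, not the real monitoring setup):

#!/bin/bash
# Hypothetical example: pause a legacy grafana alert via the HTTP API
GRAFANA_URL="https://monitor.example.suse.de"  # placeholder host
ALERT_ID=42                                    # placeholder alert id
curl -s -X POST \
  -H "Authorization: Bearer $GRAFANA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"paused": true}' \
  "$GRAFANA_URL/api/alerts/$ALERT_ID/pause"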

References

Historical

Previously the QA tools team used the target versions "Ready" (to be planned into individual milestone periods or sprints), "Current Sprint" and "Done". However, the team never really used proper time-limited sprints, so the distinction was rather vague. After tickets had been "Resolved" for some time the PO or someone else would also update the target version to "Done" to signal that the result had been reviewed. This caused a lot of ticket update noise for not much value, considering that the Definition of Done, when properly followed, already has rather strict requirements on when something can be considered really "Resolved". Hence the team eventually decided to not use the "Done" target version anymore.

Since about 2019-05 (and since okurz is doing more backlog management) the team uses priorities more, as well as the status "Workable" together with an explicit team member list for "What the team is working on", to better visualize what is making team members busy regardless of what was "officially" planned to be part of the team's work. So we closed the "Done" target version. On 2020-07-03 okurz subsequently closed "Current Sprint" as this one, too, was in most cases equivalent to just picking an assignee for a ticket or setting it to "In Progress".

We now only distinguish between "(no version)" meaning untriaged, "Ready" meaning the tools team should consider picking up these issues, and "future" meaning that there is no plan for this to be picked up. Everything else is defined by status and priority.

QA SLE Functional - Team description

QSF (QA SLE Functional) is a virtual team focusing on QA of the "functional" domain of the SUSE SLE products. The virtual team is mainly composed of members of SUSE QA SLE Nbg, including members from SUSE QA SLE Prg. The SLE Department page describes our QA responsibilities. We focus on our automatic tests running in openQA under the job groups "Functional" as well as "Autoyast" for the respective products, for example SLE 15 / Functional and SLE 15 / Autoyast. We back our automatic tests with exploratory manual tests, especially for the product milestone builds. Additionally we care about the corresponding openSUSE openQA tests (see as well https://openqa.opensuse.org).

Test plan

When looking for coverage of certain components or use cases keep the openQA glossary in mind. It is important to understand that "tests in openQA" could be a scenario, for example a "textmode installation run", a combined multi-machine scenario, for example "a remote ssh based installation using X-forwarding", or a test module, for example "vim", which checks if the vim editor is correctly installed, provides correct rendering and basic functionality. You are welcome to contact any member of the team to ask for more clarification about this.

In detail the following areas are tested as part of "SLE functional":

  • different hardware setups (UEFI, ACPI)
  • support for localization
  • openSUSE: virtualization - some "virtualization" tests are active on o3 with reduced set compared to SLE coverage (on behalf of QA SLE virtualization due to team capacity constraints, clarified in QA SLE coordination meeting 2018-03-28)
  • openSUSE: migration - comparable to "virtualization", a reduced set compared to SLE coverage is active on o3 (on behalf of QA SLE migration due to team capacity constraints, clarified in QA SLE coordination meeting 2018-04)

QSF-y

This virtual team focuses on testing YaST components, including the installer and snapper.

A detailed test plan for SLES can be found here: SLES_Integration_Level_Testplan.md

QSF-u

"Testing is the future, and the future starts with you"

Explicitly not covered by QSF

  • quarterly updated media: Expected to be covered by Maintenance + QAM

What we do

We collected opinions, personal experiences and preferences starting with the following four topics: What are fun tasks ("new tests", "collaborate", "do it right"), what parts are annoying ("old & sporadic issues"), what do we think is expected from qsf-u ("be quick", "keep stuff running", "assess quality") and what we should definitely keep doing to prevent stakeholders becoming disappointed ("build validation", "communication & support").

How we work on our backlog

  • no "due date"
  • we pick up tickets that have not been previously discussed
  • more flexible choice
  • WIP-limits:

    • global limit of 10 tickets "In Progress"
  • target numbers ("guideline", "should be") for priorities:

    1. New, untriaged: 0
    2. Workable: 40
    3. New, assigned to [u]: ideally less than 200 (should not stop you from triaging)
  • SLAs for priority tickets - how do we ensure that we work on the more urgent tickets?

    • "taken": looking daily
    • 2-3d: urgent
    • first goal is "urgency removal": <1d: immediate, 1w: urgent
  • our current "cycle time" is 1h - 1y (maximum, with interruptions)

  • everybody should set priority + milestone in obvious cases, e.g. new reproducible test failures in multiple critical scenarios; in the general case the PO decides

How we like to choose our battles

We self-assessed our tasks on a scale from "administrative" to "creative" and found, in descending order: daily test review (very "administrative"), ticket triaging, milestone validation, code review, creating needles, infrastructure issues, fixing and cleaning up tests, finding bugs while fixing failing tests, finding bugs while designing new tests, new automated tests (very "creative"). We found we appreciate our work having a fair share of both sides. Probably a good ratio is 60% creative plus 40% administrative tasks. Both types have their advantages and we should try to keep a healthy balance.

What "product(s)" do we (really) care about?

Brainstorming results:

  • openSUSE Krypton -> a good example of something that we only remotely care about or not at all, even though we see the connection points, e.g. testing plasma changes early before they reach TW or Leap as operating systems we rely on, or SLE+PackageHub from which SUSE does not receive direct revenue but indirect benefit. Should be "community only", although that community includes members from QSF
  • openQA -> (like OBS), helps to provide ROI for SUSE
  • SLE(S) (in development versions)
  • Tumbleweed
  • Leap, because we use it
  • SLES HA
  • SLE migration
  • os-autoinst-distri-opensuse+backend+needles

From this list strictly no "product" gives us direct revenue; however, SLE(S) (as well as SLES HA and SLE migration) are most likely good examples of a direct connection to revenue (based on SLE subscriptions). Conducting a poll in the team has revealed that 3 persons see "SLE(S)" as our main product and 3 see "os-autoinst-distri-opensuse+backend+needles" as the main product. We mostly agreed, however, that we can not own a product like "SLE" because that product is mainly not under our control.

Visualizing "cost of testing" vs. "risk of business impact" showed that both metrics have an inverse dependency, e.g. on a range from "upstream source code" over "package self-tests", "openSUSE Factory staging", "Tumbleweed", "SLE" we consider SLE to have the highest business risk attached and therefore defines our priority however testing at upstream source level is considered most effective to prevent higher cost of bugs or issues. Our conclusion is that we must ensure that the high-risk SLE base has its quality assured while supporting a quality assurance process as early as possible in the development process. package self-tests as well as the openQA staging tests are seen as a useful approach in that direction as well as "domain specfic specialist QA engineers" working closely together with according in-house development parties.

Documentation

This documentation should only be interesting for the QA SLE Functional team. If you find that some of the following topics are interesting for other people, please extract those topics to another wiki section.

QA SLE functional Dashboards

In room 3.2.15 of the Nuremberg office there are two dedicated laptops, each with a monitor attached, showing a selected overview of openQA test results with important builds from SLE and openSUSE.
These laptops are configured with a root account using the default password for production machines. First points of contact: slindomansilla@suse.com, okurz@suse.de

  • dashboard-osd-3215.suse.de: showing a current view of openqa.suse.de filtered for some job group results, e.g. "Functional"
  • dashboard-o3-3215.suse.de: showing a current view of openqa.opensuse.org filtered for some job group results that we took responsibility to review and are mostly interested in

dashboard-osd-3215

  • OS: openSUSE Tumbleweed
  • Services: ssh, mosh, vnc, x2x
  • Users: root, dashboard
  • VNC: vncviewer dashboard-osd-3215
  • X2X: ssh -XC dashboard@dashboard-osd-3215 x2x -west -to :0.0 (attaches the dashboard monitor as an extra display to the left of your screens; then move the mouse over and the attached X11 server will capture mouse and keyboard)

Content of /home/dashboard/.xinitrc

#
# Source common code shared between the
# X session and X init scripts
#
. /etc/X11/xinit/xinitrc.common

xset -dpms
xset s off
xset s noblank
[...]
#
# Add your own lines here...
#
$HOME/bin/osd_dashboard &

Content of /home/dashboard/bin/osd_dashboard

#!/bin/bash

# hide the mouse cursor when idle
DISPLAY=:0 unclutter &

# disable display power management and screen blanking
DISPLAY=:0 xset -dpms
DISPLAY=:0 xset s off
DISPLAY=:0 xset s noblank

# show the dashboard view of openqa.suse.de, overridable via $url
url="${url:-"https://openqa.suse.de/?group=SLE+15+%2F+%28Functional%7CAutoyast%29&default_expanded=1&limit_builds=3&time_limit_days=14&show_tags=1&fullscreen=1#"}"
DISPLAY=:0 chromium --kiosk "$url"

Cron job (runs reload_chromium every minute):

Min     H       DoM     Mo      DoW     Command
*   *   *   *   *   /home/dashboard/bin/reload_chromium

Content of /home/dashboard/bin/reload_chromium

#!/bin/bash

# make sure display power management and screen blanking stay disabled
DISPLAY=:0 xset -dpms
DISPLAY=:0 xset s off
DISPLAY=:0 xset s noblank

# focus the Chromium window, reload it with F5, then re-activate the current window
DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool search --class Chromium)
DISPLAY=:0 xdotool key F5
DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool getactivewindow)

Issues:

  • When the screen shows a different part of the web page: a simple mouse scroll through vnc or x2x may suffice.
  • When the displayed builds are frozen without showing a new build, it usually means that midori, the browser displaying the info on the screen, crashed.
    • You can try to restart midori this way (see also the sketch below):
      • ps aux | grep midori
      • kill $pid
      • /home/dashboard/bin/osd_dashboard
    • If this also doesn't work, restart the machine.
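
As a small convenience sketch, the manual steps above can be combined into one command; the process name "midori" is taken from the description (the current scripts start chromium, so adjust the name accordingly):

#!/bin/bash
# Hypothetical restart helper: kill the dashboard browser and start it again
pkill midori || true
/home/dashboard/bin/osd_dashboard &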

dashboard-o3

  • Raspberry Pi 3B+
  • IP: 10.160.65.207

Content of /home/tux/.xinitrc

#!/bin/bash

# hide the idle mouse cursor and start a minimal window manager
unclutter &
openbox &
# disable screen blanking and display power management
xset s off
xset -dpms
# give X and the window manager time to settle, then start the kiosk browser
sleep 5
url="https://openqa.opensuse.org?group=openSUSE Tumbleweed\$|openSUSE Leap [0-9]{2}.?[0-9]*\$|openSUSE Leap.\*JeOS\$|openSUSE Krypton|openQA|GNOME Next&limit_builds=2&time_limit_days=14&&show_tags=1&fullscreen=1#build-results"
chromium --kiosk "$url" &

# reload the dashboard every 5 minutes
while sleep 300 ; do
        xdotool windowactivate $(xdotool search --class Chromium)
        xdotool key F5
        xdotool windowactivate $(xdotool getactivewindow)
done

Content of /usr/share/lightdm/lightdm.conf.d/50-suse-defaults.conf

[Seat:*]
pam-service = lightdm
pam-autologin-service = lightdm-autologin
pam-greeter-service = lightdm-greeter
xserver-command=/usr/bin/X
session-wrapper=/etc/X11/xdm/Xsession
greeter-setup-script=/etc/X11/xdm/Xsetup
session-setup-script=/etc/X11/xdm/Xstartup
session-cleanup-script=/etc/X11/xdm/Xreset
autologin-user=tux
autologin-timeout=0
