This is the organisation wiki for the openQA Project.
The source code is hosted in the os-autoinst GitHub organisation, in particular openQA itself and the main backend os-autoinst.

If you are interested in the tests for SUSE/openSUSE products, take a look at the openqatests project.

If you are looking for entry-level issues to contribute to the backend, take a look at this search query.


ticket workflow


The following ticket statuses are used; their meaning is explained below:

  • New: No one has worked on the ticket (e.g. the ticket has not been properly refined) or no one is feeling responsible for the work on this ticket.
  • Workable: The ticket has been refined and is ready to be picked.
  • In Progress: Assignee is actively working on the ticket.
  • Resolved: The complete work on this issue is done and the corresponding issue is supposed to be fixed as observed (should be updated together with a link to a merged pull request or a link to a production openQA instance showing the effect).
  • Feedback: Further work on the ticket is blocked by open points or is awaiting feedback to proceed. Sometimes also used to ask the assignee about progress in case of inactivity.
  • Blocked: Further work on the ticket is blocked by some external dependency (e.g. bugs, not implemented features). There should be a link to another ticket, bug, trello card, etc. where it can be seen what the ticket is blocked by.
  • Rejected: The issue is considered invalid, should not be done, or is out of scope.
  • Closed: As this can be set only by administrators, it is suggested not to use this status.

It is good practice to update the status together with a comment about it, e.g. a link to a pull request or a reason for the rejection.

ticket categories

  • Concrete Bugs: Regressions, crashes, error messages
  • Feature requests: Ideas or wishes for extension, enhancement, improvement
  • Organisational: Organisational tasks within the project(s), not directly code related
  • Support: Support of users, usage problems, questions

Please avoid the use of other, deprecated categories.

ticket templates

You can use these templates to fill in tickets and further improve them with more detail over time. Copy the code block, paste it into a new issue, and replace every block marked with "<…>" with your content or delete it if not applicable.


Subject: <Short description, example: "openQA dies when triggering any Windows ME tests">

## Observation
<description of what can be observed and what the symptoms are, provide links to failing test results and/or put short blocks from the log output here to visualize what is happening>

## Steps to reproduce
* <do this>
* <do that>
* <observe result>

## Problem
<problem investigation, can also include different hypotheses, should be labeled as "H1" for first hypothesis, etc.>

## Suggestion
<what to do as a first step>

## Workaround
<example: retrigger job>

example ticket: #10526

Feature requests

Subject: <Short description, example: "grub3 btrfs support" (feature)>

## User story
<As a <role>, I want to <do an action>, to <achieve which goal> >

## Acceptance criteria
* <**AC1:** the first acceptance criterion that needs to be fulfilled to do this, example: Clicking "restart button" causes restart of the job>
* <**AC2:** also think about the "not-actions", example: other jobs are not affected>

## Tasks
* <first task to do as an easy starting point>
* <what to do next, all tasks optionally with an effort estimation in hours, e.g. "(0.5-2h)">
* <optional: mark "optional" tasks>

## Further details
<everything that does not fit into above sections>

example ticket: #10212

Further decision steps working on test issues

Test issues can stem from one of the following sources. Feel free to use the following template in tickets as well.

## Problem
* **H1** The product has changed
 * **H1.1** product changed slightly but in an acceptable way without the need for communication with DEV+RM --> adapt test
 * **H1.2** product changed slightly but in an acceptable way found after feedback from RM --> adapt test
 * **H1.3** product changed significantly --> after approval by RM adapt test

* **H2** Fails because of changes in test setup
 * **H2.1** Our test hardware equipment behaves differently
 * **H2.2** The network behaves differently

* **H3** Fails because of changes in test infrastructure software, e.g. os-autoinst, openQA
* **H4** Fails because of changes in test management configuration, e.g. openQA database settings
* **H5** Fails because of changes in the test software itself (the test plan in source code as well as needles)
* **H6** Sporadic issue, i.e. the root problem has been hidden in the system for a long time but does not show symptoms every time

pull request handling on github

As a reviewer of pull requests on GitHub for all related repositories, apply labels in case PRs are open for a longer time and cannot be merged, so that we keep our backlog clean and know why PRs are blocked.

  • notready: Triaged as not ready yet for merging, no (immediate) reaction by the reviewee, e.g. when tests are missing, other scenarios break, only tested for one of SLE/TW
  • wip: Marked by the reviewee themselves as "[WIP]" or "[DO-NOT-MERGE]" or similar
  • question: Questions to the reviewee, not answered yet

Where to contribute?

If you want to help openQA development you can take a look at the existing issues. There are also some "always valid" tasks to work on:

  • improve test coverage:
    • user story: As an openQA backend developer as well as a test developer, I want better test coverage of our projects to reduce technical debt
    • acceptance criteria: test coverage is significantly higher than before
    • suggestions: check current coverage in each individual project (os-autoinst/openQA/os-autoinst-distri-opensuse) and add tests as necessary

Use cases

The following use cases 1-6 have been defined within a SUSE workshop (others have been defined later) to clarify how different actors work with openQA. Some of them are already covered quite well within openQA; others are stated as motivation for further feature development.

Use case 1

User: QA-Project Management
primary actor: QA Project Manager, QA Team Leads
stakeholder: Directors, VP
trigger: product milestones, providing a daily status
user story: „As a QA project manager I want to check on a daily basis the „openQA Dashboard“ to get a summary/an overall status of the „reviewers results“ in order to take the right actions and prioritize tasks in QA accordingly.“

Use case 2

User: openQA-Admin
primary actor: Backend-Team
stakeholder: QA-Prjmgr, QA-TL, openQA Tech-Lead
trigger: Bugs, features, new testcases
user story: „As an openQA admin I constantly check in the web-UI the system health and I manage its configuration to ensure smooth operation of the tool.“

Use case 3

User: QA-Reviewer
primary actor: QA-Team
stakeholder: QA-Prjmgr, Release-Mgmt, openQA-Admin
trigger: every new build
user story: „As an openQA-Reviewer at any point in time I review on the webpage of openQA the overall status of a build in order to track and find bugs, because I want to find bugs as early as possible and report them.“

Use case 4

User: Testcase-Contributor
primary actor: All development teams, Maintenance QA
stakeholder: QA-Reviewer, openQA-Admin, openQA Tech-Lead
trigger: features, new functionality, bugs, new product/package
user story: „As a developer, when there are new features, new functionality, bugs, or a new product/package in git, I contribute my testcases because I want to ensure good quality submissions and smooth product integration.“

Use case 5

User: Release-Mgmt
primary actor: Release Manager
stakeholder: Directors, VP, PM, TAMs, Partners
trigger: Milestones
user story: „As a Release-Manager on a daily basis I check on a dashboard for the product health/build status in order to act early in case of failures and have concrete and current reports.“

Use case 6

User: Staging-Admin
primary actor: Staging-Manager for the products
stakeholder: Release-Mgmt, Build-Team
trigger: every single submission to projects
user story: „As a Staging-Manager I review the build status of packages with every staged submission to the „staging projects“ in the „staging dashboard“ and the test-status of the pre-integrated fixes, because I want to identify major breakage before integration to the products and provide fast feedback back to the development.“

Use case 7

User: Bug investigator
primary actor: Any bug assignee for openQA observed bugs
stakeholder: Developer
trigger: bugs
user story: „As a developer who has been assigned a bug which has been observed in openQA, I can review referenced tests, find a newer and the most recent job in the same scenario, understand what changed since the last successful job and what other jobs show the same symptoms, to investigate the root cause fast and use openQA for verification of a bug fix.“

Thoughts about categorizing test results, issues, states within openQA

by okurz

When reviewing test results it is important to distinguish between different causes of "failed tests":


Test status categories

A common definition of the status of a test regarding the product it tests is "false|true positive|negative": "positive|negative" describes the outcome of the test ("positive": the test signals the presence of an issue; "negative": no signal), whereas "false|true" describes whether that conclusion of the test regarding the presence of issues in the SUT or product is correct ("true": correct reporting; "false": incorrect reporting). Example: "true negative" means the test was successful, no issues were detected and there are none, the product is working as expected by the customer. Another example: think of testing as a fire alarm. An alarm (event detector) should only go off (be "positive") if there is a fire (event to detect) --> "true positive", whereas if there is no fire there should be no alarm --> "true negative".
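
To illustrate these four combinations, here is a minimal sketch (Python, purely illustrative and not part of any openQA code) mapping the test signal and the actual product state to the terms above:

# "positive"/"negative": does the test signal an issue or not
# "true"/"false": does that signal match the actual state of the product
def classify(test_signals_issue, product_has_issue):
    signal = "positive" if test_signals_issue else "negative"
    correct = test_signals_issue == product_has_issue
    return ("true " if correct else "false ") + signal

assert classify(test_signals_issue=True, product_has_issue=True) == "true positive"    # real issue found and reported
assert classify(test_signals_issue=False, product_has_issue=False) == "true negative"  # product fine, test passes
assert classify(test_signals_issue=True, product_has_issue=False) == "false positive"  # false alarm
assert classify(test_signals_issue=False, product_has_issue=True) == "false negative"  # issue slipped through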

Another common but potentially ambiguous categorization:

  • broken: the test is not behaving as expected (ambiguity: "as expected" by whom?) --> commonly a "false positive", can also be a "false negative" but that is harder to detect
  • failing: the test is behaving as expected and the result is a fail --> "true positive"
  • working: the test is behaving as expected (with no statement regarding the result, though some might ambiguously imply "result is negative")
  • passing: the test is behaving as expected and the result is a success --> "true negative"

If in doubt declare a test as "broken". We should review the test and examine if it is behaving as expected.

Be careful about "positive/negative" as some might also use "positive" to incorrectly denote a passing test (and "negative" for a failing test), i.e. as an indicator of a "working product" rather than an indicator of "issue present". If you argue about what is "used in common speech", think about how "false positive" is used as in "false alarm" --> "positive" == "alarm raised".

Prioritization of work regarding categories

In this sense development+QA want to accomplish a "true negative" state whenever possible (no issues present, therefore none detected). As QA and test developers we want to prevent "false positives" ("false alarms": declaring a product as broken when it is not, because the test failed for other reasons), also known as "type I errors", as well as "false negatives" (a product issue is not caught by the tests, might "slip through" QA and at worst is only found by an outside customer), also known as "type II errors". In the context of openQA and system testing paired with screen matching a "false positive" is much more likely, as the tests are very susceptible to subtle variations and changes even if these should be accepted. So when in doubt, rather file an issue in progress, look at it again and find that it was a false alarm, than waste more people's time with INVALID bug reports by believing the product to be broken when it is not. To quote Richard Brown: "I […] believe this is the route to ongoing improvement - if we have tests which produce such false alarms, then that is a clear indicator that the test needs to be reworked to be less ambiguous, and that IS our job as openQA developers to deal with".

Further categorization of statuses, issues and such in testing, especially automatic tests

By okurz

This categorization scheme is meant to help in communication in written or spoken discussions while being simple, concise, easy to remember and unambiguous in every case.
Besides being used for naming, it should also be used as a decision tree that can be followed from the top down each branch.

Categorization scheme

To keep it simple, every step decides between two (maybe three) categories for a potential issue and goes further down from there. The degree of further detailing is not limited, i.e. the scheme can be extended. The naming scheme uses arabic numbers (for two levels just 1 and 2), adding one digit on the right for every additional level of decision and detail, without any separation between the digits, e.g. "1111" for the first type at every level of detail down to level four. Additionally, every fully written form is given a phonetic alphabet name to unambiguously identify it at every level, as long as not more individual levels are necessary. The alphabet should be reserved for higher levels and higher-priority types.
Every leaf of the tree must have an action assigned to it.

1 failed (ZULU)
11 new (passed->failed) (YANKEE)
111 product issue ("true positive") (WHISKEY)
1111 unfiled issue (SIERRA)
11111 hard issue (openqa fail) (KILO)
111111 critical / potential ship stopper (INDIA) --> immediately file bug report with "ship_stopper?" flag; opt. inform RM directly
111112 non-critical hard issue (HOTEL) --> file bug report
11112 soft issue (openqa softfail on job level, not on module level) (JULIETT) --> file bug report on failing test module
1112 bugzilla bug exists (ROMEO)
11121 bug was known to openqa / openqa developer --> cross-reference (bug->test, test->bug) AND raise review process issue, improve openqa process
11122 bug was filed by other sources (e.g. beta-tester) --> cross-reference (bug->test, test->bug)
112 test issue ("false positive") (VICTOR)
1121 progress issue exists (QUEBEC) --> cross-reference (issue->test, test->issue)
1122 unfiled test issue (PAPA)
11221 easy to do w/o progress issue
112211 need needles update --> re-needle if sure, TODO how to notify?
112212 pot. flaky, timeout
1122121 retrigger yields PASS --> comment in progress about flaky issue fixed
1122122 reproducible on retrigger --> file progress issue
11222 needs progress issue filed --> file progress issue
12 existing / still failing (failed->failed) (XRAY)
121 product issue (UNIFORM)
1211 unfiled issue (OSCAR) --> file bug report AND raise review process issue (why has it not been found and filed?)
1212 bugzilla bug exists (NOVEMBER) --> ensure cross-reference, also see rules for 1112 ROMEO
122 test issue (TANGO)
1221 progress issue exists (MIKE) --> monitor, if persisting reprioritize test development work
1222 needs progress issue filed (LIMA) --> file progress issue AND raise review process issue, see 1211 OSCAR
2 passed (ALFA)
21 stable (passed->passed) (BRAVO)
211 existing "true negative" (DELTA) --> monitor, maybe can be made stricter
212 existing "false negative" (ECHO) --> needs test improvement
22 fixed (failed->passed) (CHARLIE)
222 fixed "true negative" (FOXTROTT) --> TODO split monitor, see 211 DELTA
2221 was test issue --> close progress issue
2222 was product issue
22221 no bug report exists --> raise review process issue (why was it not filed?)
22222 bug report exists
222221 was marked as RESOLVED FIXED
221 fixed but "false negative" (GOLF) --> potentially revert test fix, also see 212 ECHO

Priority from high to low: INDIA->OSCAR->HOTEL->JULIETT->…
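
As a purely illustrative sketch (Python, not part of any openQA tooling), the upper levels of the scheme can be kept in a simple lookup table so that a category code resolves to its phonetic name; the helper below and its fallback behaviour are assumptions, while the codes and names are taken from the list above:

# hypothetical lookup: category code -> (phonetic name, meaning)
CATEGORIES = {
    "1": ("ZULU", "failed"),
    "11": ("YANKEE", "new (passed->failed)"),
    "111": ("WHISKEY", 'product issue ("true positive")'),
    "112": ("VICTOR", 'test issue ("false positive")'),
    "12": ("XRAY", "existing / still failing (failed->failed)"),
    "2": ("ALFA", "passed"),
    "21": ("BRAVO", "stable (passed->passed)"),
    "22": ("CHARLIE", "fixed (failed->passed)"),
}

def describe(code):
    """Resolve a code to its name, falling back to the parent level if unknown."""
    while code and code not in CATEGORIES:
        code = code[:-1]  # strip the last digit, i.e. go one level up the tree
    name, meaning = CATEGORIES.get(code, ("?", "unknown"))
    return f"{code} {name}: {meaning}"

print(describe("1121"))  # resolves to the known parent: 112 VICTOR: test issue ("false positive")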

Proposals for uses of labels

With "Show bug or label icon on overview if labeled" (gh#550) it is possible to add custom labels just by writing them. Nevertheless, a convention should be found for a common benefit. Beware that labels used to be automatically carried over ("Carry over labels from previous jobs in same scenario if still failing", gh#564), which might make consistent test failures less visible when reviewers only look for test results without labels or bugrefs; labels are not automatically carried over anymore (gh#1071).

List of proposed labels with their meaning and where they could be applied.

  • fixed_<build_ref>: If a test failure is already fixed in a more recent build and no bug reference is known, use this label together with a reference to a more recent passed test run in the same scenario. Useful for reviewing older builds. Example (

  • needles_added: In case needles were missing for test changes or expected product changes caused needle matching to fail, use this label with a reference to the test PR or a proper reasoning why the needles were missing and how you added them. Example (

needles for were missing, added by jpupava in the meantime.
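
For example, assuming the "label:" comment syntax enabled by gh#550, a hypothetical comment using the first proposed label could look like this (the build number and the job link are made up):

label:fixed_0442
Already fixed in a more recent build, see <link to a passed job in the same scenario in build 0442>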

s390x Test Organisation

See the following picture for a graphical overview of the current s390x test infrastructure at SUSE:

SUSE s390x test infrastructure


on z/VM

Special requirements

Due to the lack of proper support for hdd-images on z/VM, we need to work around this by having a dedicated worker_class, i.e. a dedicated host, where we run two jobs chained with START_AFTER_TEST:
the first one installs the base system we want to have upgraded and the second one does the actual upgrade (e.g. migration_offline_sle12sp2_zVM_preparation and migration_offline_sle12sp2_zVM).

Since we encountered issues with other preparation jobs randomly being started in between, we need to ensure that we have one complete chain for all migration jobs running on one worker, which means for example:

  1. migration_offline_sle12sp2_zVM_preparation
  2. migration_offline_sle12sp2_zVM (START_AFTER_TEST=#1)
  3. migration_offline_sle12sp2_allpatterns_zVM_preparation (START_AFTER_TEST=#2)
  4. migration_offline_sle12sp2_allpatterns_zVM
  5. ...

This scheme ensures that all actual upgrade jobs find the prepared system and are able to upgrade it.
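
A minimal sketch of how such a chain could look in the test suite settings, using the scenario names from the list above; the worker class name is a made-up placeholder, while START_AFTER_TEST and WORKER_CLASS are the usual openQA settings:

Test suite migration_offline_sle12sp2_zVM_preparation:
  WORKER_CLASS=zvm_migration_host          (hypothetical dedicated worker class)

Test suite migration_offline_sle12sp2_zVM:
  WORKER_CLASS=zvm_migration_host
  START_AFTER_TEST=migration_offline_sle12sp2_zVM_preparation

Test suite migration_offline_sle12sp2_allpatterns_zVM_preparation:
  WORKER_CLASS=zvm_migration_host
  START_AFTER_TEST=migration_offline_sle12sp2_zVM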

on z/KVM

No special requirements anymore, see details in #18016

Automated z/VM LPAR installation with openQA using qnipl

There is an ongoing effort to automate the LPAR creation and installation on z/VM. A first idea resulted in the creation of qnipl. qnipl enables one to boot a very slim initramfs from a shared medium (e.g. shared SCSI-disks) and supply it with the needed parameters to chainload a "normal SLES installation" using kexec.
This method is required for z/VM because snipl (Simple network initial program loader) can only load/boot LPARs from specific disks, not network resources.


  1. Get a shared disk for all your LPARs
    • Normally this can easily be done by infra/gschlotter
    • The disk needs to be connected to all guests which should be able to network-boot
  2. Boot a fully installed SLES on one of the LPARs to start preparing the shared-disk
  3. Put a DOS partition table on the disk and create one single, large partition on there
  4. Put a filesystem on it. Our first test was with ext2 and it worked flawlessly in our attempts
  5. Install zipl (The s390x bootloader from IBM) on this partition
    • A simple and sufficient config can be found in poo#33682
  6. Clone qnipl into your dracut modules directory (e.g. /usr/lib/dracut/modules.d/95qnipl)
  7. Include the module named qnipl in your dracut modules for initramfs generation
    • e.g. in /etc/dracut.conf.d/99-qnipl.conf add: add_dracutmodules+=qnipl
  8. Generate your initramfs (e.g. dracut -f -a "url-lib qnipl" --no-hostonly-cmdline /tmp/custom_initramfs)
    • Put the initramfs next to your kernel binary on the partition you want to prepare
  9. From now on you can use snipl to boot any LPAR connected to this shared disk from the network
    • example: snipl -f ./snipl.conf -s P0069A27-LP3 -A fa00 --wwpn_scsiload 500507630713d3b3 --lun_scsiload 4001401100000000 --ossparms_scsiload "install= hostip= gateway= Nameserver= ssh=1 regurl="
    • --ossparms_scsiload is then evaluated and used by qnipl to kexec into the installer with the parameters needed by the installer

Further details

Further details can also be found in the github repo. Pull requests, questions and ideas are always welcome!
