action #10212

Updated by okurz about 4 years ago

## User stories
[[Wiki#User-story-1|US1]], [[Wiki#User-story-3|US3]], [[Wiki#User-story-5|US5]]

## acceptance criteria
* **AC1:** A *label* can be *made visible* on every test bubble on the build overview page
* **AC2:** If at least all failed tests are labeled, a *badge* is displayed on the job group overview page

## tasks
* Add an optional simple one-letter/symbol link/badge/label to the build overview for every test
* Add the label based on data in the test itself, e.g. comment present
* optional: Add a description field to tests, also see https://progress.opensuse.org/issues/7476
* Add label based on description
* Add badge to build on group overview page based on labels in the build overview
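The last task above, the **AC2** badge rule, can be sketched as a small check: a build earns the "reviewed" badge once every failed test in it carries a label. This is a minimal Python illustration with a hypothetical data model (openQA itself is written in Perl); the dict keys `result` and `label` are assumptions, not the real schema.

```python
def build_badge(tests):
    """Return True when a build deserves the 'reviewed' badge (AC2):
    every failed test must carry a label. Hypothetical data model:
    each test is a dict with 'result' and 'label' keys. A build with
    no failures counts as reviewed (the condition holds vacuously)."""
    failed = [t for t in tests if t["result"] == "failed"]
    return all(t.get("label") for t in failed)

# usage: one failed test, and it is labeled, so the badge shows
tests = [
    {"name": "gnome", "result": "passed", "label": None},
    {"name": "RAID0", "result": "failed", "label": "bsc#12345"},
]
print(build_badge(tests))  # True
```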

Ask okurz for further details.

## further details
* *label*: symbol/text that can be set by a user
* *can be made visible*: By any means, e.g. writing a comment, clicking a button, filling a text field, using a CLI tool
* *badge*: should be a symbol showing that a review has been completed on a build, e.g. a star or similar

Regarding **AC2** it might be hard to label all failing tests in a build because many of them can share a common cause, and selecting every failing test would be tedious when we know the reason is the same. It might help if a label or description is carried over to the next build while the test is still failing, similar to https://wiki.jenkins-ci.org/display/JENKINS/Claim+plugin
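The carry-over idea above could work like this sticky-label sketch, assuming a hypothetical model where each build maps scenario names to a `{'result', 'label'}` dict; the function name and model are illustrative, not an existing openQA API.

```python
def carry_over_labels(prev_build, next_build):
    """Copy the label from a previous build's failed test to the same
    scenario in the next build, but only while that scenario still
    fails (sticky labels, like the Jenkins Claim plugin). Hypothetical
    model: a build maps scenario name -> {'result', 'label'}."""
    for name, prev in prev_build.items():
        nxt = next_build.get(name)
        if nxt is None:
            continue  # scenario not scheduled in the next build
        still_failing = prev["result"] == "failed" and nxt["result"] == "failed"
        if still_failing and prev.get("label") and not nxt.get("label"):
            nxt["label"] = prev["label"]
    return next_build

# usage: the still-failing scenario keeps its label, the fixed one does not
prev = {"RAID0": {"result": "failed", "label": "bsc#12345"},
        "gnome": {"result": "failed", "label": "bsc#67890"}}
nxt = {"RAID0": {"result": "failed", "label": None},
       "gnome": {"result": "passed", "label": None}}
carry_over_labels(prev, nxt)
print(nxt["RAID0"]["label"])  # bsc#12345
print(nxt["gnome"]["label"])  # None
```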

In discussions I mentioned there should be a "fail reason description", which is exactly what coolo mentioned in https://progress.opensuse.org/issues/7476#note-3. I propose a test description field, e.g. a field just below the "Overall summary", which we can edit to describe the current state, add bug links, note who is working on it, etc. (see https://openqa.opensuse.org/tests/overview?distri=opensuse&version=Tumbleweed&build=20160116&groupid=1 for an example)

A label for each scenario would help to follow the history of test runs within that scenario, but it cannot automatically identify the same problem in a different scenario, e.g. the same problem on two different architectures, or the same problem in RAID0 and gnome even though it is not related to RAID. So this still needs human interaction and reasoning. A "common failure analyzer" that remembers failure causes could help by providing suggestions to a test reviewer, or by listing on another view where the same issue appears in different scenarios.
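The "common failure analyzer" could be as simple as a lookup keyed by some failure signature. The sketch below is a minimal illustration under assumed names (`FailureAnalyzer`, a plain-string signature such as the failing module name); a real system would need a richer signature than an exact string match.

```python
from collections import defaultdict

class FailureAnalyzer:
    """Remember failure causes keyed by a failure signature (e.g. the
    name of the failing test module) and suggest known labels when the
    same signature shows up again, possibly in a different scenario.
    Minimal sketch with hypothetical names; signature extraction in a
    real reviewer tool would be more elaborate."""

    def __init__(self):
        self.known = defaultdict(set)  # signature -> set of labels seen

    def record(self, signature, label):
        """A reviewer labeled a failure with this signature."""
        self.known[signature].add(label)

    def suggest(self, signature):
        """Labels previously used for the same signature, sorted."""
        return sorted(self.known[signature])

# usage: a failure labeled on x86_64 is suggested again on aarch64
analyzer = FailureAnalyzer()
analyzer.record("yast2_bootloader crashed", "bsc#12345")
print(analyzer.suggest("yast2_bootloader crashed"))  # ['bsc#12345']
```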
