action #10212 (closed)
Parent task: action #10148: better notification and user feedback

labels and badges for builds

Added by okurz almost 9 years ago. Updated almost 9 years ago.

Status: Resolved
Priority: High
Assignee:
Category: Feature requests
Target version:
Start date: 2016-01-13
Due date:
% Done: 100%
Estimated time:

Description

User stories

US1, US3, US5

Acceptance criteria

  • AC1: A label can be made visible on every test bubble in the build overview page
  • AC2: If at least all failed tests are labeled, a badge is displayed on the job group overview page

Tasks

  • Add an optional simple one-letter/symbol link/badge/label to build overview for every test
  • Add the label based on data in the test itself, e.g. a comment being present
  • optional: Add description field to test, also see https://progress.opensuse.org/issues/7476
  • Add label based on description
  • Add badge to build on group overview page based on labels in the build overview

Ask okurz for further details.

Further details

  • label: symbol/text that can be set by a user
  • can be made visible: by any means, e.g. writing a comment, clicking a button, filling a text field, using a CLI tool
  • badge: should be a symbol showing that a review has been completed on a build, e.g. a star or similar

Regarding AC2 it might be hard to label all failing tests in a build, as many of them might have a common cause and it would be tedious to select every failing test when we know the reason is the same. It might help if a label or description is preserved for a build that is still failing, as in https://wiki.jenkins-ci.org/display/JENKINS/Claim+plugin. The problem here is described in #10212#note-12: "koalas are not openQA first class citizens" means that a database query for every job in one build would take too long when generating the overview page, so this should be done in a GRU task or at the completion time of each job.

In discussions I mentioned there should be a "fail reason description", which is exactly what coolo mentioned in https://progress.opensuse.org/issues/7476#note-3. I propose a test description field, e.g. a field just below the "Overall summary", which we can edit to describe the current state, add bug links, note who is working on it, etc. (see https://openqa.opensuse.org/tests/overview?distri=opensuse&version=Tumbleweed&build=20160116&groupid=1 for an example).

A label for each scenario would help to follow the history of test runs within each scenario, but it cannot automatically identify the same problem in a different scenario, e.g. the same problem on two different architectures, or the same problem in RAID0 and gnome even though it is not related to RAID. So this still needs human interaction and reasoning. A "common failure analyzer" that remembers failure causes could help by providing suggestions to a test reviewer, or by listing on another view where the same issue is found in different scenarios; a sketch of the idea follows.
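
As a rough illustration of that "common failure analyzer" idea, a minimal sketch in Python (all names are hypothetical; the failure "signature" here is just the sorted failed module names and would need something more robust in practice):

    # Remember which label a reviewer attached to a failure signature and
    # suggest it when the same signature shows up in another scenario.
    known_causes: dict[str, str] = {}

    def signature(failed_modules: list[str]) -> str:
        # Placeholder signature: sorted failed module names joined together.
        return "|".join(sorted(failed_modules))

    def remember(failed_modules: list[str], label: str) -> None:
        known_causes[signature(failed_modules)] = label

    def suggest(failed_modules: list[str]) -> str | None:
        # Returns a previously recorded label for the same failure, if any.
        return known_causes.get(signature(failed_modules))

    # remember(["bootloader", "yast2_i"], "bsc#12345")
    # suggest(["yast2_i", "bootloader"])  -> "bsc#12345", even from another scenario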


Related issues: 1 (0 open, 1 closed)

Related to openQA Project (public) - action #7476: Support comments in tests (Resolved, 2015-05-12)

Actions #1

Updated by okurz almost 9 years ago

  • Description updated (diff)
Actions #2

Updated by okurz almost 9 years ago

  • Parent task set to #10148
Actions #3

Updated by okurz almost 9 years ago

Actions #4

Updated by okurz almost 9 years ago

See https://progress.opensuse.org/projects/openqav3/wiki#Thoughts-about-categorizing-test-results-issues-states-within-openQA for a proposal on how to categorize tests and builds accordingly. The issue category could go into the labels.

Actions #5

Updated by okurz almost 9 years ago

  • Description updated (diff)
Actions #6

Updated by RBrownSUSE almost 9 years ago

okurz wrote:

See https://progress.opensuse.org/projects/openqav3/wiki#Thoughts-about-categorizing-test-results-issues-states-within-openQA for a proposal on how to categorize tests and builds accordingly. The issue category could go into the labels.

I fear there is some overthinking going on here.

We need labels to add additional metadata alongside a test result.

Some of these labels would be of a prescribed 'class'.

The classes I can think we need immediately are STATUS and BUG.

Additional label classes could be something like PRODUCT/ORIGIN.

STATUS - is the 'main' label we're talking about here, which should have a drop-down of options. I favour the 'broken', 'failing', 'passing' options, because they avoid the false positive/negative issues. In terms of colour, 'passing' should be green, 'failing' should be red, 'broken' could be orange (or some other 'not red or green' colour).

BUG - This class of labels would exist to mark which bug is related to this issue. Labels of class 'BUG' would have a mandatory string field for a bug number, using the abbreviation formats listed here - https://en.opensuse.org/openSUSE:Packaging_Patches_guidelines (though with an additional bsc# for bugzilla.suse.com and poo# for progress.opensuse.org). This would let us track both product issues and openQA issues. It's also useful for openSUSE and other projects in this form as it opens the door for tracking upstream and other issues.

PRODUCT/ORIGIN might be a useful needle for identifying when the issue is caused by the PRODUCT or by openQA, but the STATUS would take care of that: broken tests are openQA's issue; failing or passing would be product issues.

That's all I can think of as a starting point - I don't think we need to reinvent the wheel or dramatically rework processes or workflows - and starting here would allow us to investigate things like API links to Bugzilla and other bug trackers in the future.
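
To make the BUG class concrete, here is a minimal sketch of parsing such bug references; bsc# and poo# are named above, the other prefixes are examples taken from the linked guidelines, and the exact set as well as the URLs are assumptions:

    import re

    # Hypothetical mapping from bug-reference prefixes to tracker URLs.
    TRACKERS = {
        "bsc": "https://bugzilla.suse.com/show_bug.cgi?id=",
        "boo": "https://bugzilla.opensuse.org/show_bug.cgi?id=",
        "bnc": "https://bugzilla.novell.com/show_bug.cgi?id=",
        "poo": "https://progress.opensuse.org/issues/",
    }

    BUGREF_RE = re.compile(r"\b(%s)#(\d+)" % "|".join(TRACKERS))

    def find_bugrefs(comment_text: str) -> list[tuple[str, str]]:
        """Return (reference, url) pairs for every bug reference in a comment."""
        return [(f"{prefix}#{number}", TRACKERS[prefix] + number)
                for prefix, number in BUGREF_RE.findall(comment_text)]

    # find_bugrefs("fails due to bsc#12345, tracked in poo#10212")
    #   -> [('bsc#12345', 'https://bugzilla.suse.com/show_bug.cgi?id=12345'),
    #       ('poo#10212', 'https://progress.opensuse.org/issues/10212')]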

Actions #7

Updated by RBrownSUSE almost 9 years ago

  • Priority changed from Normal to High
Actions #8

Updated by RBrownSUSE almost 9 years ago

  • Target version set to Milestone 1
Actions #9

Updated by okurz almost 9 years ago

Reviewed #10212#note-6 and I agree except for the "overthinking" part ;-) Discussed with RBrownSUSE.

IMHO it is necessary that one testrun can identify its predecessor so that we can carry over labels from one testrun to another (at first regardless of the status or status change). Right now it seems to be only possible to identify the "scenario" by assembling "<distri>-<version>-<flavor>-<arch>-<test>" or similar, as is done somewhere in lib/OpenQA/WebAPI/Controller/Test.pm to find all test runs for a build.
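
A minimal sketch of such a predecessor lookup, using the <distri>-<version>-<flavor>-<arch>-<test> composition described in note 12; the Job type and the list of jobs are hypothetical stand-ins for the openQA database:

    from dataclasses import dataclass

    @dataclass
    class Job:
        id: int
        distri: str
        version: str
        flavor: str
        arch: str
        test: str

    def scenario_key(job: Job) -> str:
        # The "koala": everything that identifies a scenario except the build.
        return "-".join((job.distri, job.version, job.flavor, job.arch, job.test))

    def previous_job(job: Job, all_jobs: list[Job]) -> Job | None:
        """Most recent earlier job in the same scenario, or None."""
        candidates = [j for j in all_jobs
                      if j.id < job.id and scenario_key(j) == scenario_key(job)]
        return max(candidates, key=lambda j: j.id, default=None)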

regarding the names, also see #2246.

I plan to accomplish these single tasks to go forward:

  • Add a comment icon next to a test run on /tests/overview if the corresponding test has comment(s): can be considered an intermediate step as we are more interested in "labels", but I need to learn first how to identify scenarios, add a field which can be editable, …
  • Add a badge to a build on /group_overview/* if a review is found in comments: also intermediate
  • Add a label to each test result as proposed by RBrownSUSE, e.g. for STATUS (broken: 'flash', passing and correct (false negative): 'checked', failing and correct: 'ban')
  • Add a persistent label feature to each test scenario, i.e. add an edit field on each test run but preserve it across runs
  • Add a tag next to a test run on /tests/overview if the corresponding test has a label
  • Same as above but with tags for multiple labels
  • Add more labels to each test result, e.g. BUG, PRODUCT/ORIGIN
  • Add some stars depending on labels, e.g. 'star-half-o' and such
Actions #10

Updated by RBrownSUSE almost 9 years ago

okurz wrote:

Lots

+1 from me

Actions #11

Updated by okurz almost 9 years ago

  • Description updated (diff)
Actions #12

Updated by okurz almost 9 years ago

The current challenge is that openQA does not have specific scenarios as "first class citizens", i.e. the composition of <distri>-<version>-<flavor>-<arch>-<scenario> (e.g. openSUSE-Tumbleweed-DVD-x86_64-gnome); let's call it "koalas" (a bad name is better than no name, koalas are cute and it's a funny name that is easy to remember). From a design perspective and for scalability we should not conduct job-specific queries when generating the overview pages. Therefore we should generate the "forensic information" when rendering the job result pages individually.

As discussed with coolo and RBrownSUSE, the way to go forward is:

Actions #13

Updated by okurz almost 9 years ago

  • Description updated (diff)

done

@waitfor DONE: gh#514

  • add a tab on the test results page querying for test results of previous runs of the same koala
  • extract the logic from templates/test/overview.html.ep and use it on previous.html.ep, too

@waitfor gh#538

  • go on with the steps as described in #10212#note-9, e.g. a state selected by the reviewer from multiple choices based on allowed values defined on the admin page
  • Save a new entry on a job using the comments table (no need for another table to query)
  • Add either a checkbox to the comment entry field to mark it as "sticky/label/review/bugref", or use special tags in the text entry like 'bug:bsc#23526 status:BROKEN'

@waitfor gh#550

So far the "label:" keyword is used within a comment without verification of the label value; verification could be added later if mere "convention" is not enough.
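
A minimal sketch of that convention-only parsing, assuming a 'label:VALUE' token syntax; since there is no verification, whatever follows 'label:' is accepted:

    import re

    LABEL_RE = re.compile(r"\blabel:(\S+)")

    def extract_label(comment_text: str) -> str | None:
        # First "label:VALUE" token in the comment, unvalidated.
        match = LABEL_RE.search(comment_text)
        return match.group(1) if match else None

    # extract_label("label:false_positive needle needs an update") -> "false_positive"
    # extract_label("just a plain remark") -> None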

next

  • Tell all SLE reviewers that comments on jobs help, that they can use them, and that others will probably look at them more often
  • Provide API calls to CRUD comments and labels

then

  • Trigger the "carry over" of comments with external tools, e.g. a "review bot"
  • a simple carry-over approach could be: as soon as a job is finished, i.e. results are entered into the database, IF the job is NOT passed, the previous jobs of the same scenario are parsed; IF any comments are found (including label, bug, etc.) THEN IF bug OR label, copy them to the current job (see the sketch below)
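
A sketch of that carry-over rule; find_previous, get_comments and copy_comment are hypothetical helpers standing in for database access:

    def carry_over(job, find_previous, get_comments, copy_comment):
        # Run as soon as a job's results are entered into the database.
        if job.result == "passed":
            return                       # only non-passed jobs inherit comments
        prev = find_previous(job)        # previous run of the same scenario
        if prev is None:
            return
        for comment in get_comments(prev):
            # only comments carrying a label or bug reference are copied
            if "label:" in comment.text or "bug:" in comment.text:
                copy_comment(comment, job)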
Actions #14

Updated by okurz almost 9 years ago

  • Description updated (diff)
  • Status changed from New to In Progress
Actions #15

Updated by okurz almost 9 years ago

  • Status changed from In Progress to Feedback

final PR ready gh#564

Actions #16

Updated by okurz almost 9 years ago

Further ideas regarding comment carry-over after getting early feedback from mgriessmeier:

  • failed->passed: emit an event and/or hook to inform that an issue got fixed -> potential candidate for resolving/verifying a bug
  • failed->failed: be careful not to miss new issues, e.g. when a test "degrades"
  • extension to check for "improved"/"degraded", e.g. fewer successful modules executed (see the sketch below)
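
A sketch of that "improved"/"degraded" check, comparing how many modules passed in the current run versus the previous one; the mapping of module name to result is an assumed representation:

    def trend(current: dict[str, str], previous: dict[str, str]) -> str:
        """Classify a run relative to its predecessor by passed-module count."""
        passed_now = sum(1 for result in current.values() if result == "passed")
        passed_before = sum(1 for result in previous.values() if result == "passed")
        if passed_now > passed_before:
            return "improved"
        if passed_now < passed_before:
            return "degraded"
        return "unchanged"

    # trend({"boot": "passed", "yast2_i": "failed"},
    #       {"boot": "passed", "yast2_i": "passed"}) -> "degraded"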
Actions #17

Updated by okurz almost 9 years ago

  • Status changed from Feedback to In Progress

openQA "labels and badges" are live and no major issues reported so far :-)

I documented these changes - with screenshots - and how they could help us on this wiki page: https://progress.opensuse.org/projects/openqav3/wiki/Wiki#Test-reviewing

There is one issue reported by coolo: when rendering the group_overview page, all job->comments are parsed individually instead of being prefetched for all jobs in a build.
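
A sketch of the obvious fix, fetching the comments for all jobs of a build in one round trip and grouping them in memory; fetch_comments and the comment objects are hypothetical stand-ins for a single database query:

    from collections import defaultdict

    def comments_by_job(job_ids, fetch_comments):
        # fetch_comments(job_ids) stands in for one query such as
        # "SELECT * FROM comments WHERE job_id IN (...)"
        grouped = defaultdict(list)
        for comment in fetch_comments(job_ids):
            grouped[comment.job_id].append(comment)
        return grouped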

What's left to do for this ticket is to collect all the cool ideas which are not directly related into a new ticket and reference it.

Actions #18

Updated by okurz almost 9 years ago

  • Status changed from In Progress to Resolved
  • % Done changed from 0 to 100

Everything out of scope for this feature is noted down in #10148#note-7.
