action #18000 (closed)

coordination #13812: [epic][dashboard] openQA Dashboard ideas

[dashboard] Better integration of status from external jobs and testing systems

Added by mkittler over 7 years ago. Updated 3 months ago.

Status: Resolved
Priority: Low
Assignee:
Category: Feature requests
Target version:
Start date: 2017-03-24
Due date:
% Done: 0%
Estimated time:

Description

User story

As a tester I want an easy way to publish my test results to an openQA server so that I can use openQA as a dashboard for test results.

Acceptance criteria

  • AC1: There is a way to fill the whole job details page without having to use os-autoinst, e.g. a console client to fill "Details" and "Logs & Assets"
  • AC2: The whole workflow is documented

Tasks

  • Come up with a way for the console tool to upload the missing information, e.g. how the command line parameters should be called, e.g. "openqa-client upload_screenshot ..." (see the sketch after this list)
  • Document the current API used internally by the openQA backend to upload results, e.g. mandatory variables.
  • Optional: some refactoring and improvements can be done to ease using that API by other tools, too.
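
A minimal sketch of how such a console invocation could look, purely as a discussion aid; the subcommand names upload_screenshot/upload_log and their parameters are hypothetical and do not correspond to an existing openqa-client interface:

# Hypothetical invocations only -- "upload_screenshot"/"upload_log" and their
# parameters do not exist yet; they illustrate the interface to be designed.
openqa-client --host http://localhost \
    upload_screenshot --job 1234 --module my_external_test --result passed \
    screenshot-01.png
openqa-client --host http://localhost \
    upload_log --job 1234 results.tap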

Further details

Goal: to use the openQA dashboard as the main reporting platform also for other tools we have in our infrastructure (at SUSE).

There are already test runners feeding back information, e.g. in JUnit format. Also, using the client script, jobs can be started and their result set along with comments, e.g. for tags. Example:

# create a job (the sed call extracts the job id from the client's Perl-style output)
job_id=$(client --host http://localhost jobs post TEST=foo_external _GROUP="openSUSE Tumbleweed AArch64" BUILD=1 | sed -n 's/^{ id => \([0-9]\+\) }$/\1/p')
# mark the job as done with an explicit result
client --host http://localhost jobs/${job_id}/set_done post result=passed
# add a comment, e.g. to tag a bug reference
client --host http://localhost jobs/${job_id}/comments post text=bsc#1234

Related issues 2 (0 open, 2 closed)

Related to qe-yam - coordination #42890: [functional][y][epic] Established workflow how system level test failures found in openQA of QA validation tests can be moved to earlier testing, e.g. staging, OBS devel projects, PR, commit (Rejected, JERiveraMoya, 2018-10-24)

Has duplicate openQA Project - action #163016: Documentation how to feed simple external test results into openQA (Rejected, okurz, 2024-06-29)

Actions #1

Updated by RBrownSUSE over 7 years ago

  • Subject changed from [tools] Better integration of status from external jobs and testing systems to [tools][dashboard] Better integration of status from external jobs and testing systems
Actions #2

Updated by krauselukas over 7 years ago

  • Description updated (diff)
  • Category set to Feature requests
Actions #3

Updated by coolo almost 7 years ago

  • Status changed from New to Rejected

To upload status from external tools use the worker API and not client API calls. As such, set_done should be forbidden for anything other than workers. We're currently developing a federation worker bridge, which could be reused for such tools - if we need them in practice.

Actions #4

Updated by mkittler almost 7 years ago

As far as I remember the request for this feature didn't say anything about the particular API to be used.

It was about providing some way to add information to openQA like os-autoinst does. Obviously this is already implemented on the openQA side, otherwise os-autoinst could not do its job.

So this issue was about documenting a workflow, e.g. how to use the worker API, and maybe providing some helper scripts to ease the task. So this is actually not about modifying openQA, it is about providing help to use it. (Richard likely tagged it with [dashboard] because the goal is to display external job results in the dashboard.)

We're currently developing a federation worker bridge, which could be reused for such tools

Not sure what this 'federation worker bridge' would be, but it might help.

if we need them in practice

Don't know whether this issue is still relevant, too. Note that it was not my idea, I just wrote it down.

Actions #5

Updated by okurz almost 7 years ago

  • Status changed from Rejected to In Progress

mkittler wrote:

[…]

if we need them in practice

Don't know whether this issue is still relevant, too. Note that it was not my idea, I just wrote it down.

Yes, we still need this "in practice". Agreeing with mkittler here. He added the request based on my suggestions and because I asked him to, so let's please handle this as a feature request that is still open. I don't see any information missing from the request. The request is still valid and also comes from e.g. "QA SLE kernel", see https://github.com/os-autoinst/openQA/pull/1492 . I think I know what "federation" is but I don't see that as related.

Actions #7

Updated by metan almost 7 years ago

Let me just reference action #27726 [kernel] NVMe Over Fabrics Unit Test Framework here. We are asked to integrate a nose2-based testsuite into openQA. The obvious question is how to export the test results so that we have something meaningful on the openQA result page. I looked into nose2 and it looks like you can implement a custom result formatter class in Python or use an existing one that produces JUnit output.

So the question is how we will handle that. For action #25680 the best solution seems to be to implement an openQA API function to report the test result, since the testcases are executed in the openQA test Perl code and we implemented the test runner from scratch there. But in this case I doubt that we can call openQA API functions from the Python class without significant overhead, so a file-based result export sounds like a reasonable solution.

Any ideas how to handle nose2 based tests?

Actions #8

Updated by okurz almost 7 years ago

Who asked you to "integrate a nose2-based testsuite into openQA"?

I see the following options:

  1. Execute the "unit test framework" (it calls itself that) in OBS/IBS (preferred)
  2. Just execute nosetests --with-xunit within the SUT in the test framework directory and make sure openQA properly parses and visualizes the xunit format (rough sketch below)
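
A rough sketch of option 2, to be run inside the SUT; the directory path is a placeholder and the exact xunit plugin options may differ between nose and nose2:

# Sketch only: run the nose-based suite in its own directory (path is a
# placeholder) and let the xunit plugin write a JUnit/xunit XML report.
cd /opt/nvmftests                                # hypothetical suite location
nosetests --with-xunit --xunit-file=results.xml  # writes results.xml
# results.xml can then be picked up by the openQA test module and handed to
# openQA's result parser once the parsing work (poo#16076, see the next comment)
# is in place.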
Actions #9

Updated by coolo almost 7 years ago

For parsing/rendering the xunit format we are working on https://progress.opensuse.org/issues/16076 (misleading title, trust me :) as part of the current sprint.

Actions #10

Updated by metan almost 7 years ago

@okurz Well, it's actually a kernel testsuite that happens to use nose2, so the request came from the kernel labs team.

Actions #11

Updated by EDiGiacinto almost 7 years ago

I will report here for reference what I've added in https://github.com/os-autoinst/openQA/pull/1492.

Since most of the issues point in the same direction (a uniform way to treat test results from other frameworks in openQA), to ease this situation I'm working on a generic parsing solution that allows external tests to be included in an (almost) uniform way, translating them into an internal test representation that can be serialized and deserialized, with the possibility to extend it with different formats and also to map them to openQA test module runs if needed.

That's my branch: https://github.com/mudler/openQA/tree/test_parser. Currently the parser is able to read/write/translate JUnit; I will extend it to the JSON format proposed by @richiejp and to XUnit, so we can use it to serialize/deserialize external test results inside the openQA DB.

Actions #12

Updated by okurz over 6 years ago

https://github.com/okurz/scripts/blob/master/openqa-client-record-manual-test should show what I mean by an easy way to record results, e.g. from manual testing, in one go.

Actions #13

Updated by okurz over 5 years ago

  • Related to coordination #42890: [functional][y][epic] Established workflow how system level test failures found in openQA of QA validation tests can be moved to earlier testing, e.g. staging, OBS devel projects, PR, commit added
Actions #14

Updated by okurz about 5 years ago

  • Status changed from In Progress to Workable

By now this is not really "In Progress" anymore, so back to "Workable".

Actions #15

Updated by okurz over 4 years ago

  • Subject changed from [tools][dashboard] Better integration of status from external jobs and testing systems to [dashboard] Better integration of status from external jobs and testing systems
  • Priority changed from Normal to Low

Trying to set the priority of the parent to "Low" implicitly.

Actions #16

Updated by okurz about 4 years ago

  • Target version set to future
Actions #17

Updated by okurz 3 months ago

  • Target version changed from future to Ready

From the 2024-06-29 openQA user meetup: for years one has been able to trigger empty openQA jobs which do not have any valid backend set, so that they are not picked up by openQA workers, and then update them with the corresponding API calls afterwards. We should explain that in our documentation.
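
A minimal sketch of that workflow with openqa-cli, assuming that posting a job returns its id as JSON and that a setting such as WORKER_CLASS=external_results (a made-up value here) keeps any real worker from picking the job up; the routes themselves match the example in the ticket description:

host=http://localhost
# create an "empty" job that no worker will process (WORKER_CLASS value is an assumption)
job_id=$(openqa-cli api --host "$host" -X POST jobs \
    TEST=foo_external BUILD=1 WORKER_CLASS=external_results | jq -r '.id')  # assumes {"id": ...}
# fill in the result and a comment afterwards, as with the older client script
openqa-cli api --host "$host" -X POST "jobs/$job_id/set_done" result=passed
openqa-cli api --host "$host" -X POST "jobs/$job_id/comments" text='bsc#1234'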

Actions #18

Updated by okurz 3 months ago

  • Has duplicate action #163016: Documentation how to feed simple external test results into openQA added
Actions #19

Updated by okurz 3 months ago

  • Due date set to 2024-07-13
  • Status changed from Workable to Feedback
  • Assignee set to okurz
Actions #20

Updated by livdywan 3 months ago

okurz wrote in #note-19:

https://github.com/os-autoinst/openQA/pull/5735

Would be great to also cover the external formats and how to extend them, since that also came up during the internal part of our workshop as something where people weren't clear on what is supported and how to integrate new frameworks.

Actions #21

Updated by livdywan 3 months ago

Another bit of feedback: Why is parse_test_results a part of osado rather than just os-autoinst?

Actions #22

Updated by okurz 3 months ago

  • Due date deleted (2024-07-13)
  • Status changed from Feedback to Resolved

livdywan wrote in #note-21:

Another bit of feedback: Why is parse_test_results a part of osado rather than just os-autoinst?

That's just the osado-specific implementation. http://open.qa/docs/#installing_harness describes the generic approach.

https://github.com/os-autoinst/openQA/blob/master/docs/WritingTests.asciidoc#integrating-test-results-from-external-systems should suffice for now.
