action #178252
Support to set a step to the 'skipped' status in the test size:S

Added by zagagyka 2 months ago. Updated 18 days ago.

Status: Feedback
Priority: Low
Assignee:
Category: Support
Target version:
Start date: 2025-03-04
Due date:
% Done: 0%
Estimated time:
Description

Motivation

Is it possible to set any step to the 'skipped' status during test execution (without using EXCLUDE_MODULES)?

I wrote this function:

# Mark the currently running test module as skipped and attach a note
sub record_skip_test {
    my $message = shift // '';
    $autotest::current_test->result('skip');
    $autotest::current_test->record_resultfile('SKIP TEST', $message, result => 'unk');
}

and call it in tests where I need to skip a step:

record_skip_test('skip test because ...');

As a result, I get the output shown in the attached screenshot (skip_test.png),

but I'm not sure whether this is the right way to do it.
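
For context, a minimal sketch of how such a helper might be called conditionally from a test module's run() method. The condition (check_var on DESKTOP) is an illustrative assumption, not part of the original report; check_var is part of the os-autoinst test API:

    use testapi;

    sub run {
        # Hypothetical condition: this step does not apply to text-mode installs
        if (check_var('DESKTOP', 'textmode')) {
            record_skip_test('skip test because it does not apply to textmode');
            return;
        }
        # ... the actual test steps would follow here ...
    }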

Acceptance criteria

  • AC1: os-autoinst+openQA have a clear concept of "skipped" on all the levels of step, module, job
  • AC2: We have a good understanding of what general use case the OP tries to cover

Suggestions

  • Consider already existing support for "skip" or "skipped" in the external result parser
  • Also look into existing ways to declare a module as "passed", which ends up having the same effect. Which of these is the proper way to do it?
  • Review existing documentation about "skipped" in both os-autoinst+openQA
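
Regarding the external result parser: TAP, one of the formats openQA's external harness result support can parse, already defines a standard skip directive per the TAP specification. A TAP snippet for illustration (test names are made up; whether openQA maps the SKIP directive to a "skipped" step result would need to be verified):

    1..2
    ok 1 - network is reachable
    ok 2 - firewall rules applied # SKIP not applicable on this product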

Files

skip_test.png (42.7 KB) skip_test.png zagagyka, 2025-03-04 13:06
screenshot.png (122 KB) screenshot.png zagagyka, 2025-04-17 13:40
Actions #1

Updated by okurz 2 months ago

  • Category set to Support
  • Assignee set to okurz
  • Target version set to future

What you did is certainly valid and could be considered for addition to the test API. But first I would like to clarify what your use case for doing this is. Can you elaborate?

Actions #2

Updated by livdywan 2 months ago

  • Description updated (diff)
Actions #3

Updated by zagagyka about 2 months ago

okurz wrote in #note-1:

But first I would like to clarify what is your use case for doing this. Can you elaborate?

We use the test results from openQA to automatically execute test cases in a test management tool (TestLink, Zephyr for Jira, etc.).

Therefore, to mark results as skipped/blocked there, we need corresponding test results in openQA.

In addition, when running a group of tests (for example, using loadtestdir()), we need the test result overview to distinguish between tests that were run but skipped/blocked for some reason and those that were excluded from running entirely.

Actions #4

Updated by okurz about 2 months ago

  • Target version changed from future to Tools - Next
Actions #5

Updated by okurz about 1 month ago

  • Category changed from Support to Feature requests
  • Assignee deleted (okurz)
  • Target version changed from Tools - Next to Ready
Actions #6

Updated by okurz about 1 month ago

  • Subject changed from "Support to set a step to the 'skipped' status in the test" to "Support to set a step to the 'skipped' status in the test size:S"
  • Description updated (diff)
  • Category changed from Feature requests to Support
  • Status changed from New to Workable
Actions #7

Updated by mkittler about 1 month ago

  • Status changed from Workable to In Progress
  • Assignee set to mkittler
Actions #8

Updated by mkittler about 1 month ago

@zagagyka In your screenshot the test is passing overall. Is that actually how you want it or just what happened? What are you generally trying to achieve here?

Note that the code you wrote is not problematic from the coding perspective. However, using "skipped" in cases like this might be problematic from a reviewer/management perspective. That is because it can lead to the wrong impression that we test certain aspects of our product because we run a corresponding test module and have passing tests - while in reality the functionality isn't actually tested. So maybe we should think twice before adding e.g. a test API function to skip modules like this. (Especially because your code shows that if really needed you can already do this kind of skipping with test distribution code.)

Actions #9

Updated by okurz about 1 month ago

By coincidence I just found circleci is adopting a change on how they treat their "skipped": https://circleci.com/changelog/breaking-change-april-14-2025-skipped-status-will-now-return-success/

Actions #10

Updated by openqa_review about 1 month ago

  • Due date set to 2025-04-18

Setting due date based on mean cycle time of SUSE QE Tools

Actions #11

Updated by livdywan about 1 month ago

  • Status changed from In Progress to Feedback

Mentioned in the daily. Prioritizing other work right now and mostly waiting for feedback.

Actions #12

Updated by livdywan 24 days ago

  • Priority changed from Normal to Low
  • Target version changed from Ready to future

As I understand it there's no work going on here at the moment, and we want to wait on the OP before looking into it, as we don't really have a use case for it ourselves. Hence moving it out of the backlog for now.

Actions #13

Updated by livdywan 24 days ago

  • Due date deleted (2025-04-18)
Actions #14

Updated by zagagyka 23 days ago


Sorry for the late reply

mkittler wrote in #note-8:

@zagagyka In your screenshot the test is passing overall. Is that actually how you want it or just what happened?

Maybe the screenshot is not entirely correct.

mkittler wrote in #note-8:

What are you generally trying to achieve here?

Probably, in my case, it would be more correct to call the result of this step not "skipped" but "blocked".

As I wrote above, at the moment we use TestLink to store and execute test cases,
and we use the test results from openQA to record the execution of automated test cases (see screenshot).

We write test cases that are used across our various products.
It may happen that some step is not tested for a particular product and needs to be marked as blocked.
TestLink (and, as far as I know, other test management tools) has a "blocked" status.
This allows you to distinguish a test case that was launched but blocked from one that was not launched at all (or was excluded from running).

But openQA can't set a step to a "blocked" result.

Actions #15

Updated by mkittler 18 days ago

zagagyka wrote in #note-14:

Sorry for the late reply

mkittler wrote in #note-8:

@zagagyka In your screenshot the test is passing overall. Is that actually how you want it or just what happened?

Maybe the screenshot is not entirely correct.

That still doesn't answer the question of what the expected behavior would be (from your point of view). So how are steps you want to skip supposed to affect the overall test result?

mkittler wrote in #note-8:

What are you generally trying to achieve here?

Probably, in my case, it is correct to call the result of this step not "skipped", but "blocked".

For this we have "softfailed" in openQA. It is basically treated as "passed" but indicates that a known issue has been found (which might have blocked the test execution to some extent). Check out the documentation for further explanation.
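
For illustration, a test module can record such a known issue via the existing test API's record_soft_failure, which makes the module (and job) end up "softfailed"; the ticket reference in the string is a placeholder:

    use testapi;

    # Records a known issue; the module result becomes "softfailed"
    record_soft_failure('poo#178252 - step blocked by known product limitation');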

As I wrote above, at the moment we use TestLink to store and execute test cases,
and we use the test results from openQA to record the execution of automated test cases (see screenshot).

We write test cases that are used across our various products.
It may happen that some step is not tested for a particular product and needs to be marked as blocked.

Maybe it would make more sense if steps that are not tested were simply not added to the test plan in the first place. Dynamically skipping steps as you propose only makes sense if whether a step needs to be skipped can only be determined at the moment the test is executed. However, as this is about having a different test plan per product, it makes more sense to decide upfront which steps to leave out.
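
A sketch of deciding this upfront in the test distribution's scheduling code (e.g. main.pm), where loadtest is commonly used; the module name and the PRODUCT_FEATURE_X variable are made-up examples:

    # In the test distribution's scheduling code (e.g. main.pm):
    # only schedule the module for products that actually have the feature
    loadtest 'console/feature_x' if get_var('PRODUCT_FEATURE_X');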

TestLink (and, as far as I know, other test management tools) has a "blocked" status.
This allows you to distinguish a test case that was launched but blocked from one that was not launched at all (or was excluded from running).

But openQA can't set a step to a "blocked" result.

Ok, although I'm still not sure why/how TestLink and openQA need to be streamlined in that regard. Are you running TestLink within openQA? What is your overall setup? Feel free to share source code for that (e.g. code from your test distribution) or links to openQA jobs (on your openQA instance).
