action #178252
Support to set a step to the 'skipped' status in the test size:S
Description
Motivation
Is it possible to set any step to the 'skipped' status during test execution (without using EXCLUDE_MODULES)?
I wrote this function:
sub record_skip_test {
    # mark the currently running test module as skipped and attach a note about why
    my $message = shift // '';
    $autotest::current_test->result('skip');
    $autotest::current_test->record_resultfile('SKIP TEST', $message, result => 'unk');
}
and call it in tests where I need to skip a step:
record_skip_test('skip test because ...');
As a result, I get the following (screenshot skip_test.png).
But I'm not sure whether this is the right way to do it.
Acceptance criteria
- AC1: os-autoinst+openQA have a clear concept of "skipped" on all the levels of step, module, job
- AC2: We have a good understanding of what general use case the OP tries to cover
Suggestions
- Consider the already existing support for "skip" or "skipped" in the external result parser (see the sketch after this list)
- Also look into the existing ways to declare a module as "passed", which end up doing the same; which of these is the proper way to do it?
- Review existing documentation about "skipped" in both os-autoinst+openQA
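As a minimal sketch of the first suggestion (not from the ticket): a test module could let the external result parser do the "skipped" bookkeeping. The suite under test writes a report in a format the parser understands (e.g. TAP or JUnit, where individual tests can carry a skip marker) and the module uploads it via parse_extra_log(). The suite command and file path below are hypothetical.

use strict;
use warnings;
use testapi;

sub run {
    # run the product's own test suite on the SUT; it may mark some of its tests as skipped
    assert_script_run('run_product_testsuite --tap > /tmp/results.tap');    # hypothetical command
    # upload the report and let openQA's external result parser interpret it,
    # including the tests the suite itself skipped
    parse_extra_log(TAP => '/tmp/results.tap');
}

1;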
Updated by zagagyka about 2 months ago
okurz wrote in #note-1:
But first I would like to clarify what is your use case for doing this. Can you elaborate?
We use the test results from openQA to automatically execute test cases in a test management tool (TestLink, Zephyr for Jira, etc.)
Therefore, to set the results as skipped/blocked, we need similar test results in openQA
In addition, when running a group of tests (for example, using loadtestdir()), we need the test result overview to distinguish tests that were run but skipped/blocked for some reason from tests that were excluded from running
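For illustration only, a hedged sketch of that workflow: fetch one job's per-module results from the openQA REST API and map them onto test-management statuses. The host, job id, and the exact JSON field names ("modules", "result") are assumptions to be checked against your instance, and the status mapping is hypothetical.

#!/usr/bin/perl
use strict;
use warnings;
use Mojo::UserAgent;

my $ua  = Mojo::UserAgent->new;
my $job = $ua->get('https://openqa.example.com/api/v1/jobs/12345')->result->json->{job};

for my $module (@{$job->{modules} // []}) {
    my $result = $module->{result} // 'none';
    # hypothetical mapping onto TestLink-style execution statuses
    my $tms_status = $result eq 'passed'     ? 'passed'
                   : $result eq 'softfailed' ? 'passed'
                   : $result eq 'skipped'    ? 'blocked'
                   : $result eq 'failed'     ? 'failed'
                   :                           'not run';
    printf "%s => %s\n", $module->{name}, $tms_status;
}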
Updated by okurz about 2 months ago
- Target version changed from future to Tools - Next
Updated by okurz about 1 month ago
- Category changed from Support to Feature requests
- Assignee deleted (okurz)
- Target version changed from Tools - Next to Ready
Updated by okurz about 1 month ago
- Subject changed from Support to set a step to the 'skipped' status in the test to Support to set a step to the 'skipped' status in the test size:S
- Description updated (diff)
- Category changed from Feature requests to Support
- Status changed from New to Workable
Updated by mkittler about 1 month ago
- Status changed from Workable to In Progress
- Assignee set to mkittler
Updated by mkittler about 1 month ago
@zagagyka In your screenshot the test is passing overall. Is that actually how you want it or just what happened? What are you generally trying to achieve here?
Note that the code you wrote is not problematic from the coding perspective. However, using "skipped" in cases like this might be problematic from a reviewer/management perspective. That is because it can lead to the wrong impression that we test certain aspects of our product because we run a corresponding test module and have passing tests - while in reality the functionality isn't actually tested. So maybe we should think twice before adding e.g. a test API function to skip modules like this. (Especially because your code shows that if really needed you can already do this kind of skipping with test distribution code.)
Updated by okurz about 1 month ago
By coincidence I just found circleci is adopting a change on how they treat their "skipped": https://circleci.com/changelog/breaking-change-april-14-2025-skipped-status-will-now-return-success/
Updated by openqa_review about 1 month ago
- Due date set to 2025-04-18
Setting due date based on mean cycle time of SUSE QE Tools
Updated by livdywan about 1 month ago
- Status changed from In Progress to Feedback
Mentioned in the daily. Prioritizing other work right now and mostly waiting for feedback.
Updated by zagagyka 23 days ago
- File screenshot.png added
Sorry for the late reply
mkittler wrote in #note-8:
@zagagyka In your screenshot the test is passing overall. Is that actually how you want it or just what happened?
maybe the screenshot is not entirely correct
mkittler wrote in #note-8:
What are you generally trying to achieve here?
Probably, in my case, it is correct to call the result of this step not "skipped", but "blocked".
As I wrote above, at the moment we use TestLink to store and execute test cases,
and we use the test results from openQA to execute automated test cases (see the screenshot).
We write test cases that are used in our various products.
It may happen that some step for a particular product is not being tested and needs to be blocked
TestLink (and, as far as I know, other test management tools) has a "blocked" status.
This allows you to distinguish a test case that was launched but blocked from one that was not launched at all (or was excluded from running).
But openQA can't set a step to a "blocked" result.
Updated by mkittler 18 days ago
zagagyka wrote in #note-14:
Sorry for the late reply
mkittler wrote in #note-8:
@zagagyka In your screenshot the test is passing overall. Is that actually how you want it or just what happened?
maybe the screenshot is not entirely correct
That still doesn't answer the question of what the expected behavior would be (from your point of view). So how are steps you want to skip supposed to affect the overall test result?
mkittler wrote in #note-8:
What are you generally trying to achieve here?
Probably, in my case, it is correct to call the result of this step not "skipped", but "blocked".
For this we have "softfailed" in openQA. It is basically treated as "passed" but indicates that a known issue has been found (which might have blocked the test execution to some extent). Check out the documentation for further explanation.
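For illustration, a minimal sketch of that suggestion inside a test module; the message and ticket reference are placeholders:

use testapi;

# Instead of marking the step "skipped", record the known blocker as a soft failure.
# The module then ends up "softfailed", which openQA treats like "passed" overall
# but highlights in the web UI. The reference below is just a placeholder.
record_soft_failure('poo#178252 - feature not applicable on this product, step not executed');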
As I wrote above, at the moment we use TestLink to store and execute test cases
And we use the test results from openQA to execute automated test cases (see screenshot). We write test cases that are used in our various products.
It may happen that some step for a particular product is not being tested and needs to be blocked
Maybe it would make more sense if steps that are not tested are simply not added to the test plan in the first place. Skipping steps dynamically like you propose only makes sense if whether or not a step needs to be skipped can only be determined at the moment the test is executed. However, as this is about having a different test plan per product, it makes more sense to decide which steps to leave out upfront.
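A rough sketch of that approach, assuming a test distribution's main.pm and hypothetical PRODUCT/FEATURE_X job settings; the decision is made at scheduling time, so modules that don't apply to a product never appear in the result overview:

# in the test distribution's main.pm (schedule time, not run time)
use strict;
use warnings;
use testapi;
use autotest;

# PRODUCT and FEATURE_X are hypothetical settings used for illustration
autotest::loadtest 'tests/feature_x.pm' if get_var('FEATURE_X');
autotest::loadtest 'tests/feature_y.pm' unless check_var('PRODUCT', 'lite');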
TestLink (and, as far as I know, other test management tools) has a "blocked" status.
This allows you to distinguish a test case that was launched but blocked from one that was not launched at all (or was excluded from running). But openQA can't set a step to a "blocked" result.
Ok, although I'm still not sure why/how TestLink and openQA need to be streamlined in that regard. Are you running TestLink within openQA? What is your overall setup? Feel free to share source code for that (e.g. code from your test distribution) or links to openQA jobs (on your openQA instance).