action #103326

closed

coordination #103323: [epic] BCI testing

Split BCI for loop calls into loadtest modules

Added by jlausuch over 2 years ago. Updated about 2 years ago.

Status:
Rejected
Priority:
Low
Assignee:
-
Target version:
-
Start date:
2021-11-30
Due date:
% Done:

0%

Estimated time:

Description

Currently, BCI tests are executed for the environments defined by the variable BCI_TEST_ENVS.
The current code runs all the commands in a single loop:

    for my $env (split(/,/, $test_envs)) {
        record_info($env);
        my $ret = script_run("timeout $bci_timeout tox -e $env $cmd_options", timeout => ($bci_timeout + 3));
        if ($ret == 124) {
            # man timeout: If  the command times out, and --preserve-status is not set, then exit with status 124.
            record_soft_failure("The command <tox -e $env $cmd_options> timed out.");
            $error_count += 1;
        } elsif ($ret != 0) {
            record_soft_failure("The command <tox -e $env $cmd_options> failed.");
            $error_count += 1;
        } else {
            record_info('PASSED');
        }
    }
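The exit-status handling in the loop above can be isolated into a small helper. This is a hypothetical refactoring sketch (the function name `classify_tox_result` is not in the original code); it encodes the same rule: per man timeout(1), status 124 means the command timed out when --preserve-status is not set, any other non-zero status is a plain failure.

```perl
use strict;
use warnings;
use feature 'say';

# Hypothetical helper: map the exit status of
# `timeout $bci_timeout tox -e $env ...` to a result label.
sub classify_tox_result {
    my ($ret) = @_;
    return 'TIMEOUT' if $ret == 124;    # timeout(1) exit code on expiry
    return 'FAILED'  if $ret != 0;      # any other non-zero status
    return 'PASSED';
}

say classify_tox_result($_) for (0, 1, 124);
```

With this helper the loop body shrinks to a single dispatch on the label, which would also make the soft-failure messages easier to keep consistent.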

The current execution time is around 2 hours.

The problem with this is that openQA stays at 100% during most of that execution, since the progress bar only counts the number of executed modules.

We could follow the approach of LTP and other tests that use loadtest at runtime. This way, openQA treats every test environment as a new module, so the progress bar is more realistic.
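The proposed approach could look roughly like the following sketch. It assumes a per-environment module (the path 'containers/bci_env' and the module naming are illustrative, not from the original) and stubs openQA's `loadtest` scheduling call so the sketch is self-contained and runnable outside openQA.

```perl
use strict;
use warnings;

# Stub standing in for openQA's loadtest() from main.pm; in a real
# schedule it would queue the module for execution.
my @scheduled;
sub loadtest {
    my ($module, %args) = @_;
    push @scheduled, $args{name} // $module;
}

# One openQA module per tox environment instead of one big loop, so the
# progress bar advances as each environment finishes.
sub schedule_bci_envs {
    my ($test_envs) = @_;
    for my $env (split(/,/, $test_envs)) {
        # The real call would also pass the environment to the module,
        # e.g. via run_args; omitted here for brevity.
        loadtest('containers/bci_env', name => "bci_$env");
    }
}

schedule_bci_envs('base,init,multistage');
print "$_\n" for @scheduled;
```

Each environment then shows up as its own module in the web UI, which is exactly what makes the progress bar meaningful.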

Actions #1

Updated by jlausuch over 2 years ago

  • Status changed from In Progress to Workable
  • Priority changed from Normal to Low

After some tests I came up with a proposal: https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/13794
But it introduced more problems than it fixed, which makes this task not very helpful.
Also, when a module was failing, the next ones were also failing due to https://progress.opensuse.org/issues/103791

Another problem I faced, even when the tests were green (e.g. http://fromm.arch.suse.de/tests/5273),
is that the JUnit results are printed twice: 1) in the test module, instead of the regular openQA output frames, and 2) after all the modules.
I would like to keep only the results at the end, without replacing the output frames from the console command execution.

I am moving this to Workable and might work on it as low priority, as it is a purely cosmetic issue.

Actions #2

Updated by jlausuch over 2 years ago

  • Assignee deleted (jlausuch)
Actions #3

Updated by jlausuch about 2 years ago

  • Status changed from Workable to Rejected

As we are running the test environments individually, this doesn't make sense any more.
