action #10784

test framework for openQA test distributions

Added by okurz about 4 years ago. Updated over 1 year ago.

Status: New
Priority: Normal
Assignee: -
Start date: 17/02/2016
Due date: -
% Done: -


Category: Feature requests
Target version: QA - future


user story

As a test distribution developer using openQA I want to execute my tests in a safe and fast environment to catch mistakes early, before executing anything on a production server or real hardware

acceptance criteria

  • "os-autoinst-distri-example" or "os-autoinst-distri-opensuse" can be executed locally without relying on any real worker


  • maybe a simple first step could be to check the importability of test modules, as is done by isotovideo
  • mock the testapi to execute the "happy path" of a test, e.g. assert_screen always succeeds and such
  • provide means to select the mock as either a "null backend" or a mode executing the test distribution locally, e.g. "--dry-run"
  • provide a way to configure this mode to simulate different real backends or machine types
  • optional: provide "failing assert_screen" and such to cover more execution paths
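The "happy path" mock from the acceptance criteria could look roughly like the following. Note this is an illustrative sketch only: the real testapi is Perl, and everything here besides the assert_screen/check_screen/type_string call names is hypothetical.

```python
# Illustrative Python sketch of a "happy path" testapi mock.
# Real os-autoinst testapi is Perl; names besides assert_screen,
# check_screen and type_string are hypothetical.

class HappyPathTestApi:
    """Mock testapi where every assertion succeeds and calls are recorded."""

    def __init__(self):
        self.calls = []  # record of (function, args) for later inspection

    def _record(self, name, *args):
        self.calls.append((name, args))

    def assert_screen(self, tags, timeout=30):
        # The real call fails when no needle matches; the happy-path
        # mock always "matches" the requested tag(s).
        self._record("assert_screen", tags)
        return {"needle": {"tags": [tags] if isinstance(tags, str) else tags}}

    def check_screen(self, tags, timeout=0):
        self._record("check_screen", tags)
        return True  # always "sees" the screen

    def type_string(self, text):
        self._record("type_string", text)


api = HappyPathTestApi()
api.assert_screen("bootloader")
api.type_string("linux quiet")
print([name for name, _ in api.calls])  # -> ['assert_screen', 'type_string']
```

The recorded call list is what would give a developer the "first hint" about which code paths and variables their test modules touch.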

further details

benefit: By using this we can give test distribution developers a first hint about which code paths they are touching, which variables are involved, etc. Another use case is the detection of unused test modules based on the variable sets.

Related issues

Related to openQA Tests - action #48389: [tools] self-tests in os-autoinst-distri-opensuse executi... Workable 25/02/2019


#1 Updated by okurz about 4 years ago

As discussed with coolo and others, the "null backend" is the way to go, e.g. call "isotovideo" and also encode empty videos and such.

#2 Updated by okurz over 3 years ago

  • Description updated (diff)

#3 Updated by okurz about 3 years ago

  • Category set to Feature requests
  • Status changed from New to In Progress

some preliminary work on the null backend has been done:

#4 Updated by coolo over 2 years ago

  • Status changed from In Progress to New
  • Target version set to future

Page not found

#5 Updated by okurz about 2 years ago

In the aforementioned branch I tried to develop a "null backend" and actually succeeded in starting tests. Then I realized that we can't really get anywhere useful because the tests are too tightly coupled to the instrumented SUT behaviour. E.g. when mocking every testapi call to "succeed", the test still fails very early: the next "check_screen" or "wait_serial" mock-succeeds, leading the test flow into a branch where we expect something else, e.g. a match_has_tag check, which in turn also mock-succeeds and then gets stuck or fails because the mocked SUT - which isn't really there for a null backend - simply does not behave like the real thing. I don't know how to continue from there.
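The divergence problem can be sketched as follows (illustrative Python pseudocode of a typical test module pattern; the api object and tag names are hypothetical):

```python
# Sketch of why blanket mock-success derails a test module: with
# check_screen always returning True, the test takes the "popup
# visible" branch even though a null backend would never show a popup,
# so the run diverges from any real execution path.

class AlwaysTrueApi:
    """Minimal stand-in for a testapi where every call mock-succeeds."""
    def check_screen(self, tags, **kw):
        return True
    def assert_screen(self, tags, **kw):
        return True
    def type_string(self, text):
        pass

def handle_possible_popup(api):
    if api.check_screen("license-popup"):  # mock: always True
        api.type_string("\n")              # "dismiss" a popup that isn't there
        api.assert_screen("popup-gone")    # mock: "matches" a state that never existed
        return "took popup branch"
    # A real SUT without the popup would end up here instead.
    return "skipped popup branch"

print(handle_possible_popup(AlwaysTrueApi()))  # -> took popup branch
```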

#6 Updated by coolo about 2 years ago

yeah, we would need to provide records of real tests to replay, which is quite a challenge

#7 Updated by okurz about 2 years ago

Correct. I am thinking of getting the actions and expected results from the logfile, storing these - without all the unnecessary debug output - in the test framework as "recordings", and playing them back to check for regressions (i.e. any change is considered a test failure by default).
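A minimal sketch of this record/replay idea, assuming the recording is a serialized list of (action, arguments, result) entries extracted from the log (the format and all names here are hypothetical):

```python
import json

# Hypothetical sketch: serialize a run of testapi actions and expected
# results as a "recording", then compare a fresh run against it,
# treating any deviation as a regression by default.

def record(call_log):
    """Serialize a list of [function, args, result] entries."""
    return json.dumps(call_log)

def replay_matches(recording, new_run):
    """Return True iff the new run exactly matches the recording."""
    expected = json.loads(recording)
    # Any change - extra, missing or differing entries - is a failure.
    return expected == new_run

old_run = [["assert_screen", ["bootloader"], "match"],
           ["wait_serial", ["login:"], "ok"]]
rec = record(old_run)
print(replay_matches(rec, old_run))       # -> True  (unchanged run)
print(replay_matches(rec, old_run[:1]))   # -> False (missing action)
```

Such exact comparison is deliberately strict; intentional test changes would require re-recording, just as needle updates are required today.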

#8 Updated by okurz over 1 year ago

  • Target version changed from future to future

#9 Updated by okurz 8 months ago

  • Related to action #48389: [tools] self-tests in os-autoinst-distri-opensuse executing a simple (staging) test using isotovideo added
