action #10784
open
test framework for openQA test distributions
Added by okurz almost 9 years ago.
Updated 9 months ago.
Category:
Feature requests
Description
user story
As a test distribution developer using openQA, I want to execute my tests in a safe and fast environment so that I catch mistakes early, before anything runs on a production server or real hardware
acceptance criteria
- "os-autoinst-distri-example" or "os-autoinst-distri-opensuse" can be executed locally without relying on any real worker
tasks
- maybe a simple first step could be to check that main.pm can be imported, as isotovideo does
- mock the testapi to execute a "happy path" through a test, e.g. assert_screen always succeeds
- provide means to select the mock as either a "null backend" or a mode executing the test distribution locally, e.g. "--dry-run"
- provide a way to configure this mode to simulate different real backends or machine types
- optional: provide "failing assert_screen" and such to cover more execution paths
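The happy-path mock from the tasks above could look roughly like this. A minimal sketch in Python (the real testapi is Perl; the class, function names, and return shapes here are illustrative assumptions, not the actual API):

```python
class HappyPathTestAPI:
    """Null-backend stub: every query trivially succeeds, every action is a no-op."""

    def assert_screen(self, tag, timeout=30):
        # Pretend the needle matched immediately.
        return {"matched_tag": tag, "timeout": timeout}

    def check_screen(self, tag, timeout=0):
        # Always report a match (the follow-up comments explain why this
        # can mislead tests that branch on the result).
        return True

    def type_string(self, text):
        # No-op: there is no SUT to type into.
        pass


def dry_run(module, api):
    """Execute a test module's run() against the mock, catching early
    mistakes such as typos in testapi calls or missing variables."""
    try:
        module.run(api)
        return "passed"
    except Exception as exc:
        return f"failed: {exc}"
```

A test module that calls a misspelled testapi function would fail here immediately, without any worker or VM involved.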
further details
benefit: this gives test distribution developers a first hint about which code paths they are touching, which variables are involved, etc. Another use case is detecting unused test modules based on the variables that are set.
As discussed with coolo and others, the "null backend" is the way to go, i.e. call "isotovideo" normally but encode empty videos and the like.
- Description updated (diff)
- Category set to Feature requests
- Status changed from New to In Progress
- Status changed from In Progress to New
- Target version set to future
In the aforementioned branch I tried to develop a "null backend" and actually succeeded in starting tests. Then I realized that we can't get anywhere useful because the tests are too tightly tied to the instrumented SUT behaviour. E.g. when every testapi call is mocked to "succeed", the test still fails very early: the next "check_screen" or "wait_serial" mock-succeeds and steers the test flow into a branch where something else is expected, e.g. via match_has_tag, which in turn also mock-succeeds; the test then gets stuck or fails because the mocked SUT (which isn't really there for a null backend) simply does not behave like the real thing. I don't know how to continue from there.
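The branching problem described above can be shown in a few lines. A Python sketch (the tag names and flow are made up for illustration):

```python
class AlwaysMatch:
    """Mock where check_screen unconditionally 'matches'."""

    def check_screen(self, tag, timeout=0):
        return True


def boot_flow(api):
    # On a real SUT this prompt only appears for encrypted installs, so the
    # branch is rarely taken. With the always-succeeding mock it is taken
    # every time, steering the test into a path that the (non-existent)
    # SUT can never back up.
    if api.check_screen("encrypted-disk-prompt"):
        return "unlock-disk"
    return "normal-boot"
```

boot_flow(AlwaysMatch()) always returns "unlock-disk", regardless of what kind of install is actually under test.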
yeah, we would need to provide recordings of real tests to replay, which is quite a challenge
Correct. I am thinking of getting the actions and expected results from the logfile, storing them in the test framework as "recordings" (without all the unnecessary debug output), and playing them back while checking for regressions (i.e. any change is considered a test failure by default).
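The record-and-replay idea might look like this. A Python sketch in which the recording format (one JSON line per action/result pair) is an assumption, not the actual isotovideo log format:

```python
import json


def record(events, path):
    """Store the essential actions and expected results of a real run,
    already stripped of unnecessary debug output."""
    with open(path, "w") as f:
        for event in events:
            f.write(json.dumps(event) + "\n")


def replay_matches(events, path):
    """Compare a new run against a stored recording. Any deviation is
    treated as a regression, i.e. a test failure by default."""
    with open(path) as f:
        recorded = [json.loads(line) for line in f]
    return recorded == list(events)
```

Strict equality is the simplest default; a real implementation would likely need an allowlist of fields that may legitimately vary (timestamps, durations).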
- Target version changed from future to future
- Related to action #48389: self-tests in os-autoinst-distri-opensuse executing a simple (staging) test using isotovideo added
- Status changed from New to Workable
- Priority changed from Normal to Low
- Target version changed from future to Tools - Next
- Status changed from Workable to New
- Target version changed from Tools - Next to future