test framework for openQA test distributions
Target version: QA - future
As a test distribution developer using openQA, I want to execute my tests in a safe and fast environment to catch mistakes early, before executing anything on a production server or real hardware.
- "os-autoinst-distri-example" or "os-autoinst-distri-opensuse" can be executed locally without relying on any real worker
- maybe a simple first step could be to check importability of main.pm as is done by isotovideo
- mock the testapi to execute a "happy path" of a test, e.g. assert_screen always succeeds and such
- provide means to select the mock as either a "null backend" or a mode executing the test distribution locally, e.g. "--dry-run"
- provide a way to configure this mode to simulate different real backends or machine types
- optional: provide "failing assert_screen" and such to cover more execution paths
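The mocking idea above can be sketched as follows. This is an illustrative Python sketch only (the real testapi is Perl, and `NullTestapi` and `toy_test_module` are hypothetical names, not part of os-autoinst): every testapi call is stubbed to succeed and is recorded, so the happy path of a test module can be walked without any SUT.

```python
# Illustrative sketch (Python; the real testapi is Perl): a "null backend"
# style mock where every testapi call succeeds, so a test module's happy
# path can be walked without a SUT.

class NullTestapi:
    """Every API call logs itself and reports success."""

    def __init__(self):
        self.calls = []  # recorded (name, args) tuples for later inspection

    def __getattr__(self, name):
        # Any unknown attribute becomes a stub that always "succeeds".
        def stub(*args, **kwargs):
            self.calls.append((name, args))
            return True
        return stub

def toy_test_module(testapi):
    """Stand-in for a test distribution module."""
    testapi.assert_screen("grub")
    testapi.type_string("linux")
    testapi.wait_serial("login:")

api = NullTestapi()
toy_test_module(api)
print([name for name, _ in api.calls])
# → ['assert_screen', 'type_string', 'wait_serial']
```

The recorded call list is exactly the "first hint" mentioned below: it shows which testapi calls a module would issue on its happy path.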
Benefit: this gives test distribution developers a first hint about which code paths they are touching, which variables are involved, etc. Another use case is detecting unused test modules based on the set of variables they reference.
#3 Updated by okurz about 3 years ago
- Category set to Feature requests
- Status changed from New to In Progress
Some preliminary work on a null backend has been done: https://github.com/okurz/os-autoinst/tree/feature/null_backend
#5 Updated by okurz about 2 years ago
In the aforementioned branch I tried to develop a "null backend" and actually succeeded in starting tests. Then I realized that we can't really get anywhere useful because the tests are too tightly coupled to the instrumented SUT behaviour. E.g. when mocking every testapi call to "succeed", the test still fails very early: the next "check_screen" or "wait_serial" mock-succeeds and leads the test flow into a branch where we expect something else, e.g. via match_has_tag, which in turn also mock-succeeds; the test then gets stuck or fails because the mocked SUT (which isn't really there for a null backend) simply does not behave like the real thing. I don't know how to continue from there.
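The failure mode can be demonstrated with a small sketch. Again this is illustrative Python with hypothetical names (`HappyPathApi`, `login_module`), not os-autoinst code: because check_screen and match_has_tag both blindly succeed, the test flow enters a recovery branch that a real run would almost never take, and the null backend has no SUT state to satisfy it.

```python
# Sketch of the failure mode: mocks that unconditionally "succeed" steer
# the test into a branch that assumes SUT state which never existed.

class FakeMatch:
    def match_has_tag(self, tag):
        return True                 # mock-succeeds for any tag

class HappyPathApi:
    def check_screen(self, tag, timeout=30):
        return FakeMatch()          # mock-succeeds unconditionally
    def assert_screen(self, tag, timeout=30):
        return FakeMatch()

def login_module(api):
    ret = api.check_screen("emergency-shell")
    if ret and ret.match_has_tag("emergency-shell"):
        # A real run rarely enters this branch; the null backend always
        # does, and there is no actual broken SUT to repair here.
        raise RuntimeError("entered recovery branch on a healthy 'SUT'")
    api.assert_screen("login-prompt")

try:
    login_module(HappyPathApi())
except RuntimeError as e:
    print("diverged:", e)
```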
#7 Updated by okurz about 2 years ago
Correct. I am thinking of extracting the actions and expected results from the logfile, storing them in the test framework as "recordings" without all the unnecessary debug output, and playing them back while checking for regressions (i.e. any change is considered a test failure by default).
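The record-and-playback idea could look roughly like this. A minimal Python sketch under stated assumptions: `Recorder` and `Replayer` are hypothetical names, and a recording is simplified to a JSON list of testapi calls with their arguments; any deviation during replay is flagged as a regression.

```python
# Minimal record/replay sketch: record the call sequence of a known-good
# run, then replay a later run against it; any deviation is a regression.

import json

class Recorder:
    def __init__(self):
        self.log = []
    def record(self, name, args):
        self.log.append({"call": name, "args": list(args)})
    def dump(self):
        return json.dumps(self.log)

class Replayer:
    def __init__(self, recording):
        self.expected = json.loads(recording)
        self.pos = 0
    def check(self, name, args):
        # Compare the actual call against the recorded one at this step.
        want = self.expected[self.pos]
        got = {"call": name, "args": list(args)}
        if got != want:
            raise AssertionError(f"regression at step {self.pos}: "
                                 f"expected {want}, got {got}")
        self.pos += 1

# A "good" run is recorded once ...
rec = Recorder()
rec.record("assert_screen", ["grub"])
rec.record("wait_serial", ["login:"])
recording = rec.dump()

# ... and a later run replays against it; a changed call is flagged.
rep = Replayer(recording)
rep.check("assert_screen", ["grub"])     # matches the recording
try:
    rep.check("wait_serial", ["# "])     # deviates from the recording
except AssertionError as e:
    print(e)
```

Treating any mismatch as a failure by default matches the "any change is a regression" policy; a real implementation would additionally need a way to re-record intentionally changed behaviour.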