action #19222
open[discussion] Improve automatic carryover to be more strict - when bugs relate to individual steps within test details
Description
Automatic carryover is a great feature which eases a test reviewer's life a lot, but because we have not yet invented AI for openQA, the automatic carryover sometimes labels jobs which actually need a human eye.
The acceptance criterion for this ticket would be to dramatically decrease the cases where the automatic algorithm labels jobs which should not be labeled, or to fully exclude such cases.
Examples of negative cases:
- https://openqa.suse.de/tests/938443 (false carryover of a fixed bug; in fact a test regression)
- https://openqa.suse.de/tests/938555 (false carryover of a fixed bug; in fact a new bug)
- https://openqa.suse.de/tests/937960 (false carryover of the same failing module, but actually another bug reproduces)
- https://openqa.suse.de/tests/938212 (false carryover of the same failing module, but actually another bug reproduces)
Updated by okurz over 7 years ago
- Category set to Support
- Assignee set to okurz
Need to check the cases.
Updated by okurz over 7 years ago
- Subject changed from [tools] Negative cases of automatic carryover to [tools] Improve automatic carryover to be more strict
- Category changed from Support to Feature requests
- Assignee deleted (okurz)
- Priority changed from Normal to Low
In the examples you mentioned, carry-over works as expected but is carrying over issues which are not related because they refer to different steps within that job. A way to remedy that from the side of the tests is to split the tests into more modules. A way to improve that from the side of openQA itself might be to configure the automatic carry-over to look at steps and not only modules, maybe paired with some fuzziness, e.g. if it still fails in step N±1.
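The step-aware matching with fuzziness suggested above could be sketched roughly as follows. This is only an illustration of the idea: the function names and the job dictionaries are hypothetical, and openQA's actual carry-over logic is implemented in Perl and today only compares failed modules.

```python
def steps_match(prev_failed_step, new_failed_step, fuzziness=1):
    """True if the failing step of the new job is within `fuzziness`
    steps of the failing step of the previously labeled job."""
    return abs(prev_failed_step - new_failed_step) <= fuzziness


def should_carry_over(prev_job, new_job, fuzziness=1):
    # Hypothetical job records: {'module': str, 'failed_step': int}
    if prev_job['module'] != new_job['module']:
        return False  # current behaviour already requires the same failed module
    # Stricter proposal: additionally require failure in step N +/- fuzziness
    return steps_match(prev_job['failed_step'], new_job['failed_step'], fuzziness)
```

With fuzziness of 1, a failure moving from step 4 to step 5 would still carry over, while a failure in step 7 would not.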
Updated by coolo over 7 years ago
A 'step' in this context is somewhat fuzzily defined: if you do a loop in the test, there will be more screens and as such more steps.
I would rather go along the lines of 'same line of code', but that needs better markup from os-autoinst.
Updated by asmorodskyi over 7 years ago
What about comparing the failed needle? It does not actually matter which step it is; just show me the same picture as last time and I will be convinced that it is the same failure.
Updated by coolo over 7 years ago
openQA has no image comparison beyond bit-identical. And failures are not always assert_screen failures.
Updated by asmorodskyi over 7 years ago
But we have image comparison in isotovideo. We could extract it into a library and reuse it in openQA.
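To illustrate the idea of fuzzy screen comparison: a toy similarity check (not isotovideo's actual matcher, which uses template matching on real screenshots) might look like this, treating images as flat lists of grayscale pixel values:

```python
def similarity(img_a, img_b, tolerance=10):
    """Fraction of pixels that match within `tolerance` between two
    equally sized grayscale images given as flat lists of 0-255 values.
    Only a stand-in for a real image-matching algorithm."""
    if len(img_a) != len(img_b) or not img_a:
        return 0.0
    matches = sum(1 for a, b in zip(img_a, img_b) if abs(a - b) <= tolerance)
    return matches / len(img_a)


def same_failure_screen(img_a, img_b, threshold=0.95):
    """Carry over only if the failing screens are nearly identical."""
    return similarity(img_a, img_b) >= threshold
```

The threshold would need tuning against real failure screenshots; small rendering differences (cursor position, timestamps) are exactly why an exact bit-for-bit comparison is not enough.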
Updated by asmorodskyi over 7 years ago
Another approach: can we somehow detect the reason for the failure? I mean the point when we write "Test died in ..." into the log? We could standardize this into a structure like:
{
  "failure_reason": "needle match / script run failed / general error, etc.",
  "failure_message": "a message like \"Needle match failed\" / \"Script run timed out\" / \"Script ended with error\""
}
and openQA could compare this between jobs.
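Comparing such a record between the previously labeled job and the new one could then be trivial. The field names below follow the sketch above; the function itself is hypothetical:

```python
def same_failure(prev_record, new_record):
    """Carry over the label only if both the machine-readable reason
    and the failure message match between the two jobs."""
    return (prev_record.get('failure_reason') == new_record.get('failure_reason')
            and prev_record.get('failure_message') == new_record.get('failure_message'))


# Example: identical records would carry over, differing reasons would not.
old = {'failure_reason': 'needle match', 'failure_message': 'Needle match failed'}
new = {'failure_reason': 'script run failed', 'failure_message': 'Script run timed out'}
```

The advantage over image comparison is that this also covers non-screen failures such as script timeouts.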
Updated by okurz over 7 years ago
Yes, good idea. I recommend that we make sure every test failure outputs this "record_info" box. We could then check for this within openQA without needing to move any image comparison algorithm to openQA.
Updated by asmorodskyi over 7 years ago
https://openqa.suse.de/tests/1036627#step/force_cron_run/4 - another example of a false carryover: the test fails before it reaches the failure point described in the bug.
Updated by coolo almost 7 years ago
- Subject changed from [tools] Improve automatic carryover to be more strict to Improve automatic carryover to be more strict
- Target version set to future
Updated by okurz almost 4 years ago
- Subject changed from Improve automatic carryover to be more strict to Improve automatic carryover to be more strict - when bugs relate to individual steps within test details
Updated by okurz over 2 years ago
- Tags set to discussion
- Subject changed from Improve automatic carryover to be more strict - when bugs relate to individual steps within test details to [discussion] Improve automatic carryover to be more strict - when bugs relate to individual steps within test details
Updated by okurz almost 2 years ago
- Related to action #121429: [qe-tools] qe-review bot sends wrong notification even the issue got fixed and the test case failed on different place size:M added