action #36100

Updated by okurz almost 6 years ago

## Motivation 

Our openQA tests are very good at finding system performance issues, e.g. [bsc#1063638](https://bugzilla.suse.com/show_bug.cgi?id=1063638) and [bsc#1032831](https://bugzilla.suse.com/show_bug.cgi?id=1032831), but very bad at revealing the actual problems to test reviewers, because seemingly random test modules fail sporadically due to timeouts, windows not popping up, etc. Recording random soft-fails in random scenarios is apparently not the way to go. Instead, we should define a dedicated "system performance" test scenario that covers everything we do not want to test in other scenarios. Whenever we encounter such failures in other tests, we should make those tests work in a stable way and move the corresponding test steps into the new "system performance" test scenario.

## Acceptance criteria
* **AC1:** [bsc#1063638](https://bugzilla.suse.com/show_bug.cgi?id=1063638) is not tracked in any test scenario other than the one for system performance testing

## Tasks
* Track in development for TW
* Add to Leap
* Add to SLE15
* Add to SLE12SP4
* Look for more test steps to add to this scenario, e.g. yast2_snapper, which was recently removed by lnussel from the standard scenarios, and "reboot and call krunner test", see
