Once we have a separate job group, we need a good understanding of what it contains and what we want it to contain. Based on that we can also decide where and how we want to test each item.
There are many examples where we could be more efficient and gain time to increase test coverage of more critical areas, instead of covering e.g. Firefox on all possible environments.
This also includes a proper evaluation of the risks of not running certain test modules.
As a starting point we already have test suite descriptions and test module descriptions. However, it is hard to identify which modules are executed where, and in which environments we cover certain setups; that is visible only in the openQA dashboard.
There is one team which uses Testopia for exactly this purpose. We need to identify a good format which could be reused among teams and used e.g. as a reporting template to visualize coverage per build.
Markdown is preferred.
- All existing test suites in the YaST job group have descriptions
- Each test suite description includes the environment, a summary of the executed test cases, and the purpose of the suite
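To illustrate the criteria above, a Markdown description of a test suite might look like the sketch below. The suite name, environment values, and module names are invented for this example; the actual format would follow whatever the teams agree on:

```markdown
## Test suite: yast2_gui (hypothetical example)

**Purpose:** Verify YaST2 graphical modules on an installed system.

**Environment:** x86_64, SLES, qemu backend

**Executed test cases (summary):**
- yast2_lan: network configuration via the YaST2 UI
- yast2_users: user and group management
```

Keeping one such section per test suite in a single document would also make it usable as a reporting template to visualize coverage per build.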
#7 Updated by riafarov over 1 year ago
- Status changed from Workable to Feedback
After spending one full day, I was not able to describe all test suites: https://gitlab.suse.de/riafarov/qa-sle-functional-y/blob/master/SLES_Integration_Level_Testplan.md
For now, we agreed to gather some feedback before proceeding further. Reviewing the current coverage has already helped to identify multiple action points.
#14 Updated by riafarov about 1 year ago
- Status changed from Feedback to Resolved
All scenarios currently enabled in the YaST job group are defined: https://gitlab.suse.de/riafarov/qa-sle-functional-y/blob/master/SLES_Integration_Level_Testplan.md
Let's keep it up to date from now on.