coordination #88229

Updated by okurz about 3 years ago

## Motivation
In general, the idea or demand to "prevent test coverage decrease" is nothing new and also not limited to QAM. A similar situation exists in Tumbleweed as well as in other products: at any point in time an openQA test scenario, a test module, parts of test code, or a combination of those may no longer be executed for a certain product, and that is unlikely to be noticed because neither TTM nor the openQA maintenance bot counts a reduction in test coverage as an alert condition. https://github.com/os-autoinst/openqa_review reports about this, and https://github.com/os-autoinst/openqa_review/blob/master/openqa_review/tumblesle_release.py would not publish snapshots if fewer scenarios are found than before. However, neither is actively used for decision making about product releases.

## User story
As a QE engineer, I would like to have a way of comparing the utilization of openQA test modules (*.pm) across our products, mostly before and after a release. With ~1500 test modules available at this moment, it is very difficult to find out whether a particular test is being used to its full potential, i.e. whether it runs on all products, code streams and architectures that it could and should run on. That brings a risk of needlessly lowering the test coverage that is already available to us.

## Acceptance criteria
* **AC1:** An alert is raised if the test coverage within openQA tests is reduced without intention
* **AC2:** Intended and acceptable openQA test coverage decreases are explicitly referenced, e.g. in open issues
* **AC3:** Possibility to filter the data to easily compare coverage between arbitrary products and code streams
* **AC4:** Possibility to access the data by script/bot to process it further (dashboards, alerts, etc.)

## Further details
As per previous discussions, Metabase can help with some parts of this story. Independently of that, the data needed for AC1 and AC4 can be pulled directly from the openQA API; a minimal sketch of that idea follows.
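
The following is only a sketch of how a script or bot could detect a coverage decrease between two builds, in the spirit of tumblesle_release.py. It assumes openQA's `/api/v1/jobs` endpoint and treats a "scenario" as the (test, arch, machine, flavor) tuple; the instance URL, job group ID and build names are placeholders, not values taken from this ticket.

```python
#!/usr/bin/env python3
"""Sketch: compare openQA scenario counts between two builds (AC1/AC4).

Assumptions: openQA's /api/v1/jobs endpoint with groupid/build/latest
parameters; a scenario is identified by (test, arch, machine, flavor).
Instance URL, group ID and build names below are placeholders.
"""
import requests

OPENQA = "https://openqa.example.com"  # placeholder instance URL


def scenarios(group_id: int, build: str) -> set:
    """Return the set of distinct scenarios that ran for a given build."""
    resp = requests.get(
        f"{OPENQA}/api/v1/jobs",
        params={"groupid": group_id, "build": build, "latest": 1},
        timeout=60,
    )
    resp.raise_for_status()
    jobs = resp.json().get("jobs", [])
    return {
        (j.get("test"), j["settings"].get("ARCH"),
         j["settings"].get("MACHINE"), j["settings"].get("FLAVOR"))
        for j in jobs
    }


def coverage_decrease(group_id: int, old_build: str, new_build: str) -> set:
    """Scenarios present in old_build but missing in new_build."""
    return scenarios(group_id, old_build) - scenarios(group_id, new_build)


if __name__ == "__main__":
    # Placeholder group ID and build names for illustration only.
    missing = coverage_decrease(group_id=1, old_build="20210301", new_build="20210308")
    for test, arch, machine, flavor in sorted(missing):
        print(f"coverage decreased: {test} on {arch}/{machine}/{flavor}")
    # A bot could raise an alert (AC1) unless each missing scenario is
    # explicitly referenced in an open ticket documenting the intended
    # removal (AC2).
```

The same scenario sets, grouped by product and code stream, could also be exported to a dashboard (e.g. Metabase) to cover the filtering and comparison use case in AC3.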
