[qam] lists issues in aggregate test
Currently we do not have a list of issues/updates/patches in aggregate tests; we only have it in incidents. It would be useful to have it easily available.
#1 Updated by pgeorgiadis over 4 years ago
I would like to note that a solution to this problem should also mitigate, at some point, the issue of releasing QAM regressions. To do so, it would be nice to handle the case where a test module (one that runs after the initial installation of the system) installs a package (or several, directly or indirectly as dependencies) and uses one of its binaries because it is required for the test itself. For example, in the case of 'php5_mysql', this test module runs after the 'update/installation' step, so a one-time check during the installation of the product wouldn't catch a regression here. The test module runs afterwards, requires and installs the 'php5-mysql' package, which triggers the installation of 'mysql', which in turn triggers the installation of 'mariadb', whose binary is used later on. At this point, the reviewer misses the information about which repository 'php5-mysql' comes from, and also misses the fact that 'mariadb' is related to the 'php5_mysql' test itself! As a result, in a scenario where the 'php5_mysql' test module fails, the reviewer misses the correlation with QAM. To solve this, I think we need to solve two sub-problems first:
A full list, per test module, of the packages whose binaries the test module uses.
e.g. in case of 'php5_mysql' test module, the list of packages would be something like: libmysqlclient18 libwrap0 mariadb mariadb-client mariadb-errormessages systemd apache2 apache2-prefork apache2-utils libapr-util1 libapr1 liblua5_2 libnghttp2-14 curl wget
Check whether any of the packages in that list are installed from the QAM Testing repository.
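The second sub-problem could be sketched roughly like this — a minimal, hypothetical Python illustration (not openQA code) that takes `zypper search --installed-only --details`-style output and picks out which of a test module's packages come from the added QAM repository. The repo alias `TEST_0` and the sample lines are assumptions for the example:

```python
QAM_REPO = "TEST_0"  # hypothetical alias of the added QAM Testing repository


def packages_from_repo(zypper_lines, packages, repo=QAM_REPO):
    """Return the subset of `packages` whose installed version comes from `repo`.

    Each line is expected to look like zypper's table output:
      i | mariadb | package | 10.2.1-1.1 | x86_64 | TEST_0
    """
    hits = set()
    for line in zypper_lines:
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 6:
            continue  # skip headers/separators
        status, name, _kind, _version, _arch, source = fields[:6]
        if status.startswith("i") and name in packages and source == repo:
            hits.add(name)
    return sorted(hits)


sample = [
    "i | mariadb        | package | 10.2.1-1.1 | x86_64 | TEST_0",
    "i | php5-mysql     | package | 5.5.14-1.2 | x86_64 | TEST_0",
    "i | systemd        | package | 228-150.1  | x86_64 | SLES12-SP3-Pool",
]
print(packages_from_repo(sample, {"mariadb", "php5-mysql", "systemd"}))
# → ['mariadb', 'php5-mysql']
```

With that subset in hand, a failing module could be cross-checked against it to establish the QAM correlation described above.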
As soon as we have this information available, we could emphasize it and make it visible in the following way: I would like to introduce the idea of a new kind of failure, called 'QAM Fail'. Using such a 'tag', we would be able to quickly spot problems related to our QAM updates and review the openQA results efficiently, minimizing the time spent and reducing the 'guesswork' factor.
Given that a test module installs and/or uses a binary of a package that comes from a QAM repository, when it fails, then it has to be tagged as 'QAM Fail'. To draw attention, a different color (e.g. purple) could be used to mark this kind of failure.
In that case, a quick glance at the result bar would already alert you: a 'purple' color means that this test suite contains a test that used a package from QAM, and something in that test failed.
Notice: this doesn't mean that the reviewer should check only 'QAM Fail' test cases and disregard all the others, but they should pay especially close attention to those 'QAM Fail' findings.
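The given/when/then rule above boils down to a small decision. Here is a hedged sketch of it — the function and color names are hypothetical, not an openQA API:

```python
QAM_COLOR = "purple"   # proposed marker for QAM-related failures
FAIL_COLOR = "red"     # ordinary failure
PASS_COLOR = "green"   # passing module


def result_color(failed, module_packages, qam_packages):
    """Pick a result-bar color: purple when a failed module used at least
    one package installed from the QAM Testing repository."""
    if not failed:
        return PASS_COLOR
    if set(module_packages) & set(qam_packages):
        return QAM_COLOR
    return FAIL_COLOR


# A failed 'php5_mysql' module that pulled in 'mariadb' from QAM:
print(result_color(True, ["php5-mysql", "mariadb"], {"mariadb"}))  # → purple
```

The intersection check is what encodes the "directly or indirectly as dependencies" part: 'mariadb' tags the failure even though the module only asked for 'php5-mysql'.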
#7 Updated by okurz over 4 years ago
- Category set to Enhancement to existing tests
OK, sorry. Have the corresponding tests been passing at any time? Then it would be "bugs in existing tests". I categorized it as "New test" because I think it's an extension of existing tests to cover more -> "new test" should cover this as well. If it is just an enhancement to existing tests without increasing the coverage, then it can be "enhancement to existing tests" too.
#8 Updated by osukup over 4 years ago
- Status changed from New to In Progress
- % Done changed from 0 to 100
okurz, no problem. In reality this was resolved about a month ago. The main problem was communication within the team.
--> all tests using patch_and_reboot (and, in the future, the sub 'add_test_repositories') log all available patches and packages in the added aggregate test repositories
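The logging described above could look roughly like this — a hedged Python sketch (the actual implementation is a Perl sub in the test distribution; repo names and the output format here are assumptions):

```python
def log_repo_contents(repos, available):
    """Return log lines listing available patches/packages per added test
    repository, as one might collect via `zypper patches` / `zypper packages`.

    `available` maps repo name -> list of patch/package identifiers.
    """
    lines = []
    for repo in repos:
        lines.append(f"Repository {repo}:")
        for item in available.get(repo, []):
            lines.append(f"  {item}")
    return lines


log = log_repo_contents(
    ["TEST_0"],
    {"TEST_0": ["SUSE-2017-1234 (patch)", "php5-mysql-5.5.14-1.2"]},
)
print("\n".join(log))
```

Having these lines in the job logs gives the reviewer exactly the missing correlation: which updates and packages the aggregate test run actually had available.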