action #97274
coordination #99303: [saga][epic] Future improvements for SUSE Maintenance QA workflows with fully automated testing, approval and release
qam dashboard improvement ideas
Description
Hello, while doing openQA review I have always used smelt comments to find out which test runs need to be checked to approve an update.
Ideally approval is automated, but when a single test fails (out of dozens/hundreds) it still takes some manual work to decide whether such a failure can be ignored for that particular test.
I won't go into cross-checking aggregate runs against previous days here (see https://progress.opensuse.org/issues/97118).
These are the current issues I found while using the dashboard for my week of review:
- Sorting order: On smelt I like to sort by priority or due date to get an idea of the situation. Neither is available on the dashboard; incidents are sorted by incident ID, which I do not care about.
- Missing Release Request ID: If I am given only an RR ID, I must go to smelt to find the corresponding incident, then come back to the dashboard.
- Result History: I can only see the latest results, which makes cross-checking different days more painful. I would be happier to see that automated (see the other poo linked earlier), but in the meantime it is simply more painful than before. I also only get a good overview of the situation near the end of the day, because in the morning all runs are still ongoing and I cannot do the review based on yesterday's results.
- Development Job Groups: Such job groups are not ignored, and some test job groups also end up mixed in. This creates confusion and wastes time.
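To make the wishlist above concrete, here is a minimal Python sketch of the requested behaviour: filter out development job groups, sort by priority and due date instead of incident ID, and resolve a release request ID to its incident. All field names, priorities and IDs are invented for illustration and do not reflect the actual dashboard or smelt schema.

```python
from datetime import date

# Hypothetical incident records; the real dashboard/smelt data model differs.
incidents = [
    {"incident": 21001, "rr_id": 260123, "priority": 600,
     "due": date(2021, 9, 20), "job_group": "Maintenance: SLE 15 SP3 Updates"},
    {"incident": 20987, "rr_id": 260098, "priority": 710,
     "due": date(2021, 9, 15), "job_group": "Development / Experimental"},
    {"incident": 21050, "rr_id": 260150, "priority": 710,
     "due": date(2021, 9, 14), "job_group": "Maintenance: SLE 12 SP5 Updates"},
]

# 1. Ignore development job groups instead of mixing them with production ones.
production = [i for i in incidents
              if not i["job_group"].startswith("Development")]

# 2. Sort by priority (assuming a higher number means more urgent, as on
#    smelt), then by due date, rather than by incident ID.
by_urgency = sorted(production, key=lambda i: (-i["priority"], i["due"]))

# 3. Look up an incident when only the release request (RR) ID is known.
rr_to_incident = {i["rr_id"]: i for i in incidents}

print(rr_to_incident[260123]["incident"])        # → 21001
print([i["incident"] for i in by_urgency])       # → [21050, 21001]
```

The same ordering and filtering applied server-side (or as sortable columns in the dashboard UI) would cover the first, second and fourth points directly.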
Extra thought:
The dashboard and smelt might be duplicating some work. Why not have a link in smelt to the list of related tests on the dashboard? I would use the indexing/priority/information on smelt and then go to the dashboard to check the tests, possibly with result history.
What I am basically asking for is the same features that smelt comments provide, whichever implementation is used.