coordination #96263

Updated by okurz over 2 years ago

There are certain Minion tasks which fail regularly, but there is not much we can do about them.

 * #96263 All OBS rsync-related tasks: Such failures seem to have no impact; apparently it is sufficient if the task succeeds on the next attempt or users are able to fix problems themselves.
 * #70774 `save_needle` tasks: The needles directory on OSD might just be misconfigured by the user, e.g. we recently had lots of failing jobs for the needles directory of a new distribution. The users were often able to fix the problem themselves after seeing the error (which is directly visible when saving a needle).
 * `finalize_job_results` tasks: I am not sure about this one. If the job just fails due to a user-provided hook script, it should not be our business. On the other hand, we also configure the hook script for the investigation ourselves and want to be informed about problems. (So likely we want to consider these failures after all.)
 * Jobs failing with the result `'Job terminated unexpectedly (exit code: 0, signal: 15)'`: This problem is independent of the task but is of course seen much more often on tasks we spawn many jobs of (e.g. `finalize_job_results` tasks) and tasks with possibly long-running jobs (e.g. `limit_assets`). I suppose the error just means the Minion worker was restarted, as signal 15 is `SIGTERM`. Since such tasks are either not very important or triggered periodically we can likely just ignore those failed jobs; a sketch of how such failures could be filtered out on the monitoring side follows this list.
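
On the monitoring side, a minimal sketch of such filtering could look like the following, assuming direct access to the openQA Minion backend in PostgreSQL; the task names on the ignore list and the connection string are illustrative, not the actual configuration.

```perl
#!/usr/bin/env perl
use Mojo::Base -strict;
use Minion;

# Illustrative ignore list based on the tasks mentioned above; the exact task
# names and the connection string are assumptions, not the real setup.
my @ignored_tasks  = qw(obs_rsync_run save_needle);
my $ignored_result = qr/Job terminated unexpectedly \(exit code: 0, signal: 15\)/;

my $minion = Minion->new(Pg => 'postgresql://openqa@/openqa');

# Count only failed jobs that are neither on the task ignore list nor killed
# by a worker restart (SIGTERM); everything else would be worth an alert.
my $relevant = 0;
my $jobs     = $minion->jobs({states => ['failed']});
while (my $info = $jobs->next) {
    next if grep { $_ eq $info->{task} } @ignored_tasks;
    my $result = $info->{result} // '';
    next if !ref $result && $result =~ $ignored_result;
    ++$relevant;
}
print "relevant failed Minion jobs: $relevant\n";
```

In practice the same filter could of course live in the SQL query the existing monitoring already uses, rather than in a separate script.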

 --- 

 Instead of implementing this on the monitoring level we could also change openQA's behavior so these jobs are not considered failing. This would allow for a finer distinction (e.g. jobs would still fail if there's an unhandled exception due to a real regression). The disadvantage would be that all openQA instances would be affected. However, that's maybe not a bad thing, and we could always make it configurable.
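
As a rough illustration of the configurable variant: a task could be registered through a wrapper that, when a config flag is set, turns a failure into a finished job with a note. The helper name and the config key `ignore_minion_failures` below are made up for this sketch.

```perl
use Mojo::Base -strict, -signatures;

# Hypothetical wrapper around Minion's add_task: when the assumed config flag
# is enabled, a dying task callback finishes the job with a note instead of
# failing it, so it no longer shows up as a failed Minion job.
sub register_tolerant_task ($app, $name, $cb) {
    $app->minion->add_task(
        $name => sub ($job, @args) {
            return if eval { $cb->($job, @args); 1 };
            my $error = $@;
            # A finer distinction could be made here, e.g. re-throwing anything
            # that looks like a real regression; this sketch only checks a switch.
            die $error unless $app->config->{global}{ignore_minion_failures};
            $job->note(ignored_failure => $error);
            $job->finish("ignored failure: $error");
        });
}
```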
