action #64881

Reconsider triggering cleanup jobs

Added by mkittler 10 months ago. Updated 3 months ago.

Category: Feature requests
Target version: Ready

Currently the cleanup jobs are triggered purely time-based using a systemd timer. So far this ticket is just a collection of ideas for how we can improve this.

The time-based trigger might not be frequent enough, especially if the cleanup is aborted in the middle for some reason. E.g. on o3 we see that results/screenshots can pile up a lot, which makes the error rate even higher because the cleanup jobs then take quite long (see …) and are possibly interrupted with the result "worker went away".

Triggering the cleanup blindly whenever new jobs are scheduled, as we did before, is not nice either. It means creating tons of Minion jobs which terminate immediately because a cleanup job is already running and which merely clutter the dashboard.

Acceptance criteria

  • AC1: Idle instances of openQA, e.g. personal single-user developer instances, only trigger cleanup jobs when quota usage is likely to change, e.g. when new builds or jobs are scheduled or jobs complete
  • AC2: Cleanup jobs are only triggered when a useful effect is expected, e.g. not 100 times in a row shortly after each other


One possibility to solve this would be that the jobs delete themselves if they can't acquire the lock. Another possibility would be acquiring the lock before creating the job: if that's not possible there will simply be no job (and if it is possible the created job needs to take over the lock).
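
A minimal sketch of the first variant, assuming a Mojolicious application with the Minion plugin loaded; the backend connection string, the task name limit_results, the lock name and the lock duration are placeholders rather than openQA's actual values:

    use Mojolicious::Lite -signatures;

    # Backend and connection string are assumptions made for this sketch
    plugin Minion => {Pg => 'postgresql://postgres@/minion_test'};

    # A cleanup task that "deletes itself" (finishes early) when another
    # cleanup job already holds the lock
    app->minion->add_task(limit_results => sub ($job, @args) {
        # guard() returns a scope guard or undef; undef means the lock is
        # currently held, so there is nothing useful for this job to do
        my $guard = $job->minion->guard('limit_results', 3600)
          or return $job->finish('Previous limit_results job is still active');

        # ... the actual cleanup work would happen here ...
    });

    app->start;

For the second variant the trigger code would acquire the lock itself before calling enqueue() and simply skip the enqueue when that fails.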

Note that triggering the cleanup more frequently will not magically solve problems without adjusting quotas for the result storage duration. Now that we keep track of the result size we could additionally add a size-based threshold for results (maybe specify a max. total size and a percentage for each job group).
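
As a rough illustration of such a threshold, assuming the result size per job group is already known and using invented configuration keys (max_result_size, group_percentage) that are not openQA's actual settings:

    use strict;
    use warnings;

    # Illustrative only: decide whether a size-based cleanup is worthwhile.
    # $group_sizes maps job group => result size in bytes.
    sub cleanup_needed {
        my ($group_sizes, $config) = @_;

        my $total = 0;
        $total += $_ for values %$group_sizes;

        # Trigger when the overall result size exceeds the configured maximum ...
        return 1 if $total > $config->{max_result_size};

        # ... or when a single group exceeds its configured share of that maximum
        for my $group (keys %$group_sizes) {
            my $share = $config->{group_percentage}{$group} // 100;
            return 1 if $group_sizes->{$group} > $config->{max_result_size} * $share / 100;
        }
        return 0;
    }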

Related issues

Related to openQA Project - action #76984: Automatically remove assets+results based on available free space (status: Feedback)


#1 Updated by okurz 10 months ago

  • Category set to Feature requests

#2 Updated by okurz 6 months ago

  • Target version set to Ready

#3 Updated by okurz 6 months ago

  • Description updated (diff)
  • Status changed from New to Workable

#5 Updated by okurz 6 months ago

We discussed this topic in the QA tools weekly meeting 2020-07-28. "Expiring jobs" would expire based on time alone and, as we want to look into event-based triggers, they are not that helpful. But as kraih explained we can go another way: we can create Minion jobs that create locks which expire after a time that can be longer than the runtime of the actual Minion job. So what we should do:

  • Add back the trigger for the cleanup job
  • Use a configurable "dead-time" for locks
  • Optional, after that: based on configuration, call "df", compare the free space with a configurable limit and only trigger a cleanup job if the threshold is exceeded (see the sketch below)
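
A rough sketch of such a trigger, assuming the Minion plugin is available and GNU df is used to determine disk usage; paths, names, the lock duration and the limit are example values only:

    use Mojo::Base -strict, -signatures;

    # Only enqueue a cleanup job when free space on the results partition
    # drops below a configurable limit and no other cleanup holds the lock
    sub trigger_cleanup ($minion, $config) {
        my $dir   = $config->{results_dir}      // '/var/lib/openqa/testresults';
        my $limit = $config->{min_free_percent} // 20;

        # Ask df for the used percentage of the relevant filesystem
        my ($used) = `df --output=pcent $dir` =~ /(\d+)%/;
        return undef unless defined $used;     # df failed, do nothing
        return undef if 100 - $used >= $limit; # enough free space left

        # "Dead-time": the lock only expires after a configurable period and is
        # deliberately not released here, so repeated events shortly after each
        # other do not enqueue useless jobs
        return undef
          unless $minion->lock('results_cleanup', $config->{lock_dead_time} // 7200);

        return $minion->enqueue('limit_results');
    }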

#6 Updated by okurz 3 months ago

  • Priority changed from Normal to Low

#7 Updated by okurz 3 months ago

  • Related to action #76984: Automatically remove assets+results based on available free space added

#8 Updated by okurz 3 months ago

  • Status changed from Workable to Blocked
  • Assignee set to okurz

I think we can look into #76984 first.
