coordination #64881
open
coordination #103941: [saga][epic] Scale up: Efficient, event-based handling of storage on new, clean instances
[epic] Reconsider triggering cleanup jobs
Added by mkittler almost 5 years ago.
Updated about 3 years ago.
Category:
Feature requests
Estimated time:
(Total: 0.00 h)
Description
Motivation¶
Currently the cleanup jobs are triggered time-based using a systemd timer. So far this ticket is just a collection of ideas on how we can improve that.
The time-based trigger might not be frequent enough, especially if the cleanup is aborted midway for some reason. E.g. on o3 we see that results/screenshots can pile up a lot, which makes the error rate even higher: as a consequence the cleanup jobs take quite long (see https://progress.opensuse.org/issues/55922) and are possibly interrupted with the result "worker went away".
Triggering the cleanup blindly when new jobs are scheduled, as we did before, is not nice either. It creates tons of Minion jobs which immediately terminate because a cleanup job is already running, and merely clutter the dashboard.
Acceptance criteria¶
- AC1: Idle instances of openQA, e.g. personal single-user developer instances, only trigger cleanup jobs when quota usage is likely to change, e.g. when new builds or jobs are scheduled or jobs complete
- AC2: Cleanup jobs are only triggered when a useful effect is expected, e.g. not 100 times in a row shortly after each other
Suggestions¶
One possibility to solve this would be that the jobs delete themselves if they can't acquire the lock. Another possibility would be to acquire the lock before creating the job; if that's not possible there will simply be no job (and if it is possible, the created job needs to take over the lock).
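A minimal sketch of the lock-before-enqueue variant, assuming a Mojolicious application with the Minion plugin loaded; the lock name, task name and expiration time are illustrative placeholders, not existing openQA code:

```perl
use Mojo::Base -strict;

# Assumes $app is a Mojolicious application with the Minion plugin loaded.
sub trigger_cleanup {
    my ($app) = @_;
    my $minion = $app->minion;

    # Only enqueue a cleanup job if no other cleanup currently holds the lock.
    # The lock expires on its own after one hour in case the job dies.
    return undef unless $minion->lock('cleanup', 3600, {limit => 1});
    return $minion->enqueue('limit_results_and_logs');
}

# Inside the task the lock is released (or re-acquired with a new expiration)
# once the actual work is done.
sub register_task {
    my ($app) = @_;
    $app->minion->add_task(limit_results_and_logs => sub {
        my ($job) = @_;
        # ... perform the actual cleanup here ...
        $job->app->minion->unlock('cleanup');
    });
}
```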
Note that triggering the cleanup more frequently will not magically solve problems without adjusting quotas for the result storage duration. Now that we keep track of the result size we could additionally add a size-based threshold for results (maybe specify a max. total size and a percentage for each job group).
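A purely illustrative sketch of such a size-based threshold; the constants and the way sizes are obtained are assumptions, not existing configuration options:

```perl
use Mojo::Base -strict;

# Hypothetical limits: a maximum total result size and a per-group percentage
use constant MAX_TOTAL_RESULT_SIZE => 500 * 1024**3;    # 500 GiB overall
use constant MAX_GROUP_PERCENTAGE  => 20;               # 20 % per job group

# Returns true if a result cleanup is expected to have a useful effect
sub cleanup_looks_useful {
    my ($total_size, $group_sizes) = @_;    # bytes, hashref of group name => bytes
    return 0 unless $total_size;
    return 1 if $total_size > MAX_TOTAL_RESULT_SIZE;
    for my $group (keys %$group_sizes) {
        return 1 if 100 * $group_sizes->{$group} / $total_size > MAX_GROUP_PERCENTAGE;
    }
    return 0;
}
```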
- Category set to Feature requests
- Target version set to Ready
- Description updated (diff)
- Status changed from New to Workable
We discussed this topic in the QA tools weekly meeting on 2020-07-28. "Expiring jobs" would expire based purely on time, and since we want to look into event-based triggers they are not that helpful. But as kraih explained we can go another way: Minion jobs can create locks whose expiration time is independent of, and can be longer than, the runtime of the actual Minion job. So what we should do:
- Add back trigger for cleanup job
- Use a configurable "dead-time" for locks
- Optional, after that: Based on configuration, call "df", compare the free space with a configurable limit and only trigger the cleanup job if the free space falls below that limit (see the sketch after this list)
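A minimal sketch of how the "dead-time" lock and the optional free-space check could fit together, assuming a Mojolicious application with the Minion plugin; the lock name, task name, thresholds and the df parsing are illustrative assumptions, not existing openQA code:

```perl
use Mojo::Base -strict;

my $DEAD_TIME_SECONDS = 30 * 60;         # configurable "dead-time" for the lock
my $MIN_FREE_BYTES    = 50 * 1024**3;    # optional free-space threshold

# Query available bytes via "df"; illustrative parsing of GNU coreutils output
sub free_bytes {
    my ($path) = @_;
    my $output = qx{df -B1 --output=avail $path 2>/dev/null} // '';
    my ($avail) = $output =~ /(\d+)\s*$/;
    return $avail // 0;
}

sub maybe_trigger_cleanup {
    my ($app, $results_dir) = @_;

    # Optional: do nothing while enough space is still available
    return undef if free_bytes($results_dir) > $MIN_FREE_BYTES;

    # The lock outlives the job itself, so even if the cleanup crashes it
    # cannot be re-triggered before the dead-time has passed
    return undef unless $app->minion->lock('cleanup_dead_time', $DEAD_TIME_SECONDS);
    return $app->minion->enqueue('limit_results_and_logs');
}
```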
- Priority changed from Normal to Low
- Related to coordination #76984: [epic] Automatically remove assets+results based on available free space added
- Status changed from Workable to Blocked
- Assignee set to okurz
I think we can look into #76984 first
- Status changed from Blocked to Feedback
We have not yet done #76984 but I think that #88121 brought us further.
@mkittler I would very much appreciate your feedback on how you see this ticket after #88121. We can discuss here or also have a video chat.
- Status changed from Feedback to Blocked
By default the cleanup systemd timers are started as a dependency of systemd/openqa-webui.service. Currently the space-aware cleanup is also triggered when the timers fire. We could instead trigger the space-aware cleanup when "jobs post" or "isos post" is called and no longer pull in the systemd timer by default.
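A sketch of what such an event-based trigger could look like; the event bus, the event names standing in for "jobs post"/"isos post" and the task name are assumptions, not necessarily the real openQA identifiers:

```perl
use Mojo::Base -strict;
use Mojo::EventEmitter;

# Stand-in for openQA's real event bus; the event names are assumptions
my $events = Mojo::EventEmitter->new;

sub register_cleanup_triggers {
    my ($app) = @_;
    for my $event (qw(job_create iso_create)) {
        $events->on($event => sub {
            # The dead-time lock keeps repeated posts from piling up Minion jobs
            return unless $app->minion->lock('cleanup_dead_time', 30 * 60);
            $app->minion->enqueue('limit_results_and_logs');
        });
    }
}
```

With a trigger like this in place, the dependency of openqa-webui.service on the cleanup timers could be made optional in the packaging.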
Waiting for #91782, which we see as related.
We also have the feature of keeping a minimum amount of free space but have not enabled it in production yet.
- Parent task set to #64746
- Status changed from Blocked to New
- Assignee deleted (okurz)
- Due date set to 2021-08-31
- Status changed from New to Feedback
- Assignee set to livdywan
Some thoughts from the planning poker:
- We may want backups in place for tackling this (#94555)
- Maybe we want event-based triggers instead of (systemd) timers?
- We could have a workshop about this topic? From the user story, personal setups, experience with production setups e.g. assets being cleaned up too soon - see workshop topics
I'll take the ticket and look into a workshop slot with @mkittler as a resident expert
- Status changed from Feedback to Resolved
We could have a workshop about this topic? From the user story, personal setups, experience with production setups e.g. assets being cleaned up too soon - see workshop topics
I'll take the ticket and look into a workshop slot with mkittler as a resident expert
Workshop slot taken for this Friday.
- Status changed from Resolved to Feedback
Meh. Why does Redmine keep flipping around states.
cdywan wrote:
We could have a workshop about this topic? From the user story, personal setups, experience with production setups e.g. assets being cleaned up too soon - see workshop topics
I'll take the ticket and look into a workshop slot with mkittler as a resident expert
Workshop slot taken for this Friday.
- Related to action #97304: Assets deleted even if there are still pending jobs size:M added
- Related to coordination #96974: [epic] Improve/reconsider thresholds for skipping cleanup added
Note that #97304 is not really related except for the fact that it is about cleanup. It is a problem independent of triggering the cleanup, which is what this ticket is about. It was also not a direct outcome of the workshop nor discussed there; I just looked at incompletes with the cleanup topic in mind.
Actually, #96974 is more related. It is basically about the very same problem as this ticket. I've created it to note down a few ideas I had.
- Related to deleted (action #97304: Assets deleted even if there are still pending jobs size:M)
- Subject changed from Reconsider triggering cleanup jobs to [epic] Reconsider triggering cleanup jobs
- Tracker changed from action to coordination
@cdywan With the single subtask resolved I think the next step could be a subtask about using your new feature switch as the default to cover AC1.
- Copied to action #101376: Use cleanup triggers on finished jobs by default added
- Status changed from Feedback to Blocked
- Parent task changed from #64746 to #103941
- Status changed from Blocked to New
- Assignee deleted (livdywan)
- Target version changed from Ready to future