[tools] configure vm settings for workers with rotating discs
Target version: openQA Project - Current Sprint
aarch64 machines in particular are too slow at syncing qemu, so we need to tweak their configs in salt.
This will cost performance, and possibly make the 'HMP timeout' issue more prominent, but it will also make
needle matching more predictable.
Jan Kara's recommendation, based on the experiments in https://github.com/os-autoinst/os-autoinst/pull/664, is:
- vm.dirty_bytes to 200000000 (~200 MB)
- vm.dirty_background_bytes to 50000000 (~50 MB)
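A minimal sketch of applying these values by hand (requires root; on the workers the real mechanism would be a salt state, and the sysctl.d file name here is an assumption):

```shell
# Apply Jan Kara's suggested values at runtime.
sysctl -w vm.dirty_bytes=200000000            # ~200 MB: hard limit before writers block
sysctl -w vm.dirty_background_bytes=50000000  # ~50 MB: background writeback starts here

# Persist across reboots; the drop-in file name is a hypothetical example.
printf '%s\n' 'vm.dirty_bytes = 200000000' \
              'vm.dirty_background_bytes = 50000000' \
              > /etc/sysctl.d/90-writeback.conf
```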
We only need this for the HDD hosts; having it on NVMe shouldn't hurt, but I can't say for sure.
- Status changed from Workable to Feedback
- Assignee set to okurz
- Target version set to Current Sprint
You did https://gitlab.suse.de/openqa/salt-states-openqa/merge_requests/215 and called it "Increase the dirty buffer size"; however, I believe you are actually decreasing it, as the values are lower than the defaults.
I have good experience with the following:
# https://askubuntu.com/questions/157793/why-is-swap-being-used-even-though-i-have-plenty-of-free-ram
# https://askubuntu.com/questions/440326/how-can-i-turn-off-swap-permanently
# https://superuser.com/questions/1115983/prevent-system-freeze-unresponsiveness-due-to-swapping-run-away-memory-usage
vm.dirty_background_ratio = 5
vm.dirty_ratio = 80
# okurz: 2019-01-04: Trying to prevent even more stuttering
# vm.swappiness = 10
# https://rudd-o.com/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that
vm.swappiness = 1
# did not actually experiment with finding a good value, just took the one from the above webpage
vm.vfs_cache_pressure = 50
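One caveat worth keeping in mind when comparing these settings with the merge request: the kernel treats vm.dirty_ratio/vm.dirty_bytes (and their background counterparts) as mutually exclusive, so writing one zeroes the other. A quick read-only check of which variant is currently in effect:

```shell
# Read the current writeback settings straight from procfs;
# a value of 0 for one variant means its counterpart is in effect.
for knob in dirty_ratio dirty_bytes dirty_background_ratio dirty_background_bytes; do
    printf 'vm.%s = %s\n' "$knob" "$(cat /proc/sys/vm/$knob)"
done
```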
As an alternative, we can decide that whenever we hit problems due to this, we simply need to buy more RAM.