action #32314
closed
action #32296: openvswitch salt recipe is 'unstable'
[salt] make GRE tunnels salt-states compatible with global worker configuration from pillars
Added by thehejik almost 7 years ago. Updated over 5 years ago.
Description
The problem is that in the past we defined, for every worker instance (an instance of a worker on a worker host), its own WORKER_CLASS in this form:
1:
  WORKER_CLASS: something
2:
  WORKER_CLASS: something_else
but now we have global settings (here for aarch64) that are valid for every worker instance (the number of instances is defined by numofworkers):
numofworkers: 20
global:
  WORKER_CLASS: qemu_aarch64,qemu_aarch64_slow_worker
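The salt states therefore need to expand the single global WORKER_CLASS into per-instance sections. A minimal shell sketch of that expansion, using hardcoded stand-ins for the pillar values above (the variable names and output format are illustrative assumptions, not the actual template):

```shell
# Stand-ins for the pillar data (numofworkers, global WORKER_CLASS)
numofworkers=3
WORKER_CLASS="qemu_aarch64,qemu_aarch64_slow_worker"

# Build one [N] section per worker instance, all sharing the same class
config=""
i=1
while [ "$i" -le "$numofworkers" ]; do
    config="${config}[${i}]
WORKER_CLASS = ${WORKER_CLASS}
"
    i=$((i + 1))
done
printf '%s' "$config"
```

In the real setup this expansion would happen in the salt/Jinja template that renders workers.ini, not in a shell loop.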
By the way, we should also restart the wickedd service once the initial multi-machine ovs setup is done, to get it working (see poo#32296).
- Related to action #31978: Multimachine configuration is busted for aarch64 added
Additional tasks for this poo:
- add the ability to remove GRE tunnels for workers that are no longer part of the multi-worker cluster
- perform "wicked ifup br1; systemctl restart wickedd" to bring the ovs bridge up at the end of the openvswitch.sls state
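The two tasks above could look roughly like the following shell sketch: drop GRE ports on br1 whose remote address is no longer in the cluster, then bring the bridge up and restart wickedd. The variable names, the gre* port naming, and the exact ovs-vsctl invocations are assumptions for illustration, not the actual salt state:

```shell
bridge=br1
current_remotes="10.0.0.2 10.0.0.3"   # remotes that should remain (example values)

# Decide whether a GRE remote address is still part of the cluster.
remote_is_wanted() {
    case " $current_remotes " in
        *" $1 "*) return 0 ;;
        *) return 1 ;;
    esac
}

# Only touch Open vSwitch when the tools are actually present.
if command -v ovs-vsctl >/dev/null 2>&1; then
    for port in $(ovs-vsctl list-ports "$bridge"); do
        case "$port" in gre*) ;; *) continue ;; esac
        remote=$(ovs-vsctl get interface "$port" options:remote_ip | tr -d '"')
        remote_is_wanted "$remote" || ovs-vsctl del-port "$bridge" "$port"
    done
    # bring the ovs bridge up at the end of the state, as noted above
    wicked ifup "$bridge"
    systemctl restart wickedd
fi
```

In practice the list of wanted remotes would come from the salt mine/pillar data for the worker cluster rather than a hardcoded variable.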
- Status changed from New to In Progress
- % Done changed from 0 to 30
- % Done changed from 30 to 90
Several things are still needed:
- reload the GRE config for the br1 interface in case the GRE remote addresses in /etc/wicked/scripts/pre_up (or their count) have changed - by restarting wickedd, or by "wicked ifdown br1" followed by "wicked ifup br1"
- removal of unused ifcfg-tap* devices
- maybe also a cleanup of /run/wicked/nanny for removed interfaces
- use STARTMODE='hotplug' instead of 'auto' for tap devices (then wicked ifreload all works correctly)
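The ifcfg-tap* cleanup and the STARTMODE change from the list above can be sketched as follows. The directory, the tap count, and the pruning-by-count logic are assumptions for illustration (the sketch uses a temporary directory standing in for /etc/sysconfig/network):

```shell
sysconfdir=$(mktemp -d)        # stands in for /etc/sysconfig/network
wanted_taps=2                  # taps needed by the current worker setup

# Pretend three tap devices were configured by a previous state run.
for n in 0 1 2; do
    printf "STARTMODE='auto'\n" > "$sysconfdir/ifcfg-tap$n"
done

# Rewrite the wanted taps with STARTMODE='hotplug'; remove the rest.
n=0
for f in "$sysconfdir"/ifcfg-tap*; do
    if [ "$n" -lt "$wanted_taps" ]; then
        printf "STARTMODE='hotplug'\n" > "$f"
    else
        rm -f "$f"
    fi
    n=$((n + 1))
done
```

After such a cleanup, "wicked ifreload all" should pick up the changes, which is the point of the STARTMODE='hotplug' item above.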
- Related to action #33253: [salt] add support for multiple multi-host worker clusters - connect multiple workers using GRE within the same WORKER_CLASS added
- Project changed from openQA Project (public) to openQA Infrastructure (public)
- Status changed from In Progress to Resolved
- % Done changed from 90 to 100