action #32314

action #32296: openvswitch salt recipe is 'unstable'

[salt] make GRE tunnels salt-states compatible with global worker configuration from pillars

Added by thehejik over 3 years ago. Updated about 2 years ago.

Status: Resolved
Priority: Normal
Assignee:
Target version: -
Start date: 2018-02-26
Due date:
% Done: 100%
Estimated time:

Description

The problem is that in the past every worker instance (an instance of the openQA worker on a worker host) had its own WORKER_CLASS defined in the pillar, in the form:

1:
  WORKER_CLASS: something
2:
  WORKER_CLASS: something_else

but now we have global settings (here for aarch64) that are valid for every worker instance (their count is defined by numofworkers):

numofworkers: 20
global:
  WORKER_CLASS: qemu_aarch64,qemu_aarch64_slow_worker
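
For illustration, a worker configuration template could resolve the value per instance with a fallback to the global section roughly like this (a minimal Jinja sketch only; the pillar key 'workerconf' and the use of grains['id'] are assumptions for illustration, not necessarily how the salt-states-openqa repository lays this out):

{#- sketch: render one openQA worker section per instance; a per-instance
    WORKER_CLASS (old format) overrides the global one (new format).
    'workerconf' and grains['id'] are illustrative assumptions. -#}
{%- set host = pillar.get('workerconf', {}).get(grains['id'], {}) %}
{%- set global_settings = host.get('global', {}) %}
{%- for i in range(1, host.get('numofworkers', 0) + 1) %}
[{{ i }}]
WORKER_CLASS = {{ host.get(i, {}).get('WORKER_CLASS', global_settings.get('WORKER_CLASS', '')) }}
{%- endfor %}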

Btw we should also restart the wickedd service once the initial multi-machine ovs setup is done to get it working (see poo#32296)


Related issues

Related to openQA Infrastructure - action #31978: Multimachine configuration is busted for aarch64 (Resolved, 2018-02-19)

Related to openQA Infrastructure - action #33253: [salt] add support for multiple multi-host worker clusters - connect multiple workers using GRE within the same WORKER_CLASS (Resolved, 2018-03-14)

History

#1 Updated by thehejik over 3 years ago

  • Related to action #31978: Multimachine configuration is busted for aarch64 added

#2 Updated by thehejik over 3 years ago

Additional tasks for this poo:

  • add the ability to remove GRE tunnels for workers that are no longer part of the multi-worker cluster
  • perform "wicked ifup br1; systemctl restart wickedd" at the end of the openvswitch.sls state to bring the ovs bridge up (see the sketch below)
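
A minimal sketch of what the second item could look like as a state appended to openvswitch.sls (the state id and the onchanges requisite on the pre_up script are assumptions, not the final implementation):

# bring the ovs bridge up and restart wickedd, but only when the
# generated GRE configuration actually changed (requisite is assumed)
bring_up_gre_bridge:
  cmd.run:
    - name: wicked ifup br1; systemctl restart wickedd
    - onchanges:
      - file: /etc/wicked/scripts/pre_up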

#3 Updated by thehejik over 3 years ago

  • Status changed from New to In Progress
  • % Done changed from 0 to 30

The salt state can now read a pillar global: WORKER_CLASS definition, see https://gitlab.suse.de/thehejik/salt-states-openqa/commit/21224ee547fc39b4e715ceb50dae6a46fb30f6a3

The rest is still in progress.

#5 Updated by thehejik over 3 years ago

  • % Done changed from 30 to 90

#6 Updated by thehejik about 3 years ago

Several things are still needed:

  • reload the GRE configuration for the br1 interface whenever the GRE remote addresses in /etc/wicked/scripts/pre_up (or their count) have changed - by restarting wicked, or by wicked ifdown br1 followed by ifup br1
  • removal of unused ifcfg-tap* devices
  • maybe also a cleanup of /run/wicked/nanny for removed interfaces
  • use STARTMODE='hotplug' instead of 'auto' for tap devices, so that wicked ifreload all works correctly (see the sketch below)
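
A rough sketch of the last two items as salt states (the tap device names and the file contents are illustrative assumptions only):

# sketch: manage a tap device with STARTMODE='hotplug' so that
# 'wicked ifreload all' can apply configuration changes
/etc/sysconfig/network/ifcfg-tap0:
  file.managed:
    - contents: |
        BOOTPROTO='none'
        STARTMODE='hotplug'

# sketch: drop the ifcfg file of a tap device that is no longer needed
/etc/sysconfig/network/ifcfg-tap64:
  file.absent

# re-read the interface configuration when any of the files above changed
reload_wicked_config:
  cmd.run:
    - name: wicked ifreload all
    - onchanges:
      - file: /etc/sysconfig/network/ifcfg-tap0
      - file: /etc/sysconfig/network/ifcfg-tap64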

#7 Updated by thehejik about 3 years ago

  • Related to action #33253: [salt] add support for multiple multi-host worker clusters - connect multiple workers using GRE within the same WORKER_CLASS added

#9 Updated by okurz about 2 years ago

  • Project changed from openQA Project to openQA Infrastructure

#10 Updated by thehejik about 2 years ago

  • Status changed from In Progress to Resolved
  • % Done changed from 90 to 100

This is solved, closing
