I did some initial experiments with the tunnels and openvswitch. The setup should be similar to openQA, but have some (important) differences. I will address them in the next step.
Initial setup for all experiments¶
# Enable ip forwarding
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
# Install and enable openvswitch
zypper in openvswitch3
systemctl enable --now openvswitch
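Before running the experiments, it may be worth confirming that the prerequisites actually took effect (a quick sanity-check sketch; assumes a systemd host with openvswitch already installed):

```shell
# Both forwarding sysctls should print 1
sysctl -n net.ipv4.ip_forward
sysctl -n net.ipv6.conf.all.forwarding
# The ovs-vswitchd service should report "active"
systemctl is-active openvswitch
# An empty database listing confirms the daemon answers
ovs-vsctl show
```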
Experiments¶
| Host | Network address | Bridge address | Remote IP |
|------|-----------------|-----------------|-----------|
| A | 192.0.2.1/24 | 192.168.42.1/24 | 192.0.2.2 |
| B | 192.0.2.2/24 | 192.168.43.1/24 | 192.0.2.1 |
Note: instead of having two /24 networks, it is also possible to assign addresses from one bigger network (which has the benefit of not needing explicit route assignment).
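The `ipv4.routes 192.168.42.0/23` argument used in the commands below works because that single /23 supernet covers both bridge networks. A /23 mask zeroes only the lowest bit of the third octet, which a small shell sketch can verify (illustrative only):

```shell
#!/bin/sh
# 254 (0xFE) is the /23 mask applied to the third octet:
# 42 & 254 = 42 and 43 & 254 = 42, so both /24 networks
# fall inside 192.168.42.0/23 and one route covers both.
for octet in 42 43; do
    network=$(( octet & 254 ))
    echo "192.168.$octet.0/24 is inside 192.168.$network.0/23"
done
```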
Simple scenario¶
Two servers, each with a single bridge, connected with a GRE tunnel.
# Create bridge and tunnel
nmcli con add type bridge con.int br0 bridge.stp yes ipv4.method manual ipv4.address "$bridge_address" ipv4.routes 192.168.42.0/23
nmcli con add type ip-tunnel mode gretap con.int gre1 master br0 remote "$remote_ip"
# Test the tunnel with ping
# -M do -- prohibit fragmentation
# -s xxxx -- set packet size
ping -c 3 -M do -s 1300 192.168.42.1
ping -c 3 -M do -s 1300 192.168.43.1
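The `-s 1300` value is chosen to stay safely below the tunnel MTU. On a 1500-byte path, gretap over IPv4 costs 20 bytes of outer IP header, 4 bytes of GRE header, and 14 bytes of inner Ethernet header, and the ICMP payload additionally sits behind a 20-byte inner IP header and an 8-byte ICMP header. A sketch of the arithmetic (assumes no GRE key or checksum options, which would add further overhead):

```shell
#!/bin/sh
path_mtu=1500
outer_ip=20; gre=4; inner_eth=14   # gretap encapsulation overhead
inner_ip=20; icmp=8                # headers inside the tunnel
tunnel_mtu=$(( path_mtu - outer_ip - gre - inner_eth ))
max_payload=$(( tunnel_mtu - inner_ip - icmp ))
echo "tunnel MTU: $tunnel_mtu, max unfragmented ping -s: $max_payload"
```

So with these assumptions the largest `ping -M do -s` that passes unfragmented is 1434 bytes; anything larger fails with "message too long".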
Scenario with openvswitch¶
Two servers, each with one virtual bridge, connected with a GRE tunnel.
# Create bridge, port and interface
nmcli con add type ovs-bridge con.int br0 ovs-bridge.stp-enable yes
nmcli con add type ovs-port con.int br0 con.master br0
nmcli con add type ovs-interface con.int br0 con.master br0 ipv4.method manual ipv4.address "$bridge_address" ipv4.routes 192.168.42.0/23
# Create GRE tunnel
nmcli con add type ovs-port con.int gre1 con.master br0
nmcli con add type ip-tunnel mode gretap con.int gre1 master gre1 remote "$remote_ip"
# Test the tunnel
ping -c 3 -M do -s 1300 192.168.42.1
ping -c 3 -M do -s 1300 192.168.43.1
# ovs-vsctl show
de1f31e9-1b51-4cc3-954a-4e037191ac07
Bridge br0
Port br0
Interface br0
type: internal
Port gre1
Interface gre1
type: system
ovs_version: "3.1.0"
GRE tunnel made in openvswitch¶
openvswitch uses flow-based GRE tunneling, i.e. a single gre_sys interface serves all tunnels; the tunnel itself can be created with ovs-vsctl. After that, everything works as expected.
# Create bridge, port and interface
nmcli con add type ovs-bridge con.int br0 ovs-bridge.stp-enable yes
nmcli con add type ovs-port con.int br0 con.master br0
nmcli con add type ovs-interface con.int br0 con.master br0 ipv4.method manual ipv4.address "$bridge_address" ipv4.routes 192.168.42.0/23
# Create GRE tunnel
ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre options:remote_ip="$remote_ip"
# Test the tunnel
ping -c 3 -M do -s 1300 192.168.42.1
ping -c 3 -M do -s 1300 192.168.43.1
# ovs-vsctl show
de1f31e9-1b51-4cc3-954a-4e037191ac07
Bridge br0
Port br0
Interface br0
type: internal
Port gre1
Interface gre1
type: gre
options: {remote_ip="192.0.2.2"}
ovs_version: "3.1.0"
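With the flow-based approach, the per-tunnel parameters live in the OVSDB rather than on a dedicated kernel netdev, so they are inspected a bit differently (sketch; interface names as configured above):

```shell
# Tunnel parameters are stored on the OVS interface record
ovs-vsctl get interface gre1 type options
# All flow-based GRE tunnels share the single kernel device gre_sys
ip -d link show gre_sys
```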
openQA-like¶
Each worker has the same 10.0.2.2/15 address set on the bridge interface, plus some extra openvswitch "magic" (i.e. os-autoinst-openvswitch) that allows the SUT to contact the worker machine via a common address. This unfortunately renders the IP unusable for any inter-machine communication (pinging 10.0.2.2 from 10.0.2.2 simply cannot work).
Maybe the solution here is simply to add an extra unique address to the bridge interface, which can then be used for network checks. The added address does not even need to be from the same address space, as long as the routing tables are correct.
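A sketch of that idea, assuming each worker already knows a unique numeric id (the `worker_id` variable and the 192.168.100.0/24 range here are made-up examples, not openQA conventions):

```shell
#!/bin/sh
worker_id=7                                   # hypothetical unique per-worker number
check_addr="192.168.100.$worker_id/32"
echo "extra check address: $check_addr"
# Append it as a secondary address on the existing bridge connection
# (commented out here; requires NetworkManager and the br0 connection):
# nmcli con modify br0 +ipv4.addresses "$check_addr"
# nmcli con up br0
```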