tickets #161411 (open)

Dedicated networks for openSUSE GitHub Runners

Added by SchoolGuy 27 days ago. Updated 21 days ago.

Status: In Progress
Priority: Normal
Assignee: crameleon
Category: Network
Target version: -
Start date: 2024-06-03
Due date: -
% Done: 40%
Estimated time: -

Description

The SUSE Labs department will sponsor an unused old four-node chassis for use as GitHub Runners. Maintenance will be done by me (Enno Gotthold/SchoolGuy) during my work hours. One of the nodes will be used for the Cobbler org; the other three can be freely integrated into the openSUSE GitHub org.

As GitHub Runners execute untrusted code by design, they should be isolated as much as possible. I am proposing a VLAN for each GitHub org (one for Cobbler and one for openSUSE).

The idea is to use https://github.com/actions/actions-runner-controller on top of a k3s cluster to manage the runners. Furthermore, I would like to use MicroOS as the base OS.
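
For illustration, deploying the controller on such a k3s cluster could look roughly like the following (a sketch following the upstream quick start; namespaces, release names, and the token value are placeholders):

  # install the runner scale set controller from its OCI Helm chart
  helm install arc --namespace arc-systems --create-namespace \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller

  # one runner scale set per GitHub org; credentials go into githubConfigSecret
  helm install arc-runner-set --namespace arc-runners --create-namespace \
    --set githubConfigUrl="https://github.com/openSUSE" \
    --set githubConfigSecret.github_token="<token>" \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set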

The host is not yet configured with a static network configuration. The four nodes each have a dedicated BMC that only offers a Java Web Start-based UI for machine access.


Related issues (1 open, 0 closed)

Precedes openSUSE admin - tickets #161963: Prepare GitHub runner servers (In Progress, crameleon, 2024-06-04)

Actions #1

Updated by SchoolGuy 27 days ago · Edited

Of course, for each VLAN we will need a network. The machine currently has an outdated hostname in the SUSE-internal RackTables. I would propose naming the chassis "gh-runner-chassis-01" and the nodes "gh-runner-01", "gh-runner-02", and so on with ascending numbers.

Actions #2

Updated by crameleon 27 days ago

  • Category set to Network
  • Private changed from Yes to No

Actions #3

Updated by crameleon 23 days ago

Hi Enno,

I will try to configure the network soon. From reading your SUSE ticket, I should probably be able to find the physical connections in SUSE RackTables. Is there any networking already configured that I could use to connect to the BMCs and then set the correct addresses for our management network? If not, I could spawn a temporary DHCP server.
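
A throwaway dnsmasq instance along these lines would probably do (interface name and address range are only placeholders):

  # foreground, DNS disabled, short leases on the port facing the BMCs
  dnsmasq --no-daemon --port=0 --interface=eth1 \
    --dhcp-range=192.168.100.50,192.168.100.80,1h \
    --dhcp-leasefile=/tmp/bmc-leases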

I understand why MicroOS would be a good candidate for this application. However, I had a terrible experience integrating it with our infrastructure in the past. A lot of the Salt states either do not support transactional operation at all or require dirty hacks. Also, a lot of packages are not included in the base distribution, which required maintaining a separate project with various links: https://build.opensuse.org/project/show/openSUSE:infrastructure:Micro. It eventually led me to move the two servers I had tried it with back to Leap and to give up on the effort to make it work.
Hence I suggest making your servers Leap-based as well, but confining the relevant services with systemd hardening and AppArmor.
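
As a rough sketch of what I mean (the unit name is hypothetical and the exact set of directives would need tuning for the actual runner service):

  # e.g. a drop-in /etc/systemd/system/github-runner.service.d/hardening.conf
  [Service]
  NoNewPrivileges=yes
  ProtectSystem=strict
  ProtectHome=yes
  PrivateTmp=yes
  PrivateDevices=yes
  RestrictSUIDSGID=yes
  # profile to be written separately
  AppArmorProfile=github-runner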

I have an AutoYaST profile we can use for deployment of the base OS (there's currently no network boot server in our infrastructure since we rarely ever have new hardware, hence I'd just load it with an image through the BMC, if possible).
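
For reference, with the installation image attached through the BMC, pointing the installer at the profile is just a matter of boot options along these lines (the URL is a placeholder):

  # appended to the installer kernel command line
  autoyast=https://<internal-webserver>/autoyast/runner.xml ifcfg=eth0=dhcp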

The names are fine with me.

Actions #4

Updated by crameleon 23 days ago

On second thought, I wonder if the names shouldn't be something more generic.
I know we will only use these machines as GitHub runners now, but I have this fear of us finding a new purpose for them at some point in the future, making the names no longer make sense. ;-)

Actions #5

Updated by SchoolGuy 23 days ago

crameleon wrote in #note-3:

Hi Enno,

I will try to configure the network soon. From reading your SUSE ticket, I should probably be able to find the physical connections in SUSE RackTables. Is there any networking already configured that I could use to connect to the BMCs and then set the correct addresses for our management network? If not, I could spawn a temporary DHCP server.

I understand why MicroOS would be a good candidate for this application. However, I had a terrible experience integrating it with our infrastructure in the past. A lot of the Salt states either do not support transactional operation at all or require dirty hacks. Also, a lot of packages are not included in the base distribution, which required maintaining a separate project with various links: https://build.opensuse.org/project/show/openSUSE:infrastructure:Micro. It eventually led me to move the two servers I had tried it with back to Leap and to give up on the effort to make it work.
Hence I suggest making your servers Leap-based as well, but confining the relevant services with systemd hardening and AppArmor.

I have an AutoYaST profile we can use for deployment of the base OS (there's currently no network boot server in our infrastructure since we rarely ever have new hardware, hence I'd just load it with an image through the BMC, if possible).

The names are fine with me.

Feel free to go ahead with Leap; I just wanted to save myself a bit of maintenance. The BMCs should have DHCP enabled, so spawning a temporary DHCP server should make them accessible. I will give you the username and password via the work messenger.

Actions #6

Updated by SchoolGuy 23 days ago

crameleon wrote in #note-4:

On second thought, I wonder if the names shouldn't be something more generic.
I know we will only use these machines as GitHub runners now, but I have this fear of us finding a new purpose for them at some point in the future, making the names no longer make sense. ;-)

I have no hard feelings about other names; it was just an idea of mine. I don't know whether we have a naming scheme in the openSUSE infra, but if we do, feel free to apply it.

Actions #7

Updated by crameleon 23 days ago

Thanks, found the credentials. Will try them soon and let you know.

The naming scheme is sometimes service-related and sometimes just creative. For physical machines it is usually the latter (as I feel those are more involved to relabel down the line). What about apollo-chassis + apollo0{1,2,3,4}?

Actions #8

Updated by crameleon 23 days ago

Actions #9

Updated by crameleon 23 days ago

  • Status changed from New to In Progress
  • Assignee set to crameleon
  • % Done changed from 0 to 10

Ports on management switches configured.

Actions #10

Updated by crameleon 22 days ago · Edited

  • % Done changed from 10 to 20

Created network allocations:

2a07:de40:b27e:1207::/64 - Machine network for Cobbler runners
https://netbox.infra.opensuse.org/ipam/prefixes/35
with VLAN 1207 openSUSE-GHR-Cobbler (https://netbox.infra.opensuse.org/ipam/vlans/33)

2a07:de40:b27e:1208::/64 - Machine network for openSUSE runners
https://netbox.infra.opensuse.org/ipam/prefixes/36
with VLAN 1208 openSUSE-GHR-openSUSE (https://netbox.infra.opensuse.org/ipam/vlans/34)

2a07:de40:b27e:4003::/64 - K3S Cluster network for Cobbler runners
https://netbox.infra.opensuse.org/ipam/prefixes/37

2a07:de40:b27e:4004::/64 - K3S Service network for Cobbler runners
https://netbox.infra.opensuse.org/ipam/prefixes/38

2a07:de40:b27e:4005::/64 - K3S Cluster network for openSUSE runners
https://netbox.infra.opensuse.org/ipam/prefixes/39

2a07:de40:b27e:4006::/64 - K3S Service network for openSUSE runners
https://netbox.infra.opensuse.org/ipam/prefixes/40

For configuring K3S networking, https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking should be followed (we don't use router advertisements, so the warning there is not relevant).
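
For the openSUSE runner cluster, the relevant k3s flags would then look roughly like this (sketch only; the service range is narrowed to a /112 out of its /64 because kube-apiserver limits the size of the service CIDR, and pod/node CIDR sizing may need further tuning for a multi-node cluster):

  # hypothetical single-stack IPv6 invocation, prefixes from the allocations above
  k3s server \
    --cluster-cidr=2a07:de40:b27e:4005::/64 \
    --service-cidr=2a07:de40:b27e:4006::/112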

Actions #11

Updated by crameleon 22 days ago

  • % Done changed from 20 to 30

Patch for routing configuration and firewall baseline submitted as https://gitlab.infra.opensuse.org/infra/salt/-/merge_requests/1917.
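
Not the actual contents of the merge request, but the gist of such a baseline, sketched as nftables rules (assuming an existing "inet filter" table with a forward chain; the protected infrastructure prefix is a placeholder):

  # placeholder for whatever wider infrastructure prefix must stay unreachable from the runners
  INFRA6=2a07:de40::/32
  # drop forwarding from the runner VLANs towards infrastructure networks ...
  nft add rule inet filter forward ip6 saddr 2a07:de40:b27e:1207::/64 ip6 daddr $INFRA6 drop
  nft add rule inet filter forward ip6 saddr 2a07:de40:b27e:1208::/64 ip6 daddr $INFRA6 drop
  # ... but let them reach the internet
  nft add rule inet filter forward ip6 saddr 2a07:de40:b27e:1207::/64 accept
  nft add rule inet filter forward ip6 saddr 2a07:de40:b27e:1208::/64 accept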

Actions #12

Updated by crameleon 21 days ago

  • % Done changed from 30 to 40

Configured VLANs and ports on switches. Prepared MC-LAG for the one working node.
