action #151396
closedQA (public) - coordination #121720: [saga][epic] Migration to QE setup in PRG2+NUE3 while ensuring availability
Now that osiris is in salt, decide about the fate of seth
Description
Motivation
See #151390 for context first. Now that osiris is in salt we should decide what to do with seth. We could add it to salt and keep it running idle, but that would just waste power. Alternatively, we could power down the machine, try to bring back the libvirt hot-redundancy cluster approach, or use fancier software such as a Harvester cluster.
Acceptance criteria
- AC1: seth is used for a useful purpose, not just wasting power
- AC2: racktables is up-to-date
Suggestions
- Discuss alternatives
- Select an approach and execute it
- Ensure racktables is up-to-date
- Update the wiki instructions accordingly on https://wiki.suse.net/index.php/SUSE-Quality_Assurance/Labs/QA-SLE_cluster
Updated by okurz over 1 year ago
- Copied from action #151390: Brute-force salt osiris so that we enable self-management of VMs for users size:M added
Updated by okurz over 1 year ago
- Status changed from New to In Progress
- Assignee set to okurz
- Target version changed from future to Ready
osiris is in salt, continuing with seth.
Updated by okurz over 1 year ago
salt state cleanly applied. Starting auto-upgrade service.
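For reference, bringing a freshly added machine under salt management typically looks like the following sketch. The minion ID `seth.qe.suse.de` and the timer unit name are assumptions for illustration, not taken from the ticket:

```shell
# On the salt master: accept the new minion's key, then apply the full state.
# "seth.qe.suse.de" is a hypothetical minion ID used for illustration.
salt-key --accept seth.qe.suse.de

# Apply the high state (equivalent to state.highstate) to the new minion.
salt 'seth.qe.suse.de' state.apply

# Enable and start the automatic-upgrade service mentioned above;
# the exact unit name depends on the local setup and is an assumption here.
salt 'seth.qe.suse.de' service.enable auto-update.timer
salt 'seth.qe.suse.de' service.start auto-update.timer
```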
Updated by openqa_review over 1 year ago
- Due date set to 2023-12-12
Setting due date based on mean cycle time of SUSE QE Tools
Updated by okurz over 1 year ago
- Due date deleted (
- Due date deleted (2023-12-12)
- Status changed from In Progress to Resolved
Up-to-date in salt but then powered off and removed from salt keys again as the machine is currently not in active use. Updated racktables with description "Used as VM hypervisor, see https://wiki.suse.net/index.php/SUSE-Quality_Assurance/Labs/QA-SLE_cluster, cold-redundancy for osiris as we couldn't get the failover-cluster to work properly".
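Taking a machine back out of salt, as described above, can be sketched with the following commands. The minion ID is hypothetical; the actual hostname may differ:

```shell
# Gracefully power the machine off via salt while it is still reachable.
# "seth.qe.suse.de" is a hypothetical minion ID used for illustration.
salt 'seth.qe.suse.de' system.poweroff

# On the salt master: delete the minion's key so the master no longer
# tries to target the powered-off machine.
salt-key --delete seth.qe.suse.de
```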