action #116794
Bring back grenache.qa.suse.de + grenache-1.qa.suse.de (closed)
Description
Motivation
After the situation within SUSE Maxtorhof SRV2 was mitigated, we can use grenache again.
Acceptance criteria
- AC1: grenache-1 is up, fully reachable and is executing openQA jobs
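A quick way to verify AC1 could look like the following sketch (hostnames are taken from the ticket; the systemd unit name appears further down in the comments):

    # reachability of the worker host
    ping -c 1 grenache-1.qa.suse.de

    # worker services running on the machine
    ssh grenache-1.qa.suse.de "systemctl list-units 'openqa-worker-auto-restart@*' --no-pager"

    # finally, check in the openQA web UI on openqa.suse.de that the worker instances show up and pick up jobs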
Suggestions
- Read instructions from https://gitlab.suse.de/openqa/salt-pillars-openqa/-/blob/master/openqa/workerconf.sls#L1128 (see the sketch after this list)
- Unpause related alerts
- Update parent ticket #114379 accordingly
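For the first suggestion, a minimal sketch of how to inspect the referenced worker configuration, assuming a local checkout of the pillar repository (the grep context size is arbitrary):

    # clone the pillar repository referenced above and locate the grenache-1 worker configuration
    git clone https://gitlab.suse.de/openqa/salt-pillars-openqa.git
    grep -n -A 20 'grenache-1' salt-pillars-openqa/openqa/workerconf.sls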
Updated by okurz about 2 years ago
- Project changed from 175 to openQA Infrastructure
Updated by nicksinger about 2 years ago
- Status changed from New to In Progress
- Assignee set to nicksinger
Updated by nicksinger about 2 years ago
grenache was already running, but the LPAR was stuck and I had to reboot it with pvmctl lpar power-off --restart --hard -i id=3. I accepted the salt key of grenache-1 again on OSD and had to reinstall salt-minion on grenache-1 (it was missing there). Afterwards a quick salt 'grenache-1.qa.suse.de' test.ping worked. Currently running a state.highstate to apply all configuration changes from the past.
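For reference, the recovery described above corresponds roughly to this sketch; the zypper-based reinstall of salt-minion is an assumption, and the pvmctl command needs to run on the host managing the LPAR while the salt commands run on OSD:

    # reboot the stuck LPAR (id taken from the comment above)
    pvmctl lpar power-off --restart --hard -i id=3

    # on OSD: accept the minion key again
    salt-key -a grenache-1.qa.suse.de

    # on grenache-1: reinstall the missing salt-minion (assumed to be done via zypper)
    zypper in salt-minion
    systemctl enable --now salt-minion

    # on OSD: verify connectivity and apply all pending configuration
    salt 'grenache-1.qa.suse.de' test.ping
    salt 'grenache-1.qa.suse.de' state.highstate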
Updated by nicksinger about 2 years ago
- Status changed from In Progress to Feedback
- Assignee changed from nicksinger to rainerkoenig
Had to manually unmask and enable openqa-worker-auto-restart@{21..28}.service; hopefully I didn't overlook a reason why it was masked in the past.
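A minimal sketch of that step, assuming it was done with plain systemctl on grenache-1:

    # unmask and start worker instances 21-28 (brace expansion is handled by the shell)
    systemctl unmask openqa-worker-auto-restart@{21..28}.service
    systemctl enable --now openqa-worker-auto-restart@{21..28}.service

    # confirm the instances are up
    systemctl --no-pager status openqa-worker-auto-restart@{21..28}.service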
@rainerkoenig I quickly checked with https://openqa.suse.de/tests/9518325 that jobs are able to run at all. Could you please check whether everything works as you expect?
Updated by nicksinger about 2 years ago
- Status changed from Feedback to In Progress
- Assignee changed from rainerkoenig to nicksinger
Updated by okurz about 2 years ago
- Related to action #113701: [qe-core] Move workers back to grenache added
Updated by openqa_review about 2 years ago
- Due date set to 2022-10-05
Setting due date based on mean cycle time of SUSE QE Tools
Updated by nicksinger about 2 years ago
- Status changed from In Progress to Resolved
The machine seems to work perfectly again, and moving workers back onto it is covered by another ticket, so we can close this one here :)