action #132617

coordination #121720: [saga][epic] Migration to QE setup in PRG2+NUE3 while ensuring availability

coordination #153685: [epic] Move from SUSE NUE1 (Maxtorhof) to PRG2e

Move of selected LSG QE machines NUE1 to PRG2e size:M

Added by okurz 10 months ago. Updated about 2 months ago.

Status: Resolved
Priority: Low
% Done: 0%

Description

Motivation

NUE1 needs to be emptied. Some machines were selected to be moved to "PRG2e", aka "PRG2 Colo Extension". Assuming nobody does the job for us, we need to unrack the machines and organize the move with Facilities and SUSE-IT.

Acceptance criteria

Suggestions

  • Ask hreinecke, SUSE-IT, wengel and others how this move is organized
  • As necessary, organize transport of equipment: create a ticket on https://sd.suse.com for the component "Facilities", asking them how and where to prepare machines for the move, and ask them to move the equipment to the FC Basement
  • As necessary: go to the NUE1 Maxtorhof location beforehand and prepare the move, e.g. ensure nothing is connected anymore, put machines on pallets, label them, pack them into boxes, etc.
  • Inform users about the pending move
  • Ensure machines are usable from PRG2e
  • Ensure documentation is up-to-date
  • Inform users after everything is done

Rollback steps

Remove the alert silence from https://monitor.qa.suse.de/alerting/silences with alertname="openqaw5-xen: host up alert"
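The rollback step above can also be scripted against Grafana's Alertmanager-compatible HTTP API instead of clicking through the UI. A minimal sketch, assuming a hypothetical API token with alerting permissions; endpoint paths are Grafana's standard unified-alerting ones:

```python
# Sketch: find and delete Grafana alert silences matching a given alertname.
# TOKEN is a hypothetical placeholder, not a real credential.
import json
import urllib.request

GRAFANA = "https://monitor.qa.suse.de"
TOKEN = "changeme"  # hypothetical API token with alerting permissions


def find_silence_ids(silences, alert_name):
    """Return ids of active silences whose matchers pin the given alertname."""
    return [
        s["id"]
        for s in silences
        if s.get("status", {}).get("state") == "active"
        and any(
            m.get("name") == "alertname" and m.get("value") == alert_name
            for m in s.get("matchers", [])
        )
    ]


def remove_silences(alert_name):
    """Fetch all silences and delete the active ones for alert_name."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    url = f"{GRAFANA}/api/alertmanager/grafana/api/v2/silences"
    with urllib.request.urlopen(urllib.request.Request(url, headers=headers)) as resp:
        silences = json.load(resp)
    for sid in find_silence_ids(silences, alert_name):
        urllib.request.urlopen(
            urllib.request.Request(
                f"{GRAFANA}/api/alertmanager/grafana/api/v2/silence/{sid}",
                headers=headers,
                method="DELETE",
            )
        )


# usage (requires network access and a valid token):
#     remove_silences("openqaw5-xen: host up alert")
```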


Related issues (5 total: 0 open, 5 closed)

  • Related to openQA Infrastructure - action #132947: Bring back ada.qe.suse.de and fix it properly (Resolved, nicksinger)
  • Related to openQA Infrastructure - action #152813: openqaw5-xen.qa.suse.de is not responding to salt commands (Resolved, okurz, 2023-12-14)
  • Related to openQA Infrastructure - action #152811: ada.qe.suse.de is not responding to salt commands (Resolved, okurz, 2023-12-14)
  • Copied to QA - action #132620: Move of selected LSG QE machines NUE1 to NUE3 size:M (Resolved, okurz, 2023-07-12)
  • Copied to QA - action #153670: Move of selected LSG QE machines NUE1 to PRG2e - fozzie size:M (Resolved, dheidler)
Actions #2

Updated by okurz 10 months ago

  • Copied to action #132620: Move of selected LSG QE machines NUE1 to NUE3 size:M added
Actions #3

Updated by okurz 10 months ago

  • Assignee deleted (okurz)

Ready for work

Actions #4

Updated by livdywan 10 months ago

  • Status changed from New to Blocked
  • Assignee set to livdywan

As discussed I'm blocking this ticket (on #132671) to clarify how everyone in the team can log in before we estimate it.

Actions #5

Updated by okurz 10 months ago

  • Status changed from Blocked to New
  • Assignee changed from livdywan to mgriessmeier
  • Target version changed from Ready to future

@mgriessmeier same as in the other ticket, you can help here

Actions #6

Updated by okurz 10 months ago

  • Related to action #132947: Bring back ada.qe.suse.de and fix it properly added
Actions #7

Updated by nicksinger 9 months ago

  • Related to deleted (action #132947: Bring back ada.qe.suse.de and fix it properly)
Actions #8

Updated by nicksinger 9 months ago

  • Blocks action #132947: Bring back ada.qe.suse.de and fix it properly added
Actions #9

Updated by okurz 9 months ago

  • Blocks deleted (action #132947: Bring back ada.qe.suse.de and fix it properly)
Actions #10

Updated by okurz 9 months ago

  • Related to action #132947: Bring back ada.qe.suse.de and fix it properly added
Actions #11

Updated by okurz 9 months ago

Based on the discussion in https://suse.slack.com/archives/C02CANHLANP/p1691091091576219 we made the decision to move orion.openqa.opensuse.org and andromeda.openqa.opensuse.org to PRG2e as well. Tags in netbox updated. Please accommodate them in the moving plans as well. Previously the machines were designated for NUE3, but we reconsidered after NUE3 was decided to be "cold redundancy", hence a waste for those rather new machines (from 2021). We decided on "PRG2e" because those machines are maintained not by SUSE QE Tools but by QEC, although they have not managed to set up the machines for two years already.

For openqaworker20.openqa.opensuse.org we decided to move to PRG2 next to other o3 machines for similar reasons as above. Tags in netbox updated. For openqaworker19.openqa.opensuse.org we keep the original designation to move to NUE3 so that the cold redundancy for o3 has at least some recent machine available in cold redundancy.

Actions #12

Updated by okurz 9 months ago

Additionally, as discussed in an email thread started by Calen Chen and Xiaoli Ai, I have now designated 6 more machines with "Move to Prague Colo2" and "Colo2 second wave" in netbox. Assuming that it is still possible, please make sure that those machines get a place on the train/truck/plane/ship to PRG2e :)

Actions #13

Updated by okurz 8 months ago

mgriessmeier and I clarified this in a Jitsi call on 2023-08-28. As commented in #132617-11, w20 should go to PRG2 (not PRG2e). w20 is tagged in netbox with "Move to Prague Colo". I created the specific ticket #134714 about the move to PRG2.

  1. @mgriessmeier please ensure that orion+andromeda will be prepared for move to PRG2e as they are in NUE1-SRV1 so this needs coordination with hare and Eng-Infra
  2. @mgriessmeier as part of #134714 please ensure that machines planned for PRG2 (not PRG2e) are accounted for in moving plans
Actions #14

Updated by mgriessmeier 8 months ago

okurz wrote in #note-13:

  1. @mgriessmeier please ensure that orion+andromeda will be prepared for move to PRG2e as they are in NUE1-SRV1 so this needs coordination with hare and Eng-Infra

orion+andromeda (+fibonacci) will be de-racked tomorrow by me and mmoese with help from gschlotter, put into the corresponding Strohmann moving boxes provided in NUE-SRV2, and picked up by Strohmann on Sep 15th

Actions #15

Updated by mgriessmeier 8 months ago

  • Status changed from New to In Progress

orion, andromeda and fibonacci have been unplugged, de-racked and packed into the corresponding boxes for the first wave move to PRG2e (pickup scheduled for Sep 15th).
The boxes were labeled with the machine names and contact names of the respective teams and persons.

Actions #16

Updated by okurz 8 months ago

  • Parent task changed from #130955 to #129280
Actions #17

Updated by okurz 7 months ago

  • Subject changed from Move of selected LSQ QE machines NUE1-SRV2 to PRG2e to Move of selected LSQ QE machines NUE1 to PRG2e

https://gitlab.suse.de/openqa/salt-pillars-openqa/-/merge_requests/633 to prepare move of worker7-hyperv and worker8-vmware (merged)

Waiting for hreinecke for next call of action regarding move of machines.

Actions #18

Updated by okurz 7 months ago

  • Target version changed from future to Tools - Next
Actions #20

Updated by okurz 6 months ago

  • Subject changed from Move of selected LSQ QE machines NUE1 to PRG2e to Move of selected LSG QE machines NUE1 to PRG2e
Actions #21

Updated by livdywan 6 months ago

  • Subject changed from Move of selected LSG QE machines NUE1 to PRG2e to Move of selected LSG QE machines NUE1 to PRG2e size:M
Actions #23

Updated by okurz 6 months ago

The Maxtorhof evacuation, as notified by hreinecke, is planned for Wednesday, 2023-11-29. Decommissioning will be prepared by hreinecke. We should await his notice and then likely just put our old machine(s) into a container. Crosscheck netbox for the number of machines and needed rack space.

https://suse.slack.com/archives/C04MDKHQE20/p1699003595410659

Oliver Kurz @Hannes Reinecke (CC @Matthias Griessmeier) as you asked about necessary rack space in PRG2e for LSG QE: I crosschecked https://netbox.suse.de/dcim/devices/?tag=qe-lsg&tag=move-to-prague-colo2 and I can confirm it is up-to-date and properly reflects the current plan. The query shows 12 machines. 3 are apparently already in PRG2 cold storage (first wave), 9 are still in NUE1-SRV2, waiting for deracking and transport. Currently I see no further need for rack space besides that.
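For reference, a multi-tag netbox device query like the one above selects devices that carry all of the listed tags (AND semantics). A small self-contained sketch of that filter logic, using made-up inventory data; real queries would go through netbox's REST API:

```python
# Sketch of the AND semantics behind a netbox device query such as
# ?tag=qe-lsg&tag=move-to-prague-colo2: a device matches only if it
# carries every requested tag. The inventory below is illustrative only.

def devices_with_tags(devices, required_tags):
    """Return devices whose tag set includes all required tags."""
    required = set(required_tags)
    return [d for d in devices if required <= set(d["tags"])]


# Illustration with made-up inventory data:
inventory = [
    {"name": "fozzie", "tags": ["qe-lsg", "move-to-prague-colo2"]},
    {"name": "gonzo", "tags": ["qe-lsg"]},
    {"name": "storage", "tags": ["move-to-prague-colo2"]},
]
matches = devices_with_tags(inventory, ["qe-lsg", "move-to-prague-colo2"])
```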

Actions #24

Updated by okurz 6 months ago

  • Priority changed from Normal to High

Response from hreinecke

(Hannes Reinecke) To help with the move, can you register the destination location with 'placeholder' entries (like I did for machines like 'adalid' or 'moorfoot')? I have assigned two racks to you: https://netbox.suse.de/dcim/racks/1223/ and https://netbox.suse.de/dcim/racks/1224/ . If they already contain placeholder entries where machines should be mounted it will help with the process. Unless you or your team want to rack your machines ...

@mgriessmeier will you do that or should we?

Full message from hreinecke by Slack and email:

Hi all,

we now seem to have agreement on the Maxtorhof SRV2 evacuation / Colo2 second wave timeline.

On Wed Nov 22nd we will begin with shutting down and de-racking machines from SRV2. I have already contacted all respective owners; if you happen to have machines in SRV2 and have _not_ received a mail from me, please do get in touch so that we can plan accordingly.

On Wed Nov 29th the containers with the machines will be loaded onto a lorry and delivered to PRG Colo2 on Thu Nov 30th. Machines will then be unloaded and mounted into the assigned locations.

That work will most likely take some time; the hope is to have all machines from the second wave up and running on Dec 4th. Priorities can be discussed, of course.

I have started a tentative rack assignment in netbox; the rows for the new location are PRG2E-D and PRG2E-E, with each team that has machines registered for either the first or second wave having racks assigned to them. Please check in netbox whether the rack allocation is correct. Note that I have created 'placeholder' entries for machines of the second wave to indicate that this is just the position where the machine should be mounted, not the machine itself. Please get in touch if something needs to be changed.

At the start of December, work will begin to 'delapidate' (as the term goes) SRV2 and SRV2e to revert them to their original state.

Please get in touch if you have further questions.
Actions #25

Updated by mgriessmeier 6 months ago

okurz wrote in #note-24:

Response from hreinecke

(Hannes Reinecke) To help with the move, can you register the destination location with 'placeholder' entries (like I did for machines like 'adalid' or 'moorfoot')? I have assigned two racks to you: https://netbox.suse.de/dcim/racks/1223/ and https://netbox.suse.de/dcim/racks/1224/ . If they already contain placeholder entries where machines should be mounted it will help with the process. Unless you or your team want to rack your machines ...

@mgriessmeier will you do that or should we?

feel free to do that :)

Actions #26

Updated by okurz 6 months ago

  • Assignee changed from mgriessmeier to okurz
  • Target version changed from Tools - Next to Ready

planning to do it myself then, to give others more hackweek fun

Actions #27

Updated by openqa_review 6 months ago

  • Due date set to 2023-11-20

Setting due date based on mean cycle time of SUSE QE Tools

Actions #28

Updated by okurz 6 months ago

  • Due date changed from 2023-11-20 to 2023-11-22
  • Status changed from In Progress to Feedback

https://suse.slack.com/archives/C04MDKHQE20/p1699363244951429?thread_ts=1699003595.410659&cid=C04MDKHQE20

(Oliver Kurz) @Hannes Reinecke I have checked the new PRG2e planning racks you created, https://netbox.suse.de/dcim/racks/?location_id=288 and https://netbox.suse.de/dcim/racks/?location_id=289, and I have added placeholders for all 9 above-mentioned machines in https://netbox.suse.de/dcim/racks/1223/ rack "D09 - QE", which now with tight packing occupies 32.6% of the rack according to netbox, with correct U-sizes selected based on racktables entries. With that I see the planning from my side as finished and "D10 - QE" as not needed. Can you please verify?
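The occupancy percentage netbox reports is just the summed U-heights of mounted (or placeholder) devices divided by the rack height. A sketch of that arithmetic with hypothetical U-sizes and rack height, not the real D09 - QE inventory:

```python
# Rough sketch of a rack-occupancy figure like the 32.6% netbox reports:
# sum of device U-heights over total rack height. All numbers used in the
# example call are assumptions for illustration.

def rack_occupancy(u_sizes, rack_height_u=47):
    """Percentage of the rack occupied by devices of the given U-heights."""
    used = sum(u_sizes)
    if used > rack_height_u:
        raise ValueError("devices do not fit in the rack")
    return 100.0 * used / rack_height_u


# e.g. three hypothetical machines of 2U, 2U and 1U in a 10U rack:
pct = rack_occupancy([2, 2, 1], rack_height_u=10)  # 50.0
```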

Actions #29

Updated by xlai 6 months ago

@okurz Hi Oliver, thanks for helping with the MOVE. Just now I had a check in netbox.suse.de for the latest information of all virtualization machines in NUE1. I noticed some differences with our agreement. Would you please have a look?

Agreement

Our final agreement in the email thread "Output from last week's IT/BCL workshop (notes and actions)" is that we do not want cold redundancy but hot redundancy, and the machines will be put in PRG2e.

Quote of email:

> So for now I set the tag "Move to Prague Colo2" for all the machines we
discussed here.

Yes. Thanks for that. 

BTW, I checked in netbox.suse.de: the machines below have been updated to "Prague Colo2" from the previous NUE3, and gonzo stays in the FC lab.

> openqaw9-hyperv.qa.suse.de -> Move to PRG COLO2
> worker7-hyperv.oqa.suse.de -> Move to PRG COLO2
> worker8-vmware.oqa.suse.de -> Move to PRG COLO2
> openqaw5-xen.qa.suse.de -> Move to PRG COLO2
> fozzie -> Move to PRG COLO2
> quinn -> Move to PRG COLO2

Besides, the plan for the other two machines has always been:

  • amd-zen2-gpu-sut1.qa.suse.de -> Move to Prague Colo2 (same as PRG2e), second wave
  • blackbauhinia.qa.suse.de -> Move to Prague Colo2 (same as PRG2e), second wave

Current status in netbox

I noticed that all others are as planned, except:

  • worker7-hyperv.oqa.suse.de -> the plan is PRG2e, but in netbox it has been put into PRG2 / PRG2 - Cold Storage. Is that the cold redundancy room? We need it operating hot.
  • worker8-vmware.oqa.suse.de -> same as above
  • blackbauhinia.qa.suse.de -> the plan is to move it to Prague Colo2, second wave, but in netbox it is not in Planned status like the others of the second wave (e.g. amd-zen2-gpu-sut1), which have placeholders in PRG2E-D rack D09 - QE. Is it forgotten, or just not yet planned?
Actions #31

Updated by okurz 6 months ago

xlai wrote in #note-29:

@okurz Hi Oliver, thanks for helping with the MOVE. Just now I had a check in netbox.suse.de for the latest information of all virtualization machines in NUE1. I noticed some differences with our agreement. Would you please have a look?

Agreement

Our final agreement in the email thread "Output from last week's IT/BCL workshop (notes and actions)" is that we do not want cold redundancy but hot redundancy, and the machines will be put in PRG2e.

Let me clarify so that we are sure we are talking about the same wording here: the "cold redundancy" we talk about here is a concept for providing geo-redundancy of production datacenters. If the PRG2 datacenter including PRG2e is not available for a longer period, then the cold-redundancy location NUE3, aka "Marienberg DR" for "Marienberg Disaster Recovery", located in Nuremberg, Germany, takes over as the replacement production datacenter. There is currently no plan by SUSE-IT to support hot-redundant production datacenters for critical services. However, within LSG QE, as supported by the LSG QE Tools team, we provide best-effort hot redundancy by using lab rooms. With this we can provide hot geo-redundancy, i.e. multiple servers for the same or similar purpose being online and available at the same time, but with no hard agreement regarding reaction or resolution times if problems arise.

I noticed that all others are as planned, except:

  • worker7-hyperv.oqa.suse.de -> the plan is PRG2e, but in netbox it has been put into PRG2 / PRG2 - Cold Storage. Is that the cold redundancy room? We need it operating hot.
  • worker8-vmware.oqa.suse.de -> same as above

"Cold storage" is just temporary storage in the PRG2 datacenter until PRG2e is fully functional. PRG2e is expected to be functional at the start of 2023-12. Still, it seems you have uncovered an inconsistency, which I am grateful for. I have raised that topic in
https://suse.slack.com/archives/C04MDKHQE20/p1699456686160239?thread_ts=1699003595.410659&cid=C04MDKHQE20

(Oliver Kurz) @Hannes Reinecke @John Ford @Moroni Flores apparently some device entries in netbox have lost their corresponding tags, like https://netbox.suse.de/dcim/devices/6833/ where "hare@suse.com", likely your user account running scripted, in https://netbox.suse.de/extras/changelog/247395/ removed "Move to Prague Colo2" and "Colo2 second wave". Another example would be storage.oqa.suse.de https://netbox.suse.de/dcim/devices/7233/ . Given that:

  1. I will need to crosscheck all "QE LSG" machines for missing tags and re-add them where needed. This means I/we will need to plan for more rack space than anticipated, potentially also still needing to move machines from PRG2 to PRG2e
  2. What in general is the plan for all machines in PRG2 - Cold Storage?

Your next point:

  • blackbauhinia.qa.suse.de -> the plan is to move it to Prague Colo2, second wave, but in netbox it is not in Planned status like the others of the second wave (e.g. amd-zen2-gpu-sut1), which have placeholders in PRG2E-D rack D09 - QE. Is it forgotten, or just not yet planned?

Good catch. I overlooked this machine when planning rack space. It would likely not have been a problem, as the planned rack space was just needed for estimation, but I have created a properly linked placeholder entry now:
https://netbox.suse.de/dcim/devices/10665/
Thank you.

Actions #32

Updated by xlai 6 months ago

@okurz Glad that my feedback helps, and thank you for the reply in https://progress.opensuse.org/issues/132617#note-31. Moving so many servers is indeed super complex. Thank you all for the hard work.

worker7-hyperv.oqa.suse.de -> the plan is PRG2e, but in netbox it has been put into PRG2 / PRG2 - Cold Storage. Is that the cold redundancy room? We need it operating hot.
worker8-vmware.oqa.suse.de -> same as above

"Cold storage" is just temporary storage in the PRG2 datacenter until PRG2e is fully functional. PRG2e is expected to be functional at the start of 2023-12. Still, it seems you have uncovered an inconsistency, which I am grateful for. I have raised that topic in
https://suse.slack.com/archives/C04MDKHQE20/p1699456686160239?thread_ts=1699003595.410659&cid=C04MDKHQE20

Thank you for the explanations. My understanding is that the machines currently in PRG2 cold storage will be moved to PRG2e after the lab is functional, so the machines will basically be up again in PRG2e this December. Is this correct?

Actions #33

Updated by okurz 6 months ago

xlai wrote in #note-32:

for the machines that are currently in PRG2 cold storage, they will be moved to PRG2e after the lab is functional. Basically the machines will be up again in PRG2e in this December. Is this correct?

Not before December. My realistic estimate is that some machines might be online and usable in December this year, but due to limited capacity in Eng-Infra I expect that there will be problems and some machines can only be handled in 2024-01 or later

Actions #34

Updated by xlai 6 months ago

okurz wrote in #note-33:

xlai wrote in #note-32:

for the machines that are currently in PRG2 cold storage, they will be moved to PRG2e after the lab is functional. Basically the machines will be up again in PRG2e in this December. Is this correct?

Not before December. My realistic estimate is that some machines might be online and usable in December this year, but due to limited capacity in Eng-Infra I expect that there will be problems and some machines can only be handled in 2024-01 or later

@okurz Got it.
If possible, I'd appreciate it if the 2 machines below could be given higher priority when bringing machines back, to unblock O3 virt tests or decrease workload pressure on one machine pair. Other VT machines can be given regular priority. Thanks!

  • amd-zen2-gpu-sut1.qa.suse.de
  • fozzie
Actions #35

Updated by okurz 6 months ago

  • Status changed from Feedback to In Progress

xlai wrote in #note-34:

okurz wrote in #note-33:

xlai wrote in #note-32:

for the machines that are currently in PRG2 cold storage, they will be moved to PRG2e after the lab is functional. Basically the machines will be up again in PRG2e in this December. Is this correct?

Not before December. My realistic estimate is that some machines might be online and usable in December this year, but due to limited capacity in Eng-Infra I expect that there will be problems and some machines can only be handled in 2024-01 or later

@okurz Got it.
If possible, I'd appreciate it if the 2 machines below could be given higher priority when bringing machines back, to unblock O3 virt tests or decrease workload pressure on one machine pair. Other VT machines can be given regular priority. Thanks!

  • amd-zen2-gpu-sut1.qa.suse.de
  • fozzie

As far as it makes a difference, yes, it will be possible and I will keep it in mind

Actions #36

Updated by xlai 6 months ago

okurz wrote in #note-35:

If possible, I'd appreciate it if the 2 machines below could be given higher priority when bringing machines back, to unblock O3 virt tests or decrease workload pressure on one machine pair. Other VT machines can be given regular priority. Thanks!

  • amd-zen2-gpu-sut1.qa.suse.de
  • fozzie

As far as it makes a difference, yes, it will be possible and I will keep it in mind

Cool, thanks Oliver!

Actions #37

Updated by okurz 6 months ago

I reviewed all entries of "LSG QE" machines twice, added corresponding placeholder entries in PRG2 racks, and informed jford and hreinecke in https://suse.slack.com/archives/C04MDKHQE20/p1699887233109419?thread_ts=1699003595.410659&cid=C04MDKHQE20

(Oliver Kurz) @Hannes Reinecke @John Ford @Moroni Flores apparently some device entries in netbox have lost their corresponding tags, like https://netbox.suse.de/dcim/devices/6833/ where "hare@suse.com", likely your user account running scripted, in https://netbox.suse.de/extras/changelog/247395/ removed "Move to Prague Colo2" and "Colo2 second wave". Another example would be storage.oqa.suse.de https://netbox.suse.de/dcim/devices/7233/ . Given that:

  1. I will need to crosscheck all "QE LSG" machines for missing tags and re-add them where needed. This means I/we will need to plan for more rack space than anticipated, potentially also still needing to move machines from PRG2 to PRG2e
  2. What in general is the plan for all machines in PRG2 - Cold Storage?

(Oliver Kurz) I covered 1., this leaves 2. open.

On 2023-11-15, in the weekly DCT progress call, I plan to bring up this topic. Afterwards we will mostly wait for the physical move at the end of 2023-11.

Actions #38

Updated by okurz 6 months ago

  • Status changed from In Progress to Feedback

The topic was brought up in the weekly DCT call, although there was no answer on the general plan. So let's hope our assignment of machines in netbox suffices. Asked about the specific NUE1-SRV2 cleanup task in https://suse.slack.com/archives/C04MDKHQE20/p1700052656280649

(Oliver Kurz) @Hannes Reinecke @Michael Haefner for when should we plan to unrack hardware in NUE1-SRV2 or at least to provide helping hands? 2023-11-22?

Expecting an answer within the next few days to be able to scramble personnel.

Actions #39

Updated by okurz 5 months ago

  • Status changed from Feedback to In Progress

Unracked arm4, openqaw5-xen, quinn, fozzie, amd-zen2-gpu-sut1, blackbauhinia, openqaw9-hyperv, voyager, ada. Updated racktables. Disabled openQA workers in https://gitlab.suse.de/openqa/salt-pillars-openqa/-/merge_requests/680

Actions #40

Updated by okurz 5 months ago

  • Due date changed from 2023-11-22 to 2023-12-08
  • Status changed from In Progress to Feedback
  • Priority changed from High to Low

I wrote in https://suse.slack.com/archives/C04MDKHQE20/p1700472505429589?thread_ts=1700052656.280649&cid=C04MDKHQE20

We unracked 9 machines and prepared them for the move; 1 is missing, possibly already packed/moved. @Hannes Reinecke @Michael Haefner please let me know if you need/wish further help from us with other work like preparing other equipment for the move, unracking left-over non-server hardware, the racks themselves, etc. Otherwise that would have been my last visit to SRV2.

With that waiting for machines to arrive in PRG2e and being informed about the setup by Eng-Infra.

Actions #41

Updated by okurz 5 months ago

  • Description updated (diff)
Actions #42

Updated by okurz 5 months ago

hreinecke+egotthold will install machines in PRG2e starting next week. Wrote in https://suse.slack.com/archives/C04MDKHQE20/p1701268766562649?thread_ts=1701263971.649939&cid=C04MDKHQE20

@Hannes Reinecke a list of QE machines to install in PRG2e by descending priority:
ada
amd-zen2-gpu-sut1.qa.suse.de
fozzie
quinn
blackbauhinia
arm4
openqaw5-xen
openqaw9-hyperv
All the rest
Our internal reference ticket https://progress.opensuse.org/issues/132617

Actions #43

Updated by okurz 5 months ago

  • Due date changed from 2023-12-08 to 2023-12-22

So far I have only received unprofessional updates, like in https://suse.slack.com/archives/C04MDKHQE20/p1701702469305339, asking where machines should go even though this plan was communicated over various channels and in repeated messages over the past weeks. This will take longer and we just have to wait. There are not even proper other tickets that we can wait on.

Actions #44

Updated by livdywan 4 months ago

openqaw9-hyperv.qa.suse.de PRG2E-D
openqaw5-xen.qa.suse.de PRG2E-D
quinn.qa.suse.de PRG2E-D
fibonacci.qam.suse.de PRG2E-D
sauron.qa.suse.de PRG2E-D
arm4.qe.suse.de PRG2E-D
voyager.qam.suse.de PRG2E-D
ada.qe.suse.de PRG2E-D
fozzie.qa.suse.de NUE-Scrapped_2023
orion.openqanet.opensuse.org PRG2-J
andromeda.openqanet.opensuse.org PRG2-J
amd-zen2-gpu-sut1.qa.suse.de PRG2E-D

It looks like most machines have been moved to PRG2e. However, most of them have no IP, suggesting they're not usable yet?

Actions #45

Updated by okurz 4 months ago

  • Due date changed from 2023-12-22 to 2024-01-21

Yes, that. Despite me asking repeatedly, also to jford, there was no update, so #132617-43 still applies. I am afraid that nothing will move until mid 2024-01 due to simply getting no reasonable answer from anyone, and we simply don't have the permissions or authority to do more ourselves.

Actions #46

Updated by okurz 4 months ago

  • Related to action #152813: openqaw5-xen.qa.suse.de is not responding to salt commands added
Actions #47

Updated by okurz 4 months ago

  • Related to action #152811: ada.qe.suse.de is not responding to salt commands added
Actions #48

Updated by okurz 4 months ago

  • Status changed from Feedback to Blocked

Worked with mhaeffner to create explicit specific tasks, starting with ada
https://jira.suse.com/browse/ENGINFRA-3685

Actions #49

Updated by okurz 3 months ago

  • Copied to action #153670: Move of selected LSG QE machines NUE1 to PRG2e - fozzie size:M added
Actions #50

Updated by okurz 3 months ago

  • Parent task changed from #129280 to #153685
Actions #51

Updated by livdywan 3 months ago

  • Due date changed from 2024-01-21 to 2024-02-02

okurz wrote in #note-48:

Worked with mhaeffner to create explicit specific tasks, starting with ada
https://jira.suse.com/browse/ENGINFRA-3685

No response yet. Added a comment to confirm expectations.

Actions #52

Updated by livdywan 3 months ago

  • Due date changed from 2024-02-02 to 2024-02-16
  • Status changed from Blocked to Workable

livdywan wrote in #note-51:

okurz wrote in #note-48:

Worked with mhaeffner to create explicit specific tasks, starting with ada
https://jira.suse.com/browse/ENGINFRA-3685

No response yet. Added a comment to confirm expectations.

IPMI IP 192.168.153.114 is now reachable from qe-jumpy (10.145.14.1)

Things are happening

Actions #53

Updated by okurz 3 months ago

  • Status changed from Workable to Blocked

yes, but still blocked on Jira tasks

Actions #54

Updated by okurz 2 months ago

  • Due date deleted (2024-02-16)
Actions #55

Updated by okurz about 2 months ago

  • Status changed from Blocked to Resolved

Ok, so multiple tasks for individual hosts were successfully resolved. We have tickets for each machine now. Rollback steps are completed.
