action #124655
closed
[openQA][infra][pxe] Physical SUT machine can not boot from pxe and mismatch hostname
Added by waynechen55 almost 2 years ago.
Updated almost 2 years ago.
Description
Observation
All available physical SUT machines cannot boot from PXE, including openqaipmi5, ix64ph1075 and quinn.
Take quinn as an example:
Its BIOS settings look good.
But what is more interesting is that booting up quinn shows the hostname ix64ph1087 on the SOL console (ipmitool -I lanplus -C 3 -H quinn-sp.xxxx -U xxxx -P xxxx sol activate).
Steps to reproduce
- Open an ipmitool SOL console
- Wait for the machine to try booting from PXE and then fall back to local disk
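The steps above can be sketched as a small script. The BMC hostname and credentials below are placeholders, not the real machine details; with DRY_RUN=1 (the default here) the commands are only printed, not executed:

```shell
#!/bin/sh
# Reproduction sketch. BMC_HOST/BMC_USER/BMC_PASS are placeholders.
DRY_RUN=${DRY_RUN:-1}
BMC_HOST=quinn-sp.example.net
BMC_USER=admin
BMC_PASS=secret

# run: records the command in $CMDS and either prints or executes it
run() {
    CMDS="$CMDS $*;"
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Step 1: open the SOL console to watch the boot output
run ipmitool -I lanplus -C 3 -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" sol activate

# Step 2: reset the machine so it attempts PXE, then falls back to local disk
run ipmitool -I lanplus -C 3 -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" chassis power reset
```

In real use, set DRY_RUN=0 and keep the SOL session open in a separate terminal while resetting the chassis.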
Impact
No test runs can proceed.
Problem
This looks like a networking configuration problem. Maybe some other work is ongoing.
Suggestion
- Check the DHCP/DNS configuration
- Check the TFTP server
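As a concrete (hypothetical) illustration of what to look for: a PXE-capable ISC dhcpd subnet needs a next-server and filename pointing at the TFTP server, and a stale per-host entry there would also explain the hostname mismatch. All addresses, MACs and values below are made up for illustration, not the actual qanet configuration:

```shell
# Example ISC dhcpd PXE stanza written to a scratch file; everything in
# it is a placeholder, not the real infrastructure config.
cat > /tmp/dhcpd-pxe-example.conf <<'EOF'
subnet 192.0.2.0 netmask 255.255.255.0 {
  next-server 192.0.2.1;       # TFTP server that hands out the boot loader
  filename "pxelinux.0";       # boot file the PXE firmware requests

  # Per-host entries map MAC -> IP/hostname; a wrong mapping here would
  # produce exactly the kind of hostname mismatch seen on quinn.
  host quinn {
    hardware ethernet 00:53:00:00:00:01;
    fixed-address 192.0.2.10;
    option host-name "quinn";
  }
}
EOF

# The things to verify in the real config:
grep -E 'next-server|filename|host-name' /tmp/dhcpd-pxe-example.conf

# And on the TFTP side, that the boot file is actually served, e.g.:
#   tftp 192.0.2.1 -c get pxelinux.0
```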
Workaround
No workaround at the moment.
- Subject changed from [openQA][infra[pxe] Physical SUT machine can not boot from pxe to [openQA][infra[pxe] Physical SUT machine can not boot from pxe and mismatch hostname
- Tags set to infra, reactive work
- Subject changed from [openQA][infra[pxe] Physical SUT machine can not boot from pxe and mismatch hostname to [openQA][infra][pxe] Physical SUT machine can not boot from pxe and mismatch hostname
- Priority changed from Normal to Urgent
- Target version set to Ready
- Related to action #124661: [qe-tools] tftp server and directory mount issue on qanet.qa added
- Status changed from New to In Progress
- Assignee set to okurz
Update on this at the moment:

host        | pxe | hostname                      | ssh
fozzie      | ok  | ok                            | ok
quinn       | ok  | not ok (ix64ph1087 on system) | ssh to ix64ph1087 or quinn ok
ix64ph1075  | ok  | not ok (susetest on system)   | ssh to ix64ph1075 ok, but takes a long time
openqaipmi5 | ok  | ok                            | ok
- Status changed from In Progress to Resolved
- Status changed from Resolved to Feedback
This is an autogenerated message for openQA integration by the openqa_review script:
This bug is still referenced in a failing openQA test: gi-guest_developing-on-host_sles15sp4-kvm@64bit-ipmi-large-mem
https://openqa.suse.de/tests/10699146#step/boot_from_pxe/1
To prevent further reminder comments one of the following options should be followed:
- The test scenario is fixed by applying the bug fix to the tested product or the test is adjusted
- The openQA job group is moved to "Released" or "EOL" (End-of-Life)
- The bugref in the openQA scenario is removed or replaced, e.g.
label:wontfix:boo1234
Expect the next reminder at the earliest in 28 days if nothing changes in this ticket.
- Status changed from Feedback to Resolved
I called openqa-query-for-job-label 124655
3124620|2023-02-17 10:17:24|done|failed|create_hdd_leap_transactional_server_autoyast||openqaworker19
10707232|2023-03-16 09:51:04|done|failed|gi-guest_win2016-on-host_developing-kvm|backend done: ipmitool -I lanplus -H 10.162.28.200 -U admin -P [masked] chassis power status: Error: Unable to establish IPMI v2 / RMCP+ session at /usr/lib/os-autoinst/backend/ipmi.pm line 45.|grenache-1
10707235|2023-03-16 08:59:24|done|failed|gi-guest_win2019-on-host_developing-kvm||grenache-1
10706670|2023-03-16 08:00:37|done|failed|gi-guest_developing-on-host_sles15sp4-kvm||grenache-1
10706677|2023-03-16 06:42:28|done|failed|gi-guest_win2019-on-host_developing-kvm||grenache-1
10706675|2023-03-16 05:43:56|done|failed|gi-guest_win2016-on-host_developing-kvm||grenache-1
10706671|2023-03-16 05:24:03|done|failed|gi-guest_developing-on-host_sles15sp4-xen||grenache-1
10699146|2023-03-15 16:56:42|done|failed|gi-guest_developing-on-host_sles15sp4-kvm||grenache-1
10697796|2023-03-15 07:46:58|done|failed|gi-guest_developing-on-host_sles15sp4-xen||grenache-1
10697795|2023-03-15 06:45:07|done|failed|gi-guest_developing-on-host_sles15sp4-kvm||grenache-1
10693360|2023-03-15 03:49:45|done|failed|gi-guest_developing-on-host_sles15sp4-xen||grenache-1
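The pipe-separated output above is easy to post-process; for example, counting remaining failures per test scenario (using a copied subset of the lines above):

```shell
# Count failed jobs per test name from openqa-query-for-job-label output.
# The sample lines are copied from the listing above.
cat > /tmp/jobs.txt <<'EOF'
10706670|2023-03-16 08:00:37|done|failed|gi-guest_developing-on-host_sles15sp4-kvm||grenache-1
10706671|2023-03-16 05:24:03|done|failed|gi-guest_developing-on-host_sles15sp4-xen||grenache-1
10699146|2023-03-15 16:56:42|done|failed|gi-guest_developing-on-host_sles15sp4-kvm||grenache-1
EOF

# Field 4 is the result, field 5 the test name.
awk -F'|' '$4 == "failed" { n[$5]++ } END { for (t in n) print n[t], t }' /tmp/jobs.txt | sort -rn
# -> 2 gi-guest_developing-on-host_sles15sp4-kvm
#    1 gi-guest_developing-on-host_sles15sp4-xen
```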
I looked into all the jobs I could access and found that all errors are actually handled in #119551. I followed the test scenarios and all of them eventually ended up ok, so I did not remove the references to this ticket. If the problem reappears, this ticket can be reopened and we can check the jobs again.