action #33187
closed
[sle][functional][12sp4][medium][yast][y]test fails in partitioning_lvm in partitioning proposal needle
Added by okurz almost 7 years ago.
Updated over 6 years ago.
Category:
Bugs in existing tests
Description
Observation
openQA test in scenario sle-12-SP4-Server-DVD-aarch64-cryptlvm+activate_existing+import_users@aarch64 fails in partitioning_lvm
Hypotheses
- H1: The qcow image of the parent has wrong partitions.
- H2: A change in the partitioner causes the wrong behavior:
- the swap volume is not detected.
- the swap volume is ignored.
Tasks
- E1: Take a look at the partitions/volumes available in the qcow2 images created by the parent and find differences (see the sketch after this list).
- E2: Find out, with the help of the YaST team, about changes in the partitioner related to logical volumes between builds 0164 and 0234.
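For E1, a minimal sketch of how the volumes in the two images could be compared, assuming libguestfs is available and the image file names are placeholders (note that in this cryptlvm scenario the LUKS container may need to be opened before the LVs are visible):

# List filesystems, partitions, VGs and LVs inside each image (libguestfs)
virt-filesystems --all --long -a last_good_parent.qcow2
virt-filesystems --all --long -a failing_parent.qcow2

# Alternative: expose an image as a block device and scan for LVs
modprobe nbd max_part=16
qemu-nbd --connect=/dev/nbd0 failing_parent.qcow2
lvscan        # the swap LV from H1/H2 should be listed here
qemu-nbd --disconnect /dev/nbd0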
Reproducible
Fails since (at least) Build 0234 (current job)
Expected result
Last good: 0164 (or more recent)
Further details
Always latest result in this scenario: latest
- Assignee set to SLindoMansilla
- Description updated (diff)
- Subject changed from [sle][functional][12sp4][medium]test fails in partitioning_lvm in partitioning proposal needle to [sle][functional][12sp4][medium][yast][y]test fails in partitioning_lvm in partitioning proposal needle
- Due date changed from 2018-04-24 to 2018-03-27
As it seems we do have some capacity in S13 for more [yast] tickets, adding this late to S13. @SLindoMansilla feel free to unassign again after your first initial assessment.
- Assignee deleted (SLindoMansilla)
Back to backlog after grooming.
Ticket planned for sprint.
- Status changed from New to Workable
- Assignee set to JERiveraMoya
- Related to action #33325: [sle][functional][sle12sp4][yast][fast][medium] test fails in partitioning_raid - multiple missing needles added
- Status changed from Workable to In Progress
Found in YaST2/y2log:
2018-03-12 17:58:58 install(3188) [libstorage] SystemCmd.cc(addLine):740 Adding Line 46 " LV Status NOT available"
Well, that could be something. But I recommend to crosscheck the logfiles from the "last good" job. Maybe the error was there before as well.
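A possible way to do that crosscheck, assuming the y2logs from the last good and the failing job have been downloaded (file names are placeholders):

# Compare the LV status lines reported by libstorage in both logs
grep 'LV Status' y2log.0164_last_good | sort -u > good.txt
grep 'LV Status' y2log.0234_failing   | sort -u > bad.txt
diff good.txt bad.txt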
- Status changed from In Progress to Feedback
I found that difference in the crosscheck: in the last good, the LV status is available. I am trying to clone both the parent and the child, but the aarch64 machine seems not updated for my client now (after updating to the latest TW):
seattle10:/etc/openqa # systemctl status openqa-worker@51.service
...
[worker:error] ignoring server - server refused with code 426: {"id":null}
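For reference, HTTP 426 ("Upgrade Required") from the server usually means the worker's API version no longer matches the server's. A minimal sketch of the version check and the clone attempt, assuming the openQA-worker and openQA-client packages are installed and the job id is a placeholder:

# Compare the installed worker version against what the server runs
rpm -q openQA-worker

# Clone the failing job locally (chained parents should be cloned along by default)
openqa-clone-job --from https://openqa.suse.de <JOB_ID>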
Pinging @SLindoMansilla
As discussed, the version installed on seattle10 is not compatible with the new one in the devel project.
That version was missing for aarch64. Yesterday the new version got built for aarch64, but with security issues (you need to stop AppArmor).
I don't want to use that version on seattle10 until the AppArmor rule gets fixed.
As a workaround, I can run aarch64 jobs when someone needs them.
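If the new worker version has to be used despite the AppArmor problem, a sketch of the workaround mentioned above, using the worker instance 51 from the output earlier:

# Temporarily stop AppArmor so the new worker version can run
systemctl stop apparmor
systemctl restart openqa-worker@51.service

# Start AppArmor again once the rule is fixed
systemctl start apparmor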
- Status changed from Feedback to Resolved
After discussion with @riafarov, we decided to restart the job on OSD and, as expected (using the same parent .qcow2 image we had obtained the same result on the local aarch64 worker), the test is now passing: https://openqa.suse.de/tests/1562126#step/partitioning_lvm/1
This kind of failure is good for statistics, but the root cause is very unclear, as is which job overwrote the parent image, if that is what really happened.
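For completeness, the restart on OSD can also be triggered through the API; a sketch assuming the openQA-client package and a placeholder job id:

# Restart the job via the openQA API
openqa-client --host https://openqa.suse.de jobs/<JOB_ID>/restart post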
- Status changed from Resolved to Feedback
Keeping it until the end of the sprint for statistics, to check whether it happens again with a new build.
- Status changed from Feedback to Resolved
As per discussion, issue not replicated, hence resolving.