action #33187
[sle][functional][12sp4][medium][yast][y] test fails in partitioning_lvm in partitioning proposal needle
Status: closed
Description
Observation
openQA test in scenario sle-12-SP4-Server-DVD-aarch64-cryptlvm+activate_existing+import_users@aarch64 fails in partitioning_lvm
Hypotheses
- H1: The qcow2 image of the parent has wrong partitions.
- H2: A change in the partitioner causes the wrong behavior:
  - the swap volume is not detected.
  - the swap volume is ignored.
Tasks
- E1: Take a look at the partitions/volumes available in the qcow2 images created by the parent jobs and find differences (see the sketch after this list):
  - Parent of failing: https://openqa.suse.de/tests/1535892#step/partitioning_lvm/4
  - Parent of last good: https://openqa.suse.de/tests/1407970#step/partitioning_lvm/4
- E2: Find out, with the help of the YaST team, whether there were changes in the partitioner related to logical volumes between builds 0164 and 0234.
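One possible way to approach E1 is to inspect the published qcow2 images with libguestfs. This is only a sketch, assuming the libguestfs Python bindings are available and that the LUKS passphrase used by the cryptlvm scenario is known; the image name and passphrase below are placeholders, not values from this ticket:

```python
import guestfs

# Placeholders: point these at the qcow2 HDDs published by the two parent jobs
# linked above; the passphrase is whatever the cryptlvm scenario uses.
IMAGE = "sle-12-SP4-Server-DVD-aarch64-cryptlvm.qcow2"
PASSPHRASE = "CHANGEME"

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts(IMAGE, readonly=1)
g.launch()

# The parent scenario is cryptlvm (LVM on top of LUKS), so the LUKS container
# has to be opened before the volume group and its LVs become visible.
luks_parts = [p for p in g.list_partitions() if g.vfs_type(p) == "crypto_LUKS"]
for i, part in enumerate(luks_parts):
    g.luks_open(part, PASSPHRASE, "cr%d" % i)
g.vg_activate_all(True)

print("Filesystems:     ", g.list_filesystems())
print("Logical volumes: ", g.lvs())  # /dev/system/swap should appear here
```

Running this against both parent images and diffing the output should show whether /dev/system/swap is present in only one of them.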
Reproducible
Fails since (at least) Build 0234 (current job)
Expected result
Last good: 0164 (or more recent)
Further details
Always latest result in this scenario: latest
Updated by okurz almost 7 years ago
- Assignee set to SLindoMansilla
@SLindoMansilla could you quickly check the needle please?
Updated by SLindoMansilla almost 7 years ago
- Description updated (diff)
The action "Delete logical volume /dev/system/swap" is missing as can be seen in last good: https://openqa.suse.de/tests/1407997#step/partitioning_lvm/1
This test relies on an hdd image published by cryptlvm. The parent of both show similar results:
- Parent of failing: https://openqa.suse.de/tests/1535892#step/partitioning_lvm/4
- Parent of last good: https://openqa.suse.de/tests/1407970#step/partitioning_lvm/4
Updated by okurz almost 7 years ago
- Subject changed from [sle][functional][12sp4][medium]test fails in partitioning_lvm in partitioning proposal needle to [sle][functional][12sp4][medium][yast][y]test fails in partitioning_lvm in partitioning proposal needle
Updated by okurz almost 7 years ago
- Due date changed from 2018-04-24 to 2018-03-27
As it seems we have some capacity in S13 for more [yast] tickets, I am adding this late to S13. @SLindoMansilla, feel free to unassign again after your initial assessment.
Updated by SLindoMansilla almost 7 years ago
- Assignee deleted (SLindoMansilla)
Back to backlog after grooming.
Ticket planned for sprint.
Updated by SLindoMansilla almost 7 years ago
- Status changed from New to Workable
Updated by SLindoMansilla almost 7 years ago
- Related to action #33325: [sle][functional][sle12sp4][yast][fast][medium] test fails in partitioning_raid - multiple missing needles added
Updated by JERiveraMoya almost 7 years ago
- Status changed from Workable to In Progress
Found in YaST2/y2log:
2018-03-12 17:58:58 install(3188) [libstorage] SystemCmd.cc(addLine):740 Adding Line 46 " LV Status NOT available"
Updated by okurz almost 7 years ago
Well, that could be something, but I recommend cross-checking the logfiles from the "last good" run. Maybe the error was there before as well.
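A minimal sketch of such a crosscheck, assuming the y2log files of both jobs have been downloaded locally (the file names below are placeholders):

```python
import re

# Placeholders: local copies of the y2log files downloaded from the failing
# job and from the last good job.
FAILING = "y2log-build0234"
LAST_GOOD = "y2log-build0164"

def lv_status_lines(path):
    """Collect the libstorage 'LV Status' lines from a y2log file."""
    pattern = re.compile(r'\[libstorage\].*LV Status\s+([^"]+)')
    with open(path, errors="replace") as fh:
        return [m.group(1).strip() for m in map(pattern.search, fh) if m]

print("failing  :", lv_status_lines(FAILING))    # expected: 'NOT available'
print("last good:", lv_status_lines(LAST_GOOD))  # expected: 'available'
```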
Updated by JERiveraMoya almost 7 years ago
- Status changed from In Progress to Feedback
I found that difference in the crosscheck: in the last good run the LV status is available. I am trying to clone both parent and child, but the aarch64 machine seems not to be updated for my client now (after updating to the latest TW):
seattle10:/etc/openqa # systemctl status openqa-worker@51.service
...
[worker:error] ignoring server - server refused with code 426: {"id":null}
Pinging @SLindoMansilla
Updated by SLindoMansilla almost 7 years ago
As discussed, the version installed on seattle10 is not compatible with the new one in the devel project.
This version was missing for aarch64. Yesterday the new version got built for aarch64, but with security issues (you need to stop AppArmor).
I don't want to use that version on seattle10 until the AppArmor rule gets fixed.
As a workaround, I can run aarch64 jobs when someone needs them.
Updated by JERiveraMoya almost 7 years ago
I have checked the verification run provided by @SLindoMansilla and it looks as expected: http://copland.arch.suse.de/tests/1009#step/partitioning_lvm/2
For x86_64, I have found some conflicting cases with the same parent test (both publishing the same image):
https://openqa.suse.de/tests/1536096#settings
PUBLISH_HDD_1 sle-12-SP4-Server-DVD-x86_64-gnome-encrypted.qcow2
https://openqa.suse.de/tests/1536149
PUBLISH_HDD_1 sle-12-SP4-Server-DVD-x86_64-gnome-encrypted.qcow2
Taking a look at aarch64 tests with possible conflicting images, this record was modified (https://openqa.suse.de/admin/auditlog?eventid=1033773), so the .qcow2 parent image from 9 days ago could have had some conflict that I cannot see now. Checking with @riafarov.
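One way to look for such collisions is to group recent jobs by the asset they publish. This is only a sketch, assuming the /api/v1/jobs route returns each job's settings in its response and that the filter values below match the affected medium:

```python
from collections import defaultdict
import requests

# Assumed filter values for the affected medium; adjust as needed.
URL = "https://openqa.suse.de/api/v1/jobs"
params = {"distri": "sle", "version": "12-SP4", "arch": "aarch64", "build": "0234"}

jobs = requests.get(URL, params=params).json().get("jobs", [])

# Group job ids by the HDD asset they publish.
publishers = defaultdict(list)
for job in jobs:
    asset = (job.get("settings") or {}).get("PUBLISH_HDD_1")
    if asset:
        publishers[asset].append(job["id"])

for asset, ids in sorted(publishers.items()):
    if len(ids) > 1:
        print("%s is published by jobs %s" % (asset, ids))
```

Any asset listed by more than one job id is a candidate for the overwrite suspected here.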
Updated by JERiveraMoya almost 7 years ago
- Status changed from Feedback to Resolved
After discussion with @riafarov, we decided to restart the job on OSD and, as expected (using the same parent .qcow2 image we had obtained the same result on a local aarch64 worker), the test is now passing: https://openqa.suse.de/tests/1562126#step/partitioning_lvm/1
This kind of failure is good for statistics, but the root cause remains very unclear, as does which job overwrote the parent image, if that is what really happened.
Updated by JERiveraMoya almost 7 years ago
- Status changed from Resolved to Feedback
Keeping it open until the end of the sprint for statistics, to check whether it happens again if there is a new build.
Updated by riafarov over 6 years ago
- Status changed from Feedback to Resolved
As per discussion, issue not replicated, hence resolving.