action #33187

closed

[sle][functional][12sp4][medium][yast][y] test fails in partitioning_lvm in partitioning proposal needle

Added by okurz about 6 years ago. Updated about 6 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: Bugs in existing tests
Start date: 2018-03-13
Due date: 2018-03-27
% Done: 0%
Estimated time:
Difficulty:

Description

Observation

openQA test in scenario sle-12-SP4-Server-DVD-aarch64-cryptlvm+activate_existing+import_users@aarch64 fails in partitioning_lvm

Hypotheses

  • H1: The qcow image published by the parent job has wrong partitions (a local check is sketched below).
  • H2: A change in the partitioner causes the wrong behavior:
    • the swap volume is not detected.
    • the swap volume is ignored.

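A minimal way to check H1 locally, assuming libguestfs-tools is installed and the published parent image has been copied from the worker pool (the image file name is only an example, taken from the x86_64 jobs referenced later in this ticket; the aarch64 asset name differs):

# list partitions, volume groups and logical volumes inside the image
virt-filesystems --all --long -h -a sle-12-SP4-Server-DVD-x86_64-gnome-encrypted.qcow2
# for the encrypted (cryptlvm) images the LUKS passphrase has to be supplied
# as well, e.g. via --keys-from-stdin if the installed libguestfs version
# supports it; a missing /dev/system/swap LV would show up (or be absent)
# in this listing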
Tasks

Reproducible

Fails since (at least) Build 0234 (current job)

Expected result

Last good: 0164 (or more recent)

Further details

Always latest result in this scenario: latest


Related issues: 1 (0 open, 1 closed)

Related to openQA Tests - action #33325: [sle][functional][sle12sp4][yast][fast][medium] test fails in partitioning_raid - multiple missing needles (Resolved, SLindoMansilla, 2018-03-15 to 2018-03-27)

Actions #1

Updated by okurz about 6 years ago

  • Assignee set to SLindoMansilla

@SLindoMansilla could you quickly check the needle please?

Actions #2

Updated by SLindoMansilla about 6 years ago

  • Description updated (diff)

The action "Delete logical volume /dev/system/swap" is missing, as can be seen in the last good: https://openqa.suse.de/tests/1407997#step/partitioning_lvm/1

This test relies on an HDD image published by the cryptlvm scenario. The parents of both show similar results.

Actions #3

Updated by okurz about 6 years ago

  • Subject changed from [sle][functional][12sp4][medium]test fails in partitioning_lvm in partitioning proposal needle to [sle][functional][12sp4][medium][yast][y]test fails in partitioning_lvm in partitioning proposal needle
Actions #4

Updated by okurz about 6 years ago

  • Due date changed from 2018-04-24 to 2018-03-27

As it seems we do have some capacity in S13 for more [yast] tickets, I am adding this late to S13. @SLindoMansilla, feel free to unassign again after your initial assessment.

Actions #5

Updated by SLindoMansilla about 6 years ago

  • Assignee deleted (SLindoMansilla)

Back to backlog after grooming.
Ticket planned for the sprint.

Actions #6

Updated by SLindoMansilla about 6 years ago

  • Status changed from New to Workable
Actions #7

Updated by JERiveraMoya about 6 years ago

  • Assignee set to JERiveraMoya
Actions #8

Updated by SLindoMansilla about 6 years ago

  • Related to action #33325: [sle][functional][sle12sp4][yast][fast][medium] test fails in partitioning_raid - multiple missing needles added
Actions #9

Updated by JERiveraMoya about 6 years ago

  • Status changed from Workable to In Progress

Found in YaST2/y2log:
2018-03-12 17:58:58 install(3188) [libstorage] SystemCmd.cc(addLine):740 Adding Line 46 " LV Status NOT available"
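For reference, a hedged sketch of how such a line can be located in the logs uploaded by the job (the tarball name is the usual save_y2logs output, assumed here; it is attached under the job's "Logs & Assets" tab):

mkdir y2logs && tar xf y2logs.tar.bz2 -C y2logs
grep -rn "LV Status" y2logs
# prints the libstorage lines quoted above; running the same grep on the logs
# of the last good job allows a direct comparison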

Actions #10

Updated by okurz about 6 years ago

Well, that could be something, but I recommend cross-checking the log files from the "last good" build. Maybe the error was already there before as well.

Actions #11

Updated by JERiveraMoya about 6 years ago

  • Status changed from In Progress to Feedback

I found that difference in the cross-check: in the last good build the LV is available. I am trying to clone both parent and child, but the aarch64 machine seems not to be updated for my client right now (after updating to the latest TW):
seattle10:/etc/openqa # systemctl status openqa-worker@51.service
...
[worker:error] ignoring server - server refused with code 426: {"id":null}
Pinging @SLindoMansilla
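For reference, cloning such a job to a local worker is usually done with openqa-clone-job from the openQA client package; the job id below is a placeholder and the exact invocation depends on the local setup:

# clone a job from osd and schedule it locally
openqa-clone-job --from https://openqa.suse.de <job-id>
# or clone it into another openQA instance:
# openqa-clone-job --from https://openqa.suse.de --host http://localhost <job-id>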

Actions #12

Updated by SLindoMansilla about 6 years ago

As discussed, the version installed on seattle10 is not compatible with the new one in the devel project.
This version was missing for aarch64. Yesterday the new version got built for aarch64, but with security issues (you need to stop AppArmor).
I don't want to use that version on seattle10 until the AppArmor rule gets fixed.

As a workaround, I can run aarch64 jobs when someone needs them.
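Only as a sketch of the workaround named above, to be reverted once the AppArmor rule is fixed:

# temporarily stop AppArmor on the worker
sudo systemctl stop apparmor
# start it again after the profile/rule is fixed
sudo systemctl start apparmor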

Actions #13

Updated by JERiveraMoya about 6 years ago

I have checked the verification run provided by @SLindoMansilla and it looks as expected: http://copland.arch.suse.de/tests/1009#step/partitioning_lvm/2
For x86_64 I have found some conflicting cases with the same parent test (both publishing the same image):

https://openqa.suse.de/tests/1536096#settings
PUBLISH_HDD_1 sle-12-SP4-Server-DVD-x86_64-gnome-encrypted.qcow2

https://openqa.suse.de/tests/1536149
PUBLISH_HDD_1 sle-12-SP4-Server-DVD-x86_64-gnome-encrypted.qcow2

Taking a look at aarch64 tests with possibly conflicting images, this record was modified (https://openqa.suse.de/admin/auditlog?eventid=1033773), so the .qcow parent image from 9 days ago could have had some conflict that I cannot see now. Checking with @riafarov.
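A hedged way to compare the publish settings of the two x86_64 jobs above via the openQA REST API (the field names of the /api/v1/jobs/<id> response are assumed here):

for id in 1536096 1536149; do
  curl -s "https://openqa.suse.de/api/v1/jobs/$id" | jq -r '.job.settings.PUBLISH_HDD_1'
done
# both jobs publishing the same PUBLISH_HDD_1 asset means whichever finishes
# last silently overwrites the image the other one published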

Actions #14

Updated by JERiveraMoya about 6 years ago

  • Status changed from Feedback to Resolved

After discussion with @riafarov, we have decided to restart the job on osd and, as expected (because using the same parent .qcow image we obtained the same result on the local aarch64 worker), the test is now passing: https://openqa.suse.de/tests/1562126#step/partitioning_lvm/1
This kind of failure is useful for statistics, but the root cause, or which job overwrote the parent image (if that is what really happened), remains very unclear.
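For reference, the restart itself can be triggered from the web UI or, with an API key configured, via the API client; the syntax below is the usual openqa-client form and the job id is a placeholder:

# restart a job on osd via the REST API
openqa-client --host https://openqa.suse.de jobs/<job-id>/restart post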

Actions #15

Updated by JERiveraMoya about 6 years ago

  • Status changed from Resolved to Feedback

Keeping it open until the end of the sprint for statistics, and to check whether it happens again on a new build.

Actions #16

Updated by riafarov about 6 years ago

  • Status changed from Feedback to Resolved

As per discussion, issue not replicated, hence resolving.
