action #34888

[functional][sle][u][s390x][zvm] Make sure s390x DASD devices used for testing represent our expected default ~20GB

Added by okurz about 6 years ago. Updated almost 6 years ago.

Status: Resolved
Priority: High
Assignee: -
Category: Bugs in existing tests
Target version: SUSE QA - Milestone 17
Start date: 2018-04-13
Due date: 2018-07-03
% Done: 0%
Estimated time: -
Difficulty: -

Description

Acceptance criteria

  • AC1: all default@s390x-zvm scenarios enable snapshots by default in the installer because the available HDD size is at least 16GB (see the verification sketch below)
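
For a manual cross-check of AC1 one can confirm on an installed z/VM system that the root disk is big enough and that the installer actually enabled btrfs snapshots. A minimal sketch, assuming the default SLE btrfs proposal is in use and that /dev/dasda is the root DASD (both are assumptions, not taken from this ticket):

# the root DASD should report at least 16GB (-b prints the size in bytes)
lsblk -b -d -o NAME,SIZE /dev/dasda
# a snapper config for the root filesystem indicates snapshots were enabled
snapper list-configs
btrfs subvolume list / | grep .snapshots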

Suggestions

  • As a workaround for today, mask the s390x z/VM workers with small DASDs via systemctl
  • Enlarge the DASD devices or, if that is not possible, find a machine definition that makes it explicit which machines have a bigger HDD size and which have a smaller one, and use them accordingly in scenarios (a sketch for identifying DASD sizes follows below)
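
To tell which workers sit on small (Model 9) and which on big (Model 30) DASDs, the s390-tools utilities can be used on each machine. A minimal sketch, assuming s390-tools is installed and the DASD in question is online as /dev/dasda (the device name is an assumption):

# list all online DASDs with their bus IDs, device names and sizes
lsdasd
# print detailed information (geometry, number of cylinders) for one device
dasdview -i /dev/dasda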

Related issues: 1 (0 open, 1 closed)

Related to openQA Tests - coordination #34858: [functional][sle][opensuse][y][epic] Ensure HDD sizes are consistent and have a reasonable relation to product specifications (Resolved, riafarov, 2018-03-19 to 2019-06-18)

Actions #1

Updated by mgriessmeier about 6 years ago

  • Status changed from New to Feedback

So I've stopped and masked all s390x workers that have a Model 9 DASD (~8GB).

For now we have only two z/VM workers running, both with Model 30 DASDs (~23GB):

mgriessmeier@openqaworker2:~> sudo systemctl mask openqa-worker@{3,4,5,6}
Created symlink from /etc/systemd/system/openqa-worker@3.service to /dev/null.
Created symlink from /etc/systemd/system/openqa-worker@4.service to /dev/null.
Created symlink from /etc/systemd/system/openqa-worker@5.service to /dev/null.
Created symlink from /etc/systemd/system/openqa-worker@6.service to /dev/null.
mgriessmeier@openqaworker2:~> sudo systemctl stop openqa-worker@{3,4,5,6}
Actions #2

Updated by mgriessmeier about 6 years ago

  • Status changed from Feedback to Blocked
  • Priority changed from Immediate to High

Tracked in infra ticket https://infra.nue.suse.com/Ticket/Display.html?id=110828&results=f0c23270f3cb13a038ec287c3b2b5f96
Setting this ticket to blocked on it and lowering the priority, as the workaround is in place.

Actions #3

Updated by riafarov about 6 years ago

  • Parent task deleted (#34858)

Removing parent to change priorities.

Actions #4

Updated by riafarov about 6 years ago

  • Related to coordination #34858: [functional][sle][opensuse][y][epic] Ensure HDD sizes are consistent and have a reasonable relation to product specifications added
Actions #5

Updated by mgriessmeier about 6 years ago

  • Due date changed from 2018-04-24 to 2018-05-08
  • Target version changed from Milestone 15 to Milestone 16

Still blocked by the infra ticket, moving to the next sprint.

Actions #6

Updated by mgriessmeier about 6 years ago

  • Subject changed from [functional][sle][u][s390x][zvm][fast] Make sure s390x DASD devices used for testing represent our expected default ~20GB to [functional][sle][u][s390x][zvm] Make sure s390x DASD devices used for testing represent our expected default ~20GB
Actions #7

Updated by mgriessmeier about 6 years ago

  • Due date changed from 2018-05-08 to 2018-05-22

No update from the infra side, moving the due date.

Actions #8

Updated by mgriessmeier almost 6 years ago

mgriessmeier wrote:

No update from the infra side, moving the due date.

I've approached Wolfgang again... no answer yet; hopefully I'll get one today.

Actions #9

Updated by mgriessmeier almost 6 years ago

  • Due date changed from 2018-05-22 to 2018-06-05
Actions #10

Updated by mgriessmeier almost 6 years ago

  • Due date changed from 2018-06-05 to 2018-06-19
  • Target version changed from Milestone 16 to Milestone 17

Three issues here:

  • Wolfgang is still on vacation/paternity leave
  • SUSE IT has an offsite coming up in the next two weeks (I don't know the exact dates)
  • Not sure if anyone knows how to do it; maybe we just need to create new disks
Actions #11

Updated by okurz almost 6 years ago

  • Target version changed from Milestone 17 to Milestone 17
Actions #12

Updated by mgriessmeier almost 6 years ago

Wolfgang is back from vacation and Gerhard assured me that they will finish this within the week.

Actions #13

Updated by mgriessmeier almost 6 years ago

  • Due date changed from 2018-06-19 to 2018-07-03
Actions #14

Updated by mgriessmeier almost 6 years ago

  • Status changed from Blocked to Feedback

The DASDs were resized; I've verified that.
Let's enable the workers again: https://gitlab.suse.de/openqa/salt-pillars-openqa/merge_requests/95
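
Note: the merge request re-enables the worker instances via salt; the manual equivalent on the worker host would be the inverse of the workaround from comment #1 (the instance numbers {3,4,5,6} are taken from that comment):

sudo systemctl unmask openqa-worker@{3,4,5,6}
sudo systemctl start openqa-worker@{3,4,5,6}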

Actions #15

Updated by mgriessmeier almost 6 years ago

  • Status changed from Feedback to Resolved

PR merged

Actions #16

Updated by okurz almost 6 years ago

Shouldn't we crosscheck that it actually works in production (see ACs) and also that the corresponding workers are actually used?

Actions #17

Updated by mgriessmeier almost 6 years ago

okurz wrote:

Shouldn't we crosscheck that it actually works in production (see ACs) and also that the corresponding workers are actually used?

yeah... I would have reopened it otherwise =)

6 z/VM jobs are running in parallel.
All jobs have a root partition of around 20GB; two workers have ~2GB more due to the old storage system (an API-based cross-check follows after the links):
https://openqa.suse.de/tests/1785947#step/partitioning/1
https://openqa.suse.de/tests/1785926#step/partitioning/1
https://openqa.suse.de/tests/1785945#step/partitioning/1
https://openqa.suse.de/tests/1785938#step/partitioning/1
https://openqa.suse.de/tests/1785932#step/partitioning/1
https://openqa.suse.de/tests/1785951#step/partitioning/1
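
The same information can also be pulled from the openQA REST API instead of clicking through each job, e.g. to see which worker executed a job. A sketch, assuming anonymous read access to the API and jq being installed:

# query one of the verification jobs and print the worker it ran on
curl -s https://openqa.suse.de/api/v1/jobs/1785947 | jq '.job.assigned_worker_id'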
