action #34888 (closed)
[functional][sle][u][s390x][zvm] Make sure s390x DASD devices used for testing represent our expected default ~20GB
Description
Acceptance criteria
- AC1: All default@s390x-zvm scenarios enable snapshots by default in the installer because the available HDD size is at least 16GB
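For reference, a quick way to cross-check the usable disk size on a given z/VM guest (a minimal sketch, assuming the DASD shows up as /dev/dasda; the device name can differ per guest):

# print the disk size in bytes; at least 17179869184 (16 GiB) means
# the installer should propose snapshots by default
lsblk --bytes --nodeps --output NAME,SIZE /dev/dasda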
Suggestions
- As a workaround for today, systemctl mask the s390x-zvm workers with small DASDs
- Enlarge the DASD devices OR, if that is not possible, find a definition of machines that makes it explicit which machines have the bigger HDDs and which have the smaller ones, and use them accordingly in scenarios (see the sketch below)
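If we go for the second suggestion, the disk size could be encoded in the worker class (a hypothetical sketch: the class names s390x-zvm-large and s390x-zvm-small are made up for illustration, only the WORKER_CLASS mechanism itself is standard openQA):

# /etc/openqa/workers.ini on a worker backed by a ~20GB DASD
[1]
WORKER_CLASS = s390x-zvm-large

# /etc/openqa/workers.ini on a worker backed by a ~8GB DASD
[1]
WORKER_CLASS = s390x-zvm-small

Scenarios that need snapshots would then request the large class via their machine definition, while the rest could run anywhere.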
Updated by mgriessmeier about 6 years ago
- Status changed from New to Feedback
So I've stopped and masked all s390x workers that have a Model 9 DASD (~8GB).
For now we only have two z/VM workers running, which have Model 30 DASDs (~23GB):
mgriessmeier@openqaworker2:~> sudo systemctl mask openqa-worker@{3,4,5,6}
Created symlink from /etc/systemd/system/openqa-worker@3.service to /dev/null.
Created symlink from /etc/systemd/system/openqa-worker@4.service to /dev/null.
Created symlink from /etc/systemd/system/openqa-worker@5.service to /dev/null.
Created symlink from /etc/systemd/system/openqa-worker@6.service to /dev/null.
mgriessmeier@openqaworker2:~> sudo systemctl stop openqa-worker@{3,4,5,6}
Updated by mgriessmeier about 6 years ago
- Status changed from Feedback to Blocked
- Priority changed from Immediate to High
Tracked in infra ticket https://infra.nue.suse.com/Ticket/Display.html?id=110828&results=f0c23270f3cb13a038ec287c3b2b5f96
Setting to blocked by it and lowering the priority as the workaround is in place.
Updated by riafarov about 6 years ago
- Parent task deleted (#34858)
Removing parent to change priorities.
Updated by riafarov about 6 years ago
- Related to coordination #34858: [functional][sle][opensuse][y][epic] Ensure HDD sizes are consistent and have a reasonable relation to product specifications added
Updated by mgriessmeier about 6 years ago
- Due date changed from 2018-04-24 to 2018-05-08
- Target version changed from Milestone 15 to Milestone 16
Still blocked by the infra ticket, moving to the next sprint.
Updated by mgriessmeier about 6 years ago
- Subject changed from [functional][sle][u][s390x][zvm][fast] Make sure s390x DASD devices used for testing represent our expected default ~20GB to [functional][sle][u][s390x][zvm] Make sure s390x DASD devices used for testing represent our expected default ~20GB
Updated by mgriessmeier about 6 years ago
- Due date changed from 2018-05-08 to 2018-05-22
No update from the infra side, moving.
Updated by mgriessmeier almost 6 years ago
mgriessmeier wrote:
No update from the infra side, moving.
I've approached Wolfgang again... no answer yet; hopefully I'll get one today.
Updated by mgriessmeier almost 6 years ago
- Due date changed from 2018-05-22 to 2018-06-05
Updated by mgriessmeier almost 6 years ago
- Due date changed from 2018-06-05 to 2018-06-19
- Target version changed from Milestone 16 to Milestone 17
Three issues here:
- Wolfgang is still on vacation/paternity leave
- SUSE IT has an offsite coming up in the next two weeks (I don't know the exact date)
- Not sure if anyone knows how to do it; maybe we just need to create new disks
Updated by okurz almost 6 years ago
- Target version changed from Milestone 17 to Milestone 17
Updated by mgriessmeier almost 6 years ago
Wolfgang is back from vacation and Gerhard assured me that they will finish this within the week.
Updated by mgriessmeier almost 6 years ago
- Due date changed from 2018-06-19 to 2018-07-03
Updated by mgriessmeier almost 6 years ago
- Status changed from Blocked to Feedback
The DASDs were resized; I've verified that.
Let's enable the workers again: https://gitlab.suse.de/openqa/salt-pillars-openqa/merge_requests/95
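On the workers themselves, re-enabling is just the inverse of the workaround from above (a minimal sketch; on our salt-managed workers the merge request is the authoritative way to do it):

sudo systemctl unmask openqa-worker@{3,4,5,6}
sudo systemctl start openqa-worker@{3,4,5,6}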
Updated by okurz almost 6 years ago
Shouldn't we cross-check that it actually works in production (see ACs) and also that the corresponding workers are actually used?
Updated by mgriessmeier almost 6 years ago
okurz wrote:
Shouldn't we cross-check that it actually works in production (see ACs) and also that the corresponding workers are actually used?
yeah... I would have reopened it otherwise =)
Six z/VM jobs are running in parallel; all of them have a root partition of around 20GB. Two workers have ~2GB more due to the old storage system:
https://openqa.suse.de/tests/1785947#step/partitioning/1
https://openqa.suse.de/tests/1785926#step/partitioning/1
https://openqa.suse.de/tests/1785945#step/partitioning/1
https://openqa.suse.de/tests/1785938#step/partitioning/1
https://openqa.suse.de/tests/1785932#step/partitioning/1
https://openqa.suse.de/tests/1785951#step/partitioning/1
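For the record, the same thing can also be cross-checked manually on a booted SUT (a sketch; the partitioning screenshots linked above are the actual verification used here):

# show size and mount point of the block devices; the root
# partition should come out at roughly 20GB
lsblk --output NAME,SIZE,MOUNTPOINT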