action #34888
closed
[functional][sle][u][s390x][zvm] Make sure s390x DASD devices used for testing represent our expected default ~20GB
Added by okurz over 6 years ago.
Updated about 6 years ago.
Category:
Bugs in existing tests
Target version:
SUSE QA - Milestone 17
Description
Acceptance criteria
- AC1: all default@s390x-zvm scenarios enable snapshots by default in the installer because the HDD size available is at least 16GB
Suggestions
- As a workaround for today: "systemctl mask" the s390x-zvm workers with small DASDs
- Enlarge the DASD devices, OR, if that is not possible, find a machine definition that makes it explicit which machines have a bigger HDD and which have a smaller one, and use them accordingly in scenarios
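The masking decision in the first suggestion boils down to a simple size check. A minimal sketch of that check, assuming a ~16 GiB threshold (per AC1) and feeding it the two rough sizes mentioned later in this ticket; device names, threshold, and example sizes are assumptions, not exact values from the ticket:

```shell
#!/bin/sh
# Sketch only: flag sizes below the ~16 GiB the installer needs to
# propose snapshots by default (assumed threshold).
THRESHOLD=$((16 * 1024 * 1024 * 1024))   # 16 GiB in bytes

# returns 0 (true) if the given size in bytes meets the threshold
dasd_big_enough() {
    [ "$1" -ge "$THRESHOLD" ]
}

# On a real worker the sizes would come from e.g.
#   lsblk -b -d -n -o NAME,SIZE /dev/dasd*
# Here we just check the two rough sizes mentioned in this ticket.
for size in 8000000000 23000000000; do   # ~8 GB Model 9, ~23 GB Model 30
    if dasd_big_enough "$size"; then
        echo "$size bytes: ok for snapshots"
    else
        echo "$size bytes: too small, mask this worker"
    fi
done
```

Workers whose DASDs fail the check would then be candidates for the systemctl mask workaround below.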
- Status changed from New to Feedback
So I've stopped and masked all s390x workers which have a Model 9 DASD (~8GB).
For now we have only two z/VM workers running, which have Model 30 DASDs (~23GB):
mgriessmeier@openqaworker2:~> sudo systemctl mask openqa-worker@{3,4,5,6}
Created symlink from /etc/systemd/system/openqa-worker@3.service to /dev/null.
Created symlink from /etc/systemd/system/openqa-worker@4.service to /dev/null.
Created symlink from /etc/systemd/system/openqa-worker@5.service to /dev/null.
Created symlink from /etc/systemd/system/openqa-worker@6.service to /dev/null.
mgriessmeier@openqaworker2:~> sudo systemctl stop openqa-worker@{3,4,5,6}
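The quoted sizes can be roughly sanity-checked from standard ECKD 3390 geometry. The cylinder counts below are the usual 3390 model figures and an assumption about the actual minidisks here; the net usable capacity (4 KiB blocks) comes out somewhat below the quoted gross ~8 GB / ~23 GB:

```shell
# Rough capacity arithmetic for ECKD 3390 DASDs (assumed geometry):
# 15 tracks/cylinder, 12 usable 4 KiB blocks/track after "dasdfmt -b 4096".
bytes_per_cyl=$((15 * 12 * 4096))             # 737280 bytes

model9_cyl=10017                               # standard 3390 Model 9
echo "Model 9:  ~$((model9_cyl * bytes_per_cyl / 1000000000)) GB usable"

model27_cyl=30051                              # 3390 Model 27, ~"Model 30" class
echo "Model 27: ~$((model27_cyl * bytes_per_cyl / 1000000000)) GB usable"
```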
- Status changed from Feedback to Blocked
- Priority changed from Immediate to High
- Parent task deleted (#34858)
Removing parent to change priorities.
- Related to coordination #34858: [functional][sle][opensuse][y][epic] Ensure HDD sizes are consistent and have a reasonable relation to product specifications added
- Due date changed from 2018-04-24 to 2018-05-08
- Target version changed from Milestone 15 to Milestone 16
- Subject changed from [functional][sle][u][s390x][zvm][fast] Make sure s390x DASD devices used for testing represent our expected default ~20GB to [functional][sle][u][s390x][zvm] Make sure s390x DASD devices used for testing represent our expected default ~20GB
- Due date changed from 2018-05-08 to 2018-05-22
no update from infra side - moving
mgriessmeier wrote:
no update from infra side - moving
I've approached wolfgang again... no answer yet, hopefully will get some today
- Due date changed from 2018-05-22 to 2018-06-05
- Due date changed from 2018-06-05 to 2018-06-19
- Target version changed from Milestone 16 to Milestone 17
Three issues here:
- Wolfgang is still on vacation/paternity leave
- SUSE IT has an offsite coming up in the next two weeks (exact date unknown)
- Not sure anyone knows how to do it; maybe we just need to create new disks
- Target version changed from Milestone 17 to Milestone 17
Wolfgang is back from vacation, and Gerhard assured me that they will finish this within the week.
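Once the DASDs are enlarged, the earlier workaround would need to be reverted. A hypothetical sketch, printed as a dry run so it can be reviewed first; drop the "echo" to actually execute (needs root):

```shell
# Hypothetical revert of the workaround on openqaworker2: unmask and
# restart the stopped worker instances 3-6.
for i in 3 4 5 6; do
    echo sudo systemctl unmask "openqa-worker@$i"
    echo sudo systemctl start "openqa-worker@$i"
done
```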
- Due date changed from 2018-06-19 to 2018-07-03
- Status changed from Blocked to Feedback
- Status changed from Feedback to Resolved
Shouldn't we cross-check that it actually works in production (see ACs) and that the corresponding workers are actually used?