action #109969 (closed): s390zp19 - out of disk space

Added by mgriessmeier about 2 years ago. Updated about 2 years ago.

Status: Resolved
Priority: High
Assignee:
Category: -
Target version:
Start date: 2022-04-14
Due date:
% Done: 0%
Estimated time:

Description

Observation

https://app.slack.com/client/T02863RC2AC/C02CANHLANP/thread/C02CANHLANP-1649923742.408719

s390x LPAR s390zp19 ran out of disk space.

Actions taken:

  • checked whether the cleanup script /usr/local/bin/cleanup-openqa-assets >/dev/null works as intended -> [DONE]
  • checked why the cronjob was not running -> [DONE]
    • observed multiple warnings and reports regarding dangling references to old glibc versions.
    • tried to update the system via zypper, which crashed the machine and left it booting into a kernel panic
  • started re-installation of the machine via zhmc
  • Steps to configure the LPAR:
change hostname: /etc/hostname -> s390zp19
install libvirt: zypper in libvirt
systemctl start multipathd
systemctl enable multipathd
cio_ignore -r fa00
cio_ignore -r fc00
/usr/bin/rescan-scsi-bus.sh
zfcp_host_configure fa00 1
zfcp_host_configure fc00 1
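
To verify that both FCP channels and the attached LUNs actually came online before continuing, something like the following can be used (a quick sketch, assuming s390-tools and lsscsi are installed; not necessarily the exact checks that were run):

lszfcp -H          # should list hosts for 0.0.fa00 and 0.0.fc00
lszfcp -D          # zfcp LUNs attached to those hosts
lsscsi             # SCSI disks as seen by the kernel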

run multipath -ll to check whether the multipath device was configured
mkdir -p /var/lib/openqa/share/factory
mkdir -p /var/lib/libvirt/images
fdisk /dev/mapper/...
n -> p -> ... -> w   (new primary partition, accept the defaults, write)
mkfs.ext4 /dev/mapper/...-part1
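
The interactive fdisk run can also be scripted; a rough equivalent with parted and kpartx is sketched below (illustration only, not necessarily what was run; <WWID> stands for the multipath device name, e.g. the one used in the fstab entry below):

parted -s /dev/mapper/<WWID> mklabel msdos                 # new partition table
parted -s /dev/mapper/<WWID> mkpart primary ext4 0% 100%   # single partition over the whole disk
kpartx -a /dev/mapper/<WWID>                               # create the -part1 mapping
mkfs.ext4 /dev/mapper/<WWID>-part1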

modify /etc/fstab:
# libvirt images
/dev/mapper/36005076307ffd3b30000000000000149-part1 /var/lib/libvirt/images ext4 nobarrier,data=writeback 1 0

# openqa nfs
openqa.suse.de:/var/lib/openqa/share/factory /var/lib/openqa/share/factory nfs ro 0 0
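
After editing /etc/fstab the mounts can be activated and checked without a reboot (standard util-linux tooling assumed):

mount -a                                 # mount everything listed in fstab
findmnt /var/lib/libvirt/images          # should show the new ext4 partition
findmnt /var/lib/openqa/share/factory    # should show the read-only NFS share
df -h /var/lib/libvirt/images            # confirm the available space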

copy cleanup script from e.g. s390zp18 /usr/local/bin/cleanup-openqa-assets
crontab -e:
0 */1 * * * /usr/local/bin/cleanup-openqa-assets >/dev/null
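
The content of the cleanup script is not part of this ticket (it is copied from s390zp18); purely as an illustration, such a script typically prunes stale images/assets, along the lines of the hypothetical sketch below (retention period and paths are assumptions):

#!/bin/sh
# hypothetical sketch - the real /usr/local/bin/cleanup-openqa-assets may differ
# remove libvirt images that have not been touched for more than 10 days
find /var/lib/libvirt/images -type f -mtime +10 -delete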

Related issues: 1 (0 open, 1 closed)

Related to openQA Infrastructure - action #51836: Manage (parts) of s390 kvm instances (formerly s390p7 and s390p8) with salt (Resolved, okurz, 2019-05-22)

Actions #1

Updated by mgriessmeier about 2 years ago

  • Description updated (diff)
Actions #2

Updated by okurz about 2 years ago

  • Target version set to Ready

@mgriessmeier why did you assign this to nicksinger? That s390x instance is still outside the scope of SUSE QE Tools, right? For now I am adding the ticket to our backlog with "Ready" since you assigned nicksinger, who is part of the SUSE QE Tools team, but I would prefer that this be handled outside the team, e.g. by QE Core or by you.

Actions #3

Updated by mgriessmeier about 2 years ago

  • Description updated (diff)
  • Target version deleted (Ready)
Actions #4

Updated by mgriessmeier about 2 years ago

okurz wrote:

@mgriessmeier why did you assign this to nicksinger? That s390x instance is still outside the scope of SUSE QE Tools, right? For now I am adding the ticket to our backlog with "Ready" since you assigned nicksinger, who is part of the SUSE QE Tools team, but I would prefer that this be handled outside the team, e.g. by QE Core or by you.

because @nicksinger is working with me on this right now - and that should be reflected properly :)
he helped me debug it while we observed major issues

Actions #5

Updated by mgriessmeier about 2 years ago

  • Description updated (diff)
Actions #6

Updated by okurz about 2 years ago

  • Status changed from New to In Progress
  • Target version set to Ready

ok, so then it's part of the backlog, fine.

Actions #7

Updated by openqa_review about 2 years ago

  • Due date set to 2022-05-02

Setting due date based on mean cycle time of SUSE QE Tools

Actions #8

Updated by okurz about 2 years ago

  • Related to action #51836: Manage (parts) of s390 kvm instances (formerly s390p7 and s390p8) with salt added
Actions #9

Updated by okurz about 2 years ago

  • Due date deleted (2022-05-02)
  • Status changed from In Progress to Resolved

The original problem was fixed. #51836 is the related ticket about managing these machines properly, which we consider out of scope for SUSE QE Tools. I strongly suggest that QE Core look into #51836 to prevent further problems.
