action #101373
worker openqa-aarch64 fails on cache
Status: closed
Description
Worker openqa-aarch64 fails on the cache service. systemctl status openqa-worker-cacheservice-minion.service shows:
● openqa-worker-cacheservice-minion.service - OpenQA Worker Cache Service Minion
Loaded: loaded (/usr/lib/systemd/system/openqa-worker-cacheservice-minion.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-10-22 14:29:48 CEST; 39min ago
Main PID: 9903 (openqa-workerca)
Tasks: 1
CGroup: /system.slice/openqa-worker-cacheservice-minion.service
└─9903 /usr/bin/perl /usr/share/openqa/script/openqa-workercache run -m production --reset-locks
oct. 22 15:09:30 openqa-aarch64 openqa-worker-cacheservice-minion[9903]: [9903] [i] Purging "/var/lib/openqa/cache/openqa1-opensuse/Tumbleweed.aarch64-1.0-libvirt_aarch64-Snapshot20211021.vagrant.libvirt.box" because it appears pending after service startup
oct. 22 15:09:30 openqa-aarch64 openqa-worker-cacheservice-minion[9903]: [9903] [i] Cache size of "/var/lib/openqa/cache" is 10GiB, with limit 160GiB
oct. 22 15:09:30 openqa-aarch64 openqa-worker-cacheservice-minion[9903]: [9903] [i] Resetting all leftover locks after restart
oct. 22 15:09:30 openqa-aarch64 openqa-worker-cacheservice-minion[9903]: [9903] [i] Worker 9903 started
oct. 22 14:29:48 openqa-aarch64 systemd[1]: openqa-worker-cacheservice-minion.service: Failed with result 'exit-code'.
oct. 22 15:09:30 openqa-aarch64 openqa-worker-cacheservice-minion[9903]: DBD::SQLite::db do failed: database is locked at /usr/share/openqa/script/../lib/OpenQA/CacheService.pm line 83.
oct. 22 14:29:48 openqa-aarch64 systemd[1]: openqa-worker-cacheservice-minion.service: Service RestartSec=100ms expired, scheduling restart.
oct. 22 15:09:30 openqa-aarch64 openqa-worker-cacheservice-minion[9903]: DBD::SQLite::db do failed: database is locked at /usr/share/openqa/script/../lib/OpenQA/CacheService.pm line 83.
oct. 22 14:29:48 openqa-aarch64 systemd[1]: Stopped OpenQA Worker Cache Service Minion.
oct. 22 14:29:48 openqa-aarch64 systemd[1]: Started OpenQA Worker Cache Service Minion.
systemctl status openqa-worker-cacheservice returns:
● openqa-worker-cacheservice.service - OpenQA Worker Cache Service
Loaded: loaded (/usr/lib/systemd/system/openqa-worker-cacheservice.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2021-10-22 03:34:26 CEST; 11h ago
Main PID: 2871 (openqa-workerca)
Tasks: 5
CGroup: /system.slice/openqa-worker-cacheservice.service
├─ 2871 /usr/bin/perl /usr/share/openqa/script/openqa-workercache prefork -m production -i 100 -H 400 -w 4 -G 80
├─ 4149 /usr/bin/perl /usr/share/openqa/script/openqa-workercache prefork -m production -i 100 -H 400 -w 4 -G 80
├─ 9376 /usr/bin/perl /usr/share/openqa/script/openqa-workercache prefork -m production -i 100 -H 400 -w 4 -G 80
├─ 9819 /usr/bin/perl /usr/share/openqa/script/openqa-workercache prefork -m production -i 100 -H 400 -w 4 -G 80
└─12829 /usr/bin/perl /usr/share/openqa/script/openqa-workercache prefork -m production -i 100 -H 400 -w 4 -G 80
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [7909] [e] [dWll7_IGGkib] DBD::SQLite::db do failed: database is locked at /usr/share/openqa/script/../lib/OpenQA/CacheService.pm line 83.
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [7909] [e] [NfGgABn2_ds8] DBD::SQLite::db do failed: database is locked at /usr/share/openqa/script/../lib/OpenQA/CacheService.pm line 83.
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [7909] [e] [Q2WvmyLMqkG2] DBD::SQLite::db do failed: database is locked at /usr/share/openqa/script/../lib/OpenQA/CacheService.pm line 83.
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [7909] [e] [PbIxplqBzJFp] DBD::SQLite::db do failed: database is locked at /usr/share/openqa/script/../lib/OpenQA/CacheService.pm line 83.
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [7909] [e] [pnjrAJGY31Sm] DBD::SQLite::db do failed: database is locked at /usr/share/openqa/script/../lib/OpenQA/CacheService.pm line 83.
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [7909] [e] [OLuNhy-wmRTP] DBD::SQLite::st execute failed: database is locked at /usr/lib/perl5/vendor_perl/5.26.1/Minion/Backend/SQLite.pm line 138.
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [7909] [e] [4mdwMlirG86I] DBD::SQLite::st execute failed: database is locked at /usr/lib/perl5/vendor_perl/5.26.1/Minion/Backend/SQLite.pm line 138.
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [2871] [i] Worker 7909 stopped
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [2871] [e] Worker 4845 has no heartbeat (400 seconds), restarting
oct. 22 15:09:30 openqa-aarch64 openqa-workercache-daemon[2871]: [2871] [i] Stopping worker 4845 gracefully (80 seconds)
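For context, the cache service keeps its Minion job queue in SQLite (note the Minion/Backend/SQLite.pm frames above), and SQLite reports "database is locked" when a writer cannot obtain the lock within its busy timeout; a disk that stalls on sync can easily push lock hold times past that limit. A minimal sketch that reproduces the message with the sqlite3 CLI on a scratch database (a generic illustration, not the openQA cache database):
# Terminal 1: take the write lock on a scratch database and keep it
sqlite3 /tmp/demo.sqlite
sqlite> BEGIN IMMEDIATE;
-- leave the transaction open
# Terminal 2: a concurrent write waits for busy_timeout, then gives up
sqlite3 /tmp/demo.sqlite 'PRAGMA busy_timeout=1000; CREATE TABLE t (x INTEGER);'
# => Error: database is locked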
Updated by ggardet_arm about 3 years ago
- Related to action #101033: openqaworker13: Too many Minion job failures alert - sqlite failed: database is locked size:M added
Updated by ggardet_arm about 3 years ago
ggardet_arm wrote:
Trying a reboot now.
It did not help; the same messages are shown, and the system seems to be slower than usual.
Updated by favogt about 3 years ago
The system is unusably slow; it seems to be some disk issue. Getting backtraces of slow tasks shows them busy in disk-related calls, like:
[<0>] __switch_to+0x10c/0x170
[<0>] wb_wait_for_completion+0xa0/0xd0
[<0>] sync_inodes_sb+0xc0/0x35c
[<0>] sync_inodes_one_sb+0x28/0x34
[<0>] iterate_supers+0xcc/0x1e0
[<0>] ksys_sync+0x54/0xd0
[<0>] __arm64_sys_sync+0x1c/0x30
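Such kernel backtraces can be read directly from procfs; a sketch, with <pid> standing for whichever task is stuck:
# List tasks in uninterruptible sleep (state D), typically blocked on I/O
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /D/'
# Dump the kernel stack of one of them
cat /proc/<pid>/stack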
smartctl -a /dev/sda does not show any suspicious values.
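To probe the drive further, a short SMART self-test could be triggered and its result checked afterwards (standard smartctl usage, not something recorded in this comment):
smartctl -t short /dev/sda
# wait for the announced polling time, then:
smartctl -l selftest /dev/sda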
Updated by favogt about 3 years ago
I did sg_reset -d and sg_reset -t on the drive, but that didn't help either.
Updated by ggardet_arm about 3 years ago
- Priority changed from High to Urgent
A few jobs manage to start but fail because of the slowness, such as https://openqa.opensuse.org/tests/1989720#step/await_install/135
Updated by ggardet_arm about 3 years ago
Indeed, the disk is slow. dmesg has several lines related to storage issues:
[15811.245594] systemd-journald[4144]: Received SIGTERM from PID 1 (systemd).
[22731.129473] systemd-journald[13090]: File /var/log/journal/a2f1b12461b3453499a68511ad59637a/system.journal corrupted or uncleanly shut down, renaming and replacing.
sar also reports I/O waits even though most workers are in a broken state and thus unused:
03:34:26 LINUX RESTART (64 CPU)
03:40:01 CPU %user %nice %system %iowait %steal %idle
03:50:01 all 0,01 0,00 0,03 5,24 0,00 94,72
04:00:01 all 0,01 0,00 0,04 4,97 0,00 94,98
04:10:01 all 0,01 0,00 0,03 4,88 0,00 95,08
04:20:01 all 0,00 0,00 0,03 4,95 0,00 95,02
04:30:01 all 0,08 0,00 0,06 4,50 0,00 95,36
04:40:01 all 0,04 0,00 0,03 5,07 0,00 94,86
04:50:01 all 0,01 0,00 0,03 6,79 0,00 93,17
05:00:01 all 0,02 0,00 0,03 5,88 0,00 94,06
05:10:01 all 0,01 0,00 0,03 5,54 0,00 94,42
05:20:01 all 0,00 0,00 0,02 5,29 0,00 94,68
05:30:18 all 0,01 0,00 0,03 5,04 0,00 94,93
05:40:01 all 0,01 0,00 0,03 5,38 0,00 94,59
05:50:01 all 0,02 0,00 0,03 5,34 0,00 94,62
06:00:01 all 0,41 0,00 0,21 5,76 0,00 93,62
06:10:01 all 0,02 0,00 0,04 6,40 0,00 93,54
06:20:01 all 0,03 0,00 0,04 4,93 0,00 95,01
06:30:01 all 0,01 0,00 0,03 4,94 0,00 95,03
06:40:01 all 0,03 0,00 0,03 5,14 0,00 94,79
06:50:01 all 0,01 0,00 0,03 5,04 0,00 94,92
07:00:01 all 0,01 0,00 0,03 4,94 0,00 95,02
07:10:01 all 0,01 0,00 0,03 5,18 0,00 94,79
07:20:06 all 0,01 0,00 0,02 4,74 0,00 95,23
07:30:01 all 0,02 0,00 0,03 4,95 0,00 95,00
07:40:01 all 0,01 0,00 0,03 4,85 0,00 95,10
07:50:01 all 0,05 0,00 0,04 6,74 0,00 93,17
08:00:01 all 0,33 0,02 0,13 6,16 0,00 93,36
08:10:01 all 1,09 0,07 0,15 6,80 0,00 91,89
08:20:09 all 0,32 0,13 0,13 4,32 0,00 95,10
08:30:01 all 5,45 0,38 0,73 5,64 0,00 87,80
08:40:06 all 4,40 0,45 0,56 6,84 0,00 87,74
08:50:01 all 3,60 0,34 0,63 10,38 0,00 85,05
09:00:09 all 1,01 0,03 0,16 11,06 0,00 87,73
09:10:01 all 0,91 0,03 0,14 11,10 0,00 87,82
09:20:01 all 1,09 0,04 0,17 10,92 0,00 87,79
09:30:01 all 0,95 0,04 0,15 10,61 0,00 88,26
09:40:16 all 1,30 0,05 0,20 10,54 0,00 87,91
09:50:17 all 1,20 0,04 0,17 10,18 0,00 88,41
10:00:01 all 1,31 0,05 0,18 10,56 0,00 87,89
10:10:01 all 1,19 0,05 0,18 11,33 0,00 87,25
10:20:01 all 0,78 0,02 0,13 11,50 0,00 87,58
10:30:01 all 1,11 0,04 0,19 11,05 0,00 87,61
10:40:01 all 0,91 0,03 0,16 11,57 0,00 87,33
10:50:01 all 1,20 0,03 0,15 10,72 0,00 87,90
11:00:01 all 1,46 0,04 0,19 10,79 0,00 87,53
11:10:01 all 2,44 0,09 0,27 7,81 0,00 89,39
11:20:01 all 1,46 0,00 0,10 3,85 0,00 94,59
11:30:01 all 0,15 0,00 0,09 3,75 0,00 96,00
Average:   all     0,73     0,04     0,13     7,02     0,00    92,08
Updated by mkittler about 3 years ago
Looks like the problem started around 2021-10-19. There are many jobs with reason timeout: setup exceeded MAX_SETUP_TIME and cache failure: Cache service queue already full (5), and only very few jobs which actually passed; see:
select id, t_finished, result, reason from jobs where (select host from workers where id = assigned_worker_id) = 'openqa-aarch64' and t_finished >= '2021-10-19T00:00:00' order by t_finished;
On other workers cache failure: Cache service queue already full (5) also shows up occasionally, but far less often, and most of the jobs actually have a chance to run. So it looks like a worker-specific problem, indeed.
As a short-term measure we could reduce the number of worker slots from 15 to e.g. 5 to reduce the I/O load (so at least some jobs can be processed).
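If we go that route, a sketch of how the reduction could look, assuming the standard openqa-worker@N.service instance scheme (not something that was actually run here):
# Keep worker slots 1-5, stop and disable slots 6-15
for i in $(seq 6 15); do
    systemctl disable --now openqa-worker@$i.service
done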
Updated by mkittler about 3 years ago
I was trying to test the read performance via hdparm -Tt /dev/sda. Usually the command finishes quite quickly with the results, but on openqa-aarch64 it got stuck and I cannot even stop it via SIGKILL (it is likely waiting in an uninterruptible syscall). The system itself feels at least kind of responsive (despite the fact that everything is installed on partitions of /dev/sda).
Fabian has already checked, but here is the output of smartctl again, just so we know what kind of device we actually have here:
openqa-aarch64:~ # smartctl -a /dev/sda
smartctl 7.0 2019-05-21 r4917 [aarch64-linux-5.12.2-1.g6fcec30-default] (SUSE RPM)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Intel 730 and DC S35x0/3610/3700 Series SSDs
Device Model: INTEL SSDSC2BP480G4
Serial Number: BTJR507404WJ480BGN
LU WWN Device Id: 5 5cd2e4 04b7929ce
Firmware Version: L2010420
User Capacity: 480.103.981.056 bytes [480 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 2.6, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon Oct 25 12:59:47 2021 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x02) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 2) seconds.
Offline data collection
capabilities: (0x79) SMART execute Offline immediate.
No Auto Offline data collection support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 2) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x003d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0032 097 097 000 Old_age Always - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 25888
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 90
170 Available_Reservd_Space 0x0033 100 100 010 Pre-fail Always - 0
171 Program_Fail_Count 0x0032 100 100 000 Old_age Always - 0
172 Erase_Fail_Count 0x0032 100 100 000 Old_age Always - 0
174 Unsafe_Shutdown_Count 0x0032 100 100 000 Old_age Always - 86
175 Power_Loss_Cap_Test 0x0033 100 100 010 Pre-fail Always - 616 (137 415)
183 SATA_Downshift_Count 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0033 100 100 090 Pre-fail Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
190 Temperature_Case 0x0022 080 072 000 Old_age Always - 20 (Min/Max 19/31)
192 Unsafe_Shutdown_Count 0x0032 100 100 000 Old_age Always - 86
194 Temperature_Internal 0x0022 100 100 000 Old_age Always - 20
197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 0
199 CRC_Error_Count 0x003e 100 100 000 Old_age Always - 0
225 Host_Writes_32MiB 0x0032 100 100 000 Old_age Always - 32337889
226 Workld_Media_Wear_Indic 0x0032 100 100 000 Old_age Always - 0
227 Workld_Host_Reads_Perc 0x0032 100 100 000 Old_age Always - 38
228 Workload_Minutes 0x0032 100 100 000 Old_age Always - 10494
232 Available_Reservd_Space 0x0033 100 100 010 Pre-fail Always - 0
233 Media_Wearout_Indicator 0x0032 001 001 000 Old_age Always - 0
234 Thermal_Throttle 0x0032 100 100 000 Old_age Always - 0/0
241 Host_Writes_32MiB 0x0032 100 100 000 Old_age Always - 32337889
242 Host_Reads_32MiB 0x0032 100 100 000 Old_age Always - 14494316
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 25879 -
# 2 Short offline Completed without error 00% 25855 -
# 3 Short offline Interrupted (host reset) 00% 25831 -
# 4 Short offline Completed without error 00% 25807 -
# 5 Short offline Interrupted (host reset) 00% 25783 -
# 6 Short offline Interrupted (host reset) 00% 25759 -
# 7 Short offline Completed without error 00% 25734 -
# 8 Short offline Completed without error 00% 25710 -
# 9 Short offline Completed without error 00% 25686 -
#10 Short offline Completed without error 00% 25662 -
#11 Short offline Completed without error 00% 25638 -
#12 Short offline Completed without error 00% 25614 -
#13 Short offline Completed without error 00% 25590 -
#14 Short offline Completed without error 00% 25566 -
#15 Short offline Completed without error 00% 25542 -
#16 Short offline Completed without error 00% 25518 -
#17 Short offline Completed without error 00% 25495 -
#18 Short offline Completed without error 00% 25470 -
#19 Short offline Completed without error 00% 25447 -
#20 Short offline Completed without error 00% 25423 -
#21 Short offline Completed without error 00% 25399 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
Updated by mkittler about 3 years ago
I was trying to test the write speed with dd. A first test writing 80 MiB looked ok:
openqa-aarch64:~ # dd if=/dev/zero of=/var/lib/openqa/testfile bs=8k count=10k
10240+0 records in
10240+0 records out
83886080 bytes (84 MB, 80 MiB) copied, 0,14306 s, 586 MB/s
Then I tried again with 800 MiB and the command got stuck similarly to the hdparm command before (which has now finally stopped). According to htop, it doesn't even produce any disk I/O at this point.
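Worth noting: at 80 MiB such a dd run mostly measures the page cache rather than the SSD itself. To time the device one would bypass or flush the cache, e.g. (a sketch, not what was run here):
# Bypass the page cache entirely
dd if=/dev/zero of=/var/lib/openqa/testfile bs=8k count=10k oflag=direct
# Or include the final flush to disk in the measurement
dd if=/dev/zero of=/var/lib/openqa/testfile bs=8k count=10k conv=fdatasync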
Updated by ggardet_arm about 3 years ago
Tried to update openQA-worker (and others) from 4.6.1629897341.45cb977d4-lp152.4310.1 to 4.6.1635150461.66672e803-lp152.4501.1 (it was not auto-updated due to a vendor change required for the perl-Carp-Always package).
Rebooting to snapshot 542 now; let's see if it changes anything.
Updated by ggardet_arm about 3 years ago
It rebooted to an older snapshot: when I run transactional-update shell, it shows:
Syncing /etc of oldest snapshot /.snapshots/527/snapshot as base into new snapshot /.snapshots/543/snapshot
Updated by ggardet_arm about 3 years ago
Trying transactional-update rollback 542.
EDIT: It seems to fail to come back to life.
Updated by mkittler about 3 years ago
I brought it back to life via an IPMI power cycle. I've attached the boot log, as it is very verbose and possibly contains something interesting.
Note that /dev/sda doesn't seem generally broken; it just hangs from time to time. I've just been able to run hdparm successfully:
openqa-aarch64:~ # hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 5362 MB in 2.00 seconds = 2685.38 MB/sec
Timing buffered disk reads: 1404 MB in 3.00 seconds = 467.32 MB/sec
The speed is slow for an SSD but not totally unacceptable.
I've also just observed the following log messages:
[ 659.985280] watchdog: BUG: soft lockup - CPU#25 stuck for 22s! [qemu-system-aar:4298]
[ 663.965252] watchdog: BUG: soft lockup - CPU#18 stuck for 22s! [qemu-system-aar:4955]
[ 663.975251] watchdog: BUG: soft lockup - CPU#23 stuck for 22s! [qemu-system-aar:4869]
[ 696.085110] watchdog: BUG: soft lockup - CPU#54 stuck for 23s! [qemu-system-aar:6200]
[ 696.105107] watchdog: BUG: soft lockup - CPU#61 stuck for 22s! [qemu-system-aar:6242]
A CPU being stuck for > 20 seconds doesn't sound good.
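To see what those CPUs are actually blocked on, the kernel's sysrq facility can dump the stacks of all blocked tasks to the log (a generic debugging technique, not something recorded in this ticket):
# Make sure sysrq is enabled, then dump all uninterruptible (state D) tasks
sysctl -w kernel.sysrq=1
echo w > /proc/sysrq-trigger
dmesg | tail -n 100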
Updated by ggardet_arm about 3 years ago
It seems that the rollback to snapshot 542 finally succeeded and is working properly.
Updated by favogt about 3 years ago
I should mention that in one of the debugging sessions I moved /tmp to a tmpfs instead; the machine has more than enough RAM for that. That was necessary to get tab completion to work again, as otherwise bash would block on disk access each time...
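For reference, one way to get a RAM-backed /tmp looks like this; the exact method used in that session wasn't recorded, and the size is only an example:
# Mount /tmp as tmpfs (assumption: 16G is a placeholder, not the value used)
echo 'tmpfs  /tmp  tmpfs  defaults,size=16G  0 0' >> /etc/fstab
mount /tmp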
ggardet_arm wrote:
It seems that the rollback to snapshot 542 finally succeeded and is working properly.
I would say that's most likely a random good boot; I don't see any changes in the snapshots which could possibly affect the disk at that level.
Updated by mkittler about 3 years ago
The most recent job history for the worker host looks ok again (seeing cache failure: Cache service queue already full (5) directly after restarting means the protection against too many parallel downloads is effective). I also tested some more hdparm and dd invocations and none got stuck. So it seems to be better indeed, although I'm still wondering what the rollback changed; snapper diff 541..542 mostly shows Perl changes which are unlikely to be related.
Updated by mkittler about 3 years ago
The job history still looks acceptable, apart from the full cache service queue after the nightly reboot. Note that it booted into snapshot 550, so we're not on snapshot 542 anymore. Maybe the ticket's priority can be lowered, especially considering that I don't know of any immediate workaround anyway.
If it gets worse again, I would try a different SATA port for the SSD (or check whether the SSD works in a completely different setup) and exchange it if it turns out to be broken. We would likely need to file an Infra ticket for that.
Updated by mkittler about 3 years ago
I'll also upgrade openqa-aarch64 to Leap 15.3 as part of #99189. Anything to watch out for (except poor disk I/O)? Like any aarch64-specific workarounds needed?
Updated by ggardet_arm about 3 years ago
mkittler wrote:
I'll also upgrade openqa-aarch64 to Leap 15.3 as part of #99189. Anything to watch out for (except poor disk I/O)? Like any aarch64-specific workarounds needed?
Not really. Please ping me once done, I could have a quick look. Thanks.
Updated by mkittler about 3 years ago
- Assignee set to mkittler
- Priority changed from Urgent to High
I've been updating the machine to Leap 15.3 and the update took approximately the same time as on other hosts, so it wasn't particularly slow and didn't get stuck. It is also already running jobs (beyond the caching) and seems to be generally ok.
Updated by mkittler about 3 years ago
The worker's job history doesn't look completely terrible anymore. Can we consider the ticket resolved? We can still re-open the ticket if the situation gets worse again.
Updated by favogt about 3 years ago
- Status changed from Feedback to Resolved
mkittler wrote:
The worker's job history doesn't look completely terrible anymore. Can we consider the ticket resolved? We can still re-open the ticket if the situation gets worse again.
Agreed.