action #154417
open [functional][jeos] test fails in fs_stress
Description
Observation
openQA test in scenario sle-15-SP6-JeOS-for-RaspberryPi-aarch64-jeos-fs_stress-RPi@aarch64 fails in
fs_stress
Test suite description
Reproducible
Fails since (at least) Build 2.28
Expected result
Last good: 2.5 (or more recent)
Further details
Always latest result in this scenario: latest
Updated by openqa_review 3 months ago
This is an autogenerated message for openQA integration by the openqa_review script:
This bug is still referenced in a failing openQA test: jeos-fs_stress-RPi
https://openqa.suse.de/tests/13493419#step/fs_stress/1
To prevent further reminder comments one of the following options should be followed:
- The test scenario is fixed by applying the bug fix to the tested product or the test is adjusted
- The openQA job group is moved to "Released" or "EOL" (End-of-Life)
- The bugref in the openQA scenario is removed or replaced, e.g.
label:wontfix:boo1234
Expect the next reminder at the earliest in 28 days if nothing changes in this ticket.
Updated by openqa_review 2 months ago
This is an autogenerated message for openQA integration by the openqa_review script:
This bug is still referenced in a failing openQA test: jeos-fs_stress-RPi
https://openqa.suse.de/tests/13679371#step/fs_stress/1
To prevent further reminder comments one of the following options should be followed:
- The test scenario is fixed by applying the bug fix to the tested product or the test is adjusted
- The openQA job group is moved to "Released" or "EOL" (End-of-Life)
- The bugref in the openQA scenario is removed or replaced, e.g.
label:wontfix:boo1234
Expect the next reminder at the earliest in 28 days if nothing changes in this ticket.
Updated by openqa_review 11 days ago
This is an autogenerated message for openQA integration by the openqa_review script:
This bug is still referenced in a failing openQA test: jeos-fs_stress-RPi
https://openqa.suse.de/tests/14105135#step/fs_stress/1
To prevent further reminder comments one of the following options should be followed:
- The test scenario is fixed by applying the bug fix to the tested product or the test is adjusted
- The openQA job group is moved to "Released" or "EOL" (End-of-Life)
- The bugref in the openQA scenario is removed or replaced, e.g.
label:wontfix:boo1234
Expect the next reminder at the earliest in 80 days if nothing changes in this ticket.
Updated by zluo 10 days ago
This looks like a performance issue. I tried this on my RPi 4:
localhost:~ # time ./file_copy -j 4 -i 5 -s 5000 | tee /tmp/file_copy_5000.log
Creating initial input file...
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 326.056 s, 16.1 MB/s
7419ae3fc69474bf12a58b96256dfa3d input
ls -la /tmp/file_copy.qq5RXC
Starting Iteration 0 - Tue Apr 23 09:21:02 CEST 2024
ID: 29852 - copying files
ID: 7882 - copying files
ID: 8628 - copying files
ID: 4722 - copying files
ID: 4722 - copy done
ID: 8628 - copy done
ID: 7882 - copy done
ID: 8628 - start integrity check
ID: 7882 - start integrity check
ID: 4722 - start integrity check
ID: 29852 - copy done
ID: 29852 - start integrity check
ID: 7882 - finished integrity check
ID: 4722 - finished integrity check
ID: 8628 - finished integrity check
ID: 29852 - finished integrity check
all process have finished Tue Apr 23 09:58:23 CEST 2024
Starting Iteration 1 - Tue Apr 23 09:58:23 CEST 2024
ID: 30483 - copying files
ID: 13716 - copying files
ID: 17150 - copying files
ID: 32445 - copying files
ID: 13716 - copy done
ID: 17150 - copy done
ID: 32445 - copy done
ID: 17150 - start integrity check
ID: 13716 - start integrity check
ID: 32445 - start integrity check
ID: 30483 - copy done
ID: 30483 - start integrity check
ID: 30483 - finished integrity check
ID: 17150 - finished integrity check
ID: 32445 - finished integrity check
ID: 13716 - finished integrity check
all process have finished Tue Apr 23 10:36:00 CEST 2024
Starting Iteration 2 - Tue Apr 23 10:36:01 CEST 2024
ID: 8480 - copying files
ID: 17 - copying files
ID: 30252 - copying files
ID: 25089 - copying files
ID: 30252 - copy done
ID: 30252 - start integrity check
ID: 8480 - copy done
ID: 8480 - start integrity check
ID: 25089 - copy done
ID: 25089 - start integrity check
ID: 17 - copy done
ID: 17 - start integrity check
ID: 8480 - finished integrity check
ID: 25089 - finished integrity check
ID: 30252 - finished integrity check
ID: 17 - finished integrity check
all process have finished Tue Apr 23 11:13:29 CEST 2024
Starting Iteration 3 - Tue Apr 23 11:13:29 CEST 2024
ID: 28980 - copying files
ID: 28980 - copying files
ID: 2917 - copying files
ID: 29176 - copying files
ID: 29176 - copy done
ID: 2917 - copy done
ID: 28980 - copy done
ID: 28980 - start integrity check
ID: 29176 - start integrity check
ID: 2917 - start integrity check
ID: 28980 - copy done
ID: 28980 - start integrity check
ID: 29176 - finished integrity check
ID: 28980 - finished integrity check
ID: 2917 - finished integrity check
ID: 28980 - finished integrity check
all process have finished Tue Apr 23 11:51:25 CEST 2024
Starting Iteration 4 - Tue Apr 23 11:51:26 CEST 2024
ID: 23674 - copying files
ID: 10337 - copying files
ID: 1414 - copying files
ID: 15270 - copying files
ID: 10337 - copy done
ID: 10337 - start integrity check
ID: 23674 - copy done
ID: 23674 - start integrity check
ID: 15270 - copy done
ID: 15270 - start integrity check
ID: 1414 - copy done
ID: 1414 - start integrity check
ID: 1414 - finished integrity check
ID: 23674 - finished integrity check
ID: 15270 - finished integrity check
ID: 10337 - finished integrity check
all process have finished Tue Apr 23 12:28:58 CEST 2024
Cleaned /tmp/file_copy.qq5RXC, log is located at /tmp/file_copy.243.log
Good bye and thans for using yet another test from SUSE QA
Finished Tue Apr 23 12:29:00 CEST 2024
real 197m10.238s
user 8m48.736s
sys 12m9.756s
Compared to the same job running on my VM (x86_64):
real 10m59,483s
user 2m43,825s
sys 2m0,693s
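For a rough sense of the gap, dividing the two wall-clock times puts the RPi run at roughly 18x the VM run. A minimal sketch, assuming only the timings quoted above and a system with bc installed:

rpi_s=$((197*60 + 10))                # 197m10s total wall-clock on the RPi 4
vm_s=$((10*60 + 59))                  # 10m59s total wall-clock on the x86_64 VM
echo "scale=1; $rpi_s / $vm_s" | bc   # prints 17.9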
It is hard to believe performance is this poor on the RPi.
Updated by zluo about 10 hours ago
https://openqa.suse.de/tests/14195933#step/fs_stress/20
This should be okay for now.