action #107002
coordination #80142: [saga][epic] Scale out: Redundant/load-balancing deployments of openQA, easy containers, containers on kubernetes
coordination #98952: [epic] t/full-stack.t sporadically fails "clickElement: element not interactable" and other errors
Expose fullstack test video from pool directory in CI size:M
Description
Observation
Sometimes we can only reproduce an issue in CI, but have no clue from the logs or results.
Acceptance criteria
- AC1: Video from fullstack test is accessible via CI jobs
- AC2: Pool directory is still unique for each individual test
Suggestions
- Upload the video from the pool directory as a CI artifact, regardless of the result, i.e. failed or passed
- Ensure there's a dedicated pool directory for each test (see the sketch below)
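As a rough illustration of both points, a per-test pool directory can be placed underneath the directory that CI already collects as artifacts, so the video is uploaded whatever the outcome. This is a minimal Perl sketch, not the actual openQA helper; the RESULT_DIR variable and the directory layout are assumptions.

```perl
# Hypothetical sketch: give every test its own pool directory underneath the
# directory that CI collects as artifacts, so the video ends up in the
# artifacts (AC1) and directories stay unique per test (AC2).
use strict;
use warnings;
use File::Basename qw(basename);
use File::Path qw(make_path);
use File::Spec;

my $artifact_root = $ENV{RESULT_DIR} // 'test-results';      # assumed env variable
my $test_name     = basename($0, '.t');                      # e.g. "full-stack"
my $pool_dir      = File::Spec->catdir($artifact_root, $test_name, 'pool');
make_path($pool_dir);                                         # created per test run
print "pool directory: $pool_dir\n";
```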
Related issues
History
#1
Updated by cdywan 4 months ago
- Blocks action #106912: Fullstack test can still fail due to `shutdown` module size:M added
#6
Updated by tinita 4 months ago
Now we are seeing problems with retries, because the directories aren't removed between the retries:
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22/jobs/85219/steps
[17:38:19] t/full-stack.t .. can't symlink /home/squamata/project/test-results/fullstack/full-stack.d/openqa/share/factory/iso/Core-7.2.iso -> /home/squamata/os-autoinst/t/data/Core-7.2.iso: File exists at /home/squamata/project/t/lib/OpenQA/Test/Utils.pm line 393.
#7
Updated by cdywan 4 months ago
- Status changed from Feedback to In Progress
tinita wrote:
Now we are seeing problems with retries, because the directories aren't removed between the retries:
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22/jobs/85219/steps
[17:38:19] t/full-stack.t .. can't symlink /home/squamata/project/test-results/fullstack/full-stack.d/openqa/share/factory/iso/Core-7.2.iso -> /home/squamata/os-autoinst/t/data/Core-7.2.iso: File exists at /home/squamata/project/t/lib/OpenQA/Test/Utils.pm line 393.
So that's a case I didn't see before 🤦️. Fix coming up: https://github.com/os-autoinst/openQA/pull/4523
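For context, one generic way to avoid the "File exists" failure on retries is to make the symlink creation idempotent. The following is only a sketch of that idea; the helper name is made up and it is not necessarily what the linked pull request does.

```perl
# Sketch only (assumption, not necessarily the approach of the PR above):
# remove a leftover link from a previous retry before recreating it, so a
# rerun in the same workspace does not die with "File exists".
use strict;
use warnings;

sub link_test_iso {
    my ($target, $link) = @_;
    unlink $link if -l $link or -e $link;    # drop a stale link or file first
    symlink $target, $link
        or die "can't symlink $link -> $target: $!";
}
```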
#9
Updated by openqa_review 4 months ago
- Due date set to 2022-03-09
Setting due date based on mean cycle time of SUSE QE Tools
#10
Updated by cdywan 4 months ago
Discussed in the Unblock meeting:
- Tina proposed a solution based on subfolders
- Can we generate separate Makefile targets for test-unstable? Would be good to try this out as a follow-up
- Can we handle the already-existing fullstack folder within the test?
  - Yes, we can delete the folder in t/lib/OpenQA/Test/Utils.pm when the fullstack folder is being looked up (see the sketch after this list)
- We agreed that I'll go and do it there instead of trying to handle it in the retry hook
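A sketch of the agreed direction, with the helper name and exact location as assumptions rather than the code that was eventually merged:

```perl
# Sketch only: when the fullstack working directory is looked up, wipe any
# leftovers from a previous retry before reusing the path. Helper name and
# layout are assumptions, not the actual change in Test/Utils.pm.
use strict;
use warnings;
use File::Path qw(make_path remove_tree);

sub fullstack_setup_dir {
    my ($results_dir) = @_;
    my $dir = "$results_dir/fullstack/full-stack.d";
    remove_tree($dir) if -d $dir;    # a previous retry may have left results here
    make_path($dir);
    return $dir;
}
```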
#12
Updated by tinita 4 months ago
https://github.com/os-autoinst/openQA/pull/4527 was merged.
Looks like the ticket can be resolved.
#14
Updated by okurz 4 months ago
- Blocks deleted (action #106912: Fullstack test can still fail due to `shutdown` module size:M)
#15
Updated by okurz 4 months ago
- Related to action #106912: Fullstack test can still fail due to `shutdown` module size:M added
#16
Updated by okurz 4 months ago
Please see #106912#note-11. It seems a 5th rerun within the same GitHub Actions job would cause an out-of-space exception. This would only really happen if we have 5 or more complete reruns in a stability test; otherwise I assume not enough data would be saved to deplete the available space, so this may be an acceptable limitation. If you still want to look into it, consider running `df` as part of the test execution cycles and removing the artifacts that take the most space.
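A rough Perl sketch of that suggestion, with the 90% threshold and the test-results path as illustrative assumptions:

```perl
# Rough sketch: check free space between reruns and, when the filesystem gets
# tight, drop the biggest leftover result directory. Threshold and paths are
# assumptions, not part of the actual CI setup.
use strict;
use warnings;

my ($used_pct) = `df --output=pcent .` =~ /(\d+)%/;
if (defined $used_pct && $used_pct >= 90) {
    my %size;
    for my $line (`du -s test-results/fullstack/* 2>/dev/null`) {
        my ($kb, $dir) = split /\s+/, $line, 2;
        chomp $dir;
        $size{$dir} = $kb;
    }
    # Remove the largest leftover directory first.
    my ($largest) = sort { $size{$b} <=> $size{$a} } keys %size;
    system('rm', '-rf', $largest) if defined $largest;
}
```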
#17
Updated by cdywan 4 months ago
- Copied to action #108052: Consistently handle results between retry runs added
#18
Updated by cdywan 4 months ago
- Status changed from Feedback to Resolved
okurz wrote:
Please see #106912#note-11. It seems a 5th rerun within the same GitHub Actions job would cause an out-of-space exception. This would only really happen if we have 5 or more complete reruns in a stability test; otherwise I assume not enough data would be saved to deplete the available space, so this may be an acceptable limitation. If you still want to look into it, consider running `df` as part of the test execution cycles and removing the artifacts that take the most space.
I did ponder this a little, but since the fix may warrant some discussion, I filed #108052. It's not an issue introduced with this feature.