action #107002
closed coordination #80142: [saga][epic] Scale out: Redundant/load-balancing deployments of openQA, easy containers, containers on kubernetes
coordination #98952: [epic] t/full-stack.t sporadically fails "clickElement: element not interactable" and other errors
Expose fullstack test video from pool directory in CI size:M
Added by livdywan over 2 years ago. Updated over 2 years ago.
Description
Observation¶
Sometimes we can only reproduce an issue in CI, but have no clue from the logs or results.
Acceptance criteria¶
- AC1: Video from fullstack test is accessible via CI jobs
- AC2: Pool directory is still unique for each individual test
Suggestions¶
- Upload the video from the pool directory in CI artifacts, regardless of the result, i.e. failed or passed
- Ensure there's a dedicated pool directory
Updated by livdywan over 2 years ago
- Blocks action #106912: Fullstack test can still fail due to `shutdown` module size:M added
Updated by okurz over 2 years ago
- Tracker changed from coordination to action
- Category set to Feature requests
- Priority changed from Normal to Low
Updated by livdywan over 2 years ago
- Status changed from Workable to In Progress
- Assignee set to livdywan
Might as well propose a quick PR for this one on the side:
Updated by livdywan over 2 years ago
- Status changed from In Progress to Feedback
cdywan wrote:
Might as well propose a quick PR for this one on the side:
Merged, at least two people besides me saw it working on CircleCI
Updated by tinita over 2 years ago
Now we are seeing problems with retries, because the directories aren't removed between the retries:
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22/jobs/85219/steps
[17:38:19] t/full-stack.t .. can't symlink /home/squamata/project/test-results/fullstack/full-stack.d/openqa/share/factory/iso/Core-7.2.iso -> /home/squamata/os-autoinst/t/data/Core-7.2.iso: File exists at /home/squamata/project/t/lib/OpenQA/Test/Utils.pm line 393.
Updated by livdywan over 2 years ago
- Status changed from Feedback to In Progress
tinita wrote:
Now we are seeing problems with retries, because the directories aren't removed between the retries:
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22/jobs/85219/steps
[17:38:19] t/full-stack.t .. can't symlink /home/squamata/project/test-results/fullstack/full-stack.d/openqa/share/factory/iso/Core-7.2.iso -> /home/squamata/os-autoinst/t/data/Core-7.2.iso: File exists at /home/squamata/project/t/lib/OpenQA/Test/Utils.pm line 393.
So that's a case I didn't see before 🤦️ Fix coming up: https://github.com/os-autoinst/openQA/pull/4523
Updated by livdywan over 2 years ago
- Priority changed from Low to High
Bumping priority since this is affecting all CI runs.
Updated by openqa_review over 2 years ago
- Due date set to 2022-03-09
Setting due date based on mean cycle time of SUSE QE Tools
Updated by livdywan over 2 years ago
Discussed in the Unblock:
- Tina proposed a solution based on subfolders
- Can we generate separate makefile targets for test-unstable? Would be good to try this out as a follow-up
- Can we handle the already-existing fullstack folder within the test?
- Yes, we can delete the folder in t/lib/OpenQA/Test/Utils.pm when the fullstack folder is being looked up
- We agreed that I'll go and do it there, instead of trying to handle it in the retry hook
Updated by livdywan over 2 years ago
Updated by tinita over 2 years ago
https://github.com/os-autoinst/openQA/pull/4527 was merged
Looks like the ticket can be resolved
Updated by livdywan over 2 years ago
- Status changed from In Progress to Feedback
tinita wrote:
Looks like the ticket can be resolved
I'm setting it to Feedback since we should always monitor for unexpected side-effects before resolving, at least until we've seen one or two unrelated PRs with retries.
Updated by okurz over 2 years ago
- Blocks deleted (action #106912: Fullstack test can still fail due to `shutdown` module size:M)
Updated by okurz over 2 years ago
- Related to action #106912: Fullstack test can still fail due to `shutdown` module size:M added
Updated by okurz over 2 years ago
Please see #106912#note-11. It seems a 5th rerun within the same GitHub Actions job would cause an out-of-space error. This would only really happen if we have 5 or more complete reruns in a stability test; otherwise I assume not enough data would be saved to deplete the available space, so maybe this is an acceptable limitation. In case you would still want to look into that, consider running df as part of test execution cycles and removing the artifacts that take the most space.
Updated by livdywan over 2 years ago
- Copied to action #108052: Consistently handle results between retry runs added
Updated by livdywan over 2 years ago
- Status changed from Feedback to Resolved
okurz wrote:
Please see #106912#note-11. It seems a 5th rerun within the same GitHub Actions job would cause an out-of-space error. This would only really happen if we have 5 or more complete reruns in a stability test; otherwise I assume not enough data would be saved to deplete the available space, so maybe this is an acceptable limitation. In case you would still want to look into that, consider running df as part of test execution cycles and removing the artifacts that take the most space.
I did ponder this a little, but since the fix may warrant some discussion I filed #108052. It's not an issue introduced with this feature.