action #107002 (closed)

coordination #80142: [saga][epic] Scale out: Redundant/load-balancing deployments of openQA, easy containers, containers on kubernetes

coordination #98952: [epic] t/full-stack.t sporadically fails "clickElement: element not interactable" and other errors

Expose fullstack test video from pool directory in CI size:M

Added by livdywan almost 3 years ago. Updated almost 3 years ago.

Status: Resolved
Priority: High
Assignee: livdywan
Category: Feature requests
Target version: Ready
Start date: 2022-02-17
Due date:
% Done: 0%
Estimated time:

Description

Observation

Sometimes we can only reproduce an issue in CI, but have no clue from the logs or results.

Acceptance criteria

  • AC1: Video from fullstack test is accessible via CI jobs
  • AC2: Pool directory is still unique for each individual test

Suggestions

  • Upload the video from the pool directory to the CI artifacts, regardless of the result (failed or passed); see the sketch after this list
  • Ensure there's a dedicated pool directory for each test run
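
A minimal Perl sketch of the first suggestion, for illustration only: copy any recorded videos from the per-test pool directory into a directory the CI job publishes as artifacts, whether the test passed or failed. The paths, the environment variable and the video file names are assumptions, not what the actual PR does.

    # Illustration only: publish fullstack videos as CI artifacts, pass or fail.
    # All paths and file names below are assumptions.
    use strict;
    use warnings;
    use File::Copy qw(copy);
    use File::Path qw(make_path);

    my $pool_dir     = $ENV{FULLSTACK_POOL_DIR} // 'test-results/fullstack/full-stack.d/openqa/pool';
    my $artifact_dir = 'test-results/fullstack/videos';

    make_path($artifact_dir);
    for my $video (glob "$pool_dir/*/video.{ogv,webm}") {
        (my $name = $video) =~ s{^.*/pool/}{};   # keep the worker instance in the name
        $name =~ s{/}{-}g;
        copy($video, "$artifact_dir/$name") or warn "could not copy $video: $!";
    }
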

Related issues: 2 (1 open, 1 closed)

Related to openQA Project (public) - action #106912: Fullstack test can still fail due to `shutdown` module size:M (Resolved, okurz, 2022-02-16)

Copied to openQA Project (public) - action #108052: Consistently handle results between retry runs (New, 2022-02-17)

Actions #1

Updated by livdywan almost 3 years ago

  • Blocks action #106912: Fullstack test can still fail due to `shutdown` module size:M added
Actions #2

Updated by okurz almost 3 years ago

  • Target version set to Ready
Actions #3

Updated by okurz almost 3 years ago

  • Tracker changed from coordination to action
  • Category set to Feature requests
  • Priority changed from Normal to Low
Actions #4

Updated by livdywan almost 3 years ago

  • Status changed from Workable to In Progress
  • Assignee set to livdywan

Might as well propose a quick PR for this one on the side:

https://github.com/os-autoinst/openQA/pull/4517

Actions #5

Updated by livdywan almost 3 years ago

  • Status changed from In Progress to Feedback

cdywan wrote:

Might as well propose a quick PR for this one on the side:

https://github.com/os-autoinst/openQA/pull/4517

Merged; at least two people besides me saw it working on CircleCI.

Actions #6

Updated by tinita almost 3 years ago

Now we are seeing problems with retries, because the directories aren't removed between the retries:
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22/jobs/85219/steps

[17:38:19] t/full-stack.t .. can't symlink /home/squamata/project/test-results/fullstack/full-stack.d/openqa/share/factory/iso/Core-7.2.iso -> /home/squamata/os-autoinst/t/data/Core-7.2.iso: File exists at /home/squamata/project/t/lib/OpenQA/Test/Utils.pm line 393.
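
For context, a small sketch of why a retry runs into this: Perl's symlink() refuses to overwrite an existing path, so a link left behind by the previous run makes the next attempt die with "File exists" unless the stale entry is removed first. The helper name below is made up; the real code lives in t/lib/OpenQA/Test/Utils.pm.

    # Sketch of the failure mode with a defensive variant; names are assumptions.
    use strict;
    use warnings;

    sub link_test_iso {
        my ($source, $link) = @_;
        # A previous retry may have left $link behind; symlink() then fails
        # with "File exists", so drop the stale entry before recreating it.
        unlink $link if -l $link || -e $link;
        symlink $source, $link
            or die "can't symlink $link -> $source: $!";
    }
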
Actions #7

Updated by livdywan almost 3 years ago

  • Status changed from Feedback to In Progress

tinita wrote:

Now we are seeing problems with retries, because the directories aren't removed between the retries:
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22
https://app.circleci.com/pipelines/github/os-autoinst/openQA/9031/workflows/15bd83cf-fd43-4b5e-80e3-971d01b07c22/jobs/85219/steps

[17:38:19] t/full-stack.t .. can't symlink /home/squamata/project/test-results/fullstack/full-stack.d/openqa/share/factory/iso/Core-7.2.iso -> /home/squamata/os-autoinst/t/data/Core-7.2.iso: File exists at /home/squamata/project/t/lib/OpenQA/Test/Utils.pm line 393.

So that's a case I didn't see before 🤦️ Fix coming up: https://github.com/os-autoinst/openQA/pull/4523

Actions #8

Updated by livdywan almost 3 years ago

  • Priority changed from Low to High

Bumping prio since this is affecting all CI runs

Actions #9

Updated by openqa_review almost 3 years ago

  • Due date set to 2022-03-09

Setting due date based on mean cycle time of SUSE QE Tools

Actions #10

Updated by livdywan almost 3 years ago

Discussed in the Unblock:

  • Tina proposed a solution based on subfolders
  • Can we generate separate makefile targets for test-unstable? Would be good to try this out as a follow-up
  • Can we handle the already-existing fullstack folder within the test?
    • Yes, we can delete the folder in t/lib/OpenQA/Test/Utils.pm when the fullstack folder is being looked up (see the sketch after this list)
    • We agreed that I'll go and do it there, instead of trying to handle it in the retry hook
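
A hedged sketch of that approach; the sub name and directory handling are assumptions, not the code that was eventually merged:

    # Illustration of "delete the folder when the fullstack folder is being
    # looked up"; names are assumptions, not the merged implementation.
    use strict;
    use warnings;
    use File::Path qw(remove_tree make_path);

    sub prepare_fullstack_dir {
        my ($dir) = @_;
        # A previous retry may have left the whole tree behind; wipe it so
        # symlinks and result files can be recreated without "File exists".
        remove_tree($dir) if -d $dir;
        make_path($dir);
        return $dir;
    }

    # e.g. prepare_fullstack_dir('test-results/fullstack/full-stack.d');
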
Actions #12

Updated by tinita almost 3 years ago

https://github.com/os-autoinst/openQA/pull/4527 was merged

Looks like the ticket can be resolved

Actions #13

Updated by livdywan almost 3 years ago

  • Status changed from In Progress to Feedback

tinita wrote:

Looks like the ticket can be resolved

I'm setting it to Feedback since we should always monitor for unexpected side-effects before resolving, at least until we've seen one or two unrelated PRs with retries.

Actions #14

Updated by okurz almost 3 years ago

  • Blocks deleted (action #106912: Fullstack test can still fail due to `shutdown` module size:M)
Actions #15

Updated by okurz almost 3 years ago

  • Related to action #106912: Fullstack test can still fail due to `shutdown` module size:M added
Actions #16

Updated by okurz almost 3 years ago

Please see #106912#note-11. It seems a 5th rerun within the same GitHub Actions job would cause an out-of-space exception. This would only really happen if we have 5 or more complete reruns in a stability test; otherwise I assume not enough data would be saved to deplete the available space, so it is maybe an acceptable limitation. In case you still want to look into that, consider running df as part of the test execution cycles and removing the artifacts that take the most space.
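
If someone picks this up, a rough Perl sketch of that suggestion, assuming it runs between reruns inside the CI job; the 2 GiB budget and the test-results path are made up for illustration:

    # Illustration only: log free space, then delete the largest leftover
    # artifact directories until we are under an arbitrary 2 GiB budget.
    use strict;
    use warnings;
    use File::Path qw(remove_tree);

    system 'df', '-h', '.';    # record remaining disk space in the CI log

    my @sized = map  { [split /\t/, $_, 2] }               # [KiB, path]
                grep { /\t/ }
                split /\n/, `du -sk test-results/* 2>/dev/null`;
    my $total = 0;
    $total += $_->[0] for @sized;
    for my $entry (sort { $b->[0] <=> $a->[0] } @sized) {
        last if $total <= 2 * 1024 * 1024;                 # 2 GiB in KiB
        remove_tree($entry->[1]);
        $total -= $entry->[0];
    }
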

Actions #17

Updated by livdywan almost 3 years ago

  • Copied to action #108052: Consistently handle results between retry runs added
Actions #18

Updated by livdywan almost 3 years ago

  • Status changed from Feedback to Resolved

okurz wrote:

Please see #106912#note-11. It seems a 5th rerun within the same GitHub Actions job would cause an out-of-space exception. This would only really happen if we have 5 or more complete reruns in a stability test; otherwise I assume not enough data would be saved to deplete the available space, so it is maybe an acceptable limitation. In case you still want to look into that, consider running df as part of the test execution cycles and removing the artifacts that take the most space.

I did ponder this a little, but since the fix may warrant some discussion I filed #108052. It's not an issue introduced with this feature.

Actions #19

Updated by okurz almost 3 years ago

  • Due date deleted (2022-03-09)