action #89899 (closed)

coordination #80142: [saga][epic] Scale out: Redundant/load-balancing deployments of openQA, easy containers, containers on kubernetes

coordination #55364: [epic] Let's make codecov reports reliable

Fix flaky coverage - t/ui/27-plugin_obs_rsync_status_details.t

Added by okurz almost 4 years ago. Updated over 3 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: Feature requests
Target version:
Start date:
Due date:
% Done: 0%
Estimated time:

Description

Motivation

See #55364: codecov reports often flag coverage changes that are obviously unrelated to the actual changes in a PR, e.g. when only documentation is changed. We can already trust our coverage analysis more, but coverage changes should only be reported for changes actually introduced in a pull request.

Acceptance criteria

  • AC1: t/ui/27-plugin_obs_rsync_status_details.t no longer shows up as changing code coverage in unrelated pull requests

Suggestions

  • Try to reproduce locally with rm -rf cover_db/ && make coverage KEEP_DB=1 TESTS=t/ui/27-plugin_obs_rsync_status_details.t
  • Check coverage details in the generated HTML report, e.g. call firefox cover_db/coverage.html
  • Fix uncovered lines with "uncoverable" statements (see the sketch after this list); look at previous commits adding such comments, look into https://metacpan.org/pod/Devel::Cover#UNCOVERABLE-CRITERIA, or use other means
  • Retry multiple times to check for flakiness
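
A minimal sketch of what such annotations could look like, assuming a hypothetical defensive branch that the test suite never reaches (the subroutine and field names are invented for illustration; only the comment syntax follows the Devel::Cover documentation):

    # Hypothetical helper, not the actual plugin code: it only shows where
    # Devel::Cover "uncoverable" comments are placed.
    sub _status_or_default {
        my ($job) = @_;

        # Defensive branch the tests never hit; tell Devel::Cover to ignore it.
        # uncoverable branch true
        # uncoverable statement
        return 'unknown' unless defined $job;

        return $job->{state} // 'running';
    }

After adding such comments, rerunning the make coverage command above should show the annotated lines excluded from the report.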

Related issues (2: 0 open, 2 closed)

Related to openQA Project (public) - action #89935: t/ui/27-plugin_obs_rsync_status_details.t fails in circleCI, master branch even (Resolved, assignee: mkittler, 2021-03-11 to 2021-04-30)

Copied from openQA Project (public) - action #80274: Fix flaky coverage - t/lib/OpenQA/Test/Utils.pm size:M (Resolved, assignee: kraih, 2020-11-24)

Actions #1

Updated by okurz almost 4 years ago

  • Copied from action #80274: Fix flaky coverage - t/lib/OpenQA/Test/Utils.pm size:M added
Actions #2

Updated by kraih almost 4 years ago

Two recent examples of t/ui/27-plugin_obs_rsync_status_details.t showing up in completely unrelated PRs: https://github.com/os-autoinst/openQA/pull/3765#issuecomment-790661837 and https://github.com/os-autoinst/openQA/pull/3776#issuecomment-794223578

Actions #3

Updated by livdywan almost 4 years ago

  • Related to action #89935: t/ui/27-plugin_obs_rsync_status_details.t fails in circleCI, master branch even added
Actions #4

Updated by kraih almost 4 years ago

I think I have solved the mystery: it's the random timing of the page refreshes triggering different code paths in the plugin. https://github.com/os-autoinst/openQA/pull/3784#issuecomment-797055624
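
For illustration only, a hypothetical shape of such a timing-dependent branch (the subroutine and hash keys below are invented, not the actual plugin code): which path runs depends purely on whether the rsync job has already finished when the periodic page refresh fires, so different test runs cover different lines.

    # Hypothetical timing-dependent branch: which path gets covered depends on
    # when the page refresh happens relative to the background rsync job.
    sub status_for_refresh {
        my ($job) = @_;
        if ($job->{state} eq 'finished') {
            return "Result: $job->{result}";    # only covered by "late" refreshes
        }
        return 'still running';                 # only covered by "early" refreshes
    }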

Actions #5

Updated by okurz almost 4 years ago

  • Status changed from Workable to Blocked
  • Assignee set to kraih
Actions #6

Updated by kraih over 3 years ago

  • Status changed from Blocked to Workable

I'll take a look.

Actions #7

Updated by openqa_review over 3 years ago

  • Due date set to 2021-03-30

Setting due date based on mean cycle time of SUSE QE Tools

Actions #8

Updated by kraih over 3 years ago

  • Status changed from Workable to In Progress
Actions #9

Updated by kraih over 3 years ago

  • Status changed from In Progress to Feedback

Coverage should be stable now. https://github.com/os-autoinst/openQA/pull/3790

Actions #10

Updated by kraih over 3 years ago

While running this test countless times in the past few days, I have seen it time out twice on Circle CI. That could have been bad luck (maybe we just need to give it a few extra seconds for very slow Circle CI runs?), or there is still a rare logic bug hidden somewhere that prevents the expected result from appearing in the UI. So we'll have to keep an eye on this test and maybe create a follow-up ticket to collect more information.
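
If the timeouts turn out to be just slow CI runs, one rough way to make the wait more forgiving would be a polling helper with a configurable deadline. This is only a sketch with made-up names, not the helpers the test actually uses:

    # Poll a condition until a deadline instead of relying on one tight timeout,
    # so very slow Circle CI runs get a few extra seconds.
    sub wait_for_condition {
        my ($check, %args) = @_;
        my $deadline = time + ($args{timeout} // 30);
        while (time < $deadline) {
            return 1 if $check->();
            sleep 1;
        }
        return 0;
    }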

Actions #11

Updated by okurz over 3 years ago

Here you can focus on just the code coverage; we will see from the next couple of PRs whether the coverage still changes for unrelated changes. For the general stability of the test module there is still #89935.

Actions #12

Updated by livdywan over 3 years ago

  • Due date changed from 2021-03-30 to 2021-04-09

kraih wrote:

While running this test countless times in the past few days, I have seen it time out twice on Circle CI. That could have been bad luck (maybe we just need to give it a few extra seconds for very slow Circle CI runs?), or there is still a rare logic bug hidden somewhere that prevents the expected result from appearing in the UI. So we'll have to keep an eye on this test and maybe create a follow-up ticket to collect more information.

Do we consider the coverage "stable" now? I agree with @okurz that this isn't about the test itself but about the percentage we get.

Actions #13

Updated by livdywan over 3 years ago

  • Status changed from Feedback to Resolved

I asked in the team chat and got no objections to considering this solved.

Actions #14

Updated by okurz over 3 years ago

  • Due date deleted (2021-04-09)