action #89899

coordination #80142: [saga][epic] Scale out: Redundant/load-balancing deployments of openQA, easy containers, containers on kubernetes

coordination #55364: [epic] Let's make codecov reports reliable

Fix flaky coverage - t/ui/27-plugin_obs_rsync_status_details.t

Added by okurz 5 months ago. Updated 4 months ago.

Status: Resolved
Priority: Normal
Assignee: (none)
Category: Feature requests
Target version: (none)
Start date: (none)
Due date: (none)
% Done: 0%
Estimated time: (none)
Difficulty: (none)

Description

Motivation

See #55364: codecov reports often flag coverage changes that are obviously unrelated to the actual changes in a PR, e.g. when only documentation is changed. We can already trust our coverage analysis more, but coverage changes should only be reported for changes actually introduced by a pull request.

Acceptance criteria

  • AC1: t/ui/27-plugin_obs_rsync_status_details.t no longer shows up as changing code coverage in unrelated PRs

Suggestions

  • Try to reproduce locally with rm -rf cover_db/ && make coverage KEEP_DB=1 TESTS=t/ui/27-plugin_obs_rsync_status_details.t
  • Check the coverage details in the generated HTML report, e.g. call firefox cover_db/coverage.html
  • Fix uncovered lines with "uncoverable" comments, see previous commits adding these comments or look into https://metacpan.org/pod/Devel::Cover#UNCOVERABLE-CRITERIA, or use other means
  • Retry multiple times to check for flakiness
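The "retry multiple times" suggestion above can be sketched as a small loop. This is only an illustration: the `true` placeholder stands in for the actual coverage invocation from the first bullet, and the loop and failure counting are assumptions, not commands from the ticket:

```shell
# Run a (potentially flaky) command several times and count failures.
# Replace `true` with the real invocation from the ticket, e.g.
#   rm -rf cover_db/ && make coverage KEEP_DB=1 TESTS=t/ui/27-plugin_obs_rsync_status_details.t
runs=5
fails=0
for i in $(seq "$runs"); do
  if ! true; then          # placeholder for the flaky test command
    fails=$((fails + 1))
  fi
done
echo "failed $fails of $runs runs"
```

For coverage flakiness specifically, one would additionally compare the reported coverage totals between runs rather than just pass/fail results.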

Related issues

Related to openQA Project - action #89935: t/ui/27-plugin_obs_rsync_status_details.t fails in circleCI, master branch even (Resolved, 2021-03-11 to 2021-04-30)

Copied from openQA Project - action #80274: Fix flaky coverage - t/lib/OpenQA/Test/Utils.pm size:M (Feedback, 2020-11-24 to 2021-08-10)

History

#1 Updated by okurz 5 months ago

  • Copied from action #80274: Fix flaky coverage - t/lib/OpenQA/Test/Utils.pm size:M added

#2 Updated by kraih 5 months ago

Two recent examples of t/ui/27-plugin_obs_rsync_status_details.t showing up in completely unrelated PRs:
https://github.com/os-autoinst/openQA/pull/3765#issuecomment-790661837
https://github.com/os-autoinst/openQA/pull/3776#issuecomment-794223578

#3 Updated by cdywan 5 months ago

  • Related to action #89935: t/ui/27-plugin_obs_rsync_status_details.t fails in circleCI, master branch even added

#4 Updated by kraih 5 months ago

I think I have uncovered the mystery. It's the random timing of the page refreshes triggering different code paths in the plugin. https://github.com/os-autoinst/openQA/pull/3784#issuecomment-797055624
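A toy sketch of that diagnosis (the function and branch names are made up, this is not the plugin's actual code): which branch executes depends on how many refreshes happened before the data arrived, so two runs with different timing cover different lines and report different coverage:

```shell
# render_status is a hypothetical stand-in for the plugin's refresh handler;
# the number of polls before the data arrived decides which branch (and thus
# which lines) get executed and counted as covered.
render_status() {
  if [ "$1" -eq 0 ]; then
    echo "fast_path"     # data was ready on the first poll
  else
    echo "retry_path"    # at least one refresh was needed first
  fi
}

run_a=$(render_status 0)   # lucky timing
run_b=$(render_status 2)   # slow CI run
echo "run A covered: $run_a, run B covered: $run_b"
```

With real timing in the loop, which of the two paths gets covered is effectively random per run, which matches the flaky coverage reports described above.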

#5 Updated by okurz 5 months ago

  • Status changed from Workable to Blocked
  • Assignee set to kraih

#6 Updated by kraih 5 months ago

  • Status changed from Blocked to Workable

I'll take a look.

#7 Updated by openqa_review 5 months ago

  • Due date set to 2021-03-30

Setting due date based on the mean cycle time of SUSE QE Tools.

#8 Updated by kraih 5 months ago

  • Status changed from Workable to In Progress

#9 Updated by kraih 5 months ago

  • Status changed from In Progress to Feedback

Coverage should be stable now. https://github.com/os-autoinst/openQA/pull/3790

#10 Updated by kraih 5 months ago

While running this test countless times in the past few days, I have seen it time out twice on Circle CI. That could have been bad luck (maybe we just need to give it a few extra seconds for very slow Circle CI runs?), or there is still a rare logic bug hidden somewhere that prevents the expected result from appearing in the UI. So we'll have to keep an eye on this test and maybe create a follow-up ticket to collect more information.

#11 Updated by okurz 5 months ago

Here you can focus on just the code coverage. We will see from maybe a couple more PRs whether the coverage changes for unrelated changes. For the general stability of the test module you still have #89935.

#12 Updated by cdywan 4 months ago

  • Due date changed from 2021-03-30 to 2021-04-09

kraih wrote:

While running this test countless times in the past few days, I have seen it time out twice on Circle CI. That could have been bad luck (maybe we just need to give it a few extra seconds for very slow Circle CI runs?), or there is still a rare logic bug hidden somewhere that prevents the expected result from appearing in the UI. So we'll have to keep an eye on this test and maybe create a follow-up ticket to collect more information.

Do we consider the coverage "stable" now? I agree with okurz that this isn't about the test itself but the % we get.

#13 Updated by cdywan 4 months ago

  • Status changed from Feedback to Resolved

I asked in the team chat and got no objections to considering this solved.

#14 Updated by okurz 4 months ago

  • Due date deleted (2021-04-09)
