action #73123
closedcoordination #71926: [epic] t/14-grutasks.t takes multiple minutes within circleCI, only 6s locally (but errors but still succeeds?), and no log output visible in circleCI
t/14-grutasks.t shows errors but still succeeds
Description
Observation
[14:37:26] t/14-grutasks.t ........................................... ok 274117 ms ( 0.22 usr 0.00 sys + 260.33 cusr 9.39 csys = 269.94 CPU)
but locally I have:
[21:33:58] t/14-grutasks.t .. 34/? [2020-09-26 21:34:04.39357] [15810] [error] Gru job error: No job ID specified.
[2020-09-26 21:34:04.44377] [15814] [error] Gru job error: Job 98765 does not exist.
[2020-09-26 21:34:04.51354] [15817] [error] Gru job error: Finalizing results of 1 modules failed
[21:33:58] t/14-grutasks.t .. ok 5976 ms ( 0.08 usr 0.01 sys + 3.34 cusr 0.90 csys = 4.33 CPU)
[21:34:04]
All tests successful.
Files=1, Tests=36, 6 wallclock secs ( 0.10 usr 0.01 sys + 3.34 cusr 0.90 csys = 4.35 CPU)
Result: PASS
So there are errors which do not fail the test. I can reproduce this locally with the same result in 20 reruns. The short runtime is also reproducible for others locally (#71926)
Acceptance criteria
- AC1: no error messages show up in the output of t/14-grutasks.t
- AC2: the test should not fail
Out of scope
- The significant runtime difference between circleCI and local runs, to be handled elsewhere
Suggestions
- Reproduce errors locally
- If the error messages are expected, catch them with Test::Output; otherwise prevent them (or, even better, make the test fail on any unexpected output). Likely the reason the errors do not fail the test is that they happen in a different process than the main one (see the sketch below)
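A minimal sketch of the Test::Output approach, assuming a hypothetical helper run_gru_tasks() that performs the enqueued Gru/Minion jobs within the test process (Test::Output only captures output of the current process, so the jobs must not run in a separate worker process):

use Test::More;
use Test::Output qw(combined_like);

# Capture STDOUT/STDERR of the block and require the expected error message;
# a run without that message would fail the test.
combined_like { run_gru_tasks() }    # hypothetical helper performing the jobs in-process
    qr/Gru job error: No job ID specified/, 'expected "no job ID" error is logged';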
Updated by okurz about 4 years ago
- Copied from coordination #71926: [epic] t/14-grutasks.t takes multiple minutes within circleCI, only 6s locally (but errors but still succeeds?), and no log output visible in circleCI added
Updated by mkittler almost 4 years ago
- Status changed from Workable to Resolved
- Assignee set to mkittler
should be fixed via https://github.com/os-autoinst/openQA/pull/3483
Updated by okurz almost 4 years ago
The PR likely only handled previously unhandled output, but I thought you and kraih saw it as problematic that the test records "success" even though there were errors in e.g. the background Minion jobs.
Or rephrased: Do errors in Minion jobs fail the test?
Updated by mkittler almost 4 years ago
Actually, it is not my PR which fixes the errors but your own commit 241fe49c32a101b7f537160de6901c734b28094c. It adds explicit handling for these errors, and the test would fail if the Minion jobs did not fail (the failure is expected).
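For reference, a minimal sketch of how such an expected failure can be asserted explicitly via the plain Minion API (the task name finalize_job_results, $app and $openqa_job_id are assumptions for illustration, not copied from the actual test):

use Test::More;

# Enqueue the task, perform it synchronously in this process and assert that
# it ends up in the expected "failed" state with the expected error message.
my $minion = $app->minion;    # $app being the test's application instance (assumption)
my $job_id = $minion->enqueue(finalize_job_results => [$openqa_job_id]);
$minion->perform_jobs;
my $info = $minion->job($job_id)->info;
is $info->{state}, 'failed', 'Minion job failed as expected';
like $info->{result}, qr/Finalizing results of \d+ modules failed/, 'failure reason recorded in job result';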
The problem you've mentioned was triggered by a "bad" Mojolicious version but is unrelated to the specific errors mentioned in this ticket's description.