action #175746 (closed)

coordination #127031: [saga][epic] openQA for SUSE customers
coordination #130414: [epic] Improved code coverage in os-autoinst

[sporadic][unstable][os-autoinst] Failed test 'expected data received via WebSocket' 3: # at t/27-consoles-vmware.t line 230.

Added by okurz 14 days ago. Updated 12 days ago.

Status: Resolved
Priority: Normal
Assignee: tinita
Category: Regressions/Crashes
Target version:
Start date: 2025-01-18
Due date:
% Done: 0%
Estimated time:
Description

Observation

From
https://github.com/os-autoinst/os-autoinst/actions/runs/12838180749/job/35803306549?pr=2624#step:3:445

3: 
3:     #   Failed test 'expected data received via WebSocket'
3:     #   at t/27-consoles-vmware.t line 230.
3:     #          got: ''
3:     #     expected: 'message sent from raw socket'
3:     # Looks like you failed 1 test of 6.
3: 
3: #   Failed test 'turning WebSocket into normal socket via dewebsockify'
3: #   at t/27-consoles-vmware.t line 252.
3: # Looks like you failed 1 test of 7.
3: [23:05:20] t/27-consoles-vmware.t ................... 
3: Dubious, test returned 1 (wstat 256, 0x100)
3: Failed 1/7 subtests 
3:  (less 1 skipped subtest: 5 okay)
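
The empty got value means the test read nothing from the WebSocket before the comparison ran. A minimal sketch of the kind of Test::More check involved (a hedged reconstruction; $received and the surrounding setup are assumptions, not the actual test code):

    use strict;
    use warnings;
    use Test::More;

    # Assumed shape of the check at t/27-consoles-vmware.t line 230: the test
    # writes 'message sent from raw socket' to the raw side that dewebsockify
    # bridges and expects the same bytes to arrive on the WebSocket side.
    my $received = '';    # in the real test: data read from the WebSocket
    is $received, 'message sent from raw socket', 'expected data received via WebSocket';

    done_testing;

Run standalone, this reproduces the exact "got: ''" diagnostic shown above.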

Likely related to commit 196776f0 (https://github.com/os-autoinst/os-autoinst/pull/2598, #174232).

Reproducible

Observed in CircleCI tests only once so far. Locally I reproduce the failure 100% of the time with:

t/27-consoles-vmware.t .. 1/? EV: error in callback (ignoring): Label not found for "last T2_SUBTEST_WRAPPER" at /usr/lib/perl5/vendor_perl/5.26.1/Test2/Hub/Subtest.pm line 67.

    #   Failed test 'expected data received via WebSocket'
    #   at t/27-consoles-vmware.t line 230.
    #          got: ''
    #     expected: 'message sent from raw socket'
    # Looks like you failed 1 test of 6.
t/27-consoles-vmware.t .. 4/? 
#   Failed test 'no (unexpected) warnings (via done_testing)'
#   at t/27-consoles-vmware.t line 291.
# Looks like you failed 1 test of 6.
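
To measure the fail ratio locally, one can run the file in a loop and count non-zero exits; a minimal sketch (the run count is arbitrary, and the test may need the repository's usual PERL5LIB/environment setup):

    use strict;
    use warnings;

    # Run t/27-consoles-vmware.t repeatedly and count failing runs.
    my ($fails, $runs) = (0, 20);
    for (1 .. $runs) {
        system($^X, 't/27-consoles-vmware.t') == 0 or $fails++;
    }
    print "failed $fails/$runs runs\n";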
Actions #1

Updated by okurz 14 days ago

  • Description updated (diff)
Actions #2

Updated by okurz 12 days ago

  • Priority changed from High to Urgent
Actions #3

Updated by mkittler 12 days ago

  • Assignee set to mkittler
Actions #4

Updated by mkittler 12 days ago

  • Status changed from New to In Progress
Actions #5

Updated by tinita 12 days ago · Edited

I pushed a fix to https://github.com/os-autoinst/os-autoinst/pull/2624.

I couldn't reproduce the failure on master, and the failure seen in a different PR is a different one.

So I checked PR 2624.

Actions #6

Updated by tinita 12 days ago

  • Assignee changed from mkittler to tinita
Actions #7

Updated by mkittler 12 days ago

This is not reproducible at all on my TW system. I ran the test many times and tried inserting sleeps in some places to check for race conditions. I'll check with podman in our CI container.
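
For reference, the sleep probing mentioned above can be as simple as dropping a sub-second delay into a suspected window (a sketch; where to place it is an assumption):

    use Time::HiRes qw(sleep);

    # Widen a suspected race window, e.g. between spawning dewebsockify and
    # connecting to its port; a genuine race should then fail (or pass) reliably.
    sleep 0.5;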

Actions #8

Updated by tinita 12 days ago

  • Priority changed from Urgent to Normal

Normal priority, since it is only a failure in one PR.

Actions #9

Updated by mkittler 12 days ago

Ok, I saw @tinita's comment. So this was just caused by changes in the PR itself. I must really say: please don't create tickets blaming other PRs so soon. I just wasted hours trying to figure out how to reproduce this on master.

Actions #10

Updated by tinita 12 days ago

  • Status changed from In Progress to Resolved

PR https://github.com/os-autoinst/os-autoinst/pull/2624 is green again after I pushed my fix; resolving.
