action #34291

[functional][y][yast] reproduce failure with autoyast_firewalld_SuSEfirewall_configuration and collect logs

Added by riafarov about 4 years ago. Updated about 4 years ago.

Bugs in existing tests

Test fails due to some race condition, as firewalld command fails and in the end network device is not getting up.

As network is down, we cannot collect logs automatically, so need to reproduce it manually and collect all logs and file a bug.
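For the manual reproduction, a minimal log-collection sketch could look like the following. All paths and unit names are assumptions, not taken from the failing job; the commands write everything into one local archive so it can be copied off once the network is repaired:

```shell
# Collect firewalld/AutoYaST debug data into one archive on the SUT.
# Paths and unit names are assumptions; run as root.
OUT=/tmp/firewalld-debug
mkdir -p "$OUT"
# systemd journal for the whole boot and for the firewalld unit
journalctl -b           > "$OUT/journal-boot.txt"      2>/dev/null || true
journalctl -u firewalld > "$OUT/journal-firewalld.txt" 2>/dev/null || true
# YaST2 / AutoYaST logs, if present on this system
cp -r /var/log/YaST2 "$OUT/" 2>/dev/null || true
# single archive to attach to the bug report
tar czf "$OUT.tgz" -C /tmp firewalld-debug
```

The `|| true` guards keep the script going even when a source is missing, so a partial archive is still produced.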

openQA test in scenario sle-15-Installer-DVD-x86_64-autoyast_firewalld_SuSEfirewall_configuration@64bit fails in


Fails since (at least) Build 540.2

Expected result

Last good: 538.1 (or more recent)

Further details

Always latest result in this scenario: latest


#1 Updated by riafarov about 4 years ago

okurz, please schedule and potentially raise the priority. In the latest builds it seems to be easy to reproduce.

#2 Updated by okurz about 4 years ago

  • Due date set to 2018-04-24
  • Category set to Bugs in existing tests
  • Target version set to Milestone 15

Well, checking results from the latest build shows it passed in 550.2. Please crosscheck in the next sprint whether it is now fixed by product changes, or what happened, or if it fails again in the next build. I keep the prio as normal as it did not fail in the most recent build.

#3 Updated by okurz about 4 years ago

oh, btw, asmorodskyi managed to come up with a cool approach to "repair the network" in a post_fail_hook. Maybe this is something you want to try out as well in a more generic approach?
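A rough sketch of what such a network-repair step could run on the SUT console; the interface name and the wicked/dhclient fallbacks are assumptions about the SUT, not details confirmed from the failing job:

```shell
# Hypothetical repair sequence a post_fail_hook could execute.
# eth0 and the tooling (wicked vs. dhclient) are assumptions.
systemctl stop firewalld 2>/dev/null || true   # stop the service that raced
ip link set dev eth0 up  2>/dev/null || true   # bring the link back up
wicked ifup eth0 2>/dev/null || dhclient eth0 2>/dev/null || true
STATUS=attempted
```

Each step is allowed to fail individually, so the hook still reaches the log-upload stage even when only part of the repair works.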

#4 Updated by riafarov about 4 years ago

okurz Yeah, I was thinking about the same thing, but it can be the case that the network won't start before we do some extra steps with the firewalld service. As I've mentioned above, the issue is sporadic and happens due to some race condition. Retriggering the same build sometimes helps. That's why I've created a ticket: I could not reproduce it within 30 minutes and collect logs.

#5 Updated by okurz about 4 years ago

  • Due date changed from 2018-04-24 to 2018-05-22
  • Target version changed from Milestone 15 to Milestone 16

Not enough capacity in S15 or S16, moving.

#6 Updated by JERiveraMoya about 4 years ago

I was able to reproduce it with a VM every time.

#8 Updated by riafarov about 4 years ago

Logs not collected, but the bug is there.

#9 Updated by riafarov about 4 years ago

  • Status changed from Workable to Resolved

Logs are now provided.

#10 Updated by riafarov about 4 years ago

  • Assignee set to riafarov
