action #16076

closed

parse_junit_log is crashing with xunit from Avocado / proper xunit parsing in openQA

Added by pgeorgiadis almost 8 years ago. Updated almost 7 years ago.

Status: Resolved
Priority: High
Assignee:
Category: Feature requests
Target version:
Start date: 2017-01-18
Due date:
% Done: 0%
Estimated time:
Description

While I was experimenting with the Avocado testing framework, I noticed very strange behavior in openQA while it was parsing the results. openQA finishes the test as incomplete and then automatically clones and restarts the test by itself. The restarting part is what bugs me; it sounds like a bug, unless it is intended behavior that I am not aware of.

openQA fails to parse the results because there are differences between the junit output that comes from slenkins and the xunit output that comes from Avocado. For example, in the testsuite tag, the slenkins junit format uses these extra attributes: package, hostname, id, disabled.

That being said, if I change the results produced by Avocado by hand and add these attributes ...

- <testsuite name="avocado" tests="3" errors="0" failures="1" skipped="0" time="3.5769162178" timestamp="2016-05-04 14:46:52.803365">
+ <testsuite package="avocado" hostname="localhost" id="1" disabled="0" name="avocado" tests="3" errors="0" failures="1" skipped="0" time="3.5769162178" timestamp="2016-05-04 14:46:52.803365">
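Rather than requiring these attributes, a parser could treat them as optional. A minimal sketch in Python (not openQA's actual Perl code; the defaults chosen here are my own assumption) that reads the Avocado testsuite tag without crashing on the missing attributes:

```python
import xml.etree.ElementTree as ET

# Avocado's xunit omits package/hostname/id/disabled on <testsuite>.
xml = ('<testsuite name="avocado" tests="3" errors="0" failures="1" '
       'skipped="0" time="3.5769162178" '
       'timestamp="2016-05-04 14:46:52.803365"/>')
suite = ET.fromstring(xml)

# Read optional attributes with fallbacks instead of assuming they exist.
package = suite.get('package', suite.get('name', 'unknown'))
hostname = suite.get('hostname', 'localhost')
disabled = int(suite.get('disabled', '0'))

print(package, hostname, disabled)
```

With this approach the original, unmodified Avocado output parses cleanly, and slenkins output (which carries all the attributes) is unaffected.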

... then openQA finishes the test (an improvement over the previous incomplete state), but it is still not perfect. The problem now is that openQA marks the individual testcases as failed (see here).

This happens because of another difference between the junit from slenkins and the xunit from Avocado, this time in the testcase tag. More specifically, slenkins uses an attribute called status whose value is either success or failure. Avocado has no such status attribute. Instead, it uses different logic: if a test fails, the testcase element contains a child element called failure. So if a testcase is followed by a failure element, that test is marked as failed; otherwise it is marked as passed. To verify this, I modified the Avocado results by hand to match slenkins' format:

example of a test that passed:

- <testcase classname="SleepTest" name="1-sleeptest.py:SleepTest.test" time="1.00204920769"/>
+ <testcase classname="SleepTest" name="1-sleeptest.py:SleepTest.test" time="1.00204920769" status="success"/>

example of a test that failed:

- <testcase classname="FailTest" name="2-failtest.py:FailTest.test" time="0.00120401382446">
+ <testcase classname="FailTest" name="2-failtest.py:FailTest.test" time="0.00120401382446" status="failure">
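The two conventions above can be reconciled in the parser itself. A hedged sketch in Python (illustration only, not openQA's implementation): prefer an explicit status attribute when present, and otherwise infer the status from the presence of a failure child element.

```python
import xml.etree.ElementTree as ET

# Two testcases in Avocado style: no status attribute; a failed test
# carries a <failure> child element instead.
xml = """
<testsuite name="avocado" tests="2" errors="0" failures="1" skipped="0" time="1.0">
  <testcase classname="SleepTest" name="1-sleeptest.py:SleepTest.test" time="1.00204920769"/>
  <testcase classname="FailTest" name="2-failtest.py:FailTest.test" time="0.00120401382446">
    <failure>failure reason goes here</failure>
  </testcase>
</testsuite>
"""

results = {}
for case in ET.fromstring(xml).iter('testcase'):
    # Prefer an explicit status attribute (slenkins style); otherwise
    # infer it from the presence of a <failure> child (Avocado style).
    status = case.get('status')
    if status is None:
        status = 'failure' if case.find('failure') is not None else 'success'
    results[case.get('name')] = status

print(results)
```

Note the `is not None` check: an empty XML element is falsy in ElementTree, so a bare truthiness test on `case.find('failure')` would misclassify a `<failure/>` with no text.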

After making these changes, openQA works as expected: it parsed my Avocado results correctly (see here).

If you want to test this yourself, you can copy the xunit example from Avocado's documentation and write a simple openQA test that asks it to parse the file.

For example:

assert_script_run "wget --quiet " . data_url('console/avocado.xml');
parse_junit_log("avocado.xml");

I see two solutions here:

  1. Modify the results produced by Avocado to make them look like slenkins' output (IMHO this is a hacky way of resolving it).
  2. Enhance the parse_junit_log function to understand the xunit from Avocado.
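For reference, option 1 could at least be automated rather than done by hand. A hypothetical Python sketch (the `normalize` function and its default values are my own, chosen to mirror the hand edits shown above) that injects the attributes slenkins emits:

```python
import xml.etree.ElementTree as ET

def normalize(avocado_xml):
    """Rewrite Avocado xunit so it carries the attributes openQA expects."""
    suite = ET.fromstring(avocado_xml)
    # Add the <testsuite> attributes slenkins emits but Avocado omits.
    for attr, value in (('package', suite.get('name', '')),
                        ('hostname', 'localhost'),
                        ('id', '1'),
                        ('disabled', '0')):
        if attr not in suite.attrib:
            suite.set(attr, value)
    # Derive the status attribute from the presence of a <failure> child.
    for case in suite.iter('testcase'):
        if 'status' not in case.attrib:
            case.set('status',
                     'failure' if case.find('failure') is not None
                     else 'success')
    return ET.tostring(suite, encoding='unicode')
```

This keeps the hack out of the test results themselves, but option 2 (a tolerant parse_junit_log) is still the cleaner fix.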

More information about xunit/json in Avocado can be found here and there.
