action #54548

Use of `record_serialresult` in `validate_script_output` is very confusing

Added by AdamWill 12 months ago. Updated 10 months ago.

Category: Concrete Bugs


This dates back all the way to this commit in 2014:

it introduced validate_script_output, with a bit that was commented at the time as being dodgy:

# abusing the function
$autotest::current_test->record_serialresult($output, $res);

well, it's still basically there and it's still dodgy. Currently it reads:

# abusing the function
$autotest::current_test->record_serialresult($output, $res, $output) unless ($args{quiet});

but this does the same thing. In practice what this means is, every time validate_script_output is called, you get a text thumbnail in your test which reads:

# wait_serial expected: (whatever the output was)

# Result:
(whatever the output was)

The two strings will always be identical. If the validate_script_output check failed, the thumbnail will be bordered red, indicating a failure.

In the Fedora tests, we have this check:

validate_script_output 'rpm -q python3-kickstart', sub { $_ !~ m/^python3-kickstart-1-1.noarch$/ };

which basically means "ensure the output of rpm -q python3-kickstart is anything but 'python3-kickstart-1-1.noarch'". When this fails you get an extremely confusing result in the UI: you get a red-bordered text thumbnail reading:

# wait_serial expected: python3-kickstart-1-1.noarch

# Result:

That does not make it obvious at all what the problem is, and is arguably more misleading than just not including the thumbnail at all.

Really, though, even if the check were a positive assertion, the thumbnail would be equally misleading, because both strings are always the exact output of the command - despite the wording "expected: ", that first string is never actually any kind of representation of what the check "expected".

I'm not sure why this was ever thought to be a good idea, but perhaps we should rethink it?

Example of the above case:

Attachment: validate-script-output.png (27.8 KB), added by tinita, 2019-08-14 15:41

Related issues

Related to openQA Project - action #54347: [tools] Wrong message on serial missmatch (Resolved, 2019-07-17)


#1 Updated by AdamWill 12 months ago

For instance, instead of calling record_serialresult from validate_script_output at all, why not just call record_resultfile directly, with more accurate output?

#2 Updated by okurz 12 months ago

  • Related to action #54347: [tools] Wrong message on serial missmatch added

#3 Updated by okurz 12 months ago

  • Category set to Concrete Bugs

I wonder if it really took 5 years for you and dheidler to report the same issue, and then within the very same week – see – or if we really have a regression now.

#4 Updated by AdamWill 12 months ago

I think it's just a weird coincidence. I can't swear to it, but I think I've run into this before but not bothered to file an issue. The code doesn't look like it's changed recently.

#5 Updated by tinita 11 months ago

Maybe it was done like that because the argument to validate_script_output() is a coderef and not a regex.

But it would be possible to convert the coderef to a string again with the B::Deparse module.
Additionally it would be nice if one could just pass a regex to validate_script_output(). I guess that's a very common usage.

Then displaying the regex would be even easier.
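For context, a minimal sketch of tinita's B::Deparse idea: turning a checker coderef (like the one from the Fedora test above) back into readable Perl source. This is just an illustration of the module, not the patch that was eventually written; the variable names are made up for the example.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use B::Deparse;

# A checker coderef like the one passed to validate_script_output
my $check = sub { $_ !~ m/^python3-kickstart-1-1.noarch$/ };

# Deparse the coderef back into (approximate) Perl source text.
# Note: the output includes the pragmas in effect at compile time
# ("use strict;", "use warnings;"), which is why they show up in
# the thumbnails discussed below.
my $deparse = B::Deparse->new();
my $source  = $deparse->coderef2text($check);
print $source, "\n";
```

The deparsed text is a reconstruction from the compiled optree, so it will not match the original source character for character (e.g. the `m//` match comes back as an explicit `$_ =~ ...` form wrapped in `not(...)`).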

#6 Updated by tinita 11 months ago

I just tested my idea with B::Deparse with the following function in console/

validate_script_output("pwd", sub {

This is the output:

# wait_serial expected: {
    package sqlite3;
    use warnings;
    use strict;

# Result:

Would that help?

But maybe it should also use its own function instead of record_serialresult() to avoid the misleading text.

#7 Updated by AdamWill 11 months ago

That's a bit hard to understand, I think. As I said in comment #1 here, the implementation of record_serialresult isn't necessarily the problem - arguably validate_script_output just shouldn't use it.

#8 Updated by tinita 11 months ago

basically B::Deparse would turn your subroutine into a string again.

In your case it would look like this:

# wait_serial expected: {
    package your::package;
    use warnings;
    use strict;
    not(($_ =~ /^python3-kickstart-1-1.noarch$/));

And if we write our own function it could look like this:

# validate_script_output expected: {
    package your::package;
    use warnings;
    use strict;
    not(($_ =~ /^python3-kickstart-1-1.noarch$/));

#9 Updated by AdamWill 11 months ago

yes, but I think your thinking is locked into this whole 'print what was expected' path because that's what record_serialresult does. But we don't actually need to do that. The output here doesn't have to be "expected X, got Y" at all. It can just be something like "validate_script_output on line X failed, output that did not validate was: Y" or something. We don't necessarily need to try and show right in the output what the validation code was, if it comes out awkward.
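To make the suggestion concrete, here is a hypothetical sketch of the kind of message AdamWill describes - reporting where the validation failed and what the unvalidated output was, with no "expected:" framing at all. The helper name `format_validation_failure` is invented for this example and is not part of the os-autoinst API.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper (NOT the actual os-autoinst implementation):
# describe a validate_script_output failure by pointing at the call
# site and showing the output that did not validate, instead of
# pretending there was a wait_serial "expected" string.
sub format_validation_failure {
    my ($command, $output) = @_;
    # caller(0) gives the file and line of the call site
    my (undef, $file, $line) = caller(0);
    return "validate_script_output for '$command' failed at $file line $line.\n"
         . "Output that did not validate:\n$output\n";
}

my $message = format_validation_failure(
    'rpm -q python3-kickstart',
    'python3-kickstart-1-1.noarch',
);
print $message;
```

For the Fedora case from the issue description, this would render as "validate_script_output for 'rpm -q python3-kickstart' failed at ... Output that did not validate: python3-kickstart-1-1.noarch", which states the problem directly.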

#10 Updated by tinita 11 months ago


But it would be nice to show what was expected anyway.

I attached a screenshot of what I just tried out.

#11 Updated by AdamWill 11 months ago

I mean...that still reads pretty strangely. Also, your check there looks a bit weird - is there really no $_ =~ or anything in it, or did it get removed by the parsing?

#12 Updated by tinita 11 months ago

$_ =~ m/foo/

is really the same as

/foo/

because $_ is the default variable for regex matches, and the m in m/.../ is optional.

Maybe the result string should be on top, and this deparsed output of the check subroutine could come below (with a short explanation), for people who would like to see it.

I just wrote my first openQA test a few weeks ago, and this wait_serial output was something which confused me too; to me, the check subroutine would be helpful additional output.

What do others think?

In any case, it should be easy to replace the current confusing output.

#13 Updated by AdamWill 11 months ago

OK, yeah, that's really super-compressed and hard to understand, then. I think including the deparsed validation coderef can work, but the text around it needs to explain clearly what's going on.

#14 Updated by okurz 11 months ago

I think we should try to abstract away anything that looks like perl. regex is fine but the idea is that the test API is basically its own language. This is also the reason why we have API methods like get_var and set_var.

#15 Updated by AdamWill 11 months ago

that's a bit difficult in this case as validate_script_output explicitly compares against an arbitrary code ref, though :/

#16 Updated by okurz 11 months ago

yes, true. How tinita did it in is actually just fine once the automatically added use statements are stripped.

tinita, don't be shy and assign a ticket to yourself when you are working on one. When you can't because you are not part of a project yet, talk with your managers about missing steps in onboarding :) I added you to the parent project now.

#17 Updated by okurz 11 months ago

  • Assignee set to tinita

#18 Updated by tinita 11 months ago

  • Status changed from New to In Progress
  • % Done changed from 0 to 50
  • Difficulty set to easy

#19 Updated by coolo 10 months ago

  • Target version set to Current Sprint

#20 Updated by okurz 10 months ago

#21 Updated by tinita 10 months ago

  • Status changed from In Progress to Feedback
  • % Done changed from 50 to 100

#22 Updated by tinita 10 months ago

  • Status changed from Feedback to Resolved

#23 Updated by AdamWill 10 months ago

Thanks again for this, it's a big improvement :)

Also available in: Atom PDF