action #127868

coordination #127031: [saga][epic] openQA for SUSE customers

[qaaas] openQA test results should be archived and not tampered size:M

Added by okurz about 1 year ago. Updated 11 months ago.

Status: New
Priority: Normal
Assignee: -
Category: Feature requests
Target version:
Start date: 2023-04-18
Due date:
% Done: 0%
Estimated time:
Tags:

Description

Motivation

From https://confluence.suse.com/display/qasle/Engineering+proposal+-+openQA+SLE+Module , with no more detail known than the subject line

Acceptance criteria

  • AC1: The exact use case the customer was thinking of is known (e.g. at which step checksums should be computed or signing should happen, how validation should work afterwards, and what parts should be covered)
  • AC2: openQA can be configured to provide a checksum covering all relevant test details to show that results have not been "tampered" with

Suggestions

Maybe just do something like:

okurz@openqa:/var/lib/openqa/testresults/10938/10938628-sle-15-SP5-Windows_10_BIOS-x86_64-Build2.169-wsl-main+register@win10_64bit> tar -caf - * | sha256sum
e8f3c4b03783a82f513d1c0b5a68f64c98443d59a12f9b895257cb22921d2cf0  -

and show the hash sum in the openQA job details and offer it over the API, based on configuration.
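
A minimal sketch of how that could look on the shell, assuming GNU tar (the --sort, --owner, --group and --mtime options are only needed if the archive itself should be byte-for-byte reproducible; the output file names are just examples):

# archive one job's result directory reproducibly and record its checksum
# (GNU tar; fixed sort order, ownership and mtime keep the archive byte-for-byte stable)
cd /var/lib/openqa/testresults/10938/10938628-sle-15-SP5-Windows_10_BIOS-x86_64-Build2.169-wsl-main+register@win10_64bit
tar --sort=name --owner=0 --group=0 --numeric-owner --mtime='UTC 2023-04-18' -cf /tmp/results.tar .
sha256sum /tmp/results.tar > /tmp/results.tar.sha256

# later: anyone holding the archive and the published checksum can check for tampering
sha256sum -c /tmp/results.tar.sha256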

Actions #1

Updated by szarate about 1 year ago

Preferably, openQA should use a key or certificate (either provided or self-generated) as part of the signing process, allowing users to verify that their results haven't been tampered with.
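
A rough sketch of how that could work with plain OpenSSL (key and file names are placeholders; whether openQA generates the key itself or a certificate is provided would be a configuration detail):

# one-time: generate a signing key pair (or import an existing key/certificate instead)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out openqa-signing.key
openssl pkey -in openqa-signing.key -pubout -out openqa-signing.pub

# sign the archived results (e.g. the results.tar from the suggestion above)
openssl dgst -sha256 -sign openqa-signing.key -out results.tar.sig results.tar

# anyone with the public key can verify that the results were not modified
openssl dgst -sha256 -verify openqa-signing.pub -signature results.tar.sig results.tar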

Actions #2

Updated by szarate about 1 year ago

  • Tags set to qaaas
Actions #3

Updated by mkittler almost 1 year ago

  • Subject changed from [qeaas] openQA test results should be archived and not tampered to [qeaas] openQA test results should be archived and not tampered size:M
  • Description updated (diff)
Actions #4

Updated by mkittler almost 1 year ago

  • Status changed from New to Workable
Actions #5

Updated by okurz 12 months ago

  • Subject changed from [qeaas] openQA test results should be archived and not tampered size:M to [qaaas] openQA test results should be archived and not tampered size:M
Actions #6

Updated by kraih 12 months ago

  • Assignee set to kraih
Actions #7

Updated by kraih 12 months ago

  • Status changed from Workable to Feedback

Contacted Santi, who promised to update the user story. Then we can revisit the exact requirements.

Actions #8

Updated by szarate 12 months ago

There are two versions of this requirement, but the one that's coming from FX is about being able to ensure that hashes submitted in test reports, which point to a set of test execution reports, weren't modified by a third party.

I have some background on how VVSG certifications work and the kind of documents that have to be submitted; in most cases they are part of a test plan plus test reports (see this report).

Specifically:

  • P25 System Test and Verification, item 1 (likely under NDA)
  • P27 11_QA Program and 12_System Change Notes
  • P29 Section 3.1 (an example of a usability test report is here)
  • P39 Table 4-1
  • P40 Table 4-2

And in the case of FIPS, one has to submit the commands (as part of the test plan) and their respective outputs, and so on...

In the meantime, I'm looking for a better answer to the question: "Why is it important for the user to archive the results and ensure that they weren't tampered with?"

Actions #9

Updated by szarate 12 months ago

In the meantime, here is another (internal) use case that goes in a different direction.

As a Security Engineer, I have to give Auditors evidence that certain programs work in a certain way. It would ease my life if openQA could support me in that way.

For example:
When openQA runs a successful maintenance fips command line test under certain conditions, I would not only like to have the result, but also a way to reproduce it without openQA.

The initial idea would be that the commands which are run in openQA for actually doing the test can be extracted. The commands may be:

  • compiling an input artefact/code
  • command line calls of openssl, grep, awk, echo and others
  • processing some data generated during the test
  • running the compiled code
  • showing some result state (bash variable of success or fail)


The idea of extraction would be that the sequence of the test procedure becomes available as an asset, perhaps a shell script, which, together with the right input artefacts, could reproduce the successful test in another unixish environment.

Of course there could be preconditions like "Product XY with installed packages x, y, z and an encrypted root volume", perhaps as a comment within the shell script. Some of these preconditions can be fulfilled by the script, but others have to be handled outside of the test script, because they have to be done by the external evidence-reproducer (aka the auditor).
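
Purely as an illustration of the idea, such an extracted script could look roughly like this (everything below is hypothetical; nothing was extracted from a real os-autoinst log):

#!/bin/bash
# Precondition (to be ensured by the auditor, outside of this script):
#   Product XY with installed packages x, y, z and an encrypted root volume
set -e

# sequence of commands as they were run during the openQA test
grep -qx 1 /proc/sys/crypto/fips_enabled   # check that FIPS mode is active
openssl sha256 /etc/os-release             # example of an approved digest operation
echo "result: PASS"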


The value of this proposal is that it is easy for this company to regenerate reproducible evidence, so others do not have to take our word for it but can verify it independently, AND it's easy for us to have.

The key for both stories is that there is an external party who is going to execute the test cases in the same way as stated in the submitted documentation. I think the easiest way to do this is to have a report generator that uses the os-autoinst log to produce such a report.

Actions #10

Updated by kraih 12 months ago

szarate wrote:

The key for both stories is that there is an external party who is going to execute the test cases in the same way as stated in the submitted documentation. I think the easiest way to do this is to have a report generator that uses the os-autoinst log to produce such a report.

Thank you, that is very valuable information. So far our assumption was that there was merely a requirement to guarantee the integrity of archived test results against accidental corruption during long-term storage. Clearly it wouldn't be enough to create checksums for the test output alone; the inputs also need to be auditable and reproducible. That adds a whole new level of complexity to this feature.

Actions #11

Updated by kraih 12 months ago

  • Status changed from Feedback to Workable
Actions #12

Updated by kraih 12 months ago

  • Assignee deleted (kraih)
Actions #13

Updated by okurz 12 months ago

  • Status changed from Workable to New
  • Target version changed from Ready to future

I am looking for another, older ticket about that latter request. A test result is only completely reproducible if all steps are followed with the same timing, the same environment, etc. Expecting a "simple list of commands" automatically generated from log files is just naive and won't fly. @szarate until we can clarify further I'd rather keep this ticket out of the backlog for now.

Actions #14

Updated by szarate 12 months ago

okurz wrote:

I am looking for another, older ticket about that latter request. A test result is only completely reproducible if all steps are followed with the same timing, the same environment, etc. Expecting a "simple list of commands" automatically generated from log files is just naive and won't fly. @szarate until we can clarify further I'd rather keep this ticket out of the backlog for now.

I have a call with FX tomorrow about his request, but I agree that this needs to be clarified further before it's actually workable.

I am looking for another, older ticket about that latter request. A test result is only completely reproducible if all steps are followed with the same timing, the same environment, etc.

The timing is not necessary, but the steps are... still, we have to iterate over the requirement and do proper discovery before working on it.

kraih wrote:

Clearly it wouldn't be enough to create checksums for the test output alone; the inputs also need to be auditable and reproducible. That adds a whole new level of complexity to this feature.

The more I think about this feature and some others, the more "reproducible builds/tests" comes to mind... taking the timing variable out of the equation, there are a few things we might be able to do to achieve that... and I feel that we're halfway there, but that's a different story, and another whole big can of worms...

/me dreams in Gage repeatability and reproducibility (GR&R) applied to software instead of measuring equipment

Actions #15

Updated by szarate 11 months ago

I had a good chat with FX some days ago, and one thing that became clear to me is that while this might be a requirement, it is not yet set in stone and has to be defined better by PM. At this point in time this is more related to notarization of the test artifacts as part of an SBOM.
