action #1303

Test dependencies

Added by aplanas over 8 years ago. Updated almost 8 years ago.

Feature requests
The context of the tests could be used to establish a local order between them. Currently the tests are ordered via a numbering convention, which is fragile.


#1 Updated by ancorgs over 8 years ago

  • Target version set to future

#2 Updated by coolo almost 8 years ago

Can you elaborate on this one? I don't understand

#3 Updated by aplanas almost 8 years ago

Better description:

  • We are using soft links from test.d/ to SUITE.d/ to set a local order between tests (01_xx, 02_yy)
  • This solution is fragile: you need to know that 10_xx changes the current image by adding a new package / user that 15_xx will use. If for some reason we remove 10_xx or rename it to 20_xx, we break 15_xx
  • I propose a different solution: add some context information to the tests, so that we can declare in 10_xx something like:

provides 'ADDED_USER1'

and in 15_xx:

requires 'ADDED_USER1'

  • We remove the numbering, and the system resolves the order (including parallel executions) following these requires / provides rules

  • Once a test with a provides is executed, several new tests can be executed in parallel on the new remote workers ;)
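The renaming problem disappears once the order is derived from the declarations themselves. A minimal Python sketch of the idea (all names are invented, not actual os-autoinst code):

```python
# Hypothetical sketch (names invented, not os-autoinst API): deriving
# the execution order from requires/provides declarations instead of
# relying on numeric prefixes like 10_xx / 15_xx.

def resolve_order(tests):
    """tests maps name -> {'requires': set, 'provides': set}.
    Returns batches; tests in one batch could run in parallel."""
    satisfied = set()
    remaining = dict(tests)
    batches = []
    while remaining:
        ready = sorted(name for name, t in remaining.items()
                       if t['requires'] <= satisfied)
        if not ready:
            raise RuntimeError('unsatisfiable requires: %s' % sorted(remaining))
        batches.append(ready)
        for name in ready:
            satisfied |= remaining.pop(name)['provides']
    return batches

tests = {
    '10_xx': {'requires': set(), 'provides': {'ADDED_USER1'}},
    '15_xx': {'requires': {'ADDED_USER1'}, 'provides': set()},
}
```

Renaming 10_xx to 20_xx would change nothing here, since 15_xx is ordered by the 'ADDED_USER1' declaration, not by the file name.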

#4 Updated by coolo almost 8 years ago

  • Category set to 132

#5 Updated by aplanas almost 8 years ago

  • Assignee set to aplanas
  • Target version deleted (future)

I want to work on this.

#6 Updated by aplanas almost 8 years ago


The goal of this task is to provide a mechanism to define the
dependencies between tests in a dynamic way, giving the scheduler the
information it needs to decide which test to run next. With this
information, the scheduler can decide to execute several tests in
parallel whenever the declarations made by the tests guarantee a safe
execution even when the tests run out of order.

Changes in the test

Add three methods to every test (implemented in base test class):

  • sub requires() {}
  • sub provides() {}
  • sub priority() {}

provides will return a list of strings. Each string names a
requirement that other tests can demand and that this test provides.

requires will return a list of lists of strings (or whatever
combination of sigils that is in Perl). This is equivalent to a set of
CNF (conjunctive normal form) clauses. For example:

[['a', 'b'], ['c']]

is equivalent to the clause:

(a || b) && (c)

priority will return a number and will be used as a tie-breaker by a
non-parallel scheduler. My proposal is that we always return 0 in the
base class, and that the bigger the number, the higher the priority.
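As a sketch of this interface (the real implementation would go in the Perl base test class; this Python version and all its names are illustrative assumptions):

```python
# Illustrative Python sketch of the proposed base-class interface; the
# actual proposal targets the Perl base test class in os-autoinst.

class BaseTest:
    def requires(self):
        # List of lists of strings, read as CNF clauses:
        # [['a', 'b'], ['c']] means (a || b) && (c).
        return []

    def provides(self):
        # Requirements this test makes available once it succeeds.
        return []

    def priority(self):
        # Tie-breaker for a non-parallel scheduler; bigger wins.
        return 0

def cnf_satisfied(requires, available):
    """A CNF is satisfied when every clause contains at least one
    requirement already present in the available set."""
    return all(any(r in available for r in clause) for clause in requires)
```

Note that an empty requires list is trivially satisfied, which is what makes such tests runnable from the start.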

Changes in the driver

The driver will load all the tests and collect, in a database, their
requires, provides and priorities.

Changes in the scheduler (now in the driver)

The scheduler will use a simple linear algorithm, O(n), to find the
list of tests that can be executed, evaluating the requires of each
test against the current set of available 'requirements'. Initially
the set of available requirements is empty, so the only tests that can
be executed are the ones whose requires returns an empty list of lists.

When a test ends successfully, the list of requirements it provides is
added to the 'requirements' set.

The algorithm then searches linearly, among the unscheduled tests, for
the ones whose CNF clauses evaluate to True against the 'requirements'
set.
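That loop could look roughly like this (an illustrative Python sketch; the real code would live in the os-autoinst driver and all names here are assumptions):

```python
# Illustrative sketch of the non-parallel scheduling loop described
# above (not actual os-autoinst code; all names are invented).

def cnf_satisfied(requires, available):
    # every CNF clause needs at least one available requirement
    return all(any(r in available for r in clause) for clause in requires)

def schedule(tests):
    """tests: list of (name, requires, provides, priority) tuples.
    Repeatedly picks the runnable test with the highest priority and
    adds its provides to the set of available requirements."""
    available = set()
    pending = list(tests)
    order = []
    while pending:
        runnable = [t for t in pending if cnf_satisfied(t[1], available)]
        if not runnable:
            break  # the remaining tests can never run
        best = max(runnable, key=lambda t: t[3])
        pending.remove(best)
        order.append(best[0])
        available |= set(best[2])
    return order

tests = [
    ('boot',    [],                ['booted'],      0),
    ('adduser', [['booted']],      ['ADDED_USER1'], 1),
    ('usetest', [['ADDED_USER1']], [],              0),
]
```

A parallel scheduler would instead hand out the whole runnable list at once, and re-evaluate each time a test finishes.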

#7 Updated by coolo almost 8 years ago

my gut feeling says that your solution is too complex for what we need. E.g. for the installation we have a clearly defined order, and hardcoding that in complex CNF clauses sounds like overkill.

On the other hand, whether we test chromium or firefox or amarok first doesn't matter at all, but we would need to add tons of provides and requires calls everywhere.

So what do we need? We need some tests to run after or before certain milestones, no? I.e. what systemd offers:

Before: sshboyuseradded, After: firstreboot, Provides: audiosetup
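Such milestone constraints reduce to a plain topological sort, with no CNF machinery. A hedged Python sketch, where the dictionary layout and the sample tests are invented for illustration:

```python
# Hedged sketch of the milestone idea: Before/After/Provides become
# edges in a dependency graph, so a topological sort yields the order.
# All names are invented; this is neither systemd's nor os-autoinst's API.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def milestone_order(tests):
    """tests maps name -> {'provides': milestone (optional),
    'after': [milestones], 'before': [milestones]}."""
    provider = {t['provides']: name
                for name, t in tests.items() if 'provides' in t}
    ts = TopologicalSorter()
    for name, t in tests.items():
        # run after every test providing one of the 'after' milestones
        ts.add(name, *(provider[m] for m in t.get('after', ())))
        # run before every test providing one of the 'before' milestones
        for m in t.get('before', ()):
            ts.add(provider[m], name)
    return list(ts.static_order())

tests = {
    'firstboot': {'provides': 'firstreboot'},
    'audio':     {'provides': 'audiosetup', 'after': ['firstreboot']},
    'sshtest':   {'after': ['firstreboot'], 'before': ['audiosetup']},
}
```

Tests without constraints stay unordered relative to each other, which covers the chromium/firefox/amarok case without extra declarations.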

#8 Updated by ancorgs almost 8 years ago

I'm afraid we are abusing terms like "test" or "scheduler" in this sprint (well, we actually always do, but I'm more concerned about it in this sprint).

When Alberto talks here about the scheduler, I assume he means some component in the worker, completely client-side, deciding what to do next in the current job. So it's not the "central scheduler" distributing jobs to workers.

Am I wrong?

#9 Updated by coolo almost 8 years ago

he talks about the algorithm in the driver selecting the tests to run - this is all deep in os-autoinst.

#10 Updated by coolo almost 8 years ago

after a long coffee break (without coffee actually :) we decided that it's worth it to drop the numbering and the complexity of having some test scheduler.

The solution proposed during that "meeting" was to have a DSL to define the tests to run and to remove all is_applicable logic from the tests.

I couldn't sleep because I was thrilled by the possibilities of that solution, so I scripted a bit to get this going and it looks great IMO:

#11 Updated by coolo almost 8 years ago

  • Status changed from New to Resolved
  • Target version set to Sprint 12

we no longer want dependencies but a manual order, so marking this as resolved
