action #175728
closed coordination #169654: [epic] Create test scenarios for Agama
Generate all Agama json profiles using jsonnet profiles for qemu
0%
Description
Motivation
We are seeing more and more that our Agama profiles stored in git have a lot in common, and it feels like we are repeating the same configuration over and over.
Suggestion from developers:
https://suse.slack.com/archives/C02TLF25571/p1733748870701629?thread_ts=1733737967.351989&cid=C02TLF25571
jsonnet allows splitting the profile into smaller parts and using parameters for building the big final profiles, e.g. define lvm=true to use LVM, rmt=.... to use RMT. Then you can set both to use LVM and RMT. With that you can easily generate lots of profiles with any combination you need from just a few templates.
It is quite similar to the Puppeteer tests.
A clear example of this can be seen here, where having ext4 or xfs as root filesystem means duplicating all the settings:
https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/20973/files
Keeping on adding placeholders like {{root_filesystem_type}} with corresponding openQA variables could degenerate into bad use of those variables, complex conditions, etc. Besides, sometimes we would assume a setting (if documented) does something, but it doesn't, because depending on the profile more related changes could be needed, so it is better to use a templating language. The generated json files would not need to be maintained anyway, only the jsonnet profiles will.
The only setting that cannot be stored in git is the regcode; that one still needs to be replaced at runtime, but the rest can be built.
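As a rough illustration of that suggestion (all names and keys below are illustrative, not taken from any existing profile), a single jsonnet template could expose those switches as parameters:
// Hypothetical sketch only: one template where flags select optional parts.
function(lvm=false, rmt=null) {
  product: { id: 'SLES' },
  // With lvm=true the profile gets an LVM-based storage section.
  [if lvm then 'storage']: { /* LVM layout would go here */ },
  // With rmt set, registration against that RMT server is added.
  [if rmt != null then 'registration']: { url: rmt },
}
Rendering this template with different parameter combinations would then produce the different json profiles from a single source.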
Acceptance criteria
- AC1: All (or most) QE Yam json profiles are generated from jsonnet profiles.
- AC2: jsonnet profiles and resulting json profiles are stored in git
- AC3: jsonnet profiles allow some parametrization which (could or will?) be similar to the one used by Puppeteer.
- AC4: Explore how we can mark the json files in a way that anyone can tell they are auto-generated (under some folder, with some comment in each file?)
- AC5: There is documentation in confluence about how to build locally json based on jsonnet profiles.
Additional information
Updated by JERiveraMoya 2 months ago
- Subject changed from Generate Agama json profile using jsonnet profiles to Generate all Agama json profiles using jsonnet profiles
Updated by jfernandez 2 months ago
- Status changed from Workable to In Progress
- Assignee set to jfernandez
Updated by JERiveraMoya 2 months ago
- Tags changed from qe-yam-jan-sprint-fy25 to qe-yam-feb-sprint-fy25
Updated by JERiveraMoya 2 months ago
AutoYaST rules and classes prepare more than one file, this might help:
tests/autoyast/prepare_rules_and_classes.pm
Updated by jfernandez 2 months ago · Edited
Here is a short summary of the research done in order to decide which approach to implement.
We considered three approaches with different features and requirements:
- Approach 1 (A1): Based on files, each file contains a specific use case and a common section imported from a library. In this approach we won't need additional variables, because the use case contains all the required data in the JSONNET template. For example: we have sle_lvm.jsonnet with the specific definition of the LVM test and the common sections imported from base.libsonnet:
local base = import '../lib/base.libsonnet';
{
  product: base['product'],
  user: base['user'],
  root: base['root'],
  scripts: base['scripts'],
  storage: {
    drives: [
      {
        alias: 'pvs-disk',
        partitions: [
          { search: "*", delete: true }
        ]
      },
    ],
    volumeGroups: [
      {
        name: 'system',
        physicalVolumes: [
          { generate: ['pvs-disk'] },
        ],
        logicalVolumes: [
          { generate: 'default' },
        ],
      },
    ],
  }
}
The file sle_root_filesystem_ext4.jsonnet has the same target but with a different storage configuration.
In this approach we take advantage of code re-usability.
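The common library itself is not shown in the ticket; a minimal sketch of what ../lib/base.libsonnet could contain for this approach (the content below is assumed, only the keys product, user, root and scripts are implied by the example):
// Assumed content of base.libsonnet: sections shared by all profiles.
{
  product: { id: 'SLES' },
  user: {
    fullName: 'Bernhard M. Wiedemann',
    userName: 'bernhard',
    password: 'nots3cr3t',
  },
  root: { password: 'nots3cr3t' },
  scripts: {
    post: [
      {
        name: 'enable root login',
        chroot: true,
        body: |||
          #!/usr/bin/env bash
          echo 'PermitRootLogin yes' > /etc/ssh/sshd_config.d/root.conf
        |||,
      },
    ],
  },
}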
- Approach 2 (A2): Based on libraries, there is one base library called base.libsonnet, as in the previous approach, that contains the common sections. In addition, we create further libraries with specific use cases like LVM or root_filesystem_ext4. The libraries can be selected by using variables to produce the final JSON profile. For example:
local lib = import 'lib/base.libsonnet';
function(storage='default') {
  product: lib.getProduct(),
  user: lib['user'],
  root: lib['root'],
  scripts: lib['scripts'],
  [if storage == 'lvm' then 'storage']: lib.storage['lvm'],
  [if storage == 'ext4' then 'storage']: lib.storage['ext4'],
}
It provides more flexibility and scalability, at the cost of more complex code and harder traceability.
We need to use variables.
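For this approach the library is assumed to additionally export the helper function and the storage variants referenced above; a sketch of that part of lib/base.libsonnet (again, assumed content):
// Assumed additions to base.libsonnet for approach 2.
{
  getProduct(product_id='SLES'):: { id: product_id },
  user: { userName: 'bernhard', password: 'nots3cr3t' },
  root: { password: 'nots3cr3t' },
  scripts: { post: [] },
  storage: {
    ext4: {
      drives: [
        { partitions: [{ filesystem: { path: '/', type: 'ext4' } }] },
      ],
    },
    lvm: {
      drives: [
        { alias: 'pvs-disk', partitions: [{ search: '*', delete: true }] },
      ],
      volumeGroups: [
        {
          name: 'system',
          physicalVolumes: [{ generate: ['pvs-disk'] }],
          logicalVolumes: [{ generate: 'default' }],
        },
      ],
    },
  },
}
Since the profile itself is a top-level function, the storage parameter is supplied at render time, for example with the jsonnet CLI option --tla-str storage=lvm.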
- Approach 3 (A3): Based on a single file, with conditionals and variables to manage the output JSON. This approach gives us an easier learning curve, but it could grow to unmaintainable levels. For example:
local getProduct(product_id='SLES') = {
  id: product_id
};

local getFSPartition(format='ext4') = {
  drives: [
    {
      partitions: [
        { search: "*", delete: true },
        { generate: 'default' },
        { filesystem: { path: '/', type: format } },
      ],
    },
  ],
};

local getMultipathScript() = {
  pre: [
    {
      name: 'activate multipath',
      body: |||
        #!/bin/bash
        if ! systemctl status multipathd ; then
          echo 'Activating multipath'
          systemctl start multipathd.socket
          systemctl start multipathd
        fi
      |||,
    },
  ]
};

local getDefaultScript() = {
  post: [
    {
      name: 'enable root login',
      chroot: true,
      body: |||
        #!/usr/bin/env bash
        echo 'PermitRootLogin yes' > /etc/ssh/sshd_config.d/root.conf
      |||,
    },
  ],
};

local getPassword(hashed=true) =
  if hashed then '$6$vYbbuJ9WMriFxGHY$gQ7shLw9ZBsRcPgo6/8KmfDvQ/lCqxW8/WnMoLCoWGdHO6Touush1nhegYfdBbXRpsQuy/FTZZeg7gQL50IbA/'
  else 'nots3cr3t';

{
  product: getProduct(std.extVar('product')),
  user: {
    fullName: 'Bernhard M. Wiedemann',
    password: getPassword(std.extVar('passwordHashed')),
    hashedPassword: true,
    userName: 'bernhard',
  },
  root: {
    password: getPassword(std.extVar('passwordHashed')),
    hashedPassword: true,
  },
  // Scripts selector: if multipath is selected then add the multipath scripts.
  scripts: if std.extVar('scripts') == 'multipath' then getMultipathScript() else getDefaultScript(),
  // Storage section generator, available options: [lvm, ext4, xfs]
  [if std.extVar('storage') == 'lvm' then 'storage']: {
    drives: [
      {
        alias: 'pvs-disk',
        partitions: [
          { search: "*", delete: true }
        ]
      },
    ],
    volumeGroups: [
      {
        name: 'system',
        physicalVolumes: [
          { generate: ['pvs-disk'] },
        ],
        logicalVolumes: [
          { generate: 'default' },
        ],
      },
    ],
  },
  [if std.extVar('storage') != 'lvm' then 'storage']: getFSPartition(std.extVar('storage'))
}
In addition, it requires managing variables in order not to duplicate sections.
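For illustration, evaluating the snippet above with the external variable storage set to ext4 (external variables are passed with the jsonnet CLI option --ext-str) would produce roughly this storage section in the resulting JSON:
"storage": {
  "drives": [
    {
      "partitions": [
        { "search": "*", "delete": true },
        { "generate": "default" },
        { "filesystem": { "path": "/", "type": "ext4" } }
      ]
    }
  ]
}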
Finally, we decided to go with approach 2 plus functions as in approach 3, giving us the potential of using variables and conditionals to select each key-value pair, together with the ease of use of functions, which can avoid variable usage in the default cases.
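A rough sketch of what a profile could look like with that combination, assuming the helper functions of A3 are moved into the library (all names below are illustrative):
local lib = import 'lib/base.libsonnet';

// Hybrid sketch: a top-level function as in A2, but parameters have defaults
// and the individual sections are built by functions as in A3, so no external
// variables are needed in the default case.
function(storage='ext4', scripts='default', product='SLES') {
  product: lib.getProduct(product),
  user: lib['user'],
  root: lib['root'],
  scripts: if scripts == 'multipath' then lib.getMultipathScript() else lib.getDefaultScript(),
  [if storage == 'lvm' then 'storage']: lib.storage['lvm'],
  [if storage != 'lvm' then 'storage']: lib.getFSPartition(storage),
}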
Updated by JERiveraMoya about 2 months ago
- Related to action #175111: Add CI check for agama profiles which are at json/jsonnet format added
Updated by jfernandez about 2 months ago
Updated by JERiveraMoya about 2 months ago
This could be the plan:
- Do not merge the PR yet, because we are doing a nasty hack polluting the worker by adding repos/installing packages without salting it.
- Wait for SR https://build.opensuse.org/request/show/1245614 to be merged.
- Send PR to add the package as described here: https://suse.slack.com/archives/C02CANHLANP/p1739421639505079?thread_ts=1733729839.444069&cid=C02CANHLANP
- Once deployed, merge the PR and verify correct use of the package.
- File new ticket for covering other architectures.
Updated by jfernandez about 1 month ago
Added Confluence information: https://confluence.suse.com/display/qasle/How-To%3A+Generate+JSON+profiles+using+JSONNET+tool
Updated by jfernandez about 1 month ago
PR with new jsonnet package: https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/21234
Updated by JERiveraMoya 30 days ago
- Tags changed from qe-yam-feb-sprint-fy25 to qe-yam-mar-sprint-fy25
Updated by JERiveraMoya 12 days ago
- Subject changed from Generate all Agama json profiles using jsonnet profiles to Generate all Agama json profiles using jsonnet profiles for qemu
- Priority changed from High to Normal