# Introduction

This is the organisation wiki for the **openQA Project**.
The source code is hosted in the [os-autoinst github project](http://github.com/os-autoinst/), especially [openQA itself](http://github.com/os-autoinst/openQA) and the main backend [os-autoinst](http://github.com/os-autoinst/os-autoinst).

If you are interested in the tests for SUSE/openSUSE products, take a look into the [openqatests](https://progress.opensuse.org/projects/openqatests) project.

If you are looking for entry level issues to contribute to, please look into the section [[Wiki#Where-to-contribute|Where to contribute]].

{{toc}}

# Organisational

## ticket workflow

The following ticket statuses are used; their meaning is explained below:

* *New*: No one has worked on the ticket (e.g. the ticket has not been properly refined) or no one is feeling responsible for the work on this ticket.
* *Workable*: The ticket has been refined and is ready to be picked.
* *In Progress*: The assignee is actively working on the ticket.
* *Resolved*: The complete work on this issue is done and the corresponding issue is supposed to be fixed as observed (should be updated together with a link to a merged pull request or a link to a production openQA instance showing the effect).
* *Feedback*: Further work on the ticket needs clarification of open points within the ticket or is awaiting feedback from others or other systems (e.g. automated tests) to proceed. Sometimes also used to ask the assignee about progress in case of inactivity.
* *Blocked*: Further work on the ticket is blocked by some external dependency (e.g. bugs, not implemented features). There should be a link to another ticket, bug, trello card, etc. where it can be seen what the ticket is blocked by.
* *Rejected*: The issue is considered invalid, should not be done, or is considered out of scope.
* *Closed*: As this can be set only by administrators it is suggested to not use this status.

It is good practice to update the status together with a comment about it, e.g. a link to a pull request or a reason for rejection.
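
A minimal example of such a status-update comment (the pull request reference is hypothetical):

```
Fixed with https://github.com/os-autoinst/openQA/pull/1234 -> setting to Resolved
```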

## ticket categories

* *Regressions/Crashes*: Regressions, crashes, error messages
* *Feature requests*: Ideas or wishes for extension, enhancement, improvement
* *Organisational*: Organisational tasks within the project(s), not directly code related
* *Support*: Support of users, usage problems, questions

Please avoid the use of other, deprecated categories.

Suggestion by *okurz*: I recommend avoiding the word "bug" in our categories because of the usual "is it a bug or a feature" struggle. Instead I suggest to strictly define "Regressions & Crashes" to clearly separate "it used to work before" from "this was never part of the requirements" for features. Any ticket of this category also means that our project processes missed something, so we have points for improvement, e.g. extend the things to look out for in code review.

## Epics and Sagas

[epic]s and [saga]s belong to the "coordination" tracker; project contributors are not required to follow this convention but the tracker may be changed automagically in the future: http://mailman.suse.de/mailman/private/qa-sle/2020-October/002722.html
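
As an illustration, coordination ticket subjects are commonly prefixed accordingly (these example titles are made up):

```
[epic] Better multi-machine test stability
[saga] Scalable and reliable test infrastructure
```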

## ticket templates
You can use these templates to fill in tickets and further improve them with more detail over time. Copy the code block, paste it into a new issue, replace every block marked with "<…>" with your content or delete it if not appropriate.

### Defects

Subject: `<Short description, example: "openQA dies when triggering any Windows ME tests">`

```
## Observation
<description of what can be observed and what the symptoms are, provide links to failing test results and/or put short blocks from the log output here to visualize what is happening>

## Steps to reproduce
* <do this>
* <do that>
* <observe result>

## Impact
<clearly state the impact of the issue to make sure appropriate prioritization is applied and rollbacks/downgrades can be applied>

## Problem
<problem investigation, can also include different hypotheses, should be labeled as "H1" for first hypothesis, etc.>

## Suggestion
* <what to do as a first step>
* <Fix the actual problem>
* <Consider fixing the design>
* <Consider fixing the team's process>
* <Consider exploring further>

## Workaround
<example: retrigger job>
```

example ticket: #10526

For tickets referencing "auto_review" see
https://github.com/os-autoinst/scripts/blob/master/README.md#auto-review---automatically-detect-known-issues-in-openqa-jobs-label-openqa-jobs-with-ticket-references-and-optionally-retrigger
for a suggested template snippet.
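
As a rough illustration of the idea (see the README above for the authoritative syntax; the test name and regex here are made up), the ticket subject then carries a marker like:

```
test fails in foo auto_review:"backend died: .*some known error pattern.*":retry
```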

### Feature requests

Subject: `<Short description, example: "grub3 btrfs support" (feature)>`

```
## User story
<As a <role>, I want to <do an action>, to <achieve which goal> >

## Acceptance criteria
* <**AC1:** the first acceptance criterion that needs to be fulfilled to do this, example: Clicking "restart button" causes restart of the job>
* <**AC2:** also think about the "not-actions", example: other jobs are not affected>

## Tasks
* <first task to do as an easy starting point>
* <what to do next, all tasks optionally with an effort estimation in hours, e.g. "(0.5-2h)">
* <optional: mark "optional" tasks>

## Further details
<everything that does not fit into the above sections>
```

example ticket: #10212

## Further decision steps working on test issues

Test issues can come from one of the following sources. Feel free to use the following template in tickets as well.

```
## Problem
* **H1** The product has changed
 * **H1.1** product changed slightly but in an acceptable way without the need for communication with DEV+RM --> adapt test
 * **H1.2** product changed slightly but in an acceptable way found after feedback from RM --> adapt test
 * **H1.3** product changed significantly --> after approval by RM adapt test

* **H2** Fails because of changes in test setup
 * **H2.1** Our test hardware equipment behaves differently
 * **H2.2** The network behaves differently

* **H3** Fails because of changes in test infrastructure software, e.g. os-autoinst, openQA
* **H4** Fails because of changes in test management configuration, e.g. openQA database settings
* **H5** Fails because of changes in the test software itself (the test plan in source code as well as needles)
* **H6** Sporadic issue, i.e. the root problem is already hidden in the system for a long time but does not show symptoms every time
```

This follows the [scientific method](https://en.wikipedia.org/wiki/Scientific_method). It is suggested to use the characters *H* (hypothesis), *E* (experiment), *O* (observation), e.g. like this:

```
* **H3** Fails because of changes in test infrastructure software, e.g. os-autoinst, openQA
  * **H3.1** **REJECTED** Fails because of changes in openQA itself
    * **E3.1-1** (First experiment for hypothesis 3.1) test on an openQA server with the openQA version of "last good"
      * **O3.1-1-1** (First observation for first experiment for hypothesis 3.1) the test failed in the same way, reject *H3.1*
```

## Additional details needed for non-qemu issues

As the automatic integration tests of os-autoinst and openQA are based on qemu virtualization, for any non-qemu related requests please provide detailed manual reproduction steps; otherwise it is unlikely that any issue or feature request can be implemented.

## pull request handling on github

As a reviewer of pull requests on github for all related repositories, e.g. https://github.com/os-autoinst/os-autoinst-distri-opensuse/pulls, apply labels in case PRs are open for a longer time and cannot be merged so that we keep our backlog clean and know why PRs are blocked.

* **notready**: Triaged as not ready yet for merging, no (immediate) reaction by the reviewee, e.g. when tests are missing, other scenarios break, only tested for one of SLE/TW
* **wip**: Marked by the reviewee themselves as "[WIP]" or "[DO-NOT-MERGE]" or similar
* **question**: Questions to the reviewee, not answered yet

## Where to contribute?

If you want to help openQA development you can take a look into the existing [issues](https://progress.opensuse.org/projects/openqav3/issues).
You can start with
* [entrance level issues](https://progress.opensuse.org/projects/openqav3/search?q=entrance+level+issue&open_issues=1)
* issues tagged as [easy](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=issue_tags&op%5Bissue_tags%5D=%3D&v%5Bissue_tags%5D%5B%5D=easy&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=is_private&c%5B%5D=due_date&c%5B%5D=relations&group_by=&t%5B%5D=)
* issues tagged as [beginner](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=issue_tags&op%5Bissue_tags%5D=%3D&v%5Bissue_tags%5D%5B%5D=beginner&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=is_private&c%5B%5D=due_date&c%5B%5D=relations&group_by=&t%5B%5D=) - not necessarily "easy" but more suitable for someone coming to a project with little or no domain specific knowledge
* ideas from #65271

There are also some "always valid" tasks to work on:

* *improve test coverage*:
 * *user story*: As an openQA backend as well as test developer, I want better test coverage of our projects to reduce technical debt
 * *acceptance criteria*: test coverage is significantly higher than before
 * *suggestions*: check current coverage in each individual project (os-autoinst/openQA/os-autoinst-distri-opensuse) and add tests as necessary

# Use cases

The following use cases 1-6 have been defined within a SUSE workshop (others have been defined later) to clarify how different actors work with openQA. Some of them are already covered quite well within openQA, others are stated as motivation for further feature development.

## Use case 1
**User:** QA-Project Management
**primary actor:** QA Project Manager, QA Team Leads
**stakeholder:** Directors, VP
**trigger:** product milestones, providing a daily status
**user story:** "As a QA project manager I want to check the 'openQA dashboard' on a daily basis to get a summary/an overall status of the 'reviewers results' in order to take the right actions and prioritize tasks in QA accordingly."

## Use case 2
**User:** openQA-Admin
**primary actor:** Backend-Team
**stakeholder:** QA-Prjmgr, QA-TL, openQA Tech-Lead
**trigger:** bugs, features, new testcases
**user story:** "As an openQA admin I constantly check the system health in the web UI and I manage its configuration to ensure smooth operation of the tool."

## Use case 3
**User:** QA-Reviewer
**primary actor:** QA-Team
**stakeholder:** QA-Prjmgr, Release-Mgmt, openQA-Admin
**trigger:** every new build
**user story:** "As a QA-Reviewer at any point in time I review on the webpage of openQA the overall status of a build in order to track and find bugs, because I want to find bugs as early as possible and report them."

## Use case 4
**User:** Testcase-Contributor
**primary actor:** All development teams, Maintenance QA
**stakeholder:** QA-Reviewer, openQA-Admin, openQA Tech-Lead
**trigger:** features, new functionality, bugs, new product/package
**user story:** "As a developer, when there are new features, new functionality, bugs, or a new product/package in git, I contribute my testcases because I want to ensure good quality submissions and smooth product integration."

## Use case 5
**User:** Release-Mgmt
**primary actor:** Release Manager
**stakeholder:** Directors, VP, PM, TAMs, Partners
**trigger:** milestones
**user story:** "As a Release-Manager, on a daily basis I check on a dashboard for the product health/build status in order to act early in case of failures and have concrete and current reports."

## Use case 6
**User:** Staging-Admin
**primary actor:** Staging-Manager for the products
**stakeholder:** Release-Mgmt, Build-Team
**trigger:** every single submission to projects
**user story:** "As a Staging-Manager I review the build status of packages with every staged submission to the 'staging projects' in the 'staging dashboard' and the test status of the pre-integrated fixes, because I want to identify major breakage before integration into the products and provide fast feedback back to development."

## Use case 7
**User:** Bug investigator
**primary actor:** Any bug assignee for openQA observed bugs
**stakeholder:** Developer
**trigger:** bugs
**user story:** "As a developer who has been assigned a bug observed in openQA, I can review the referenced tests, find newer and the most recent jobs in the same scenario, understand what changed since the last successful job, and see which other jobs show the same symptoms, in order to investigate the root cause fast and use openQA for verification of a bug fix."

# Thoughts about categorizing test results, issues, states within openQA
by okurz

When reviewing test results it is important to distinguish between different causes of "failed tests".

## Nomenclature

### Test status categories
A common definition about the status of a test regarding the product it tests is "false|true positive|negative" as described on https://en.wikipedia.org/wiki/False_positives_and_false_negatives. "positive|negative" describes the outcome of a test ("positive": the test signals the presence of an issue; "negative": no signal) whereas "false|true" describes the correctness of that conclusion regarding the presence of issues in the SUT or product ("true": correct reporting; "false": incorrect reporting). For example "true negative" means: the test is successful, no issues are detected and there are none, the product is working as expected by the customer. Another example: think of testing as a fire alarm. An alarm (event detector) should only go off (be "positive") *if* there is a fire (event to detect) --> "true positive", whereas *if* there is *no* fire there should be *no* alarm --> "true negative".
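
In table form (a direct restatement of the definitions above):

| | issue present in product | no issue present |
|---|---|---|
| test reports an issue ("positive") | true positive | false positive (false alarm) |
| test reports no issue ("negative") | false negative (missed issue) | true negative |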

Another common but potentially ambiguous categorization:

* *broken*: the test is not behaving as expected (ambiguity: "as expected" by whom?) --> commonly a "false positive", can also be a "false negative" but that is harder to detect
* *failing*: the test is behaving as expected, but the test output is a fail --> "true positive"
* *working*: the test is behaving as expected (with no comment regarding the result, though some might ambiguously imply 'result is negative')
* *passing*: the test is behaving as expected, but the result is a success --> "true negative"

If in doubt declare a test as "broken". We should review the test and examine if it is behaving as expected.

Be careful about "positive/negative" as some might also use "positive" to incorrectly denote a passing test (and "negative" for a failing test), i.e. as an indicator of a "working product", not an indicator of an "issue present". If you argue what is "used in common speech", think about how "false positive" is used as in "false alarm" --> "positive" == "alarm raised"; also see https://narainko.wordpress.com/2012/08/26/understanding-false-positive-and-false-negative/

### Prioritization of work regarding categories
In this sense development+QA want to accomplish a "true negative" state whenever possible (no issues present, therefore none detected). As QA and test developers we want to prevent "false positives" ("false alarms" declaring a product as broken when it is not but the test failed for other reasons), also known as "type I errors", and "false negatives" (a product issue is not caught by tests and might "slip through" QA and at worst is only found by an external customer), also known as "type II errors". Also see https://en.wikipedia.org/wiki/Type_I_and_type_II_errors. In the context of openQA and system testing paired with screen matching a "false positive" is much more likely as the tests are very susceptible to subtle variations and changes, even if they should be accepted. So it is better, when in doubt, to create an issue in progress, look at it again, and find that it was a false alarm, rather than wasting more people's time with INVALID bug reports by believing the product to be broken when it isn't. To quote Richard Brown: "I […] believe this is the route to ongoing improvement - if we have tests which produce such false alarms, then that is a clear indicator that the test needs to be reworked to be less ambiguous, and that IS our job as openQA developers to deal with".

## Further categorization of statuses, issues and such in testing, especially automatic tests
by okurz

This categorization scheme is meant to help in communication in written or spoken discussions: it is simple, concise and easy to remember while unambiguous in every case.
While used for naming it should also be used as a decision tree and can be followed from the top down along each branch.

### Categorization scheme

To keep it simple, every step decides between two (maybe three) categories for a potential issue and goes further down from there. The degree of further detailing is not limited, i.e. it can be further extended. The naming scheme follows arabic numbers (for two levels just 1 and 2), with one digit appended from the right for every additional level of decision step and detail, without any separation between the digits, e.g. "1111" for the first type on every level of detail up to level four. Also, each fully written form is given a phonetic alphabet name to unambiguously identify it on every level, as long as not more individual levels are necessary. The start of the alphabet is reserved for higher levels and higher priority types.
Every leaf of the tree must have an action assigned to it.

1 **failed** (ZULU)
11 new (passed->failed) (YANKEE)
111 product issue ("true positive") (WHISKEY)
1111 unfiled issue (SIERRA)
11111 hard issue (openqa *fail*) (KILO)
111111 critical / potential ship stopper (INDIA) --> immediately file bug report with "ship_stopper?" flag; opt. inform RM directly
111112 non-critical hard issue (HOTEL) --> file bug report
11112 soft issue (openqa *softfail* on job level, not on module level) (JULIETT) --> file bug report on failing test module
1112 bugzilla bug exists (ROMEO)
11121 bug was known to openqa / openqa developer --> cross-reference (bug->test, test->bug) AND raise review process issue, improve openqa process
11122 bug was filed by other sources (e.g. beta-tester) --> cross-reference (bug->test, test->bug)
112 test issue ("false positive") (VICTOR)
1121 progress issue exists (QUEBEC) --> cross-reference (issue->test, test->issue)
1122 unfiled test issue (PAPA)
11221 easy to do w/o progress issue
112211 needs needles update --> re-needle if sure, TODO how to notify?
112212 pot. flaky, timeout
1122121 retrigger yields PASS --> comment in progress about flaky issue fixed
1122122 reproducible on retrigger --> file progress issue
11222 needs progress issue filed --> file progress issue
12 existing / still failing (failed->failed) (XRAY)
121 product issue (UNIFORM)
1211 unfiled issue (OSCAR) --> file bug report AND raise review process issue (why has it not been found and filed?)
1212 bugzilla bug exists (NOVEMBER) --> ensure cross-reference, also see rules for 1112 ROMEO
122 test issue (TANGO)
1221 progress issue exists (MIKE) --> monitor, if persisting reprioritize test development work
1222 needs progress issue filed (LIMA) --> file progress issue AND raise review process issue, see 1211 OSCAR
2 **passed** (ALFA)
21 stable (passed->passed) (BRAVO)
211 existing "true negative" (DELTA) --> monitor, maybe can be made stricter
212 existing "false negative" (ECHO) --> needs test improvement
22 fixed (failed->passed) (CHARLIE)
221 fixed but "false negative" (GOLF) --> potentially revert test fix, also see 212 ECHO
222 fixed "true negative" (FOXTROTT) --> TODO split monitor, see 211 DELTA
2221 was test issue --> close progress issue
2222 was product issue
22221 no bug report exists --> raise review process issue (why was it not filed?)
22222 bug report exists
222221 was marked as RESOLVED FIXED

Priority from high to low: INDIA->OSCAR->HOTEL->JULIETT->…

# Important ticket queries

* All auto-review tickets: https://progress.opensuse.org/projects/openqav3/issues?query_id=697, see https://github.com/os-autoinst/scripts/blob/master/README.md#auto-review---automatically-detect-known-issues-in-openqa-jobs-label-openqa-jobs-with-ticket-references-and-optionally-retrigger for further information regarding auto-review
* All auto-review+force-result tickets: https://progress.opensuse.org/projects/openqav3/issues?query_id=700

# Proposals for uses of labels
With [Show bug or label icon on overview if labeled (gh#550)](https://github.com/os-autoinst/openQA/pull/550) it is possible to add custom labels just by writing them. Nevertheless, a convention should be found for a common benefit. <del>Beware that labels are also automatically carried over with [Carry over labels from previous jobs in same scenario if still failing (gh#564)](https://github.com/os-autoinst/openQA/pull/564) which might make consistent test failures less visible when reviewers only look for test results without labels or bugrefs.</del> Labels are not automatically carried over anymore ([gh#1071](https://github.com/os-autoinst/openQA/pull/1071)).

List of proposed labels with their meaning and where they could be applied.

* ***`fixed_<build_ref>`***: If a test failure is already fixed in a more recent build and no bug reference is known, use this label together with a reference to a more recent passed test run in the same scenario. Useful for reviewing older builds. Example (https://openqa.suse.de/tests/382518#comments):

```
label:fixed_Build1501

t#382919
```

* ***`needles_added`***: In case needles were missing for test changes or expected product changes caused needle matching to fail, use this label with a reference to the test PR or a proper reasoning why the needles were missing and how you added them. Example (https://openqa.suse.de/tests/388521#comments):

```
label:needles_added

needles for https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/1353 were missing, added by jpupava in the meantime.
```

# s390x Test Organisation

See the following picture for a graphical overview of the current s390x test infrastructure at SUSE:

![SUSE s390x test infrastructure](qa_sle_openqa_s390x_test_infrastructure.jpg)

## Upgrades

### on z/VM
#### Special requirements

Due to the lack of proper use of hdd-images on z/VM, we need to work around this by having a dedicated worker class, i.e. a dedicated host, where we run two jobs chained with START_AFTER_TEST:
the first one installs the base system we want to have upgraded and the second one does the actual upgrade (e.g. migration_offline_sle12sp2_zVM_preparation and migration_offline_sle12sp2_zVM).

Since we encountered issues with other preparation jobs randomly being started in between, we need to ensure that we have one complete chain for all migration jobs running on one worker, for example:

1. migration_offline_sle12sp2_zVM_preparation
1. migration_offline_sle12sp2_zVM (START_AFTER_TEST=#1)
1. migration_offline_sle12sp2_allpatterns_zVM_preparation (START_AFTER_TEST=#2)
1. migration_offline_sle12sp2_allpatterns_zVM
1. ...

This scheme ensures that all actual upgrade jobs find the prepared system and are able to upgrade it.
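
A minimal sketch of how such chaining can be expressed in test suite settings (the worker class value is a made-up placeholder; the test suite names are the ones from the list above):

```
# settings of test suite "migration_offline_sle12sp2_zVM"
START_AFTER_TEST=migration_offline_sle12sp2_zVM_preparation
WORKER_CLASS=zvm_migration_host
```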

### on z/KVM

No special requirements anymore, see details in #18016.

## Automated z/VM LPAR installation with openQA using qnipl

There is an ongoing effort to automate the LPAR creation and installation on z/VM. A first idea resulted in the creation of [qnipl](https://github.com/openSUSE/dracut-qnipl). `qnipl` enables one to boot a very slim initramfs from a shared medium (e.g. shared SCSI disks) and supply it with the needed parameters to chainload a "normal SLES installation" using kexec.
This method is required for z/VM because snipl (simple network initial program loader) can only load/boot LPARs from specific disks, not network resources.

### Setup

1. Get a shared disk for all your LPARs
  * Normally this can easily be done by infra/gschlotter
  * The disk needs to be connected to all guests which should be able to network-boot
1. Boot a fully installed SLES on one of the LPARs to start preparing the shared disk
1. Put a DOS partition table on the disk and create one single, large partition on it
1. Put a filesystem on it. Our first test was on ext2 and it worked flawlessly in our attempts
1. Install `zipl` (the s390x bootloader from IBM) on this partition
  * A simple and sufficient config can be found in [poo#33682](https://progress.opensuse.org/issues/33682)
1. Clone [`qnipl`](https://github.com/nicksinger/dracut-qnipl) to your dracut modules (e.g. /usr/lib/dracut/modules.d/95qnipl)
1. Include the module named `qnipl` in your dracut modules for initramfs generation
  * e.g. in /etc/dracut.conf.d/99-qnipl.conf add: `add_dracutmodules+=qnipl`
1. Generate your initramfs (e.g. `dracut -f -a "url-lib qnipl" --no-hostonly-cmdline /tmp/custom_initramfs`)
  * Put the initramfs next to your kernel binary on the partition you want to prepare
1. From now on you can use `snipl` to boot any LPAR connected with this shared disk from network
  * example: `snipl -f ./snipl.conf -s P0069A27-LP3 -A fa00 --wwpn_scsiload 500507630713d3b3 --lun_scsiload 4001401100000000 --ossparms_scsiload "install=http://openqa.suse.de/assets/repo/SLE-15-Installer-DVD-s390x-Build533.2-Media1 hostip=10.0.0.1/20 gateway=10.0.0.254 Nameserver=10.0.0.1 Domain=suse.de ssh=1"`
  * `--ossparms_scsiload` is then evaluated and used by `qnipl` to kexec into the installer with the parameters needed by the installer

### Further details

Further details can also be found in the [github repo](https://github.com/openSUSE/dracut-qnipl/blob/master/README.md). Pull requests, questions and ideas are always welcome!

# Infrastructure setup for o3 (openqa.opensuse.org) and osd (openqa.suse.de)

Both o3 and osd are hosted in SUSE data centers, mostly Nuremberg, Germany, and Prague, Czech Republic.

## o3 (openqa.opensuse.org)

o3 consists of a VM running the web UI and physical worker machines. The VM for o3 has netapp-backed storage on rotating disks, so it is less performant than SSD but cheaper. Eventually we might have the possibility to use SSD-based storage. Currently there are four virtual storage devices provided to o3, totalling more than 10 TB.

The o3 infrastructure is described in detail on https://github.com/os-autoinst/sync-and-trigger/blob/main/openqa-opensuse.md

### Temporary things regarding the move to PRG2

On new-ariel there is the service `autossh-old-ariel.service`. If we get an email `Problem: Interface tun5: Link down` from Zabbix, this is the service we need to check.

### Accessing the o3 infrastructure

The o3 webUI host as well as the workers within the o3 infrastructure can be accessed over ssh by using `ssh -p 2214 gate.opensuse.org` (and `ssh -p 2213 gate.opensuse.org` for old-ariel). Ask one of the existing admins within https://app.element.io/#/room/#openqa:opensuse.org or irc://irc.libera.chat/opensuse-factory (so that we know you can be reached over those channels when people have questions about what you did with the ssh access) to put your ssh key on the o3 webUI host to be able to log in.

To give access to a new user an existing admin can do the following:

```
sudo useradd -G users,trusted --create-home $user
echo "$ssh_key_from_user" | sudo tee -a /home/$user/.ssh/authorized_keys
```

#### SSH configuration

To easily access all hosts behind the jump host you can use the following config for your ssh client (`~/.ssh/config`):

```
Host ariel
  HostName gate.opensuse.org
  Port 2214

# Note that %h as understood by -W needs the real host, aliases won't work:
# kex_exchange_identification: Connection closed by remote host
# Connection closed by UNKNOWN port 65535
Host *.opensuse.org
  ProxyCommand ssh -q -A -x ariel -W %h:%p
```

**A word of warning:** be aware that this enables agent-forwarding to at least the jump host. Please read up for yourself on how bad you consider the security implications of doing so.

The workers can only be accessed from "ariel", not directly. One can use password authentication on the workers using the root account. Ask existing admins for the root password. It is suggested that you use key-based authentication. For this put your ssh keys on all the workers, e.g. using the above configuration and `ssh-copy-id`.
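
To distribute your key, a simple loop over the workers can be used, assuming `$hosts` is set as in the examples below:

```
for i in $hosts; do ssh-copy-id root@$i; done
```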

**Notice:** Some machines are connected to the o3 openQA host from other networks and might need different ways of access, at the time of writing:

* Remote (owner: @ggardet_arm):
 * ip-10-0-0-58
 * oss-cobbler-03
 * siodtw01 (for tests on Raspberry Pi 2,3,4)

### Manual command execution on o3 workers

To execute commands manually on all workers within the o3 infrastructure one can do for example the following:

```
hosts="aarch64 openqaworker4 openqaworker6 openqaworker7 openqaworker19 openqaworker20 openqaworker21 openqaworker22 openqaworker23 openqaworker24 openqaworker25 openqaworker26 openqaworker27 openqaworker28 openqaworker-arm21 openqaworker-arm22 qa-power8-3 rebel"
for i in $hosts; do echo $i && ssh root@$i "zypper -n dup && reboot" ; done
```

```
for i in $hosts; do echo $i && ssh root@$i " echo 'ssh-rsa … …' >> /root/.ssh/authorized_keys " ; done
```

Mind the correct list of machines.

Formerly, for true transactional servers we used:

```
for i in $hosts; do echo $i && ssh root@$i "(transactional-update -n dup || zypper -n dup) && reboot" ; done
```

### Automatic update of o3

o3 is continuously deployed; this includes both the webUI host and the workers.

#### Automatic update of o3 webUI host

openqa.opensuse.org applies continuous updates of openQA related packages, conducts nightly updates of system packages and reboots automatically as required, see
http://open.qa/docs/#_automatic_system_upgrades_and_reboots_of_openqa_hosts
for details.

#### Recurring automatic update of openQA workers

Same as the o3 webUI host, all o3 workers apply continuous updates of openQA related packages. Additionally most apply a daily automatic system update and are "Transactional Servers" running openSUSE Leap. power8 is non-transactional with a weekly system update every Sunday.

This was done for a number of reasons including:

* Getting all the machines consistent after a few years of drift
* Making it easier to keep them consistent by leveraging a read-only root filesystem
* Guaranteeing rollbackability by using transactional updates

This was done by rbrown also to fulfill the prerequisite of getting them viable for multi-machine testing.

These systems currently patch themselves and reboot automatically in the default maintenance window of 0330-0500 CET/CEST.

In case of problems this can be changed in the following ways:

* Edit the maintenance window in /etc/rebootmgr.conf
* Disable the automatic reboot with `systemctl disable rebootmgr.service`
* Disable the automatic patching with `systemctl disable transactional-update.timer`

EDIT: 2022-07-11: All o3 machines are effectively not "transactional workers" anymore as openqa-continuous-update.service is doing a complete `zypper dup` every couple of minutes. With `rebootmgr` triggered for reboot, automatic nightly reboots still happen as necessary. See #111989 for details.

SUSE employees have access to the boot menu of the openQA worker machines, e.g. openqaworker1 and openqaworker4, via openqaworker1-ipmi.suse.de and openqaworker4-ipmi.suse.de which are both connected to the R&D network. For imagetester one would need to go through SUSE-IT in the unlikely event of a boot-preventing update. `snapper rollback` can be executed from a booted, functionally operative machine which one can ssh into.

For manual investigation https://github.com/kubic-project/microos-toolbox can be helpful.

#### Rollback of updates

Updates on workers can be rolled back using `transactional-update` on the transactional workers (others are likely not updated that often):

```
for i in $hosts; do echo $i && ssh root@$i "transactional-update rollback last && reboot"; done
```

Updates on the central webUI host openqa.opensuse.org can be rolled back by using either older variants of packages that receive maintenance updates or the locally cached packages in e.g. /var/cache/zypp/packages/devel_openQA/noarch using `zypper in --oldpackage`, similar to https://github.com/os-autoinst/openQA/blob/master/script/openqa-rollback#L39
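
A hedged example of such a downgrade with a locally cached package (the exact file name is hypothetical, pick it from the cache directory):

```
zypper in --oldpackage /var/cache/zypp/packages/devel_openQA/noarch/openQA-4.6.<old_version>.noarch.rpm
```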

#### Debugging qemu SUTs in openqa.opensuse.org

SUT: System Under Test

os-autoinst starts qemu with a network type that doesn't allow access from the outside, so ssh is not possible. But qemu is started with a VNC channel available from the host (the openQA worker).
Running vncviewer inside a headless server is useless, but it is possible to use gate.opensuse.org as a jump host with SSH port forwarding to start the vncviewer client from your desktop environment and connect to the VNC channel of the qemu SUT.

```
ssh -p 2214 -L LOCAL_PORT:WORKER_HOSTNAME:QEMU_VNC_PORT USERNAME@gate.opensuse.org
```

For example, assume user **bernhard** wants to connect to openqaworker7:11 and use local port **43043**,
the IP of openqaworker7 being **192.168.112.12**
and the VNC channel port of openqa-worker@11 being **6001** (5990 + 11).

##### 1. Create SSH tunnel with port forwarding
* on laptop shell 1: ssh -p 2214 -L 43043:192.168.112.12:6001 bernhard@gate.opensuse.org
* Keep the shell open to keep the tunnel and the port forwarding open

##### 2. Open vncviewer
* on laptop shell 2: vncviewer -Shared localhost:43043
* `-Shared` is needed to not kick the VNC connection of os-autoinst. If it is kicked, the job will terminate and the qemu process will be killed.

### AArch64 specific configurations on o3

On o3, the aarch64 workers need additional configuration.

#### Setup HugePages

You need to set up HugePages support to improve performance with qemu VMs and to match the current aarch64 `MACHINE` configuration.
For the D05 machine, the configuration is: `40` pages with a size of `1G`.
If there are some permission issues on `/dev/hugepages/`, check https://progress.opensuse.org/issues/53234
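
One common way to configure this, as a sketch (these are the standard Linux kernel parameters for 1G pages; adapt the numbers to the machine):

```
# e.g. append to the kernel command line in the bootloader configuration
default_hugepagesz=1G hugepagesz=1G hugepages=40
```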

### o3 s390 workers

`workers.ini`
```
[global]
HOST=http://openqa1-opensuse
WORKER_HOSTNAME = 192.168.112.6
CACHEDIRECTORY=/var/lib/openqa/cache
CACHESERVICEURL=http://10.88.0.1:9530/
[101]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-1-linux144
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux144
ZVM_PASSWORD=lin390
S390_HOST=144
[102]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-2-linux145
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux145
ZVM_PASSWORD=lin390
S390_HOST=145
[103]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-3-linux146
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux146
ZVM_PASSWORD=lin390
S390_HOST=146
[104]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-4-linux147
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux147
ZVM_PASSWORD=lin390
S390_HOST=147
[105]
WORKER_CLASS=64bit-ipmi,64bit-ipmi-large-mem,64bit-ipmi-amd,blackbauhinia
IPMI_HOSTNAME=blackbauhinia-ipmi.openqanet.opensuse.org
IPMI_USER=ADMIN
IPMI_PASSWORD=ADMIN
SUT_IP=blackbauhinia.openqanet.opensuse.org
SUT_NETDEVICE=em1
IPMI_SOL_PERSISTENT_CONSOLE=1
IPMI_BACKEND_MC_RESET=1
[http://openqa1-opensuse]
TESTPOOLSERVER=rsync://openqa1-opensuse/tests
```

Allow containers to access the cache service (`systemctl edit openqa-worker-cacheservice.service`):
```
# /etc/systemd/system/openqa-worker-cacheservice.service.d/override.conf
[Service]
Environment="MOJO_LISTEN=http://0.0.0.0:9530"
```

The s390 workers for openQA are running within podman containers on openqaworker1.
The containers are started using systemd but the unit files are specific to the containers and will end up in a restart loop if this fact is ignored. Whenever the containers are recreated, the systemd files have to be recreated.

The containers are started like this (for i=101…104):

```
i=101
podman run -d -h openqaworker1_container --name openqaworker1_container_$i -p $(python3 -c"p=${i}*10+20003;print(f'{p}:{p}')") -e OPENQA_WORKER_INSTANCE=$i -v /opt/s390x_rebel_replacement:/etc/openqa -v /var/lib/openqa/share:/var/lib/openqa/share -v /var/lib/openqa/cache:/var/lib/openqa/cache registry.opensuse.org/devel/openqa/containers15.4/openqa_worker_os_autoinst_distri_opensuse:latest
(cd /etc/systemd/system/; podman generate systemd -f -n --new openqaworker1_container_$i --restart-policy always)
systemctl daemon-reload
systemctl enable container-openqaworker1_container_$i
```

To restart and permanently enable all workers at once:
```
for i in {101..104} ; do systemctl stop container-openqaworker1_container_$i ; done
podman rm -f openqaworker1_container_{101..104}
for i in {101..104} ; do podman run -d -h openqaworker1_container --name openqaworker1_container_$i -p $(python3 -c"p=${i}*10+20003;print(f'{p}:{p}')") -e OPENQA_WORKER_INSTANCE=$i -v /opt/s390x_rebel_replacement:/etc/openqa -v /var/lib/openqa/share:/var/lib/openqa/share -v /var/lib/openqa/cache:/var/lib/openqa/cache registry.opensuse.org/devel/openqa/containers15.4/openqa_worker_os_autoinst_distri_opensuse:latest ; done
for i in {101..104} ; do (cd /etc/systemd/system/; podman generate systemd -f -n --new openqaworker1_container_$i --restart-policy always) ; done
systemctl daemon-reload
podman rm -f openqaworker1_container_{101..104}
for i in {101..104} ; do systemctl reenable container-openqaworker1_container_$i && systemctl restart container-openqaworker1_container_$i ; done
```

Initial ticket when the setup was created: https://progress.opensuse.org/issues/97751

As an alternative, s390x workers can run on the host "rebel" as well. Be aware that openQA workers accessing the same s390x instances must not run in parallel, so only enable one worker instance per s390x instance at a time (see https://progress.opensuse.org/issues/97658 for details).
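
For example, assuming the worker instance numbers from the `workers.ini` above, enabling exactly one worker instance for one s390x instance could look like:

```
systemctl enable --now openqa-worker@101
```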

### Monitoring

openqa.opensuse.org is monitored by SUSE over https://zabbix.suse.de/. There is a user group "Owners/O3" to which SUSE employees can be added. Alert notification is configured via a trigger action in a special Infra-owned RO bot account. E-mail notification is in place for average problems and higher.

There is also an internal munin instance on o3. Anyone wanting to look at the HTML pages can do this:
```
rsync -a o3:/srv/www/htdocs/munin ~/o3-munin/
```
(where "o3" is configured in your ssh config of course)

It's also possible to view the munin page via an ssh tunnel:
```
ssh -L 8080:127.0.0.1:80 o3
```
and then go to http://127.0.0.1:8080/munin/

Configuration of alerts is done in `/etc/munin/munin.conf`.
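
A rough sketch of what threshold-based alert settings in `munin.conf` can look like (the host section and field names are hypothetical, see the munin documentation for the authoritative syntax):

```
[openqa.opensuse.org]
    address 127.0.0.1
    df._dev_vda1.warning 85
    df._dev_vda1.critical 95
```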

## Hotfixing

Hotfixes, e.g. patches from an os-autoinst pull request, can be applied to o3 workers like this for a pull request <pr_id>:

```
for i in $hosts; do echo $i && ssh root@$i "(transactional-update run /bin/sh -c \"curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst\" && reboot) || curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst" ; done
```

Hotpatching all OSD workers with the same <pr_id> as above works with something like

```
sudo salt --no-color --state-output=changes -C 'G@roles:worker' cmd.run "curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst"
```

## Mitigation of boot failure or disk issues

### Worker stuck in recovery

Check disk health and consider manual fixup of mount points, e.g.:

```
test -e /dev/md/openqa || lsblk -n | grep -v nvme | grep "/$" && mdadm --create /dev/md/openqa --level=0 --force --raid-devices=$(ls /dev/nvme?n1 | wc -l) --run /dev/nvme?n1 || mdadm --create /dev/md/openqa --level=0 --force --raid-devices=1 --run /dev/nvme0n1p3
```

## PPC specific configurations

In one case it was necessary to disable snapshots for petitboot with `nvram -p default --update-config "petitboot,snapshots?=false"` to prevent a race condition between dm_raid and btrfs trying to discover bootable devices (#68053#note-25). In another case https://bugzilla.opensuse.org/show_bug.cgi?id=1174166 caused the boot entries to not be properly discovered and it was necessary to prevent grub from trying to update the according sections (#68053#note-31).

## Moving worker from osd to o3

* Ensure system management, e.g. over IPMI, works. This is untouched by the following steps and can be used during the process for recovery and setup
* Ensure network is configured for DHCP
* Instruct SUSE-IT to change the VLAN for the machine from the oqa.suse.de VLAN to 662 (example: https://sd.suse.com/servicedesk/customer/portal/1/SD-124055, ~~https://infra.nue.suse.com/SelfService/Display.html?id=16458 (not available anymore)~~)
* Remove from osd:

```
salt-key -y -d worker7.oqa.suse.de
```

* On the worker:
 * Change the root password to the o3 one
 * Allow ssh password authentication: `sed -i 's/^PasswordAuthentication/#&/' /etc/ssh/sshd_config && systemctl restart sshd`
 * Ensure ssh based root login works with `zypper -n in openssh-server-config-rootlogin` or, if that is not available, change 'PermitRootLogin' to 'yes' in sshd_config
 * Add your personal ssh key to the machine, e.g. openqaworker7:/root/.ssh/authorized_keys

* Add an entry on o3 to `/etc/dnsmasq.d/openqa.conf` with the MAC address, e.g.

```
dhcp-host=54:ab:3a:24:34:b8,openqaworker7
```

* Add an entry to `/etc/hosts` which dnsmasq picks up to give out a DHCP lease, e.g.

```
192.168.112.12   openqaworker7.openqanet.opensuse.org openqaworker7
```

* Reload dnsmasq with `systemctl restart dnsmasq`

* Adapt the NFS mount point on the worker

```
sed -i '/openqa\.suse\.de/d' /etc/fstab && echo 'openqa1-opensuse:/ /var/lib/openqa/share nfs4 noauto,nofail,retry=30,ro,x-systemd.automount,x-systemd.device-timeout=10m,x-systemd.mount-timeout=30m 0 0' >> /etc/fstab
```

* Restart the network on the machine (over IPMI) using `systemctl restart network` and monitor on o3: `journalctl -f -u dnsmasq` until an address is assigned, e.g.:

```
Feb 29 10:48:30 ariel dnsmasq[28105]: read /etc/hosts - 30 addresses
Feb 29 10:48:54 ariel dnsmasq-dhcp[28105]: DHCPREQUEST(eth1) 10.160.1.101 54:ab:3a:24:34:b8
Feb 29 10:48:54 ariel dnsmasq-dhcp[28105]: DHCPNAK(eth1) 10.160.1.101 54:ab:3a:24:34:b8 wrong network
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPDISCOVER(eth1) 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPOFFER(eth1) 192.168.112.12 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPREQUEST(eth1) 192.168.112.12 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPACK(eth1) 192.168.112.12 54:ab:3a:24:34:b8 openqaworker7
```

* Ensure all mountpoints are up:

```
mount -a
```

* Update /etc/openqa/client.conf with the same key as used on other workers for "openqa1-opensuse"
* Update /etc/openqa/workers.ini with a similar config as used on other workers, e.g. based on openqaworker4, example:

```
# diff -Naur /etc/openqa/workers.ini{.osd,}
--- /etc/openqa/workers.ini.osd 2020-02-29 15:21:47.737998821 +0100
+++ /etc/openqa/workers.ini     2020-02-29 15:22:53.334464958 +0100
@@ -1,17 +1,10 @@
-# This file is generated by salt - don't touch
-# Hosted on https://gitlab.suse.de/openqa/salt-pillars-openqa
-# numofworkers: 10
-
 [global]
-HOST=openqa.suse.de
-CACHEDIRECTORY=/var/lib/openqa/cache
-LOG_LEVEL=debug
-WORKER_CLASS=qemu_x86_64,qemu_x86_64_staging,tap,openqaworker7
-WORKER_HOSTNAME=10.X.X.101
-
-[1]
-WORKER_CLASS=qemu_x86_64,qemu_x86_64_staging,tap,qemu_x86_64_ibft,openqaworker7
+HOST=http://openqa1-opensuse
+WORKER_HOSTNAME=192.168.112.12
+CACHEDIRECTORY = /var/lib/openqa/cache
+CACHELIMIT = 50
+WORKER_CLASS = openqaworker7,qemu_x86_64

-[openqa.suse.de]
-TESTPOOLSERVER = rsync://openqa.suse.de/tests
+[http://openqa1-opensuse]
+TESTPOOLSERVER = rsync://openqa1-opensuse/tests
```

* Remove OSD specifics:

```
systemctl disable --now auto-update.timer salt-minion telegraf
for i in NPI SUSE_CA telegraf-monitoring; do zypper rr $i; done
zypper -n dup --force-resolution --allow-vendor-change
```

759
* If the machine is not a transactional-server one has the following options: Keep as is and handle like power8 (also not transactional), enable transactional updates w/o root being r/o, change to root being r/o on-the-fly, reinstall as transactional. At least option 2 is suggested, enable transactional updates:
760
761
```
762
zypper -n in transactional-update
763
systemctl enable --now transactional-update.timer rebootmgr
764
```
765
766
* Enable apparmor
767
768
```
769
zypper -n in apparmor-utils
770
systemctl unmask apparmor
771
systemctl enable --now apparmor
772
```
773
774
* Switch firewall from SuSEfirewall2 to firewalld
775
776
```
777
zypper -n in firewalld && zypper -n rm SuSEfirewall2
778
systemctl enable --now firewalld
779
firewall-cmd --zone=trusted --add-interface=br1
780
firewall-cmd --set-default-zone trusted
781
firewall-cmd --zone=trusted --add-masquerade
782
```
783
784
* Copy over special openSUSE UEFI staging images, see #63382
785 248 okurz
* For multi-machine configured workers make sure to have updated IPv4 entries in /etc/wicked/scripts/gre_tunnel_preup.sh
786 84 okurz
* Check operation with a single openQA worker instance:
787
788
```
789
systemctl enable --now openqa-worker.target openqa-worker@1
790
```
791
792
* Test with an openQA job cloned from a production job, e.g. for openqaworker7
793
794
```
795
openqa-clone-job --within-instance https://openqa.opensuse.org/t${id} WORKER_CLASS=openqaworker7
796
```
797
798
* After the latest openQA job could successfully finish enable more worker instances
799
800
```
801
systemctl unmask openqa-worker@{2..14} && systemctl enable --now openqa-worker@{2..14}
802
```
803
804
* Monitor if nightly update works, e.g. look for journal entry:
805
806
```
807
Mar 01 00:08:26 openqaworker7 transactional-update[10933]: Calling zypper up
808
809
Mar 01 00:08:51 openqaworker7 transactional-update[10933]: transactional-update finished - informed rebootmgr
810
Mar 01 00:08:51 openqaworker7 systemd[1]: Started Update the system.
811
812
Mar 01 03:30:00 openqaworker7 rebootmgrd[40760]: rebootmgr: reboot triggered now!
813
814
Mar 01 03:36:32 openqaworker7 systemd[1]: Reached target openQA Worker.
815
```
816 93 okurz

## Distribution upgrades

**Note:** Performing the upgrade differs slightly depending on the host setup:
* On hosts with a writable `/` you need to enter a root shell, i.e. `sudo bash`
* Transactional hosts require that you use `transactional-update shell`, thereby creating a snapshot which is applied after a reboot, optionally using `--continue` if you want to make further changes to an existing snapshot
* Depending on available space it might be necessary to clean up space before conducting the upgrade, e.g. use `snapper rm <N..M>` to delete older root btrfs snapshots, and clean up unneeded packages, e.g. with https://github.com/okurz/scripts/blob/master/zypper-rm-orphaned and https://github.com/okurz/scripts/blob/master/zypper-rm-unneeded
* Upgrades might pull in too many new packages so better crosscheck with `zypper … dup … --no-recommends`
* Consider using https://github.com/okurz/auto-upgrade/blob/master/auto-upgrade or manual (**Tip**: Run this in `screen -d -r || screen` and use e.g. `sudo bash`):

```
new_version=15.5 # Specify the target release

# Change the release via the special $releasever
. /etc/os-release
sed -i -e "s/${VERSION_ID}/\$releasever/g" /etc/zypp/repos.d/*
zypper --releasever=$new_version ref
test -f /etc/openqa/openqa.ini && sudo -u geekotest /opt/openqa-scripts/dump-psql
systemctl stop openqa-continuous-update.timer  # it would interfere, e.g. revert the previous zypper ref call
zypper -n --releasever=$new_version dup --auto-agree-with-licenses --replacefiles --download-in-advance

# Check config files for relevant changes
rpmconfigcheck
for i in $(cat /var/adm/rpmconfigcheck) ; do vimdiff ${i%.rpm*} $i ; done
rm $(cat /var/adm/rpmconfigcheck)

reboot
systemctl --failed
```

847
* Crosscheck for any obvious alerts, pipelines failing, user reports, etc.
848
* On any severe problems consider a complete rollback of the upgrade or also partial downgrade of packages, e.g. force-install older version of packages and zypper locks until an issue is fixed
849
* Monitor for successful openQA jobs on the host
850
851 187 okurz
## openQA infrastructure needs (o3 + osd)

TL;DR: new OSD ARM workers are needed, redundancy for o3-ppc is missing, and the rest needs replacement as nearly all current hardware is out of vendor-provided maintenance (as of 2021-05); SSD storage for o3 would be good.

2020-03: SUSE IT (EngInfra) provided us more space for O3 but we have only slow rotating-disk storage. Performance could be improved by providing SSD storage.

Most of our time and effort currently goes into struggling with storage space for OSD (openqa.suse.de) ~~both OSD (openqa.suse.de) as well as O3 (openqa.opensuse.org) (2020-03: Situation on o3 resolved with more storage provided by SUSE IT)~~. Both instances (OSD + O3) are using precious netapp storage but there is currently no better approach to use different, external storage. An increase of the available space would be appreciated, ~~o3 being more important right now than osd,~~ see https://progress.opensuse.org/issues/57494 for details. Graphs like
https://stats.openqa-monitor.qa.suse.de/d/nRDab3Jiz/openqa-jobs-test?orgId=1&from=1578343509900&to=1578653794173&fullscreen&panelId=12 show how usual test backlogs are worked on within OSD by architecture. It can be seen that both the ppc64le and aarch64 backlogs are reduced fast, so we do not need more ppc64le or aarch64 machines. However, we have a stability problem with all three aarch64 workers. Potentially new machine(s) could help, see https://progress.opensuse.org/issues/41882 for details.

With the number of workers and tests processed in parallel as well as the increased number of products tested on OSD and users using the system, the workload on OSD constantly increases. CPU load alerts have been seen recently in #96713 and the higher load is visible in https://monitor.qa.suse.de/d/WebuiDb/webui-summary?viewPanel=25 . From time to time we should increase the number of CPU cores on the OSD VM due to the higher usage.

## Setup guide for new machines
* Ensure the host has a proper DNS entry
    * The MAC address of new o3 workers generally needs to be added to `/etc/dnsmasq.d/openqa.conf` and an IP address needs to be configured in `/etc/hosts` (both files are on ariel).
    * Hosts located at Frankencampus need a DNS entry via the OPS-Service repo, e.g. https://gitlab.suse.de/OPS-Service/salt/-/merge_requests/3687.
* Change IPMI/BMC passwords to use our common passwords instead of the IPMI defaults
* OSD: Add to salt using https://gitlab.suse.de/openqa/salt-states-openqa
    * Make sure to set /etc/salt/minion_id to the FQDN (see #90875#note-2 for reference and the sketch after this list)
    * Check out the next section for details
* o3: Set up the worker manually, see "Manual worker setup" section below
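
A minimal sketch of the minion_id step mentioned above (assuming the DNS entry is already in place so `hostname -f` returns the FQDN):

```
# on the new OSD worker: salt expects the FQDN as minion id
hostname -f > /etc/salt/minion_id
systemctl enable --now salt-minion
```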

### Network (legacy) boot via PXE and OS/worker setup
One can make use of our existing PXE infrastructure (which only supports legacy boot) following these steps:

1. Ensure the boot mode allows legacy boot, e.g. select it in the machine's setup menu manually.
2. Connect via IPMI and select "Leap -> HTTP -> Console" in our PXE menu, append ` console=ttyS0,115200 autoyast=http://s.qa.suse.de/oqa-ay-lp rootpassword=…` to the command line and wait until the installation has finished.
    * Use https://w3.nue.suse.com/~okurz/ay-openqa-worker-leap.xml if the URL shortener is not available.
    * Alternatively, there's also https://raw.githubusercontent.com/os-autoinst/openQA/master/contrib/ay-openqa-worker.xml.
    * If nothing shows up in the serial console, try a different console parameter, e.g. `console=ttyS1,115200`.
3. Configure repos, e.g. via the scriptlet line in http://s.qa.suse.de/oqa-ay-lp.
    * The scriptlet cannot be executed in the context of AutoYaST so this is a manual step at this point.
4. Enable SSH access via `systemctl enable --now sshd` and continue via SSH.
5. Install some basic software, e.g. `zypper in htop vim systemd-coredump`.
6. For OSD workers, set up `salt-minion` following the [documentation in our Salt states repository](https://github.com/os-autoinst/salt-states-openqa#setup-production-machine); otherwise set up the worker manually as explained in the next section.
7. Check whether the config looks good on the workers and whether jobs look good on the web UI host.
8. As long as [boo#1212816](https://bugzilla.opensuse.org/show_bug.cgi?id=1212816) is open apply a workaround for salt-minion:

```
arch=$(uname -m)
sudo zypper -n in --oldpackage --allow-downgrade http://download.opensuse.org/update/leap/15.4/sle/$arch/salt-3004-150400.8.25.1.$arch.rpm http://download.opensuse.org/update/leap/15.4/sle/$arch/salt-minion-3004-150400.8.25.1.$arch.rpm http://download.opensuse.org/update/leap/15.4/sle/$arch/python3-salt-3004-150400.8.25.1.$arch.rpm && sudo zypper al --comment "poo#131249 - potential salt regression, unresponsive salt-minion" salt salt-minion salt-bash-completion python3-salt
```

### Manual worker setup
You likely want to configure the [openQA development repository](https://open.qa/docs/#_development_version_repository).
Then set up the worker like this:

```
echo "requires:openQA-worker" > /etc/zypp/systemCheck.d/openqa.check
zypper -n in openQA-worker openQA-auto-update openQA-continuous-update os-autoinst-distri-opensuse-deps swtpm # openQA worker services plus dependencies for openSUSE distri or development repo if added previously
zypper -n in ffmpeg-4  # for using external video encoder as it is already configured on some machines like ow19, ow20 and power8
zypper -n in nfs-client  # For /var/lib/openqa/share
zypper -n in bash-completion vim htop strace systemd-coredump iputils tcpdump bind-utils  # for general tinkering

echo "openqa1-opensuse:/ /var/lib/openqa/share nfs4 noauto,nofail,retry=30,ro,x-systemd.automount,x-systemd.device-timeout=10m,x-systemd.mount-timeout=30m 0 0" >> /etc/fstab
sed -i 's/\(solver.dupAllowVendorChange = \)false/\1true/' /etc/zypp/zypp.conf

# configure /etc/openqa/client.conf and /etc/openqa/workers.ini, then enable the desired number of worker slots, e.g.:
systemctl enable --now openqa-worker-auto-restart@{1..30}.service openqa-reload-worker-auto-restart@{1..30}.path openqa-auto-update.timer openqa-continuous-update.timer openqa-worker-cacheservice.service openqa-worker-cacheservice-minion.service
```

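The comment in the block above mentions configuring `/etc/openqa/client.conf` and `/etc/openqa/workers.ini`; as a rough sketch, with all values being placeholders to adapt (the API key and secret are created on the web UI under "Manage API keys"):

```
# /etc/openqa/workers.ini
[global]
HOST = https://openqa.opensuse.org
WORKER_CLASS = qemu_x86_64

# /etc/openqa/client.conf
[openqa.opensuse.org]
key = 0123456789ABCDEF
secret = FEDCBA9876543210
```
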
Also copy the OVMF images for staging tests (`/usr/share/qemu/*staging*`) from other workers. Those files are from the `devel` flavor of the OVMF package built in stagings and rings, e.g. https://build.opensuse.org/package/show/openSUSE:Factory:Rings:1-MinimalX/ovmf, just renamed.

#### Optional: Transactional-server
You may choose the transactional server role but a normal server will do as well:

```
sed -i 's@/ btrfs ro@/ btrfs rw@' /etc/fstab
mount -o rw,remount /
btrfs property set -ts / ro false
```

### UEFI boot via iPXE
The following steps are for the o3 environment but can likely also be adapted for setting up OSD workers. This section skips the setup of the OS as it doesn't differ when using UEFI/iPXE. Check out the previous sections for the OS/worker setup.

---

There's a PXE setup as part of `dnsmasq.service` running on ariel. It is currently configured to serve a legacy-only boot menu utilized by some tests. After following these steps, please restore this setup so tests can continue to use it.

First, make a file that contains the iPXE commands to boot available via some HTTP server. Here's how the file could look for installing Leap 15.4 with AutoYaST:
```
#!ipxe
kernel http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/x86_64/loader/linux initrd=initrd console=tty0 console=ttyS1,115200 install=http://download.opensuse.org/distribution/leap/15.4/repo/oss/ autoyast=http://martchus.no-ip.biz/ipxe/ay-openqa-worker.xml rootpassword=…
initrd http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/x86_64/loader/initrd
boot
```

Then, set up the build of an iPXE UEFI image as explained on https://en.opensuse.org/SDB:IPXE_booting#Setup:
```
git clone https://github.com/ipxe/ipxe.git
cd ipxe
echo "#!ipxe
dhcp
chain http://martchus.no-ip.biz/ipxe/leap-15.4" > myscript.ipxe
```

As you can see, this script contains the URL of the previously created file. Of course the commands could be built directly into the image but then you'd need to rebuild/redeploy the image every time you want to make a change (instead of just editing a file on your HTTP server).

To build the image, run:
```
cd src
make EMBED=../myscript.ipxe NO_WERROR=1 bin/ipxe.lkrn bin/ipxe.pxe bin-i386-efi/ipxe.efi bin-x86_64-efi/ipxe.efi
```

Note that these build options are taken from https://github.com/archlinux/svntogit-community/blob/packages/ipxe/trunk/PKGBUILD#L58 because when attempting to build on Tumbleweed I otherwise ran into build errors.

Then you can copy the files to ariel and move them to a location somewhere under `/srv/tftpboot`:
```
# on build host
rsync bin-x86_64-efi/ipxe.efi openqa.opensuse.org:/home/martchus/ipxe.efi
# on ariel
sudo cp /home/martchus/ipxe.efi /srv/tftpboot/ipxe-own-build/ipxe.efi
```

Then configure the use of the image in `/etc/dnsmasq.d/pxeboot.conf` on ariel. Temporarily comment out possibly disturbing lines and make sure the following lines are present:
```
enable-tftp
tftp-root=/srv/tftpboot
pxe-prompt="Press F8 for menu. foobar", 10
dhcp-match=set:efi-x86_64,option:client-arch,7
dhcp-match=set:efi-x86_64,option:client-arch,9
dhcp-match=set:efi-x86,option:client-arch,6
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:efi-x86_64,ipxe-own-build/ipxe.efi
```

Then run `systemctl restart dnsmasq.service` to apply and `journalctl -fu dnsmasq.service` to see what's going on.

### Installation of machines being able to run kexec

If it is possible to directly execute "kexec" on a machine, e.g. on ppc64le machines running petitboot, a remote network installation can be started following https://en.opensuse.org/SDB:Network_installation#Start_the_Installation . See #119008#note-6 for an example.
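
A rough sketch of what this can look like from a petitboot shell, following the SDB article (the repository URLs, file paths and linuxrc options are assumptions, adapt them to the desired product):

```
# fetch the installer kernel and initrd, then kexec into the installation system
wget http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/ppc64le/linux
wget http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/ppc64le/initrd
kexec -l linux --initrd=initrd --command-line="install=http://download.opensuse.org/distribution/leap/15.4/repo/oss/ console=hvc0 ssh=1 sshpassword=..."
kexec -e
```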

### Linux Endpoint Protection Agent
Ensure any non-test OS installations have the Linux Endpoint Protection Agent deployed, see https://progress.opensuse.org/issues/123094 and https://confluence.suse.com/display/CS/Sensor+-+Linux+Endpoint+Protection+Agent for details.

## Take machines out of salt-controlled production

E.g. for investigation or development or manual maintenance work:

```
ssh osd "sudo salt-key -y -d $hostname"
ssh $hostname "sudo systemctl disable --now telegraf $(systemctl list-units | grep openqa-worker-auto-restart | cut -d . -f 1 | xargs)"
```

Check out [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples) for systemd commands to start and stop workers.
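
For example, a sketch for temporarily stopping all worker slots on a host (the slot range is an assumption, adapt it to the machine):

```
# on the worker host: stop the worker slots and their reload watchers
sudo systemctl stop openqa-worker-auto-restart@{1..30}.service openqa-reload-worker-auto-restart@{1..30}.path
# ... maintenance work ...
# bring the slots back afterwards
sudo systemctl start openqa-worker-auto-restart@{1..30}.service openqa-reload-worker-auto-restart@{1..30}.path
```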

## How to use samba shares to mount ISOs as virtual CD drives with SuperMicro server/mainboards

SuperMicro based servers have the capability to mount smb shares containing ISOs as virtual CD drives, e.g. to boot from them.
Install the samba package on any machine you control (this also works from your personal workstation if the server can reach it, e.g. over VPN) and create the following `/etc/samba/smb.conf`:

~~~ text
[global]
   workgroup = MYGROUP
   server string = Samba Server
   log level = 3
   client min protocol = core
   server min protocol = core
   guest ok = yes

#============================ Share Definitions ==============================
[recovery]
	comment = recovery
	path = /home/you/recovery
	public = yes
~~~

Now start the samba service. Despite the share being accessible by everyone (be careful about this!), the SuperMicro machines still need a user on the Samba server as they don't support anonymous login. To create a user without requiring a local unix user, you can use the following command:

```samba-tool domain provision --use-rfc2307 --interactive```

Afterwards create a user in the samba database with:

```smbpasswd -a smbtest```

Now it should be possible to access the share. Place an ISO file into the folder configured above and use the following settings in the web UI of the SuperMicro server:

* "Share Host": IP of your machine running samba
* "Path to Image": Path to your ISO inside the share, e.g. "\recovery\some_boot_medium.iso" (mind the backslashes!)
* "Users": The username of the just created user
* "Password": Its password - don't keep this empty as it will not work otherwise

After clicking on "mount" you should now see a connection to your samba server. The machine will try to mount the ISO and if everything goes well, will report "There is an iso file mounted." in the "Health Status" of the Devices.

## "Staging" test instances

Internally at SUSE we have two virtual machines that can be used for testing, developing and showcasing, reachable under convenient URLs:
* https://openqa-staging-1.qe.nue2.suse.org
* https://openqa-staging-2.qe.nue2.suse.org

You can use those machines and apply changes as desired over ssh.

## Bring back machines into salt-controlled production

```
ssh osd "sudo salt-key -a $hostname && sudo salt --state-output=changes $hostname state.apply"
```

Depending on your actions further manual cleanup might be necessary, e.g. `ssh $hostname "sudo systemctl unmask telegraf salt-minion"`.
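
To verify the machine is really back under salt control, something like this can be used (a sketch using standard salt commands):

```
ssh osd "sudo salt-key -L | grep $hostname"  # key should show up under "Accepted Keys"
ssh osd "sudo salt $hostname test.ping"      # minion should respond with True
```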

## Access the BMC of machines in the new security zone

One can use SSH port forwarding to access the services of a BMC (e.g. its web interface) for a machine in the new security zone. The host "qe-jumpy" can be used for that like this:

~~~
ssh -4 jumpy@qe-jumpy.suse.de -L 8443:openqaworker4-ipmi.qe-ipmi-ur:443 -L 8080:openqaworker4-ipmi.qe-ipmi-ur:80
~~~

While the SSH session is running you can then use your local browser to access the remote host via e.g. "http://localhost:8080" or "https://localhost:8443".

## Using the built-in java tools of BMCs to access machines in the security zone

*1.* Follow [Access the BMC of machines in the new security zone](#Access-the-BMC-of-machines-in-the-new-security-zone) to download the built-in java webstart file of the machine you want to control
*2.* Use nmap on qe-jumpy to scan for all ports of a machine's BMC. Example:

~~~
jumpy@qe-jumpy:~> nmap openqaworker4-ipmi.qe-ipmi-ur -p-
Starting Nmap 7.70 ( https://nmap.org ) at 2023-01-17 12:23 UTC
Nmap scan report for openqaworker4-ipmi.qe-ipmi-ur (192.168.133.4)
Host is up (0.0056s latency).
Not shown: 65525 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
199/tcp  open  smux
427/tcp  open  svrloc
443/tcp  open  https
623/tcp  open  oob-ws-http
5120/tcp open  barracuda-bbs
5122/tcp open  unknown
5123/tcp open  unknown
7578/tcp open  unknown
~~~

*3.* Forward all ports relevant for the java applet to your local machine:

~~~
sudo ssh -i /home/nicksinger/.ssh/id_rsa.SUSE -4 jumpy@qe-jumpy.suse.de -L 443:openqaworker4-ipmi.qe-ipmi-ur:443 -L 623:openqaworker4-ipmi.qe-ipmi-ur:623 -L 5120:openqaworker4-ipmi.qe-ipmi-ur:5120 -L 5122:openqaworker4-ipmi.qe-ipmi-ur:5122 -L 5123:openqaworker4-ipmi.qe-ipmi-ur:5123 -L 7578:openqaworker4-ipmi.qe-ipmi-ur:7578
~~~

**Note 1:** You have to use the exact same ports as shown by the port scan because you cannot instruct the applet to use different ports
**Note 2:** You have to execute your ssh client with root privileges for it to be able to bind to ports below 1024. These forwardings need to be present for the applet to be able to download additional files from the BMC
**Note 3:** Make sure to point to the right keyfile using the -i parameter as ssh will scan different directories when run as root

*4.* Execute the previously downloaded applet. I use the following command to make it work with wayland:
~~~
LANG=C _JAVA_AWT_WM_NONREPARENTING=1 javaws -nosecurity -jnlp jviewer\ \(1\).jnlp
~~~
*5.* You should now be able to control the machine/BMC with all its features (e.g. mounting ISO images as virtual CD)

## Use a production host for testing backend changes locally, e.g. svirt, powerVM, IPMI bare-metal, s390x, etc.

0. Find out which type of worker slot you need for the specific job you want to run, e.g. by checking which worker slots were used for previous runs of the job on OSD or by looking for the job's worker class in the [workers table](https://openqa.suse.de/admin/workers).
1. Configure an additional worker slot in your local `workers.ini` using worker settings from the corresponding production worker (see the sketch after this list). The production worker config can be found in [workerconf.sls](https://gitlab.suse.de/openqa/salt-pillars-openqa/-/blob/master/openqa/workerconf.sls) or on the hosts themselves.
2. Take out the corresponding worker slot from production using the systemd commands mentioned in [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples). This is important to prevent multiple jobs from using the same svirt host.
3. Start the locally configured worker slot and clone/run some jobs.
4. When you're done, bring back the production worker slots using the systemd commands mentioned in [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples).
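
As an illustration for step 1, a local slot copied from a production svirt worker could look like this (a sketch; the worker class, host and all values are placeholders to take over from workerconf.sls):

```
# local /etc/openqa/workers.ini
[1]
WORKER_CLASS = svirt-xen-hvm
VIRSH_HOSTNAME = some-svirt-host.suse.de
VIRSH_USERNAME = root
VIRSH_PASSWORD = ...
```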

### Alternatives
It is also possible to test svirt backend changes fully locally, at least when running tests via KVM is sufficient. Check out [os-autoinst's documentation](https://github.com/os-autoinst/os-autoinst/blob/master/doc/backends.md#svirt=) for further details.

## Dealing with PowerEdge SAP servers from Dell
### Accessing the management interface via SSH
It is possible to access the management interface via SSH as well (using the same user name and password as for the web interface). Check out further wiki sections for useful commands or the [manual](https://dl.dell.com/content/manual65464730-integrated-dell-remote-access-controller-9-racadm-cli-guide.pdf?language=en-us) which is also available as [web page](https://www.dell.com/support/manuals/de-de/integrated-dell-remote-access-cntrllr-8-with-lifecycle-controller-v2.00.00.00/racadm_idrac_pub-v1/racadm-subcommand-details?guid=guid-cd4e81e6-818c-44fb-9e7a-82950425fbbb&lang=en-us).

One very useful pair of commands is `racadm get` and `… set` which allow reading and writing configuration values, e.g. `racadm get iDRAC.NIC.DNSRacName` and `racadm set iDRAC.NIC.DNSRacName somevalue`.

### Restoring access to the iDRAC web interface
If iDRAC returns a 400 error it might be due to a wrong DNS setting. This is especially likely if you have just changed the DNS entry. Try to access iDRAC via its IP which should still work. Then go to iDRAC settings -> Network -> General settings and update the DNS iDRAC name to match the *not* fully qualified domain (e.g. `qesapworker-prg4-mgmt` for https://qesapworker-prg4-mgmt.qa.suse.cz).

You may also change this setting by accessing the management interface via SSH. The command would be `racadm set iDRAC.NIC.DNSRacName qesapworker-prg4-mgmt` in this case. You may also use `racadm set idrac.webserver.HostHeaderCheck 0` to get rid of this entire check completely. This is especially useful if you cannot conveniently put in a matching name, e.g. when accessing the web UI via SSH forwarding.

### Recovering BIOS
If the BIOS appears completely broken (e.g. after a firmware update) you may try to invoke `racadm systemerase bios` after accessing the management interface via SSH. This will take a while and afterwards you'll have to redo settings (e.g. the boot mode).

### Cancel/delete stuck iDRAC jobs
Invoke `racadm jobqueue delete -i JID_CLEARALL_FORCE` after accessing the management interface via SSH.

### Check status of BOSS-S2 NVMe disks
Use the "MVCLI BOSS-S2" utility from Dell which you can download from their servers (see https://www.dell.com/support/manuals/de-de/poweredge-r6525/boss-s2_ug/run-boss-s2-cli-commands-on-poweredge-servers-running-the-linux-operating-system?guid=guid-c0f3bd0d-4725-4fed-8bc2-4aa872f3627f&lang=en-us).
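
For example (the subcommands are assumptions based on the MVCLI documentation, check `mvcli help` on the actually installed version):

```
# show the state of the configured virtual disk (RAID-1) and the physical NVMe modules
mvcli info -o vd
mvcli info -o pd
# query SMART data of a physical disk by its id
mvcli smart -p 0
```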

### Firmware updates
The easiest way is to download the *Windows* installer (a file that ends with `.EXE`) and upload and install it via the iDRAC web interface. This works for iDRAC updates as well as for BIOS updates and the firmware of various components. Uploading the GNU/Linux version (a file that ends with `.BIN`) is *not* possible this way. One can track the progress of those updates via the iDRAC job queue. It is possible to schedule two updates that require a reboot at the same time (e.g. a BIOS update and SAS-RAID firmware) and do them in one go this way.

## Backup

Both openqa.opensuse.org and openqa.suse.de run on virtual machine clusters that provide redundancy and differential backup using snapshotting of the involved storage. SUSE-IT currently provides backups going back up to 3 days with two daily backups conducted at 23:10Z and 11:00Z. With this it is possible in cases of catastrophic data loss to recover (raise a ticket over https://sd.suse.com in that case). Additionally an automatic backup for the o3 webui host was introduced with https://gitlab.suse.de/okurz/backup-server-salt/tree/master/rsnapshot covering so far /etc and the SQL database dumps. Fixed assets and test results are backed up on storage.qa.suse.de (see https://gitlab.suse.de/openqa/salt-states-openqa/-/merge_requests/612).

### openQA database backups

Database backups of o3+osd are available on backup.qa.suse.de, accessible over ssh with the same credentials as for the OSD infrastructure.
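
For example, to list the available OSD database dumps (the path matches the one used in the bootstrapping example below):

```
ssh backup.qa.suse.de
ls /home/rsnapshot/alpha.0/openqa.suse.de/var/lib/openqa/SQL-DUMPS/
```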

### Fallback deployment on AWS

To get an instance running from a backup in case of a disaster, one can be created on AWS with this configuration:

#### Launch instance

##### Web Interface, from scratch (only if necessary, otherwise just use the template below)

- Ensure your region is **Frankfurt, Germany**
- Pick a **t3.large** with `openSUSE Leap` on AWS Marketplace
- Add two disks
    - 10 GiB for the root filesystem should be sufficient (can be easily extended later if needed)
    - The OSD database alone needs > 30 GiB and results plus assets will also need a lot (e.g. > 4 GiB for a TW snapshot ISO) so take at least 100 GiB for the 2nd drive
- The security group needs to include ssh and http
- Add `openqa_created_by`, `openqa_ttl` and `team:qa-tools` tags

##### Launch from a template

Note: When you modify the template (creating a new version), be sure to set the new version as the default.

- Go to the [openQA-webUI-openSUSE-Leap](https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#LaunchTemplateDetails:launchTemplateId=lt-002dfbcbd2f818e4c) template
- Select "Actions - Launch instance from template"
- Choose your key pair
- Click "Launch instance"

###### Command line

For configuring aws cli, see [below](https://progress.opensuse.org/projects/openqav3/wiki/Wiki#Configure-aws-cli).

See the [aws run-instances docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html) for details:

    aws ec2 run-instances --launch-template LaunchTemplateId=lt-002dfbcbd2f818e4c --key-name <your-keyname>
    # or
    aws ec2 run-instances --launch-template LaunchTemplateName=openQA-webUI-openSUSE-Leap --key-name <your-keyname>

For this you have to create a key pair first, if you haven't done so already.
Save the result and look for the `InstanceId`.

#### Transfer keys

Since an instance is always created with a single key, public keys of all users need to be deployed by whoever owns that key.

**Note**: `osd2` refers to the instance created above. Replace it with the instance IP or add an alias to your SSH config.

    ssh openqa.suse.de "sudo su -c 'cat /home/*/.ssh/authorized_keys'" | ssh ec2-user@osd2 "cat - >> ~/.ssh/authorized_keys"

#### Bootstrapping

```
ssh osd2
sudo su
parted --script /dev/nvme1n1 mklabel gpt && parted --script /dev/nvme1n1 mkpart ext4 4096s 100%
mkfs.ext4 /dev/nvme1n1p1
vim /etc/fstab # add mount to fstab
mkdir /space && mount /dev/nvme1n1p1 /space
mkdir -p /space/pgsql/data
mkdir -p /var/lib/pgsql
ln -s /space/pgsql/data /var/lib/pgsql/data
zypper in postgresql-server # needed for the postgres user/group
chown -R postgres.postgres /space/pgsql # without the correct group postgresql.service fails
mkdir -p /space/openqa
mkdir -p /var/lib/openqa
mount /space/openqa /var/lib/openqa -o bind # openQA also requires a lot of space
curl -s https://raw.githubusercontent.com/os-autoinst/openQA/master/script/openqa-bootstrap | bash -x

ssh -A backup.qa.suse.de
rsync --progress /home/rsnapshot/alpha.0/openqa.suse.de/var/lib/openqa/SQL-DUMPS/2022-02-08.dump ec2-user@osd2:/tmp

ssh osd2
sudo -u postgres createdb -O geekotest openqa-osd # create pristine db for OSD import (to avoid conflicts with existing data)
sudo -u geekotest pg_restore -d openqa-osd /tmp/2022-02-08.dump # import data, will take a while (22m is a realistic time)
vim /etc/openqa/openqa.ini # change auth from Fake to OpenID
vim /etc/openqa/database.ini # change database to openqa-osd
vim /etc/openqa/client.conf # change key and secret to the correct ones
systemctl restart openqa-webui
```

##### Configure aws cli

You can use the command

    aws configure

but it doesn't actually help you with the possible values, so you can just create the files yourself like this:

    % cat ~/.aws/config
    [default]
    region = eu-central-1
    output = json
    % cat ~/.aws/credentials
    [default]
    aws_access_key_id = ABCDE
    aws_secret_access_key = FGHIJ

## Best practices for infrastructure work

* Same as in OSD deployment we should look for failed grafana alerts if users report something suspicious
* Collect all the information between "last good" and "first bad" and then also find the git diff in openqa/salt-states-openqa
* Apply a proper "scientific method" with written down hypotheses, experiments and conclusions in tickets, follow https://progress.opensuse.org/projects/openqav3/wiki#Further-decision-steps-working-on-test-issues
* Keep salt states to describe what should *not* be there
* Try out older btrfs snapshots in systems for crosschecking and boot with disabled salt: in the kernel cmdline append `systemd.mask=salt-minion.service`
* The team should conduct a work backlog check on a daily basis, e.g. look for urgent tickets related to infrastructure problems
* For hardware component replacement, create an EngInfra ticket for coordination, order the replacement on private expenses and get reimbursed using https://intra.suse.net/company/company-news/department/finance/claim-expenses/claim-expenses/ or have the order placed with the help of line managers, let the components be delivered to the according place, e.g. the SUSE Nuremberg datacenter, and inform EngInfra in the ticket to have them conduct the physical component replacement
* For ordering new machines follow https://mysuse.sharepoint.com/sites/SUSEBusinessCriticalLinux/Shared%20Documents/Hardware%20Order/E&I%20Hardware.pdf (get quotes from the vendor, create a ticket with procurement, CC osd-admins+mgriessmeier, wait for purchase order (PO) approval, order with the vendor and ask them to include the PO number in the invoice)
* Prefer `reload` over `restart` where available, e.g. `systemctl reload postgres` - in general `systemctl cat postgres` will show available commands for any service
* Test reboot stability of machines with commands like https://github.com/os-autoinst/scripts/blob/master/reboot-stability-check

# Automatic submission of packages

Every commit to the master branch of the git repositories of https://github.com/os-autoinst/os-autoinst and https://github.com/os-autoinst/openQA is considered a stable release and triggers package builds within https://build.opensuse.org/project/show/devel:openQA, in particular https://build.opensuse.org/package/show/devel:openQA/os-autoinst and https://build.opensuse.org/package/show/devel:openQA/openQA. http://jenkins.qa.suse.de/job/trigger-openQA_in_openQA-TW/ using https://github.com/os-autoinst/scripts/blob/master/trigger-openqa_in_openqa monitors the download repositories for new versions and triggers openQA-in-openQA tests as visible on https://openqa.opensuse.org/group_overview/24 . http://jenkins.qa.suse.de/job/monitor-openQA_in_openQA-TW/ monitors the test execution using https://github.com/os-autoinst/scripts/blob/master/monitor-openqa_job and on test success periodically triggers http://jenkins.qa.suse.de/job/submit-openQA-TW-to-oS_Fctry/ (with a build throttle as decided together with openSUSE reviewers) using https://github.com/os-autoinst/scripts/blob/master/os-autoinst-obs-auto-submit. This step prepares openQA related packages for automatic submission into openSUSE:Factory in https://build.opensuse.org/project/show/devel:openQA:tested, awaits build+check results and then creates automatic submissions to openSUSE:Factory for inclusion of the packages in openSUSE Tumbleweed.

This approach could also be extended for automatic submission to openSUSE Leap, SLE PackageHub or directly to SLE using maintenance updates based on a configurable schedule with additional check steps as applicable. Given that openQA is developed based on a rolling-release model with no maintenance branches, any updates to base products supporting openQA would be new version updates along with dependency package updates as necessary.