# Introduction

This is the organisation wiki for the **openQA Project**.
The source code is hosted in the [os-autoinst github project](http://github.com/os-autoinst/), especially [openQA itself](http://github.com/os-autoinst/openQA) and the main backend [os-autoinst](http://github.com/os-autoinst/os-autoinst).

If you are interested in the tests for SUSE/openSUSE products, take a look into the [openqatests](https://progress.opensuse.org/projects/openqatests) project.

If you are looking for entry-level issues to contribute to, please look into the section [[Wiki#Where-to-contribute|Where to contribute]].

{{toc}}

# Organisational

## ticket workflow

The following ticket statuses are used together and their meaning is explained:

* *New*: No one has worked on the ticket (e.g. the ticket has not been properly refined) or no one is feeling responsible for the work on this ticket.
* *Workable*: The ticket has been refined and is ready to be picked.
* *In Progress*: The assignee is actively working on the ticket.
* *Resolved*: The complete work on this issue is done and the corresponding issue is supposed to be fixed as observed (should be updated together with a link to a merged pull request or a link to a production openQA instance showing the effect).
* *Feedback*: Further work on the ticket is blocked by open points or awaits feedback to proceed. Sometimes also used to ask the assignee about progress in case of inactivity.
* *Blocked*: Further work on the ticket is blocked by some external dependency (e.g. bugs, not implemented features). There should be a link to another ticket, bug, trello card, etc. where it can be seen what the ticket is blocked by.
* *Rejected*: The issue is considered invalid, should not be done, or is out of scope.
* *Closed*: As this can be set only by administrators it is suggested to not use this status.

It is good practice to update the status together with a comment about it, e.g. a link to a pull request or a reason for rejection.

## ticket categories

* *Concrete Bugs*: Regressions, crashes, error messages
* *Feature requests*: Ideas or wishes for extension, enhancement, improvement
* *Organisational*: Organisational tasks within the project(s), not directly code related
* *Support*: Support of users, usage problems, questions

Please avoid the use of other, deprecated categories.

Suggestion by *okurz*: I recommend avoiding the word "bug" in our categories because of the usual "is it a bug or a feature" struggle. Instead I suggest to strictly define "Regressions & Crashes" to clearly separate "it used to work before" from "this was never part of the requirements" for features. Any ticket of this category also means that our project processes missed something, so we have points for improvement, e.g. extend the things to look out for in code review.

## Epics and Sagas

[epic]s and [saga]s belong to the "coordination" tracker; project contributors are not required to follow this convention but the tracker may be changed automagically in the future: http://mailman.suse.de/mailman/private/qa-sle/2020-October/002722.html

## ticket templates

You can use these templates to fill in tickets and further improve them with more detail over time. Copy the code block, paste it into a new issue, and replace every block marked with "<…>" with your content or delete it if not appropriate.

### Defects

Subject: `<Short description, example: "openQA dies when triggering any Windows ME tests">`

```
## Observation
<description of what can be observed and what the symptoms are, provide links to failing test results and/or put short blocks from the log output here to visualize what is happening>

## Steps to reproduce
* <do this>
* <do that>
* <observe result>

## Problem
<problem investigation, can also include different hypotheses, should be labeled as "H1" for first hypothesis, etc.>

## Suggestion
* <what to do as a first step>
* <Fix the actual problem>
* <Consider fixing the design>
* <Consider fixing the team's process>
* <Consider to explore further>

## Workaround
<example: retrigger job>
```

example ticket: #10526

For tickets referencing "auto_review" see
https://github.com/os-autoinst/scripts/blob/master/README.md#auto-review---automatically-detect-known-issues-in-openqa-jobs-label-openqa-jobs-with-ticket-references-and-optionally-retrigger
for a suggested template snippet.

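For illustration, a hedged sketch of what such a ticket subject can look like (the module name and regex here are made-up placeholders; the exact syntax including the optional `:retry` suffix must be checked against that README):

```
[tools] test fails in zypper_up auto_review:"backend died: QEMU exited unexpectedly":retry
```
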
### Feature requests

Subject: `<Short description, example: "grub3 btrfs support" (feature)>`

```
## User story
<As a <role>, I want to <do an action>, to <achieve which goal> >

## Acceptance criteria
* <**AC1:** the first acceptance criterion that needs to be fulfilled to do this, example: Clicking "restart button" causes restart of the job>
* <**AC2:** also think about the "not-actions", example: other jobs are not affected>

## Tasks
* <first task to do as an easy starting point>
* <what to do next, all tasks optionally with an effort estimation in hours, e.g. "(0.5-2h)">
* <optional: mark "optional" tasks>

## Further details
<everything that does not fit into the above sections>
```

example ticket: #10212

## Further decision steps working on test issues

Test issues can stem from one of the following sources. Feel free to use the following template in tickets as well:

```
## Problem
* **H1** The product has changed
 * **H1.1** product changed slightly but in an acceptable way without the need for communication with DEV+RM --> adapt test
 * **H1.2** product changed slightly but in an acceptable way found after feedback from RM --> adapt test
 * **H1.3** product changed significantly --> after approval by RM adapt test

* **H2** Fails because of changes in test setup
 * **H2.1** Our test hardware equipment behaves differently
 * **H2.2** The network behaves differently

* **H3** Fails because of changes in test infrastructure software, e.g. os-autoinst, openQA
* **H4** Fails because of changes in test management configuration, e.g. openQA database settings
* **H5** Fails because of changes in the test software itself (the test plan in source code as well as needles)
* **H6** Sporadic issue, i.e. the root problem is already hidden in the system for a long time but does not show symptoms every time
```

## Additional details needed for non-qemu issues

As the automatic integration tests of os-autoinst and openQA are based on qemu virtualization, for any non-qemu related requests please provide detailed manual reproduction steps, otherwise it is unlikely that any issue or feature request can be implemented.

## pull request handling on github

As a reviewer of pull requests on github for all related repositories, e.g. https://github.com/os-autoinst/os-autoinst-distri-opensuse/pulls, apply labels in case PRs are open for a longer time and cannot be merged so that we keep our backlog clean and know why PRs are blocked.

* **notready**: Triaged as not ready yet for merging, no (immediate) reaction by the reviewee, e.g. when tests are missing, other scenarios break, only tested for one of SLE/TW
* **wip**: Marked by the reviewee themselves as "[WIP]" or "[DO-NOT-MERGE]" or similar
* **question**: Questions to the reviewee, not answered yet

## Where to contribute?

If you want to help openQA development you can take a look into the existing [issues](https://progress.opensuse.org/projects/openqav3/issues).
You can start with

* [entrance level issues](https://progress.opensuse.org/projects/openqav3/search?q=entrance+level+issue&open_issues=1)
* issues tagged as [easy](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=issue_tags&op%5Bissue_tags%5D=%3D&v%5Bissue_tags%5D%5B%5D=easy&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=is_private&c%5B%5D=due_date&c%5B%5D=relations&group_by=&t%5B%5D=)
* issues tagged as [beginner](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=issue_tags&op%5Bissue_tags%5D=%3D&v%5Bissue_tags%5D%5B%5D=beginner&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=is_private&c%5B%5D=due_date&c%5B%5D=relations&group_by=&t%5B%5D=)
* ideas from #65271

There are also some "always valid" tasks to be working on:

* *improve test coverage*:
 * *user story*: As an openQA backend as well as test developer I want better test coverage of our projects to reduce technical debt
 * *acceptance criteria*: test coverage is significantly higher than before
 * *suggestions*: check current coverage in each individual project (os-autoinst/openQA/os-autoinst-distri-opensuse) and add tests as necessary, see the sketch after this list
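
A minimal sketch for checking coverage locally in the openQA repository (assuming a checkout with test dependencies installed; that `make coverage` wraps the test suite with Devel::Cover is an assumption to verify against the repository's Makefile):

```
git clone https://github.com/os-autoinst/openQA.git && cd openQA
make coverage                    # assumption: runs the test suite with coverage collection
xdg-open cover_db/coverage.html  # inspect the generated report (exact path may differ)
```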

# Use cases

The following use cases 1-6 have been defined within a SUSE workshop (others have been defined later) to clarify how different actors work with openQA. Some of them are covered already within openQA quite well, some others are stated as motivation for further feature development.

## Use case 1
**User:** QA-Project Management
**primary actor:** QA Project Manager, QA Team Leads
**stakeholder:** Directors, VP
**trigger:** product milestones, providing a daily status
**user story:** "As a QA project manager I want to check on a daily basis the "openQA dashboard" to get a summary/an overall status of the "reviewers' results" in order to take the right actions and prioritize tasks in QA accordingly."

## Use case 2
**User:** openQA-Admin
**primary actor:** Backend-Team
**stakeholder:** QA-Prjmgr, QA-TL, openQA Tech-Lead
**trigger:** Bugs, features, new testcases
**user story:** "As an openQA admin I constantly check in the web UI the system health and I manage its configuration to ensure smooth operation of the tool."

## Use case 3
**User:** QA-Reviewer
**primary actor:** QA-Team
**stakeholder:** QA-Prjmgr, Release-Mgmt, openQA-Admin
**trigger:** every new build
**user story:** "As an openQA reviewer at any point in time I review on the webpage of openQA the overall status of a build in order to track and find bugs, because I want to find bugs as early as possible and report them."

## Use case 4
**User:** Testcase-Contributor
**primary actor:** All development teams, Maintenance QA
**stakeholder:** QA-Reviewer, openQA-Admin, openQA Tech-Lead
**trigger:** features, new functionality, bugs, new product/package
**user story:** "As a developer, when there are new features, new functionality, bugs, or a new product/package in git, I contribute my testcases because I want to ensure good quality submissions and smooth product integration."

## Use case 5
**User:** Release-Mgmt
**primary actor:** Release Manager
**stakeholder:** Directors, VP, PM, TAMs, Partners
**trigger:** Milestones
**user story:** "As a release manager on a daily basis I check on a dashboard for the product health/build status in order to act early in case of failures and have concrete and current reports."

## Use case 6
**User:** Staging-Admin
**primary actor:** Staging-Manager for the products
**stakeholder:** Release-Mgmt, Build-Team
**trigger:** every single submission to projects
**user story:** "As a staging manager I review the build status of packages with every staged submission to the "staging projects" in the "staging dashboard" and the test status of the pre-integrated fixes, because I want to identify major breakage before integration to the products and provide fast feedback back to the development."

## Use case 7
**User:** Bug investigator
**primary actor:** Any bug assignee for openQA observed bugs
**stakeholder:** Developer
**trigger:** bugs
**user story:** "As a developer that has been assigned a bug which has been observed in openQA I can review referenced tests, find a newer and the most recent job in the same scenario, understand what changed since the last successful job, and see which other jobs show the same symptoms, to investigate the root cause fast and use openQA for verification of a bug fix."

# Thoughts about categorizing test results, issues, states within openQA
by okurz

When reviewing test results it is important to distinguish between different causes of "failed tests".

## Nomenclature

### Test status categories
A common definition about the status of a test regarding the product it tests: "false|true positive|negative" as described on https://en.wikipedia.org/wiki/False_positives_and_false_negatives. "positive|negative" describes the outcome of a test ("positive": test signals presence of issue; "negative": no signal) whereas "false|true" describes the conclusion of the test regarding the presence of issues in the SUT or product in our case ("true": correct reporting; "false": incorrect reporting), e.g. "true negative": test successful, no issues detected and there are no issues, product is working as expected by customer. Another example: Think of testing as of a fire alarm. An alarm (event detector) should only go off (be "positive") *if* there is a fire (event to detect) --> "true positive", whereas *if* there is *no* fire there should be *no* alarm --> "true negative".
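
The four combinations at a glance (an illustrative summary of the definitions above):

| | issue present | no issue present |
|---|---|---|
| test signals issue ("positive") | true positive | false positive ("false alarm") |
| test signals no issue ("negative") | false negative (missed issue) | true negative |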

Another common but potentially ambiguous categorization:

* *broken*: the test is not behaving as expected (ambiguity: "as expected" by whom?) --> commonly a "false positive", can also be a "false negative" but hard to detect
* *failing*: the test is behaving as expected, and the test output is a fail --> "true positive"
* *working*: the test is behaving as expected (with no comment regarding the result, though some might ambiguously imply 'result is negative')
* *passing*: the test is behaving as expected, and the result is a success --> "true negative"

If in doubt declare a test as "broken". We should review the test and examine if it is behaving as expected.

Be careful about "positive/negative" as some might also use "positive" to incorrectly denote a passing test (and "negative" for a failing test) as an indicator of a "working product", not an indicator about "issue present". If you argue what is "used in common speech" think about how "false positive" is used as in "false alarm" --> "positive" == "alarm raised"; also see https://narainko.wordpress.com/2012/08/26/understanding-false-positive-and-false-negative/

### Prioritization of work regarding categories
In this sense development+QA want to accomplish a "true negative" state whenever possible (no issues present, therefore none detected). As QA and test developers we want to prevent "false positives" ("false alarms" declaring a product as broken when it is not but the test failed for other reasons), also known as "type I errors", and "false negatives" (a product issue is not caught by tests and might "slip through" QA and at worst is only found by an external customer), also known as "type II errors". Also see https://en.wikipedia.org/wiki/Type_I_and_type_II_errors. In the context of openQA and system testing paired with screen matching a "false positive" is much more likely as the tests are very susceptible to subtle variations and changes even if they should be accepted. So when in doubt, create an issue in progress, look at it again, and find that it was a false alarm, rather than wasting more people's time with INVALID bug reports by believing the product to be broken when it isn't. To quote Richard Brown: "I […] believe this is the route to ongoing improvement - if we have tests which produce such false alarms, then that is a clear indicator that the test needs to be reworked to be less ambiguous, and that IS our job as openQA developers to deal with".

## Further categorization of statuses, issues and such in testing, especially automatic tests
by okurz

This categorization scheme is meant to help in communication in either written or spoken discussions, being simple, concise, easy to remember while unambiguous in every case.
While used for naming it should also be used as a decision tree and can be followed from the top following each branch.

### Categorization scheme

To keep it simple I will try to go in steps of deciding if a potential issue is of one of two categories in every step (maybe three) and go further down from there. The degree of further detailing is not limited, i.e. it can be further extended. The naming scheme should follow an arabic number counting scheme (for two levels just 1 and 2), with a digit added from the right for every additional level of decision step and detail, without any separation between the digits, e.g. "1111" for the first type in every level of detail up to level four. Also, I am thinking of giving each fully written form a phonetic name to unambiguously identify it on every level as long as not more individual levels are necessary. The alphabet should be reserved for higher levels and higher priority types.
Every leaf of the tree must have an action assigned to it.

1 **failed** (ZULU)
11 new (passed->failed) (YANKEE)
111 product issue ("true positive") (WHISKEY)
1111 unfiled issue (SIERRA)
11111 hard issue (openqa *fail*) (KILO)
111111 critical / potential ship stopper (INDIA) --> immediately file bug report with "ship_stopper?" flag; opt. inform RM directly
111112 non-critical hard issue (HOTEL) --> file bug report
11112 soft issue (openqa *softfail* on job level, not on module level) (JULIETT) --> file bug report on failing test module
1112 bugzilla bug exists (ROMEO)
11121 bug was known to openqa / openqa developer --> cross-reference (bug->test, test->bug) AND raise review process issue, improve openqa process
11122 bug was filed by other sources (e.g. beta-tester) --> cross-reference (bug->test, test->bug)
112 test issue ("false positive") (VICTOR)
1121 progress issue exists (QUEBEC) --> cross-reference (issue->test, test->issue)
1122 unfiled test issue (PAPA)
11221 easy to do w/o progress issue
112211 needs needles update --> re-needle if sure, TODO how to notify?
112212 pot. flaky, timeout
1122121 retrigger yields PASS --> comment in progress about flaky issue fixed
1122122 reproducible on retrigger --> file progress issue
11222 needs progress issue filed --> file progress issue
12 existing / still failing (failed->failed) (XRAY)
121 product issue (UNIFORM)
1211 unfiled issue (OSCAR) --> file bug report AND raise review process issue (why has it not been found and filed?)
1212 bugzilla bug exists (NOVEMBER) --> ensure cross-reference, also see rules for 1112 ROMEO
122 test issue (TANGO)
1221 progress issue exists (MIKE) --> monitor, if persisting reprioritize test development work
1222 needs progress issue filed (LIMA) --> file progress issue AND raise review process issue, see 1211 OSCAR
2 **passed** (ALFA)
21 stable (passed->passed) (BRAVO)
211 existing "true negative" (DELTA) --> monitor, maybe can be made stricter
212 existing "false negative" (ECHO) --> needs test improvement
22 fixed (failed->passed) (CHARLIE)
221 fixed but "false negative" (GOLF) --> potentially revert test fix, also see 212 ECHO
222 fixed "true negative" (FOXTROTT) --> TODO split monitor, see 211 DELTA
2221 was test issue --> close progress issue
2222 was product issue
22221 no bug report exists --> raise review process issue (why was it not filed?)
22222 bug report exists
222221 was marked as RESOLVED FIXED

Priority from high to low: INDIA->OSCAR->HOTEL->JULIETT->…

# Important ticket queries

* All auto-review tickets: https://progress.opensuse.org/projects/openqav3/issues?query_id=697 , see https://github.com/os-autoinst/scripts/blob/master/README.md#auto-review---automatically-detect-known-issues-in-openqa-jobs-label-openqa-jobs-with-ticket-references-and-optionally-retrigger for further information regarding auto-review
* All auto-review+force-result tickets: https://progress.opensuse.org/projects/openqav3/issues?query_id=700

# Proposals for uses of labels
With [Show bug or label icon on overview if labeled (gh#550)](https://github.com/os-autoinst/openQA/pull/550) it is possible to add custom labels just by writing them. Nevertheless, a convention should be found for a common benefit. <del>Beware that labels are also automatically carried over with [Carry over labels from previous jobs in same scenario if still failing (gh#564)](https://github.com/os-autoinst/openQA/pull/564) which might make consistent test failures less visible when reviewers only look for test results without labels or bugrefs.</del> Labels are not automatically carried over anymore ([gh#1071](https://github.com/os-autoinst/openQA/pull/1071)).

List of proposed labels with their meaning and where they could be applied.

* ***`fixed_<build_ref>`***: If a test failure is already fixed in a more recent build and no bug reference is known, use this label together with a reference to a more recent passed test run in the same scenario. Useful for reviewing older builds. Example (https://openqa.suse.de/tests/382518#comments):

```
label:fixed_Build1501

t#382919
```

* ***`needles_added`***: In case needles were missing for test changes or expected product changes caused needle matching to fail, use this label with a reference to the test PR or a proper reasoning why the needles were missing and how you added them. Example (https://openqa.suse.de/tests/388521#comments):

```
label:needles_added

needles for https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/1353 were missing, added by jpupava in the meantime.
```

# s390x Test Organisation

See the following picture for a graphical overview of the current s390x test infrastructure at SUSE:

![SUSE s390x test infrastructure](qa_sle_openqa_s390x_test_infrastructure.jpg)

## Upgrades

### on z/VM
#### Special requirements

Due to the lack of proper use of hdd-images on z/VM, we need to work around this by having a dedicated worker class, i.e. a dedicated host, where we run two jobs chained with START_AFTER_TEST:
the first one installs the base system we want to have upgraded and the second one does the actual upgrade (e.g. migration_offline_sle12sp2_zVM_preparation and migration_offline_sle12sp2_zVM).

Since we encountered issues where other preparation jobs were randomly started in between, we need to ensure that we have one complete chain for all migration jobs running on one worker, which means for example:

1. migration_offline_sle12sp2_zVM_preparation
1. migration_offline_sle12sp2_zVM (START_AFTER_TEST=#1)
1. migration_offline_sle12sp2_allpatterns_zVM_preparation (START_AFTER_TEST=#2)
1. migration_offline_sle12sp2_allpatterns_zVM
1. ...

This scheme ensures that all actual upgrade jobs find the prepared system and are able to upgrade it.
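
For manually (re)triggering such a pair of jobs, a minimal sketch using `openqa-clone-job` (the job id and the dedicated worker class name are placeholder assumptions; in production the chaining itself is defined via the START_AFTER_TEST setting of the test suites):

```
# clone the preparation job and pin it to the dedicated z/VM migration host
openqa-clone-job --within-instance https://openqa.suse.de ${preparation_job_id} WORKER_CLASS=zvm_migration_dedicated
```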

### on z/KVM

No special requirements anymore, see details in #18016.

## Automated z/VM LPAR installation with openQA using qnipl

There is an ongoing effort to automate the LPAR creation and installation on z/VM. A first idea resulted in the creation of [qnipl](https://github.com/openSUSE/dracut-qnipl). `qnipl` enables one to boot a very slim initramfs from a shared medium (e.g. shared SCSI disks) and supply it with the needed parameters to chainload a "normal SLES installation" using kexec.
This method is required for z/VM because snipl (Simple network initial program loader) can only load/boot LPARs from specific disks, not network resources.

### Setup

1. Get a shared disk for all your LPARs
  * Normally this can easily be done by infra/gschlotter
  * The disk needs to be connected to all guests which should be able to network-boot
1. Boot a fully installed SLES on one of the LPARs to start preparing the shared disk
1. Put a DOS partition table on the disk and create one single, large partition on there
1. Put a filesystem on there. Our first test was on ext2 and it worked flawlessly in our attempts
1. Install `zipl` (the s390x bootloader from IBM) on this partition
  * A simple and sufficient config can be found in [poo#33682](https://progress.opensuse.org/issues/33682)
1. Clone [`qnipl`](https://github.com/nicksinger/dracut-qnipl) to your dracut modules (e.g. /usr/lib/dracut/modules.d/95qnipl)
1. Include the module named `qnipl` in your dracut modules for initramfs generation
  * e.g. in /etc/dracut.conf.d/99-qnipl.conf add: `add_dracutmodules+=qnipl`
1. Generate your initramfs (e.g. `dracut -f -a "url-lib qnipl" --no-hostonly-cmdline /tmp/custom_initramfs`)
  * Put the initramfs next to your kernel binary on the partition you want to prepare
1. From now on you can use `snipl` to boot any LPAR connected with this shared disk from network
  * example: `snipl -f ./snipl.conf -s P0069A27-LP3 -A fa00 --wwpn_scsiload 500507630713d3b3 --lun_scsiload 4001401100000000 --ossparms_scsiload "install=http://openqa.suse.de/assets/repo/SLE-15-Installer-DVD-s390x-Build533.2-Media1 hostip=10.161.159.3/20 gateway=10.161.159.254 Nameserver=10.160.0.1 Domain=suse.de ssh=1 regurl=http://all-533.2.proxy.scc.suse.de"`
  * `--ossparms_scsiload` is then evaluated and used by `qnipl` to kexec into the installer with the (for the installer) needed parameters

### Further details

Further details can also be found in the [github repo](https://github.com/openSUSE/dracut-qnipl/blob/master/README.md). Pull requests, questions and ideas are always welcome!

# Infrastructure setup for o3 (openqa.opensuse.org) and osd (openqa.suse.de)

## o3 (openqa.opensuse.org)

o3 consists of a VM running the web UI and physical worker machines. The VM for o3 has netapp-backed storage on rotating disks, so it is less performant than SSD but cheaper; eventually we might have the possibility to use SSD-based storage. Currently there are four virtual storage devices provided to o3, totalling 10 TB.

The o3 infrastructure is described in detail on https://github.com/os-autoinst/sync-and-trigger/blob/main/openqa-opensuse.md

### Accessing the o3 infrastructure

The o3 webUI host as well as the workers within the o3 infrastructure can be accessed over ssh using `ssh -p 2213 gate.opensuse.org`. Ask one of the existing admins within https://app.element.io/#/room/#openqa:opensuse.org or irc://irc.libera.chat/opensuse-factory to put your ssh key on the o3 webUI host so that you are able to log in (this also ensures you can be reached over those channels when people have questions about what you did with the ssh access).

To give access for a new user an existing admin can do the following:

```
sudo useradd -G users,trusted --create-home $user
echo "$ssh_key_from_user" | sudo tee -a /home/$user/.ssh/authorized_keys
```

#### SSH configuration

To easily access all hosts behind the jump host you can use the following config for your ssh client (`~/.ssh/config`):

```
Host ariel
  HostName gate.opensuse.org
  Port 2213

# Note that %h as understood by -W needs the real host, aliases won't work:
# kex_exchange_identification: Connection closed by remote host
# Connection closed by UNKNOWN port 65535
Host *.opensuse.org
  ProxyCommand ssh -q -A -x ariel -W %h:%p
```

**A word of warning:** be aware that this enables agent forwarding to at least the jump host. Please read up for yourself if and how bad you consider the security implications of doing so.

The workers can only be accessed from "ariel", not directly. One can use password authentication on the workers using the root account. Ask existing admins for the root password. It is suggested that you use key-based authentication. For this put your ssh keys on all the workers, e.g. using the above configuration and `ssh-copy-id`, as sketched below.
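
A minimal sketch, assuming the ssh config above is in place and your key is already on "ariel" (the worker list is only an example and may be outdated):

```
# copy your ssh key to a set of o3 workers through the jump host
for host in openqaworker1 openqaworker4 openqaworker7; do
  ssh-copy-id root@$host.opensuse.org
done
```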
405
406
**Notice:** Some machines are connected to the o3 openQA host from other networks and might need different ways of access, at time of writing:
407
408
* Remote (owner: @ggardet_arm):
409
 * ip-10-0-0-58
410
 * oss-cobbler-03
411
 * siodtw01 (for tests on Raspberry Pi 2,3,4)
412
413
### Manual command execution on o3 workers
414
415
To execute commands manually on all workers within the o3 infrastructure one can do for example the following:
416
417
```
418
for i in aarch64 openqaworker1 openqaworker4 openqaworker7 power8 imagetester rebel; do echo $i && ssh root@$i "(transactional-update -n dup || zypper -n dup) && reboot" ; done
419
```
420
421 181 mkittler
```
422
for i in aarch64 openqaworker1 openqaworker4 openqaworker7 power8 imagetester rebel; do echo $i && ssh root@$i " echo 'ssh-rsa … …' >> /root/.ssh/authorized_keys " ; done
423
```
424
425 141 okurz
mind the correct list of machines.

### Automatic update of o3

o3 is automatically deployed on a daily basis; this includes both the webUI host as well as the workers.

#### Automatic update of o3 webUI host

openqa.opensuse.org applies continuous updates of openQA-related packages, conducts nightly updates of system packages and reboots automatically as required; see
http://open.qa/docs/#_automatic_system_upgrades_and_reboots_of_openqa_hosts
for details.
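
To crosscheck that these mechanisms are active one can list the relevant timers, e.g. (a sketch; the timer names are taken from the units mentioned elsewhere on this page):

```
ssh -p 2213 gate.opensuse.org "systemctl list-timers | grep -i -e update -e reboot"
```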

#### Recurring automatic update of openQA workers

Same as the o3 webUI host, all o3 workers apply continuous updates of openQA-related packages. Additionally most of them apply a daily automatic system update and are "Transactional Servers" running openSUSE Leap; power8 is non-transactional with a weekly system update every Sunday.

This was done for a number of reasons including:

* Getting all the machines consistent after a few years of drift
* Making it easier to keep them consistent by leveraging a read-only root filesystem
* Guaranteeing rollbackability by using transactional updates

This was done by rbrown also to fulfill the prerequisite of getting them viable for multi-machine testing.

These systems currently patch themselves and reboot automatically in the default maintenance window of 0330-0500 CET/CEST.

In case of problems this can be changed in the following ways:

* Edit the maintenance window in /etc/rebootmgr.conf
* Disable the automatic reboot with `systemctl disable rebootmgr.service`
* Disable the automatic patching with `systemctl disable transactional-update.timer`

SUSE employees have access to the boot menu for the openQA worker machines, e.g. openqaworker1 and openqaworker4 via openqaworker1-ipmi.suse.de and openqaworker4-ipmi.suse.de, which are both connected to the R&D network. For imagetester one would need to go through SUSE-IT in the unlikely event of a boot-preventing update. "snapper rollback" can be executed from a booted, functionally operative machine which one can ssh into.

For manual investigation https://github.com/kubic-project/microos-toolbox can be helpful.

#### Rollback of updates

Updates on workers can be rolled back using `transactional-update` on the transactional workers (others are likely not updated that often):

```
for i in aarch64 openqaworker1 openqaworker4 openqaworker7 power8 imagetester rebel; do echo $i && ssh root@$i "transactional-update rollback last && reboot"; done
```

Updates on the central webUI host openqa.opensuse.org can be rolled back by using either older variants of packages that receive maintenance updates or using the locally cached packages in e.g. /var/cache/zypp/packages/devel_openQA/noarch using `zypper in --oldpackage`, similar to https://github.com/os-autoinst/openQA/blob/master/script/openqa-rollback#L39
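
A hedged example of the latter (the package file name is a placeholder; pick whatever version is actually present in the cache):

```
zypper in --oldpackage /var/cache/zypp/packages/devel_openQA/noarch/openQA-<old_version>.noarch.rpm
```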

#### Debugging qemu SUTs in openqa.opensuse.org

SUT: System Under Test

os-autoinst starts qemu with a network type that doesn't allow access from the outside, so ssh is not possible. But qemu is started with a VNC channel available from the host (the openQA worker).
Running vncviewer inside a headless server is useless, but it is possible to use gate.opensuse.org as a jump host with SSH port forwarding to start a vncviewer client from your desktop environment and connect to the VNC channel of the qemu SUT.

```
ssh -p 2213 -L LOCAL_PORT:WORKER_HOSTNAME:QEMU_VNC_PORT USERNAME@gate.opensuse.org
```

For example, assume user **bernhard** wants to connect to openqaworker7:11 and wants to use local port **43043**;
the IP of openqaworker7 is **192.168.112.12**
and the VNC channel port of openqa-worker@11 is **6001** (5990 + 11).

##### 1. Create SSH tunnel with port forwarding
* on laptop shell 1: `ssh -p 2213 -L 43043:192.168.112.12:6001 bernhard@gate.opensuse.org`
* Keep the shell open to keep the tunnel and the port forwarding open

##### 2. Open vncviewer
* on laptop shell 2: `vncviewer -Shared localhost:43043`
* `-Shared` is needed to not kick the VNC connection of os-autoinst. If it is kicked, the job will terminate and the qemu process will be killed.
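
The same steps as a small parameterized sketch (using the example values from above):

```
# open a tunnel to the VNC channel of worker instance 11 on openqaworker7
instance=11; worker_ip=192.168.112.12; local_port=43043
ssh -p 2213 -f -N -L $local_port:$worker_ip:$((5990 + instance)) bernhard@gate.opensuse.org
vncviewer -Shared localhost:$local_port
```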

### AArch64 specific configurations on o3

On o3, the aarch64 workers need additional configuration.

#### Setup HugePages

You need to set up HugePages support to improve performance with qemu VMs and to match the current aarch64 `MACHINE` configuration.
For the D05 machine, the configuration is: `40` pages with a size of `1G`.
If there are permission issues on `/dev/hugepages/`, check https://progress.opensuse.org/issues/53234
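
A minimal sketch of one way to reserve those pages, assuming kernel boot parameters set via GRUB (the actual mechanism used on the workers may differ):

```
# in /etc/default/grub append to GRUB_CMDLINE_LINUX_DEFAULT:
#   default_hugepagesz=1G hugepagesz=1G hugepages=40
grub2-mkconfig -o /boot/grub2/grub.cfg && reboot
# verify after the reboot
grep HugePages_Total /proc/meminfo
```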

### o3 s390 workers

The s390 workers for openQA are running within podman containers on openqaworker1.
The containers are started using systemd but the unit files are specific to the containers and will end up in a restart loop if this fact is ignored. Whenever the containers are recreated, the systemd files have to be recreated as well.

The containers are started like this (for i=101…104):

```
i=101
podman run -d -h openqaworker1_container --name openqaworker1_container_$i -p $(python3 -c"p=${i}*10+20003;print(f'{p}:{p}')") -e OPENQA_WORKER_INSTANCE=$i -v /opt/s390x_rebel_replacement:/etc/openqa -v /var/lib/openqa/share:/var/lib/openqa/share registry.opensuse.org/devel/openqa/containers15.2/openqa_worker:latest
(cd /etc/systemd/system/; podman generate systemd -f -n openqaworker1_container_$i --restart-policy always)
systemctl daemon-reload
systemctl enable container-openqaworker1_container_$i
```

As an alternative, s390x workers can run on the host "rebel" as well. Be aware that openQA workers accessing the same s390x instances must not run in parallel, so only enable one worker instance per s390x instance at a time (see https://progress.opensuse.org/issues/97658 for details).

### Monitoring

There is an internal munin instance on o3. Anyone wanting to look at the HTML pages can do this:
```
rsync -a o3:/srv/www/htdocs/munin ~/o3-munin/
```
(where "o3" is configured in your ssh config of course)

## Hotfixing

Hotfixes, e.g. patches from os-autoinst pull requests, can be applied to o3 workers like this for a pull request <pr_id>:

```
for i in openqaworker1 openqaworker4 openqaworker7 imagetester rebel; do echo $i && ssh root@$i "(transactional-update run /bin/sh -c \"curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst\" && reboot) || curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst" ; done
```

Hotpatching on all OSD workers works with the same <pr_id> as above with something like

```
sudo salt --no-color --state-output=changes -C 'G@roles:worker' cmd.run "curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst"
```

## Mitigation of boot failure or disk issues

### Worker stuck in recovery

Check disk health and consider manual fixup of mount points, e.g.:

```
test -e /dev/md/openqa || lsblk -n | grep -v nvme | grep "/$" && mdadm --create /dev/md/openqa --level=0 --force --raid-devices=$(ls /dev/nvme?n1 | wc -l) --run /dev/nvme?n1 || mdadm --create /dev/md/openqa --level=0 --force --raid-devices=1 --run /dev/nvme0n1p3
```

## PPC specific configurations

In one case it was necessary to disable snapshots for petitboot with `nvram -p default --update-config "petitboot,snapshots?=false"` to prevent a race condition between dm_raid and btrfs trying to discover bootable devices (https://progress.opensuse.org/issues/68053#note-25). In another case https://bugzilla.opensuse.org/show_bug.cgi?id=1174166 caused the boot entries to not be properly discovered and it was necessary to prevent grub from trying to update the according sections (https://progress.opensuse.org/issues/68053#note-31).

## Moving worker from osd to o3

* Ensure system management, e.g. over IPMI, works. This is untouched by the following steps and can be used during the process for recovery and setup
* Ensure network is configured for DHCP
* Instruct SUSE-IT to change the VLAN for the machine from 2 to 662 (example: https://infra.nue.suse.com/SelfService/Display.html?id=16458)
* Remove from osd:

```
salt-key -y -d openqaworker7.suse.de
```

* Add entry on o3 to `/etc/dnsmasq.d/openqa.conf` with MAC address, e.g.

```
dhcp-host=54:ab:3a:24:34:b8,openqaworker7
```

* Add entry to `/etc/hosts` which dnsmasq picks up to give out a DHCP lease, e.g.

```
192.168.112.12   openqaworker7.openqanet.opensuse.org openqaworker7
```

* Adapt NFS mount point

```
sed -i '/openqa\.suse\.de/d' /etc/fstab && echo 'openqa1-opensuse:/ /var/lib/openqa/share nfs4 ro,fsc 0 0' >> /etc/fstab
```

* Reload dnsmasq with `systemctl restart dnsmasq`
* Restart network on the machine (over IPMI) using `systemctl restart network` and monitor on o3 with `journalctl -f -u dnsmasq` until an address is assigned, e.g.:

```
Feb 29 10:48:30 ariel dnsmasq[28105]: read /etc/hosts - 30 addresses
Feb 29 10:48:54 ariel dnsmasq-dhcp[28105]: DHCPREQUEST(eth1) 10.160.1.101 54:ab:3a:24:34:b8
Feb 29 10:48:54 ariel dnsmasq-dhcp[28105]: DHCPNAK(eth1) 10.160.1.101 54:ab:3a:24:34:b8 wrong network
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPDISCOVER(eth1) 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPOFFER(eth1) 192.168.112.12 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPREQUEST(eth1) 192.168.112.12 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPACK(eth1) 192.168.112.12 54:ab:3a:24:34:b8 openqaworker7
```

* Ensure all mountpoints are up

```
mount -a
```

* Change the root password to the o3 one
* Allow ssh password authentication: `sed -i 's/^PasswordAuthentication/#&/' /etc/ssh/sshd_config && systemctl restart sshd`
* Add your personal ssh key to the machine, e.g. openqaworker7:/root/.ssh/authorized_keys
* Update /etc/openqa/client.conf with the same key as used on other workers for "openqa1-opensuse"
* Update /etc/openqa/workers.ini with a similar config as used on other workers, e.g. based on openqaworker4, example:

```
# diff -Naur /etc/openqa/workers.ini{.osd,}
--- /etc/openqa/workers.ini.osd 2020-02-29 15:21:47.737998821 +0100
+++ /etc/openqa/workers.ini     2020-02-29 15:22:53.334464958 +0100
@@ -1,17 +1,10 @@
-# This file is generated by salt - don't touch
-# Hosted on https://gitlab.suse.de/openqa/salt-pillars-openqa
-# numofworkers: 10
-
 [global]
-HOST=openqa.suse.de
-CACHEDIRECTORY=/var/lib/openqa/cache
-LOG_LEVEL=debug
-WORKER_CLASS=qemu_x86_64,qemu_x86_64_staging,tap,openqaworker7
-WORKER_HOSTNAME=10.160.1.101
-
-[1]
-WORKER_CLASS=qemu_x86_64,qemu_x86_64_staging,tap,qemu_x86_64_ibft,openqaworker7
+HOST=http://openqa1-opensuse
+WORKER_HOSTNAME=192.168.112.12
+CACHEDIRECTORY = /var/lib/openqa/cache
+CACHELIMIT = 50
+WORKER_CLASS = openqaworker7,qemu_x86_64

-[openqa.suse.de]
-TESTPOOLSERVER = rsync://openqa.suse.de/tests
+[http://openqa1-opensuse]
+TESTPOOLSERVER = rsync://openqa1-opensuse/tests
```

* Remove OSD specifics

```
systemctl disable --now auto-update.timer salt-minion telegraf
for i in NPI SUSE_CA telegraf-monitoring; do zypper rr $i; done
zypper -n dup --force-resolution --allow-vendor-change
```

* If the machine is not a transactional server one has the following options: keep it as is and handle it like power8 (also not transactional), enable transactional updates without root being read-only, change to root being read-only on-the-fly, or reinstall as transactional. At least option 2 is suggested, so enable transactional updates:

```
zypper -n in transactional-update
systemctl enable --now transactional-update.timer rebootmgr
```

* Enable apparmor

```
zypper -n in apparmor-utils
systemctl unmask apparmor
systemctl enable --now apparmor
```

* Switch firewall from SuSEfirewall2 to firewalld

```
zypper -n in firewalld && zypper -n rm SuSEfirewall2
systemctl enable --now firewalld
firewall-cmd --zone=trusted --add-interface=br1
firewall-cmd --set-default-zone trusted
firewall-cmd --zone=trusted --add-masquerade
```

* Copy over the special openSUSE UEFI staging images, see #63382
* Check operation with a single openQA worker instance:

```
systemctl enable --now openqa-worker.target openqa-worker@1
```

* Test with an openQA job cloned from a production job, e.g. for openqaworker7

```
openqa-clone-job --within-instance https://openqa.opensuse.org/t${id} WORKER_CLASS=openqaworker7
```

* After the latest openQA job has finished successfully, enable more worker instances

```
systemctl unmask openqa-worker@{2..14} && systemctl enable --now openqa-worker@{2..14}
```

* Monitor if the nightly update works, e.g. look for journal entries like:

```
Mar 01 00:08:26 openqaworker7 transactional-update[10933]: Calling zypper up

Mar 01 00:08:51 openqaworker7 transactional-update[10933]: transactional-update finished - informed rebootmgr
Mar 01 00:08:51 openqaworker7 systemd[1]: Started Update the system.

Mar 01 03:30:00 openqaworker7 rebootmgrd[40760]: rebootmgr: reboot triggered now!

Mar 01 03:36:32 openqaworker7 systemd[1]: Reached target openQA Worker.
```

## Distribution upgrades

**Note:** Performing the upgrade differs slightly depending on the host setup:

* On hosts with a writeable `/` you need to enter a root shell, i.e. `sudo bash`
* Transactional hosts require that you use `transactional-update shell`, thereby creating a snapshot which is applied after a reboot, optionally using `--continue` if you want to make further changes to an existing snapshot
* Depending on available space it might be necessary to clean up space before conducting the upgrade, e.g. use `snapper rm <N..M>` to delete older root btrfs snapshots and clean up unneeded packages, e.g. with https://github.com/okurz/scripts/blob/master/zypper-rm-orphaned and https://github.com/okurz/scripts/blob/master/zypper-rm-unneeded
* Consider using https://github.com/okurz/auto-upgrade/blob/master/auto-upgrade or do it manually (**Tip**: Run this in `screen -d -r || screen` and use e.g. `sudo bash`):

```
new_version=15.3 # Specify the target release

# Change the release via the special $releasever
. /etc/os-release
sed -i -e "s/${VERSION_ID}/\$releasever/g" /etc/zypp/repos.d/*
zypper --releasever=$new_version ref
test -f /etc/openqa/openqa.ini && sudo -u geekotest /opt/openqa-scripts/dump-psql
zypper -n --releasever=$new_version dup --auto-agree-with-licenses --replacefiles --download-in-advance

# Check config files for relevant changes
rpmconfigcheck
for i in $(cat /var/adm/rpmconfigcheck) ; do vimdiff ${i%.rpm*} $i ; done
rm $(cat /var/adm/rpmconfigcheck)

reboot
systemctl --failed
```

* Ensure that the upgrade was really successful, e.g. /etc/os-release should show the new version and the above `zypper dup` command should show no more pending actions
* Crosscheck for any obvious alerts, pipelines failing, user reports, etc.
* Monitor for successful openQA jobs on the host

## Remote management with IPMI

o3 and osd worker machines are controllable over IPMI from within the SUSE network, see [openqa/workerconf.sls](https://gitlab.suse.de/openqa/salt-pillars-openqa/-/blob/master/openqa/workerconf.sls) for the commands.
It is recommended to use [shell aliases](https://gitlab.suse.de/openqa/salt-pillars-openqa#get-ipmi-definition-aliases) for convenience, for example as sketched below.
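
A hedged sketch of typical `ipmitool` calls (hostname and credentials are placeholders; the real values are in workerconf.sls):

```
ipmitool -I lanplus -H openqaworker7-ipmi.suse.de -U <user> -P <password> power status
ipmitool -I lanplus -H openqaworker7-ipmi.suse.de -U <user> -P <password> sol activate
```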

`ipmitool` can sometimes behave unreliably. It seems (to okurz) as if ipmitool version ipmitool-1.8.18+git20200916.1245aaa387dc from openSUSE Tumbleweed or Factory or the "systemsmanagement" OBS repo is more reliable than the version supplied with openSUSE Leap 15.2 (see #80544#note-14) and given a stable internet connection it is certainly possible to have a consistent serial console experience.

To ensure that remotely controlled machines power on automatically after a power loss make sure to set the power restore policy to "previous", especially for new machines. Using https://gitlab.suse.de/openqa/salt-pillars-openqa/#get-ipmi-definition-aliases :

```
IFS=$'\n'; for i in $(sed 's/^alias .*="\(.*\)"/\1/' ~/.openqa_ipmi_aliases); do eval "$i" chassis policy previous; done
```

### Accessing imagetester
Imagetester can't output anything over SOL. Therefore it is necessary to access it over the integrated iKVM console. Unfortunately java-webstart is somewhat broken and requires some extra steps to work:

1. Access the web interface of the BMC at http://imagetester-ipmi.suse.de and login with the IPMI credentials mentioned in the salt pillars repository.
2. Click on the preview image of the "Remote Console Preview" and download the according "launch.jnlp" webstart script.
3. Grab the required dependencies with curl and place them in a local directory:

```
mkdir /tmp/ikvm
curl -k https://imagetester-ipmi.suse.de:443/liblinux_x86_64__V1.0.3.jar.pack.gz > /tmp/ikvm/liblinux_x86_64__V1.0.3.jar.pack.gz
curl -k https://imagetester-ipmi.suse.de:443/iKVM__V1.69.13.0x0.jar.pack.gz > /tmp/ikvm/iKVM__V1.69.13.0x0.jar.pack.gz
```

4. Open the previously downloaded "launch.jnlp" and replace the IP in the first line, e.g. from `<jnlp spec="1.0+" codebase="https://10.160.65.195:443/">` to `<jnlp spec="1.0+" codebase="http://127.0.0.1:8080/">`
5. Launch some kind of web server which can serve the previously downloaded dependencies for javaws (from /tmp/ikvm). In this example we use python: `python3 -m http.server 8080`
6. Now you can finally launch the webstart application from your modified "launch.jnlp" file in a second console: `javaws -nosecurity -jnlp ~/Downloads/launch.jnlp`
  * It will ask you how to run the application. You can run it in a sandbox and everything still works
7. You should see the monitor output of imagetester now. "Virtual Storage" is also working, which allows you to mount an ISO over this remote connection.

*Also check https://progress.opensuse.org/issues/96719#note-27 where this was discovered. If you have questions or remarks you can ping @nicksinger*

### Accessing java based remote control clients

We also managed to start the java based remote control client from pages like
https://openqaworker4-ipmi.suse.de/ with `javaws.itweb jviewer.jnlp` from icedtea-web, which offers virtual media redirection so one can select a local ISO file as installation medium.

## openQA infrastructure needs (o3 + osd)

TL;DR: new OSD ARM workers are needed, redundancy for o3-ppc is missing, and the rest needs replacement as nearly all current hardware is out of vendor-provided maintenance (as of 2021-05); SSD storage for o3 would be good.

2020-03: SUSE IT (EngInfra) provided us more space for O3 but we have only slow rotating-disk storage. Performance could be improved by providing SSD storage.

Most of the time and effort we currently struggle with storage space for OSD (openqa.suse.de) ~~both OSD (openqa.suse.de) as well as O3 (openqa.opensuse.org) (2020-03: Situation on o3 resolved with more storage provided by SUSE IT)~~. Both instances (OSD + O3) are using precious netapp storage but there is currently no better approach to use different, external storage. An increase of the available space would be appreciated, ~~o3 being more important right now than osd,~~ see https://progress.opensuse.org/issues/57494 for details. Graphs like
https://stats.openqa-monitor.qa.suse.de/d/nRDab3Jiz/openqa-jobs-test?orgId=1&from=1578343509900&to=1578653794173&fullscreen&panelId=12 show how usual test backlogs are worked on within OSD by architecture. It can be seen that both the ppc64le and aarch64 backlogs are reduced fast so we do not need more ppc64le or aarch64 machines. However, we have a stability problem with all three aarch64 workers. Potentially new machine(s) could help, see https://progress.opensuse.org/issues/41882 for details.

With the number of workers and parallel processed tests as well as the increased number of products tested on OSD and users using the system, the workload on OSD constantly increases. CPU load alerts have been seen recently in #96713 and the higher load is visible in https://monitor.qa.suse.de/d/WebuiDb/webui-summary?viewPanel=25 . From time to time we should increase the number of CPU cores on the OSD VM due to the higher usage.

## Setup guide for new machines

* Change IPMI/BMC passwords to use our common passwords instead of the default IPMI ones
* OSD: Make sure to set /etc/salt/minion_id to the FQDN (see #90875#note-2 for reference)
* OSD: Add to salt using https://gitlab.suse.de/openqa/salt-states-openqa
* O3: Install with the transactional-update role, then

```
echo "requires:openQA-worker" > /etc/zypp/systemCheck.d/openqa.check
sed -i 's@/ btrfs ro@/ btrfs rw@' /etc/fstab
mount -o rw,remount /
btrfs property set -ts / ro false
zypper -n in openQA-continuous-update
systemctl enable --now openqa-continuous-update.timer
```

## Take machines out of salt-controlled production

E.g. for investigation or development or manual maintenance work:

```
ssh osd "sudo salt-key -y -d $hostname"
ssh $hostname "sudo systemctl disable --now telegraf $(systemctl list-units | grep openqa-worker-auto-restart | cut -d "." -f 1 | xargs)"
```

Check out [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples) for systemd commands to start and stop workers.

## Bring back machines into salt-controlled production

```
ssh osd "sudo salt-key -a $hostname && sudo salt --state-output=changes $hostname state.apply"
```

Depending on your actions further manual cleanup might be necessary, e.g. `ssh $hostname "sudo systemctl unmask telegraf salt-minion"`

## Use a production host for testing backend changes locally, e.g. svirt, powerVM, IPMI bare-metal, s390x, etc.

0. Find out which type of worker slot you need for the specific job you want to run, e.g. by checking which worker slots were used for previous runs of the job on OSD or by looking for the job's worker class in the [workers table](https://openqa.suse.de/admin/workers).
1. Configure an additional worker slot in your local `workers.ini` using worker settings from the corresponding production worker, as sketched after this list. The production worker config can be found in [workerconf.sls](https://gitlab.suse.de/openqa/salt-pillars-openqa/-/blob/master/openqa/workerconf.sls) or on the hosts themselves.
2. Take out the corresponding worker slot from production using the systemd commands mentioned in [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples). This is important to prevent multiple jobs from using the same svirt host.
3. Start the locally configured worker slot and clone/run some jobs.
4. When you're done, bring back the production worker slots using the systemd commands mentioned in [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples).
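
A minimal sketch of such a local worker slot (all values are placeholders to be replaced with the real settings from workerconf.sls):

```
# append a production-like worker slot to your local /etc/openqa/workers.ini
cat >> /etc/openqa/workers.ini <<'EOF'
[1]
WORKER_CLASS = svirt-xen-hvm
VIRSH_HOSTNAME = <production-svirt-host>.suse.de
VIRSH_PASSWORD = <secret>
EOF
systemctl restart openqa-worker@1
```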

### Alternatives
It is also possible to test svirt backend changes fully locally, at least when running tests via KVM is sufficient. Check out [os-autoinst's documentation](https://github.com/os-autoinst/os-autoinst/blob/master/doc/backends.md#svirt=) for further details.
835 122 okurz
## Backup
836
837 134 okurz
Both openqa.opensuse.org and openqa.suse.de run on virtual machine clusters that provide redundancy and differential backup using snapshotting of the involved storage. SUSE-IT currently provides backups going back up to 3 days with two daily backups conducted at 23:10Z and 11:00Z. With this it is possible in cases of catastrophic data loss to recover (raise ticket over https://sd.suse.com in that case). Additionally automatic backup for the o3 webui host introduced with https://gitlab.suse.de/okurz/backup-server-salt/tree/master/rsnapshot covering so far /etc and the SQL database dumps. Fixed assets and testresults are backed up on storage.qa.suse.de (see https://gitlab.suse.de/openqa/salt-states-openqa/-/merge_requests/612)

### openQA database backups

Database backups of o3+osd are available on backup.qa.suse.de, accessible over ssh with the same credentials as for the OSD infrastructure.
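For example, to list the available OSD dumps (sketch; the path matches the one used in the bootstrapping section below):

```
ssh backup.qa.suse.de "ls /home/rsnapshot/alpha.0/openqa.suse.de/var/lib/openqa/SQL-DUMPS/"
```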

### Fallback deployment on AWS

To get an instance running from a backup in case of a disaster, one can be created on AWS with this configuration:

#### Launch instance

##### Web Interface, from scratch (only if necessary, otherwise just use the template below)

- Ensure your region is **Frankfurt, Germany**
- Pick a **t3.large** with `openSUSE Leap` on AWS Marketplace
- Add two disks
    - 10 GiB for the root filesystem should be sufficient (can be easily extended later if needed)
    - The OSD database alone needs > 30 GiB, and results plus assets will also need a lot (e.g. > 4 GiB for a TW snapshot ISO), so take at least 100 GiB for the second drive
- The security group needs to include ssh and http
- Add `openqa_created_by`, `openqa_ttl` and `team:qa-tools` tags
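The tags can presumably also be added after the fact from the command line; a sketch, assuming `<instance-id>` is the id of the created instance and that `team:qa-tools` translates to key `team`, value `qa-tools`:

```
aws ec2 create-tags --resources <instance-id> --tags Key=openqa_created_by,Value=$USER Key=openqa_ttl,Value=<ttl> Key=team,Value=qa-tools
```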

##### Launch from a template

Note: When you modify the template (creating a new version), be sure to set the new version as the default.

- Go to the [openQA-webUI-openSUSE-Leap](https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#LaunchTemplateDetails:launchTemplateId=lt-002dfbcbd2f818e4c) Template
- Select "Actions - Launch instance from template"
- Choose your key pair
- Click "Launch instance"

###### Command line

For configuring the aws cli, see [below](https://progress.opensuse.org/projects/openqav3/wiki/Wiki#Configure-aws-cli).

See the [aws run-instances docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html) for details.

    aws ec2 run-instances --launch-template LaunchTemplateId=lt-002dfbcbd2f818e4c --key-name <your-keyname>
    # or
    aws ec2 run-instances --launch-template LaunchTemplateName=openQA-webUI-openSUSE-Leap --key-name <your-keyname>

For this you have to create a key pair first, if you haven't done so already.
Save the result and look for the `InstanceId`.
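To retrieve the public IP of the instance later, something like this should work (sketch, `<instance-id>` being the `InstanceId` from the output):

```
aws ec2 describe-instances --instance-ids <instance-id> --query "Reservations[0].Instances[0].PublicIpAddress" --output text
```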

#### Transfer keys

Since an instance is always created with a single key, public keys of all users need to be deployed by whoever owns that key.

**Note**: `osd2` refers to the instance created above. Replace with the instance IP or add an alias to your SSH config.

    ssh openqa.suse.de "sudo su -c 'cat /home/*/.ssh/authorized_keys'" | ssh ec2-user@osd2 "cat - >> ~/.ssh/authorized_keys"

#### Bootstrapping

```
ssh osd2
sudo su
parted --script /dev/nvme1n1 mklabel gpt && parted --script /dev/nvme1n1 mkpart ext4 4096s 100%
mkfs.ext4 /dev/nvme1n1p1
vim /etc/fstab # add the /space mount, e.g. "/dev/nvme1n1p1 /space ext4 defaults 0 0"
mkdir /space && mount /dev/nvme1n1p1 /space
mkdir -p /space/pgsql/data
mkdir -p /var/lib/pgsql
ln -s /space/pgsql/data /var/lib/pgsql/data
zypper in postgresql-server # needed so that the postgres user and group exist
chown -R postgres:postgres /space/pgsql # without the correct owner and group postgresql.service fails
mkdir -p /space/openqa
mkdir -p /var/lib/openqa
mount /space/openqa /var/lib/openqa -o bind # openQA also requires a lot of space
curl -s https://raw.githubusercontent.com/os-autoinst/openQA/master/script/openqa-bootstrap | bash -x

ssh -A backup.qa.suse.de
rsync --progress /home/rsnapshot/alpha.0/openqa.suse.de/var/lib/openqa/SQL-DUMPS/2022-02-08.dump ec2-user@osd2:/tmp

ssh osd2
sudo -u postgres createdb -O geekotest openqa-osd # create a pristine db for the OSD import (to avoid conflicts with existing data)
sudo -u geekotest pg_restore -d openqa-osd /tmp/2022-02-08.dump # import data, will take a while (22m is a realistic time)
vim /etc/openqa/openqa.ini # change auth from Fake to OpenID
vim /etc/openqa/database.ini # change database to openqa-osd
vim /etc/openqa/client.conf # change key and secret to the correct ones
systemctl restart openqa-webui
```
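A quick smoke test after the restart (sketch):

```
systemctl --no-pager status openqa-webui
curl -sSf http://localhost | grep -io openqa | head -n 1
```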

##### Configure aws cli

You can use the command

    aws configure

but it doesn't actually help you with the possible values, so you can just create the files yourself like this:

    % cat ~/.aws/config
    [default]
    region = eu-central-1
    output = json
    % cat ~/.aws/credentials
    [default]
    aws_access_key_id = ABCDE
    aws_secret_access_key = FGHIJ
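To verify that the credentials actually work, the following should return your account and user id:

```
aws sts get-caller-identity
```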

## Best practices for infrastructure work

* Same as in the OSD deployment, we should look for failed Grafana alerts if users report something suspicious
* Collect all the information between "last good" and "first bad" and then also find the git diff in openqa/salt-states-openqa
* Apply a proper "scientific method" with written-down hypotheses, experiments and conclusions in tickets, following https://progress.opensuse.org/projects/openqav3/wiki#Further-decision-steps-working-on-test-issues
* Use salt states also to describe what should *not* be there
* Try out older btrfs snapshots on systems for cross-checking and boot with salt disabled by appending `systemd.mask=salt-minion.service` to the kernel cmdline
* The team should conduct a work backlog check on a daily basis, e.g. look for urgent tickets related to infrastructure problems
* For hardware replacement, create an EngInfra ticket for coordination, order the replacement on private expenses and get reimbursed using https://intra.suse.net/company/company-news/department/finance/claim-expenses/claim-expenses/ (or have the order placed with the help of line managers), let the components be delivered to the according place, e.g. the SUSE Nuremberg datacenter, and inform EngInfra in the ticket to have them conduct the physical component replacement
* Prefer `reload` over `restart` where available, e.g. `systemctl reload postgres`; in general `systemctl cat postgres` will show the available commands for any service (see also the sketch after this list)
* Test the reboot stability of machines with commands like in https://progress.opensuse.org/issues/78010#note-31, e.g.

```
hosts="host1 host2" # assumption: replace with the machines under test
for run in {01..30}; do for host in $hosts; do echo -n "run: $run, $host: ping .. " && timeout -k 5 600 sh -c "until ping -c30 $host >/dev/null; do :; done" && echo -n "ok, ssh .. " && timeout -k 5 600 sh -c "until nc -z -w 1 $host 22; do :; done" && echo -n "ok, uptime/reboot: " && ssh $host "uptime && sudo reboot" && sleep 120 || break; done || break; done
```
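Regarding the `reload` over `restart` point above: whether a unit actually supports reloading can also be queried directly; a small sketch:

```
systemctl show -p CanReload postgres # CanReload=yes means `systemctl reload` is available
```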