# Introduction

This is the organisation wiki for the **openQA Project**.
The source code is hosted in the [os-autoinst github project](http://github.com/os-autoinst/), especially [openQA itself](http://github.com/os-autoinst/openQA) and the main backend [os-autoinst](http://github.com/os-autoinst/os-autoinst).

If you are interested in the tests for SUSE/openSUSE products, take a look into the [openqatests](https://progress.opensuse.org/projects/openqatests) project.

If you are looking for entry level issues to contribute to, please look into the section [[Wiki#Where-to-contribute|Where to contribute]].

{{toc}}

# Organisational

## ticket workflow

The following ticket statuses are in use and their meaning is explained:

* *New*: No one has worked on the ticket (e.g. the ticket has not been properly refined) or no one is feeling responsible for the work on this ticket.
* *Workable*: The ticket has been refined and is ready to be picked.
* *In Progress*: The assignee is actively working on the ticket.
* *Resolved*: The complete work on this issue is done and the issue is considered fixed as observed (should be updated together with a link to a merged pull request or a link to a production openQA instance showing the effect)
* *Feedback*: Further work on the ticket needs clarification of open points within the ticket or is awaiting feedback from others or other systems (e.g. automated tests) to proceed. Sometimes also used to ask the assignee about progress on inactivity.
* *Blocked*: Further work on the ticket is blocked by some external dependency (e.g. bugs, not yet implemented features). There should be a link to another ticket, bug, trello card, etc. where it can be seen what the ticket is blocked by.
* *Rejected*: The issue is considered invalid, should not be done, or is out of scope.
* *Closed*: As this can be set only by administrators it is suggested not to use this status.

It is good practice to update the status together with a comment about it, e.g. a link to a pull request or a reason for rejection.

## ticket categories

* *Regressions/Crashes*: Regressions, crashes, error messages
* *Feature requests*: Ideas or wishes for extension, enhancement, improvement
* *Organisational*: Organisational tasks within the project(s), not directly code related
* *Support*: Support of users, usage problems, questions

Please avoid the use of other, deprecated categories.

Suggestion by *okurz*: I recommend avoiding the word "bug" in our categories because of the usual "is it a bug or a feature" struggle. Instead I suggest to strictly define "Regressions & Crashes" to clearly separate "it used to work before" from "this was never part of the requirements" for features. Any ticket of this category also means that our project processes missed something, so we have points for improvement, e.g. extend the list of things to look out for in code review.

## Epics and Sagas

[epic]s and [saga]s belong to the "coordination" tracker; project contributors are not required to follow this convention, but the tracker may be changed automagically in the future: http://mailman.suse.de/mailman/private/qa-sle/2020-October/002722.html

## ticket templates

You can use these templates to fill in tickets and further improve them with more detail over time. Copy the code block, paste it into a new issue, replace every block marked with "<…>" with your content or delete it if not appropriate.

### Defects

Subject: `<Short description, example: "openQA dies when triggering any Windows ME tests">`

```
## Observation
<description of what can be observed and what the symptoms are, provide links to failing test results and/or put short blocks from the log output here to visualize what is happening>

## Steps to reproduce
* <do this>
* <do that>
* <observe result>

## Impact
<clearly state the impact of the issue to make sure appropriate prioritization is applied and rollbacks/downgrades can be applied>

## Problem
<problem investigation, can also include different hypotheses, should be labeled as "H1" for the first hypothesis, etc.>

## Suggestion
* <what to do as a first step>
* <Fix the actual problem>
* <Consider fixing the design>
* <Consider fixing the team's process>
* <Consider exploring further>

## Workaround
<example: retrigger job>
```

example ticket: #10526

For tickets referencing "auto_review" see
https://github.com/os-autoinst/scripts/blob/master/README.md#auto-review---automatically-detect-known-issues-in-openqa-jobs-label-openqa-jobs-with-ticket-references-and-optionally-retrigger
for a suggested template snippet.

### Feature requests

Subject: `<Short description, example: "grub3 btrfs support" (feature)>`

```
## User story
<As a <role>, I want to <do an action>, to <achieve which goal> >

## Acceptance criteria
* <**AC1:** the first acceptance criterion that needs to be fulfilled to do this, example: Clicking "restart button" causes restart of the job>
* <**AC2:** also think about the "not-actions", example: other jobs are not affected>

## Tasks
* <first task to do as an easy starting point>
* <what to do next, all tasks optionally with an effort estimation in hours, e.g. "(0.5-2h)">
* <optional: mark "optional" tasks>

## Further details
<everything that does not fit into the above sections>
```

example ticket: #10212

## Further decision steps working on test issues

Test issues could come from one of the following sources. Feel free to use the following template in tickets as well:

```
## Problem
* **H1** The product has changed
 * **H1.1** product changed slightly but in an acceptable way without the need for communication with DEV+RM --> adapt test
 * **H1.2** product changed slightly but in an acceptable way found after feedback from RM --> adapt test
 * **H1.3** product changed significantly --> after approval by RM adapt test

* **H2** Fails because of changes in test setup
 * **H2.1** Our test hardware equipment behaves differently
 * **H2.2** The network behaves differently

* **H3** Fails because of changes in test infrastructure software, e.g. os-autoinst, openQA
* **H4** Fails because of changes in test management configuration, e.g. openQA database settings
* **H5** Fails because of changes in the test software itself (the test plan in source code as well as needles)
* **H6** Sporadic issue, i.e. the root problem is already hidden in the system for a long time but does not show symptoms every time
```

This is following the [scientific method](https://en.wikipedia.org/wiki/Scientific_method). It is suggested to use the characters *H* (hypothesis), *E* (experiment), *O* (observation), e.g. like this:

```
* **H3** Fails because of changes in test infrastructure software, e.g. os-autoinst, openQA
  * **H3.1** **REJECTED** Fails because of changes in openQA itself
    * **E3.1-1** (First experiment for hypothesis 3.1) test on an openQA server with the openQA version of "last good"
      * **O3.1-1-1** (First observation for first experiment for hypothesis 3.1) the test failed in the same way, reject *H3.1*
```

## Additional details needed for non-qemu issues

As the automatic integration tests of os-autoinst and openQA are based on qemu virtualization, for any non-qemu related requests please provide detailed manual reproduction steps, otherwise it is unlikely that any issue or feature request can be implemented.

## pull request handling on github

As a reviewer of pull requests on github for all related repositories, e.g. https://github.com/os-autoinst/os-autoinst-distri-opensuse/pulls, apply labels in case PRs are open for a longer time and cannot be merged, so that we keep our backlog clean and know why PRs are blocked.

* **notready**: Triaged as not ready yet for merging, no (immediate) reaction by the reviewee, e.g. when tests are missing, other scenarios break, only tested for one of SLE/TW
* **wip**: Marked by the reviewee themselves as "[WIP]" or "[DO-NOT-MERGE]" or similar
* **question**: Questions to the reviewee, not answered yet

## Where to contribute?

If you want to help openQA development, you can take a look into the existing [issues](https://progress.opensuse.org/projects/openqav3/issues).
You can start with
* [entrance level issues](https://progress.opensuse.org/projects/openqav3/search?q=entrance+level+issue&open_issues=1)
* issues tagged as [easy](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=issue_tags&op%5Bissue_tags%5D=%3D&v%5Bissue_tags%5D%5B%5D=easy&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=is_private&c%5B%5D=due_date&c%5B%5D=relations&group_by=&t%5B%5D=)
* issues tagged as [beginner](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=issue_tags&op%5Bissue_tags%5D=%3D&v%5Bissue_tags%5D%5B%5D=beginner&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=is_private&c%5B%5D=due_date&c%5B%5D=relations&group_by=&t%5B%5D=) - not necessarily "easy" but more suitable for someone coming to a project with little or no domain specific knowledge
* ideas from #65271

There are also some "always valid" tasks to be working on:

* *improve test coverage*:
 * *user story*: As an openQA backend as well as test developer, I want better test coverage of our projects to reduce technical debt
 * *acceptance criteria*: test coverage is significantly higher than before
 * *suggestions*: check current coverage in each individual project (os-autoinst/openQA/os-autoinst-distri-opensuse) and add tests as necessary

# Use cases

The following use cases 1-6 have been defined within a SUSE workshop (others have been defined later) to clarify how different actors work with openQA. Some of them are covered already within openQA quite well, some others are stated as motivation for further feature development.

## Use case 1
**User:** QA-Project Management
**primary actor:** QA Project Manager, QA Team Leads
**stakeholder:** Directors, VP
**trigger:** product milestones, providing a daily status
**user story:** „As a QA project manager I want to check on a daily basis the „openQA Dashboard“ to get a summary/an overall status of the „reviewers results“ in order to take the right actions and prioritize tasks in QA accordingly.“

## Use case 2
**User:** openQA-Admin
**primary actor:** Backend-Team
**stakeholder:** QA-Prjmgr, QA-TL, openQA Tech-Lead
**trigger:** Bugs, features, new testcases
**user story:** „As an openQA admin I constantly check in the web-UI the system health and I manage its configuration to ensure smooth operation of the tool.“

## Use case 3
**User:** QA-Reviewer
**primary actor:** QA-Team
**stakeholder:** QA-Prjmgr, Release-Mgmt, openQA-Admin
**trigger:** every new build
**user story:** „As an openQA-Reviewer at any point in time I review on the webpage of openQA the overall status of a build in order to track and find bugs, because I want to find bugs as early as possible and report them.“

## Use case 4
**User:** Testcase-Contributor
**primary actor:** All development teams, Maintenance QA
**stakeholder:** QA-Reviewer, openQA-Admin, openQA Tech-Lead
**trigger:** features, new functionality, bugs, new product/package
**user story:** „As a developer, when there are new features, new functionality, bugs, or a new product/package in git, I contribute my testcases because I want to ensure good quality submissions and smooth product integration.“

## Use case 5
**User:** Release-Mgmt
**primary actor:** Release Manager
**stakeholder:** Directors, VP, PM, TAMs, Partners
**trigger:** Milestones
**user story:** „As a Release-Manager on a daily basis I check on a dashboard for the product health/build status in order to act early in case of failures and have concrete and current reports.“

## Use case 6
**User:** Staging-Admin
**primary actor:** Staging-Manager for the products
**stakeholder:** Release-Mgmt, Build-Team
**trigger:** every single submission to projects
**user story:** „As a Staging-Manager I review the build status of packages with every staged submission to the „staging projects“ in the „staging dashboard“ and the test-status of the pre-integrated fixes, because I want to identify major breakage before integration to the products and provide fast feedback back to the development.“

## Use case 7
**User:** Bug investigator
**primary actor:** Any bug assignee for openQA observed bugs
**stakeholder:** Developer
**trigger:** bugs
**user story:** „As a developer that has been assigned a bug which has been observed in openQA I can review referenced tests, find a newer and the most recent job in the same scenario, understand what changed since the last successful job and what other jobs show the same symptoms, to investigate the root cause fast and use openQA for verification of a bug fix.“

# Thoughts about categorizing test results, issues, states within openQA
by okurz

When reviewing test results it is important to distinguish between different causes of "failed tests".

## Nomenclature

### Test status categories
A common definition about the status of a test regarding the product it tests: "false|true positive|negative" as described on https://en.wikipedia.org/wiki/False_positives_and_false_negatives. "positive|negative" describes the outcome of a test ("positive": test signals presence of issue; "negative": no signal) whereas "false|true" describes the conclusion of the test regarding the presence of issues in the SUT or product in our case ("true": correct reporting; "false": incorrect reporting), e.g. "true negative": test successful, no issues detected and there are no issues, the product is working as expected by the customer. Another example: Think of testing as of a fire alarm. An alarm (event detector) should only go off (be "positive") *if* there is a fire (event to detect) --> "true positive", whereas *if* there is *no* fire there should be *no* alarm --> "true negative".
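
In table form (fire alarm analogy: "issue present" = fire, "positive" = alarm):

| | issue present | no issue present |
|---|---|---|
| test "positive" (signals issue) | true positive | false positive |
| test "negative" (no signal) | false negative | true negative |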

Another common but potentially ambiguous categorization:

* *broken*: the test is not behaving as expected (ambiguity: "as expected" by whom?) --> commonly a "false positive", can also be a "false negative" but hard to detect
* *failing*: the test is behaving as expected, but the test output is a fail --> "true positive"
* *working*: the test is behaving as expected (with no comment regarding the result, though some might ambiguously imply 'result is negative')
* *passing*: the test is behaving as expected, but the result is a success --> "true negative"

If in doubt declare a test as "broken". We should review the test and examine if it is behaving as expected.

Be careful about "positive/negative" as some might also use "positive" to incorrectly denote a passing test (and "negative" for a failing test) as an indicator of a "working product", not an indicator of "issue present". If you argue what is "used in common speech", think about how "false positive" is used as in "false alarm" --> "positive" == "alarm raised", also see https://narainko.wordpress.com/2012/08/26/understanding-false-positive-and-false-negative/

### Prioritization of work regarding categories
In this sense development+QA want to accomplish a "true negative" state whenever possible (no issues present, therefore none detected). As QA and test developers we want to prevent "false positives" ("false alarms" declaring a product as broken when it is not but the test failed for other reasons), also known as "type I errors", and "false negatives" (a product issue is not caught by tests and might "slip through" QA and at worst is only found by an outside customer), also known as "type II errors". Also see https://en.wikipedia.org/wiki/Type_I_and_type_II_errors. In the context of openQA and system testing paired with screen matching a "false positive" is much more likely as the tests are very susceptible to subtle variations and changes even if they should be accepted. So when in doubt, create an issue in progress, look at it again, and find that it was a false alarm, rather than wasting more people's time with INVALID bug reports by believing the product to be broken when it isn't. To quote Richard Brown: "I […] believe this is the route to ongoing improvement - if we have tests which produce such false alarms, then that is a clear indicator that the test needs to be reworked to be less ambiguous, and that IS our job as openQA developers to deal with".

## Further categorization of statuses, issues and such in testing, especially automatic tests
By okurz

This categorization scheme is meant to help in communication in either written or spoken discussions, being simple, concise and easy to remember while unambiguous in every case.
While used for naming it should also be used as a decision tree and can be followed from the top, following each branch.

### Categorization scheme

To keep it simple I will try to go in steps of deciding if a potential issue is of one of two categories in every step (maybe three) and go further down from there. The degree of further detailing is not limited, i.e. it can be further extended. The naming scheme should follow an arabic number counting scheme (for two levels just 1 and 2), with a digit added from the right for every additional level of decision step and detail, without any separation between the digits, e.g. "1111" for the first type in every level of detail up to level four. Also, I am thinking of giving the fully written form a phonetic alphabet name to unambiguously identify each type on every level, as long as no more individual levels are necessary. The alphabet should be reserved for higher levels and higher priority types.
Every leaf of the tree must have an action assigned to it.

1 **failed** (ZULU)
11 new (passed->failed) (YANKEE)
111 product issue ("true positive") (WHISKEY)
1111 unfiled issue (SIERRA)
11111 hard issue (openqa *fail*) (KILO)
111121 critical / potential ship stopper (INDIA) --> immediately file bug report with "ship_stopper?" flag; opt. inform RM directly
111122 non-critical hard issue (HOTEL) --> file bug report
11112 soft issue (openqa *softfail* on job level, not on module level) (JULIETT) --> file bug report on failing test module
1112 bugzilla bug exists (ROMEO)
11121 bug was known to openqa / openqa developer --> cross-reference (bug->test, test->bug) AND raise review process issue, improve openqa process
11122 bug was filed by other sources (e.g. beta-tester) --> cross-reference (bug->test, test->bug)
112 test issue ("false positive") (VICTOR)
1121 progress issue exists (QUEBEC) --> cross-reference (issue->test, test->issue)
1122 unfiled test issue (PAPA)
11221 easy to do w/o progress issue
112211 need needles update --> re-needle if sure, TODO how to notify?
112212 pot. flaky, timeout
1122121 retrigger yields PASS --> comment in progress about flaky issue fixed
1122122 reproducible on retrigger --> file progress issue
11222 needs progress issue filed --> file progress issue
12 existing / still failing (failed->failed) (XRAY)
121 product issue (UNIFORM)
1211 unfiled issue (OSCAR) --> file bug report AND raise review process issue (why has it not been found and filed?)
1212 bugzilla bug exists (NOVEMBER) --> ensure cross-reference, also see rules for 1112 ROMEO
122 test issue (TANGO)
1221 progress issue exists (MIKE) --> monitor, if persisting reprioritize test development work
1222 needs progress issue filed (LIMA) --> file progress issue AND raise review process issue, see 1211 OSCAR
2 **passed** (ALFA)
21 stable (passed->passed) (BRAVO)
211 existing "true negative" (DELTA) --> monitor, maybe can be made stricter
212 existing "false negative" (ECHO) --> needs test improvement
22 fixed (failed->passed) (CHARLIE)
222 fixed "true negative" (FOXTROTT) --> TODO split monitor, see 211 DELTA
2221 was test issue --> close progress issue
2222 was product issue
22221 no bug report exists --> raise review process issue (why was it not filed?)
22222 bug report exists
222221 was marked as RESOLVED FIXED
221 fixed but "false negative" (GOLF) --> potentially revert test fix, also see 212 ECHO

Priority from high to low: INDIA->OSCAR->HOTEL->JULIETT->…

# Important ticket queries

* All auto-review tickets: https://progress.opensuse.org/projects/openqav3/issues?query_id=697 , see https://github.com/os-autoinst/scripts/blob/master/README.md#auto-review---automatically-detect-known-issues-in-openqa-jobs-label-openqa-jobs-with-ticket-references-and-optionally-retrigger for further information regarding auto-review
* All auto-review+force-result tickets: https://progress.opensuse.org/projects/openqav3/issues?query_id=700

# Proposals for uses of labels
With [Show bug or label icon on overview if labeled (gh#550)](https://github.com/os-autoinst/openQA/pull/550) it is possible to add custom labels just by writing them. Nevertheless, a convention should be found for a common benefit. <del>Beware that labels are also automatically carried over with [Carry over labels from previous jobs in same scenario if still failing (gh#564)](https://github.com/os-autoinst/openQA/pull/564) which might make consistent test failures less visible when reviewers only look for test results without labels or bugrefs.</del> Labels are not automatically carried over anymore ([gh#1071](https://github.com/os-autoinst/openQA/pull/1071)).

List of proposed labels with their meaning and where they could be applied:

* ***`fixed_<build_ref>`***: If a test failure is already fixed in a more recent build and no bug reference is known, use this label together with a reference to a more recent passed test run in the same scenario. Useful for reviewing older builds. Example (https://openqa.suse.de/tests/382518#comments):

```
label:fixed_Build1501

t#382919
```

* ***`needles_added`***: In case needles were missing for test changes or expected product changes caused needle matching to fail, use this label with a reference to the test PR or a proper reasoning why the needles were missing and how you added them. Example (https://openqa.suse.de/tests/388521#comments):

```
label:needles_added

needles for https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/1353 were missing, added by jpupava in the meantime.
```

# s390x Test Organisation

See the following picture for a graphical overview of the current s390x test infrastructure at SUSE:

![SUSE s390x test infrastructure](qa_sle_openqa_s390x_test_infrastructure.jpg)

## Upgrades

### on z/VM
#### Special requirements

Due to the lack of proper support for hdd-images on z/VM we need to work around this by having a dedicated worker_class, i.e. a dedicated host, where we run two jobs chained with START_AFTER_TEST:
the first one installs the base system we want to have upgraded and the second one does the actual upgrade (e.g. migration_offline_sle12sp2_zVM_preparation and migration_offline_sle12sp2_zVM).

Since we encountered issues with other preparation jobs randomly being started in between, we need to ensure that we have one complete chain for all migration jobs running on one worker, that means for example:

1. migration_offline_sle12sp2_zVM_preparation
1. migration_offline_sle12sp2_zVM (START_AFTER_TEST=#1)
1. migration_offline_sle12sp2_allpatterns_zVM_preparation (START_AFTER_TEST=#2)
1. migration_offline_sle12sp2_allpatterns_zVM
1. ...

This scheme ensures that all actual upgrade jobs find the prepared system and are able to upgrade it.
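
As an illustration, such a chain can be expressed through the test suite settings; a minimal sketch assuming a dedicated worker class (the class name below is an example, not the actual configuration):

```
# test suite migration_offline_sle12sp2_zVM_preparation
WORKER_CLASS=zvm_migration_host1

# test suite migration_offline_sle12sp2_zVM
# chain onto the preparation job so both run on the same dedicated worker
START_AFTER_TEST=migration_offline_sle12sp2_zVM_preparation
WORKER_CLASS=zvm_migration_host1
```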

### on z/KVM

No special requirements anymore, see details in #18016

## Automated z/VM LPAR installation with openQA using qnipl

There is an ongoing effort to automate the LPAR creation and installation on z/VM. A first idea resulted in the creation of [qnipl](https://github.com/openSUSE/dracut-qnipl). `qnipl` enables one to boot a very slim initramfs from a shared medium (e.g. shared SCSI disks) and supply it with the needed parameters to chainload a "normal SLES installation" using kexec.
This method is required for z/VM because snipl (simple network initial program loader) can only load/boot LPARs from specific disks, not network resources.

### Setup

1. Get a shared disk for all your LPARs
  * Normally this can easily be done by infra/gschlotter
  * Disks need to be connected to all guests which should be able to network-boot
1. Boot a fully installed SLES on one of the LPARs to start preparing the shared disk
1. Put a DOS partition table on the disk and create one single, large partition on there
1. Put a filesystem on it. Our first test was with ext2 and it worked flawlessly in our attempts
1. Install `zipl` (the s390x bootloader from IBM) on this partition
  * A simple and sufficient config can be found in [poo#33682](https://progress.opensuse.org/issues/33682)
1. Clone [`qnipl`](https://github.com/nicksinger/dracut-qnipl) to your dracut modules (e.g. /usr/lib/dracut/modules.d/95qnipl)
1. Include the module named `qnipl` in your dracut modules for initramfs generation
  * e.g. in /etc/dracut.conf.d/99-qnipl.conf add: `add_dracutmodules+=qnipl`
1. Generate your initramfs (e.g. `dracut -f -a "url-lib qnipl" --no-hostonly-cmdline /tmp/custom_initramfs`)
  * Put the initramfs next to your kernel binary on the partition you want to prepare
1. From now on you can use `snipl` to boot any LPAR connected with this shared disk from network
  * example: `snipl -f ./snipl.conf -s P0069A27-LP3 -A fa00 --wwpn_scsiload 500507630713d3b3 --lun_scsiload 4001401100000000 --ossparms_scsiload "install=http://openqa.suse.de/assets/repo/SLE-15-Installer-DVD-s390x-Build533.2-Media1 hostip=10.161.159.3/20 gateway=10.161.159.254 Nameserver=10.160.0.1 Domain=suse.de ssh=1 regurl=http://all-533.2.proxy.scc.suse.de"`
  * `--ossparms_scsiload` is then evaluated and used by `qnipl` to kexec into the installer with the parameters needed by the installer

### Further details

Further details can also be found in the [github repo](https://github.com/openSUSE/dracut-qnipl/blob/master/README.md). Pull requests, questions and ideas are always welcome!

# Infrastructure setup for o3 (openqa.opensuse.org) and osd (openqa.suse.de)

Both o3 and osd are hosted in SUSE data centers, mostly Nuremberg, Germany, and Prague, Czech Republic.

Interesting monitoring link referring to both (SUSE internal):
* https://bs-monitor.nue.suse.com:3000/d/paTR0FXnz/temperature-and-humidity-in-nuremberg-server-rooms?orgId=1&viewPanel=4&from=now-24h&to=now for climate control in the SUSE Nuremberg Maxtorhof hosting, also http://srv2mgmt:3000/ with "monitor:monitor"

## o3 (openqa.opensuse.org)

o3 consists of a VM running the web UI and physical worker machines. The VM for o3 has NetApp-backed storage on rotating disks, so it is less performant than SSD but cheaper. So eventually we might have the possibility to use SSD based storage. Currently there are four virtual storage devices provided to o3, totalling 10 TB.

The o3 infrastructure is described in detail on https://github.com/os-autoinst/sync-and-trigger/blob/main/openqa-opensuse.md

### Accessing the o3 infrastructure

The o3 webui host as well as the workers within the o3 infrastructure can be accessed over ssh by using `ssh -p 2214 gate.opensuse.org` (and `ssh -p 2213 gate.opensuse.org` for old-ariel). Ask one of the existing admins within https://app.element.io/#/room/#openqa:opensuse.org or irc://irc.libera.chat/opensuse-factory (so that we know you can be reached over those channels when people have questions about what you did with your ssh access) to put your ssh key on the o3 webui host to be able to login.

To give access to a new user, an existing admin can do the following:

```
sudo useradd -G users,trusted --create-home $user
echo "$ssh_key_from_user" | sudo tee -a /home/$user/.ssh/authorized_keys
```

#### SSH configuration

To easily access all hosts behind the jump host you can use the following config for your ssh client (`~/.ssh/config`):

```
Host ariel
  HostName gate.opensuse.org
  Port 2214

# Note that %h as understood by -W needs the real host, aliases won't work:
# kex_exchange_identification: Connection closed by remote host
# Connection closed by UNKNOWN port 65535
Host *.opensuse.org
  ProxyCommand ssh -q -A -x ariel -W %h:%p
```

**A word of warning:** be aware that this enables agent-forwarding to at least the jumphost. Please read up for yourself on how bad you consider the security implications of doing so.

The workers can only be accessed from "ariel", not directly. One can use password authentication on the workers using the root account. Ask existing admins for the root password. It is suggested that you use key-based authentication. For this put your ssh keys on all the workers, e.g. using the above configuration and `ssh-copy-id`.

**Notice:** Some machines are connected to the o3 openQA host from other networks and might need different ways of access, at the time of writing:

* Remote (owner: @ggardet_arm):
 * ip-10-0-0-58
 * oss-cobbler-03
 * siodtw01 (for tests on Raspberry Pi 2,3,4)

### Manual command execution on o3 workers

To execute commands manually on all workers within the o3 infrastructure one can do for example the following:

```
for i in aarch64 openqaworker1 openqaworker4 openqaworker7 openqaworker19 openqaworker20 power8 qa-power8-3 rebel; do echo $i && ssh root@$i "zypper -n dup && reboot" ; done
```

```
for i in aarch64 openqaworker1 openqaworker4 openqaworker7 openqaworker19 openqaworker20 power8 qa-power8-3 rebel; do echo $i && ssh root@$i " echo 'ssh-rsa … …' >> /root/.ssh/authorized_keys " ; done
```

Mind the correct list of machines.

Formerly for true transactional servers we used:

```
for i in $hosts; do echo $i && ssh root@$i "(transactional-update -n dup || zypper -n dup) && reboot" ; done
```

### Automatic update of o3

o3 is automatically deployed on a daily basis; this includes both the webUI host as well as the workers.

#### Automatic update of o3 webUI host

openqa.opensuse.org applies continuous updates of openQA related packages, conducts nightly updates of system packages and reboots automatically as required, see
http://open.qa/docs/#_automatic_system_upgrades_and_reboots_of_openqa_hosts
for details.

#### Recurring automatic update of openQA workers

Same as the o3 webUI host, all o3 workers apply continuous updates of openQA related packages. Additionally most apply a daily automatic system update and are "Transactional Servers" running openSUSE Leap. power8 is non-transactional with a weekly system update every Sunday.

This was done for a number of reasons, including:

* Getting all the machines consistent after a few years of drift
* Making it easier to keep them consistent by leveraging a read-only root filesystem
* Guaranteeing the ability to roll back by using transactional updates

This was done by rbrown also to fulfill the prerequisite of getting them viable for multi-machine testing.

These systems currently patch themselves and reboot automatically in the default maintenance window of 0330-0500 CET/CEST.

In case of problems this can be changed in the following ways:

* Edit the maintenance window in /etc/rebootmgr.conf
* Disable the automatic reboot with `systemctl disable rebootmgr.service`
* Disable the automatic patching with `systemctl disable transactional-update.timer`

EDIT: 2022-07-11: All o3 machines are effectively not "transactional-workers" anymore as openqa-continuous-update.service is doing a complete `zypper dup` every couple of minutes. With `rebootmgr` triggered for reboot, automatic nightly reboots still happen as necessary. See #111989 for details.

SUSE employees have access to the boot menu of the openQA worker machines, e.g. openqaworker1 and openqaworker4 via openqaworker1-ipmi.suse.de and openqaworker4-ipmi.suse.de which are both connected to the r&d network. For imagetester one would need to go through SUSE-IT in the unlikely event of a boot-preventing update. `snapper rollback` can be executed from a booted, functionally operative machine which one can ssh into.

For manual investigation https://github.com/kubic-project/microos-toolbox can be helpful.

#### Rollback of updates

Updates on workers can be rolled back using `transactional-update` affecting the transactional workers (others are likely not updated that often):

```
for i in aarch64 openqaworker1 openqaworker4 openqaworker7 power8 imagetester rebel; do echo $i && ssh root@$i "transactional-update rollback last && reboot"; done
```

Updates on the central webUI host openqa.opensuse.org can be rolled back by using either older variants of packages that receive maintenance updates or using the locally cached packages in e.g. /var/cache/zypp/packages/devel_openQA/noarch using `zypper in --oldpackage`, similar to https://github.com/os-autoinst/openQA/blob/master/script/openqa-rollback#L39

#### Debugging qemu SUTs in openqa.opensuse.org

SUT: System Under Test

os-autoinst starts qemu with a network type that doesn't allow access from the outside, so ssh is not possible. But qemu is started with a VNC channel available from the host (the openQA worker).
Running vncviewer inside a headless server is useless, but it is possible to use gate.opensuse.org as a jump host and SSH port forwarding to start a vncviewer client from your desktop environment and connect to the VNC channel of the qemu SUT.

```
ssh -p 2213 -L LOCAL_PORT:WORKER_HOSTNAME:QEMU_VNC_PORT USERNAME@gate.opensuse.org
```

For example, user **bernhard** wants to connect to openqaworker7:11 and wants to use local port **43043**.
The IP of openqaworker7 is **192.168.112.12**
and the VNC channel port of openqa-worker@11 is **6001** (5990 + 11).

##### 1. Create SSH tunnel with port forwarding
* on laptop shell 1: ssh -p 2213 -L 43043:192.168.112.12:6001 bernhard@gate.opensuse.org
* Keep the shell open to keep the tunnel and the port forwarding open

##### 2. Open vncviewer
* on laptop shell 2: vncviewer -Shared localhost:43043
* `-Shared` is needed to not kick the VNC connection of os-autoinst. If it is kicked, the job will terminate and the qemu process will be killed.
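
The two steps can also be combined; a convenience sketch (not an official script) deriving the VNC port from the 5990 + N rule explained above:

```
# values from the example above; adjust instance, worker IP and local port as needed
instance=11 worker_ip=192.168.112.12 local_port=43043
# open the tunnel; the port arithmetic implements the 5990 + N rule
ssh -p 2213 -L ${local_port}:${worker_ip}:$((5990 + instance)) bernhard@gate.opensuse.org
# then, in a second shell: vncviewer -Shared localhost:${local_port}
```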

### AArch64 specific configurations on o3

On o3, the aarch64 workers need additional configuration.

#### Setup HugePages

You need to set up HugePages support to improve performance with qemu VMs and to match the current aarch64 `MACHINE` configuration.
For the D05 machine, the configuration is: `40` pages with a size of `1G`.
If there are some permission issues on `/dev/hugepages/`, check https://progress.opensuse.org/issues/53234
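
A minimal sketch of how such a configuration can be applied via the kernel command line, assuming GRUB2 is used (check your bootloader setup before applying):

```
# append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
#   default_hugepagesz=1G hugepagesz=1G hugepages=40
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# verify after the reboot
grep HugePages_Total /proc/meminfo
```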

### o3 s390 workers

`workers.ini`:
```
[global]
HOST=http://openqa1-opensuse
WORKER_HOSTNAME = 192.168.112.6
CACHEDIRECTORY=/var/lib/openqa/cache
CACHESERVICEURL=http://10.88.0.1:9530/
[101]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-1-linux144
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux144
ZVM_PASSWORD=lin390
S390_HOST=144
[102]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-2-linux145
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux145
ZVM_PASSWORD=lin390
S390_HOST=145
[103]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-3-linux146
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux146
ZVM_PASSWORD=lin390
S390_HOST=146
[104]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-4-linux147
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux147
ZVM_PASSWORD=lin390
S390_HOST=147
[105]
WORKER_CLASS=64bit-ipmi,64bit-ipmi-large-mem,64bit-ipmi-amd,blackbauhinia
IPMI_HOSTNAME=blackbauhinia-ipmi.openqanet.opensuse.org
IPMI_USER=ADMIN
IPMI_PASSWORD=ADMIN
SUT_IP=blackbauhinia.openqanet.opensuse.org
SUT_NETDEVICE=em1
IPMI_SOL_PERSISTENT_CONSOLE=1
IPMI_BACKEND_MC_RESET=1
[http://openqa1-opensuse]
TESTPOOLSERVER=rsync://openqa1-opensuse/tests
```

Allow containers to access the cache service (`systemctl edit openqa-worker-cacheservice.service`):
```
# /etc/systemd/system/openqa-worker-cacheservice.service.d/override.conf
[Service]
Environment="MOJO_LISTEN=http://0.0.0.0:9530"
```

The s390 workers for openQA are running within podman containers on openqaworker1.
The containers are started using systemd but the unit files are specific to the containers and will end up in a restart loop if this fact is ignored. Whenever the containers are recreated, the systemd files have to be recreated.

The containers are started like this (for i=101…104):

```
i=101
podman run -d -h openqaworker1_container --name openqaworker1_container_$i -p $(python3 -c"p=${i}*10+20003;print(f'{p}:{p}')") -e OPENQA_WORKER_INSTANCE=$i -v /opt/s390x_rebel_replacement:/etc/openqa -v /var/lib/openqa/share:/var/lib/openqa/share -v /var/lib/openqa/cache:/var/lib/openqa/cache registry.opensuse.org/devel/openqa/containers15.4/openqa_worker_os_autoinst_distri_opensuse:latest
(cd /etc/systemd/system/; podman generate systemd -f -n --new openqaworker1_container_$i --restart-policy always)
systemctl daemon-reload
systemctl enable container-openqaworker1_container_$i
```

To restart and permanently enable all workers at once:
```
for i in {101..104} ; do systemctl stop container-openqaworker1_container_$i ; done
podman rm -f openqaworker1_container_{101..104}
for i in {101..104} ; do podman run -d -h openqaworker1_container --name openqaworker1_container_$i -p $(python3 -c"p=${i}*10+20003;print(f'{p}:{p}')") -e OPENQA_WORKER_INSTANCE=$i -v /opt/s390x_rebel_replacement:/etc/openqa -v /var/lib/openqa/share:/var/lib/openqa/share -v /var/lib/openqa/cache:/var/lib/openqa/cache registry.opensuse.org/devel/openqa/containers15.4/openqa_worker_os_autoinst_distri_opensuse:latest ; done
for i in {101..104} ; do (cd /etc/systemd/system/; podman generate systemd -f -n --new openqaworker1_container_$i --restart-policy always) ; done
systemctl daemon-reload
podman rm -f openqaworker1_container_{101..104}
for i in {101..104} ; do systemctl reenable container-openqaworker1_container_$i && systemctl restart container-openqaworker1_container_$i ; done
```

Initial ticket when the setup was created: https://progress.opensuse.org/issues/97751

As an alternative, s390x workers can run on the host "rebel" as well. Be aware that openQA workers accessing the same s390x instances must not run in parallel, so only enable one worker instance per s390x instance at a time (see https://progress.opensuse.org/issues/97658 for details).

### Monitoring

openqa.opensuse.org is monitored by SUSE over https://zabbix.suse.de/. There is a user group "Owners/O3" to which SUSE employees can be added.

There is also an internal munin instance on o3. Anyone wanting to look at the HTML pages can do this:
```
rsync -a o3:/srv/www/htdocs/munin ~/o3-munin/
```
(where "o3" is configured in your ssh config of course)

It's also possible to view the munin page via an ssh tunnel:
```
ssh -L 8080:127.0.0.1:80 o3
```
and then go to http://127.0.0.1:8080/munin/

Configuration of alerts is done in `/etc/munin/munin.conf`.
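
A hypothetical excerpt of what alert configuration in munin.conf can look like; the contact name and address are placeholders, not the actual o3 configuration:

```
# define a contact that receives warning and critical notifications by mail
contact.admins.command mail -s "Munin notification ${var:host}" admin@example.com
contact.admins.always_send warning critical
```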

## Hotfixing

Hotfixes, e.g. patches from an os-autoinst pull request, can be applied to o3 workers like this for a pull request <pr_id>:

```
for i in openqaworker1 openqaworker4 openqaworker7 openqaworker19 imagetester rebel power8 qa-power8-3; do echo $i && ssh root@$i "(transactional-update run /bin/sh -c \"curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst\" && reboot) || curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst" ; done
```

Hotpatching on all OSD workers works with the same <pr_id> as above with something like:

```
sudo salt --no-color --state-output=changes -C 'G@roles:worker' cmd.run 'curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst'
```

## Mitigation of boot failure or disk issues

### Worker stuck in recovery

Check disk health and consider manual fixup of mount points, e.g.:

```
test -e /dev/md/openqa || lsblk -n | grep -v nvme | grep "/$" && mdadm --create /dev/md/openqa --level=0 --force --raid-devices=$(ls /dev/nvme?n1 | wc -l) --run /dev/nvme?n1 || mdadm --create /dev/md/openqa --level=0 --force --raid-devices=1 --run /dev/nvme0n1p3
```

## PPC specific configurations

In one case it was necessary to disable snapshots for petitboot with `nvram -p default --update-config "petitboot,snapshots?=false"` to prevent a race condition between dm_raid and btrfs trying to discover bootable devices (#68053#note-25). In another case https://bugzilla.opensuse.org/show_bug.cgi?id=1174166 caused the boot entries to not be properly discovered and it was necessary to prevent grub from trying to update the according sections (#68053#note-31).

## Moving worker from osd to o3

* Ensure system management, e.g. over IPMI, works. This is untouched by the following steps and can be used during the process for recovery and setup
* Ensure network is configured for DHCP
* Instruct SUSE-IT to change the VLAN for the machine from the oqa.suse.de VLAN to 662 (example: https://sd.suse.com/servicedesk/customer/portal/1/SD-124055, ~~https://infra.nue.suse.com/SelfService/Display.html?id=16458 (not available anymore)~~)
* Remove from osd:

```
salt-key -y -d worker7.oqa.suse.de
```

* On the worker:
 * Change the root password to the o3 one
 * Allow ssh password authentication: `sed -i 's/^PasswordAuthentication/#&/' /etc/ssh/sshd_config && systemctl restart sshd`
 * Ensure ssh based root login works with `zypper -n in openssh-server-config-rootlogin` or if that is not available change 'PermitRootLogin' to 'yes' in sshd_config
 * Add your personal ssh key to the machine, e.g. openqaworker7:/root/.ssh/authorized_keys

* Add an entry on o3 to `/etc/dnsmasq.d/openqa.conf` with the MAC address, e.g.

```
dhcp-host=54:ab:3a:24:34:b8,openqaworker7
```

* Add an entry to `/etc/hosts` which dnsmasq picks up to give out a DHCP lease, e.g.

```
192.168.112.12   openqaworker7.openqanet.opensuse.org openqaworker7
```

* Reload dnsmasq with `systemctl restart dnsmasq`

* Adapt the NFS mount point on the worker

```
sed -i '/openqa\.suse\.de/d' /etc/fstab && echo 'openqa1-opensuse:/ /var/lib/openqa/share nfs4 noauto,nofail,retry=30,ro,x-systemd.automount,x-systemd.device-timeout=10m,x-systemd.mount-timeout=30m 0 0' >> /etc/fstab
```

* Restart the network on the machine (over IPMI) using `systemctl restart network` and monitor in o3: `journalctl -f -u dnsmasq` until an address is assigned, e.g.:

```
Feb 29 10:48:30 ariel dnsmasq[28105]: read /etc/hosts - 30 addresses
Feb 29 10:48:54 ariel dnsmasq-dhcp[28105]: DHCPREQUEST(eth1) 10.160.1.101 54:ab:3a:24:34:b8
Feb 29 10:48:54 ariel dnsmasq-dhcp[28105]: DHCPNAK(eth1) 10.160.1.101 54:ab:3a:24:34:b8 wrong network
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPDISCOVER(eth1) 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPOFFER(eth1) 192.168.112.12 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPREQUEST(eth1) 192.168.112.12 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPACK(eth1) 192.168.112.12 54:ab:3a:24:34:b8 openqaworker7
```

* Ensure all mountpoints are up:

```
mount -a
```

* Update /etc/openqa/client.conf with the same key as used on other workers for "openqa1-opensuse"
* Update /etc/openqa/workers.ini with a similar config as used on other workers, e.g. based on openqaworker4, example:

```
# diff -Naur /etc/openqa/workers.ini{.osd,}
--- /etc/openqa/workers.ini.osd 2020-02-29 15:21:47.737998821 +0100
+++ /etc/openqa/workers.ini     2020-02-29 15:22:53.334464958 +0100
@@ -1,17 +1,10 @@
-# This file is generated by salt - don't touch
-# Hosted on https://gitlab.suse.de/openqa/salt-pillars-openqa
-# numofworkers: 10
-
 [global]
-HOST=openqa.suse.de
-CACHEDIRECTORY=/var/lib/openqa/cache
-LOG_LEVEL=debug
-WORKER_CLASS=qemu_x86_64,qemu_x86_64_staging,tap,openqaworker7
-WORKER_HOSTNAME=10.160.1.101
-
-[1]
-WORKER_CLASS=qemu_x86_64,qemu_x86_64_staging,tap,qemu_x86_64_ibft,openqaworker7
+HOST=http://openqa1-opensuse
+WORKER_HOSTNAME=192.168.112.12
+CACHEDIRECTORY = /var/lib/openqa/cache
+CACHELIMIT = 50
+WORKER_CLASS = openqaworker7,qemu_x86_64

-[openqa.suse.de]
-TESTPOOLSERVER = rsync://openqa.suse.de/tests
+[http://openqa1-opensuse]
+TESTPOOLSERVER = rsync://openqa1-opensuse/tests
```

* Remove OSD specifics:

```
systemctl disable --now auto-update.timer salt-minion telegraf
for i in NPI SUSE_CA telegraf-monitoring; do zypper rr $i; done
zypper -n dup --force-resolution --allow-vendor-change
```

* If the machine is not a transactional server one has the following options: keep it as is and handle it like power8 (also not transactional), enable transactional updates without root being read-only, change to root being read-only on-the-fly, or reinstall as transactional. At least option 2 is suggested; enable transactional updates:

```
zypper -n in transactional-update
systemctl enable --now transactional-update.timer rebootmgr
```

* Enable apparmor:

```
zypper -n in apparmor-utils
systemctl unmask apparmor
systemctl enable --now apparmor
```

* Switch the firewall from SuSEfirewall2 to firewalld:

```
zypper -n in firewalld && zypper -n rm SuSEfirewall2
systemctl enable --now firewalld
firewall-cmd --zone=trusted --add-interface=br1
firewall-cmd --set-default-zone trusted
firewall-cmd --zone=trusted --add-masquerade
```

* Copy over special openSUSE UEFI staging images, see #63382
* For multi-machine configured workers make sure to have updated IPv4 entries in /etc/wicked/scripts/gre_tunnel_preup.sh (see the sketch at the end of this section)

* Check operation with a single openQA worker instance:

```
systemctl enable --now openqa-worker.target openqa-worker@1
```

* Test with an openQA job cloned from a production job, e.g. for openqaworker7:

```
openqa-clone-job --within-instance https://openqa.opensuse.org/t${id} WORKER_CLASS=openqaworker7
```

* After the latest openQA job has finished successfully, enable more worker instances:

```
systemctl unmask openqa-worker@{2..14} && systemctl enable --now openqa-worker@{2..14}
```

* Monitor if the nightly update works, e.g. look for journal entries like:

```
Mar 01 00:08:26 openqaworker7 transactional-update[10933]: Calling zypper up
Mar 01 00:08:51 openqaworker7 transactional-update[10933]: transactional-update finished - informed rebootmgr
Mar 01 00:08:51 openqaworker7 systemd[1]: Started Update the system.
Mar 01 03:30:00 openqaworker7 rebootmgrd[40760]: rebootmgr: reboot triggered now!
Mar 01 03:36:32 openqaworker7 systemd[1]: Reached target openQA Worker.
```
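
Regarding the GRE tunnel setup mentioned above, a sketch of what /etc/wicked/scripts/gre_tunnel_preup.sh can look like, based on the multi-machine setup described in the openQA documentation (the remote IP is a placeholder, not the actual configuration):

```
#!/bin/sh
# called by wicked with the action and the bridge interface as arguments
action="$1"
bridge="$2"
ovs-vsctl set bridge $bridge rstp_enable=true
# one gre<N> port per remote multi-machine worker host
ovs-vsctl --may-exist add-port $bridge gre1 -- set interface gre1 type=gre options:remote_ip=192.168.112.13
```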

## Distribution upgrades

**Note:** Performing the upgrade differs slightly depending on the host setup:
* On hosts with a writeable `/` you need to enter a root shell, i.e. `sudo bash`
* Transactional hosts require that you use `transactional-update shell`, thereby creating a snapshot which is applied after a reboot, optionally using `--continue` if you want to make further changes to an existing snapshot
* Depending on available space it might be necessary to clean up space before conducting the upgrade, e.g. use `snapper rm <N..M>` to delete older root btrfs snapshots or clean up unneeded packages, e.g. with https://github.com/okurz/scripts/blob/master/zypper-rm-orphaned and https://github.com/okurz/scripts/blob/master/zypper-rm-unneeded
* Upgrades might pull in too many new packages so better crosscheck with `zypper … dup … --no-recommends`
* Consider using https://github.com/okurz/auto-upgrade/blob/master/auto-upgrade or manual (**Tip**: Run this in `screen -d -r || screen` and use e.g. `sudo bash`):

```
new_version=15.3 # Specify the target release

# Change the release via the special $releasever
. /etc/os-release
sed -i -e "s/${VERSION_ID}/\$releasever/g" /etc/zypp/repos.d/*
zypper --releasever=$new_version ref
test -f /etc/openqa/openqa.ini && sudo -u geekotest /opt/openqa-scripts/dump-psql
systemctl stop openqa-continuous-update.timer  # it would interfere, e.g. revert the previous zypper ref call
zypper -n --releasever=$new_version dup --auto-agree-with-licenses --replacefiles --download-in-advance

# Check config files for relevant changes
rpmconfigcheck
for i in $(cat /var/adm/rpmconfigcheck) ; do vimdiff ${i%.rpm*} $i ; done
rm $(cat /var/adm/rpmconfigcheck)

reboot
systemctl --failed
```

* Ensure that the upgrade was really successful, e.g. /etc/os-release should show the new version and the above `zypper dup` command should show no more pending actions
* Crosscheck for any obvious alerts, pipelines failing, user reports, etc.
* On any severe problems consider a complete rollback of the upgrade or a partial downgrade of packages, e.g. force-install older versions of packages and add zypper locks until an issue is fixed
* Monitor for successful openQA jobs on the host

## Remote management with IPMI and BMC tools

o3 and osd worker machines are controllable over IPMI from within the SUSE network, see [openqa/workerconf.sls](https://gitlab.suse.de/openqa/salt-pillars-openqa/-/blob/master/openqa/workerconf.sls) for the commands.
It is recommended to use [shell aliases](https://gitlab.suse.de/openqa/salt-pillars-openqa#get-ipmi-definition-aliases) for convenience.

`ipmitool` can sometimes behave unreliably. It seems (to okurz) as if ipmitool version ipmitool-1.8.18+git20200916.1245aaa387dc from openSUSE Tumbleweed or Factory or the "systemsmanagement" OBS repo is more reliable than the version supplied with openSUSE Leap 15.2 (see #80544#note-14), and given a stable internet connection it is certainly possible to have a consistent serial console experience.

To ensure that remotely controlled machines power on automatically after a power loss, make sure to set the power restore policy to "previous", especially for new machines. Using https://gitlab.suse.de/openqa/salt-pillars-openqa/#get-ipmi-definition-aliases :

```
IFS=$'\n'; for i in $(sed 's/^alias .*="\(.*\)"/\1/' ~/.openqa_ipmi_aliases); do eval "$i" chassis policy previous; done
```

### Accessing old BMCs with "Java iKVM Viewer" when ipmitool does not work, e.g. imagetester
Imagetester can't output anything over SOL. (Likely the problem is a too old BMC version. On another system, qamaster, in 2022-10 we upgraded the BMC version (#117679#note-14) which allowed us to use ipmitool.) Therefore it is necessary to access it over the integrated iKVM console. Unfortunately java-webstart is somewhat broken and requires some extra steps to work:

1. Access the web interface of the BMC at http://imagetester-ipmi.suse.de and login via the IPMI credentials mentioned in the salt pillars repository.
2. Click on the preview image of the "Remote Console Preview" and download the according "launch.jnlp" webstart script.
3. Grab the required dependencies with curl and place them in a local directory:

```
mkdir /tmp/ikvm
curl -k https://imagetester-ipmi.suse.de:443/liblinux_x86_64__V1.0.3.jar.pack.gz > /tmp/ikvm/liblinux_x86_64__V1.0.3.jar.pack.gz
curl -k https://imagetester-ipmi.suse.de:443/iKVM__V1.69.13.0x0.jar.pack.gz > /tmp/ikvm/iKVM__V1.69.13.0x0.jar.pack.gz
```

4. Open the previously downloaded "launch.jnlp" and replace the IP in the first line, e.g. from `<jnlp spec="1.0+" codebase="https://10.160.65.195:443/">` to `<jnlp spec="1.0+" codebase="http://127.0.0.1:8080/">`
5. Launch some kind of web server which can serve the previously downloaded dependencies for javaws (from /tmp/ikvm). In this example we use python: `python3 -m http.server 8080`
6. Now you can finally launch the webstart application from your modified "launch.jnlp" file in a second console: `javaws -nosecurity -jnlp ~/Downloads/launch.jnlp`
  * It will ask you how to run the application. You can run it in a sandbox and everything still works
7. You should see the monitor output of imagetester now. "Virtual Storage" is also working which allows you to mount an ISO over this remote connection.

*Also check https://progress.opensuse.org/issues/96719#note-27 where this was discovered. If you have questions or remarks you can ping @nicksinger*

### Accessing java based remote control clients

We also managed to start the java based remote control client from pages like
https://openqaworker4-ipmi.suse.de/ with `javaws.itweb jviewer.jnlp` from icedtea-web which offers virtual media redirection so one can select a local ISO file as installation medium.

Disclaimer: https://openwebstart.com/ explains that "Java Web Start (JWS) was deprecated in Java 9, and starting with Java 11, Oracle removed JWS from their JDK distributions". So openwebstart can be used as an alternative to the outdated icedtea-web. okurz managed to do that by downloading the openwebstart .deb file from https://openwebstart.com/download/ and converting it with alien:

```
wget https://github.com/karakun/OpenWebStart/releases/download/v1.6.0/OpenWebStart_linux_1_6_0.deb
sudo zypper in https://download.opensuse.org/repositories/home:/phoenix.os:/dup/15.4/noarch/alien-8.95-lp154.3.1.noarch.rpm
sudo alien --to-rpm --verbose ./OpenWebStart_linux_1_6_0.deb
sudo zypper in ./openwebstart-1.6.0-2.noarch.rpm
/opt/OpenWebStart/javaws ~/Downloads/launch.jnlp
```

In some cases we hit an error message "no iKVM64 in java.library.path" (see #117679#note-15 for details, same problem regardless of JDK version). For a local archive one can do `LD_LIBRARY_PATH=. ./iKVM`; for a webstart application there is no known solution. As a workaround it is suggested to use downloaded applications from Supermicro: download "IPMIView" from https://www.supermicro.com/en/support/resources/downloadcenter/smsdownload, extract the archive, call `./IPMIView20` and connect to the desired system.

If you use your system-provided JRE then you might hit the error "Certificates do not conform to algorithm constraints". Running in a terminal will provide more details, e.g. about SHA/RSA key lengths. Removing the line starting with `jdk.certpath.disabledAlgorithms` in /etc/crypto-policies/back-ends/java.config can fix that.

An alternative to "IPMIView" is "SMCIPMITool" from https://www.supermicro.com/en/support/resources/downloadcenter/smsdownload . This also allows connecting to systems and, for example, mounting virtual media:

```
$ jre/bin/java -jar ./SMCIPMITool.jar 10.162.0.4 … shell

10.162.0.4 X9DR3-LN4F+ (S0/G0,172w) 13:42 SIM(WA)>vmwa dev2iso /home/okurz/local/os-autoinst/os-autoinst/t/data/Core-7.2.iso
Mounting ISO file: /home/okurz/local/os-autoinst/os-autoinst/t/data/Core-7.2.iso
Device 2 :VM Plug-In OK!!
```

## openQA infrastructure needs (o3 + osd)

TL;DR: new OSD ARM workers needed, missing redundancy for o3-ppc, the rest needs replacement as nearly all current hardware is out of vendor provided maintenance (as of 2021-05), SSD storage for o3 would be good

2020-03: SUSE IT (EngInfra) provided us more space for O3 but we have only slow rotating-disk storage. Performance could be improved by providing SSD storage.

Most of our time and effort currently goes into the struggle with storage space for OSD (openqa.suse.de) ~~both OSD (openqa.suse.de) as well as O3 (openqa.opensuse.org) (2020-03: Situation on o3 resolved with more storage provided by SUSE IT)~~. Both instances (OSD + O3) are using precious netapp-storage but there is currently no better approach to use different, external storage. An increase of the available space would be appreciated, ~~o3 being more important right now than osd,~~ see https://progress.opensuse.org/issues/57494 for details. Graphs like https://stats.openqa-monitor.qa.suse.de/d/nRDab3Jiz/openqa-jobs-test?orgId=1&from=1578343509900&to=1578653794173&fullscreen&panelId=12 show how usual test backlogs are worked on within OSD by architecture. It can be seen that both the ppc64le and aarch64 backlogs are reduced fast so we do not need more ppc64le or aarch64 machines. However, we have a stability problem with all three aarch64 workers. Potentially new machine(s) could help, see https://progress.opensuse.org/issues/41882 for details.

With the number of workers and parallel processed tests as well as the increased number of products tested on OSD and users using the system, the workload on OSD constantly increases. CPU load alerts have been seen recently in #96713 and the higher load is visible in https://monitor.qa.suse.de/d/WebuiDb/webui-summary?viewPanel=25 . From time to time we should increase the number of CPU cores on the OSD VM due to the higher usage.

## Setup guide for new machines
* Ensure the host has a proper DNS entry
    * The MAC address of new o3 workers generally needs to be added to `/etc/dnsmasq.d/openqa.conf` and an IP address needs to be configured in `/etc/hosts` (both files are on ariel).
    * Hosts located at Frankencampus need a DNS entry via the OPS-Service repo, e.g. https://gitlab.suse.de/OPS-Service/salt/-/merge_requests/3687.
* Change IPMI/BMC passwords to use our common passwords instead of default IPMI
* OSD: Add to salt using https://gitlab.suse.de/openqa/salt-states-openqa
    * Make sure to set /etc/salt/minion_id to the FQDN (see #90875#note-2 for reference and the sketch after this list)
    * Check out the next section for details
* o3: Set up the worker manually, see "Manual worker setup" section below
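
For the minion ID a minimal sketch, assuming the machine's hostname is already configured so that `hostname -f` yields the FQDN:

```
echo "$(hostname -f)" > /etc/salt/minion_id
systemctl restart salt-minion
```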

### Legacy boot via PXE and OS/worker setup
One can make use of our existing PXE infrastructure (which only supports legacy boot) following these steps:

1. Ensure the boot mode allows legacy boot, e.g. select it in the machine's setup menu manually.
2. Connect via IPMI and select "Leap -> HTTP -> Console" in our PXE menu, append ` console=ttyS0,115200 autoyast=http://s.qa.suse.de/oqa-ay-lp rootpassword=…` to the command line and wait until the installation has finished.
    * Use https://w3.nue.suse.com/~okurz/ay-openqa-worker-leap.xml if the URL shortener is not available.
    * Alternatively, there's also https://raw.githubusercontent.com/os-autoinst/openQA/master/contrib/ay-openqa-worker.xml.
    * If nothing shows up in the serial console, try a different console parameter, e.g. `console=ttyS1,115200`.
3. Configure repos, e.g. via the line of the scriptlet in http://s.qa.suse.de/oqa-ay-lp (see the sketch after this list for the general idea).
    * The scriptlet cannot be executed in the context of AutoYaST so this is a manual step at this point.
4. Enable SSH access via `systemctl enable --now sshd` and continue via SSH.
5. Install some basic software, e.g. `zypper in htop vim systemd-coredump`.
6. For OSD workers, set up `salt-minion` following the [documentation in our Salt states repository](https://github.com/os-autoinst/salt-states-openqa#setup-production-machine); otherwise set up the worker manually as explained in the next section.
7. Check whether the config looks good on the workers and whether jobs look good on the web UI host.
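
The authoritative repository setup is the scriptlet behind the short URL mentioned in step 3; purely as an illustration of the general idea (these repository URLs and aliases are assumptions, not the scriptlet's actual content), adding Leap repositories could look like:

```
zypper ar -f http://download.opensuse.org/distribution/leap/15.4/repo/oss/ repo-oss
zypper ar -f http://download.opensuse.org/update/leap/15.4/oss/ repo-update
zypper -n ref
```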

### Manual worker setup
You likely want to configure the [openQA development repository](https://open.qa/docs/#_development_version_repository).
Then set up the worker like this:

```
echo "requires:openQA-worker" > /etc/zypp/systemCheck.d/openqa.check
zypper -n in openQA-worker openQA-auto-update openQA-continuous-update os-autoinst-distri-opensuse-deps swtpm # openQA worker services plus dependencies for openSUSE distri or development repo if added previously
zypper -n in ffmpeg-4  # for using external video encoder as it is already configured on some machines like ow19, ow20 and power8
zypper -n in nfs-client  # For /var/lib/openqa/share
zypper -n in bash-completion vim htop strace systemd-coredump iputils tcpdump bind-utils  # for general tinkering

echo "openqa1-opensuse:/ /var/lib/openqa/share nfs4 noauto,nofail,retry=30,ro,x-systemd.automount,x-systemd.device-timeout=10m,x-systemd.mount-timeout=30m 0 0" >> /etc/fstab
sed -i 's/\(solver.dupAllowVendorChange = \)false/\1true/' /etc/zypp/zypp.conf

# configure /etc/openqa/client.conf and /etc/openqa/workers.ini, then enable the desired number of worker slots, e.g.:
systemctl enable --now openqa-worker-auto-restart@{1..30}.service openqa-reload-worker-auto-restart@{1..30}.path openqa-auto-update.timer openqa-continuous-update.timer openqa-worker-cacheservice.service openqa-worker-cacheservice-minion.service
```

Also copy the OVMF images for staging tests (`/usr/share/qemu/*staging*`) from other workers. Those files are from the `devel` flavor of the OVMF package built in stagings and rings, e.g. https://build.opensuse.org/package/show/openSUSE:Factory:Rings:1-MinimalX/ovmf, just renamed.
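
Copying them over can be done with a single rsync call; a sketch, assuming `openqaworker1` is an existing, already set up worker (the hostname is a placeholder):

```
rsync -av 'openqaworker1:/usr/share/qemu/*staging*' /usr/share/qemu/
```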

#### Optional: Transactional-server
You may choose the transactional-server role but a normal server will do as well:

```
sed -i 's@/ btrfs ro@/ btrfs rw@' /etc/fstab
mount -o rw,remount /
btrfs property set -ts / ro false
```

### UEFI boot via iPXE
The following steps are for the o3 environment but can likely also be adapted for setting up OSD workers. This section skips the setup of the OS as it doesn't differ when using UEFI/iPXE. Check out the previous sections for the OS/worker setup.

---

There's a PXE setup as part of `dnsmasq.service` running on ariel. It is currently configured to serve a legacy-only boot menu utilized by some tests. After following these steps, please restore this setup so tests can continue to use it.

First, make a file that contains the iPXE commands to boot available via some HTTP server. Here's how the file could look for installing Leap 15.4 with AutoYaST:
```
#!ipxe
kernel http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/x86_64/loader/linux initrd=initrd console=tty0 console=ttyS1,115200 install=http://download.opensuse.org/distribution/leap/15.4/repo/oss/ autoyast=http://martchus.no-ip.biz/ipxe/ay-openqa-worker.xml rootpassword=…
initrd http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/x86_64/loader/initrd
boot
```

Then, set up the build of an iPXE UEFI image as explained on https://en.opensuse.org/SDB:IPXE_booting#Setup:
```
git clone https://github.com/ipxe/ipxe.git
cd ipxe
echo "#!ipxe
dhcp
chain http://martchus.no-ip.biz/ipxe/leap-15.4" > myscript.ipxe
```

As you can see, this build script contains the URL of the previously created file. Of course commands could be built directly into the image but then you'd need to rebuild/redeploy the image every time you want to make a change (instead of just editing a file on your HTTP server).

To build the image, run:
```
cd src
make EMBED=../myscript.ipxe NO_WERROR=1 bin/ipxe.lkrn bin/ipxe.pxe bin-i386-efi/ipxe.efi bin-x86_64-efi/ipxe.efi
```

Note that these build options are taken from https://github.com/archlinux/svntogit-community/blob/packages/ipxe/trunk/PKGBUILD#L58 because when attempting to build on Tumbleweed I otherwise ran into build errors.

Then you can copy the files to ariel and move them to a location somewhere under `/srv/tftpboot`:
```
# on build host
rsync bin-x86_64-efi/ipxe.efi openqa.opensuse.org:/home/martchus/ipxe.efi
# on ariel
sudo cp /home/martchus/ipxe.efi /srv/tftpboot/ipxe-own-build/ipxe.efi
```

Then configure the use of the image in `/etc/dnsmasq.d/pxeboot.conf` on ariel. Temporarily comment-out possibly disturbing lines and make sure the following lines are present:
```
enable-tftp
tftp-root=/srv/tftpboot
pxe-prompt="Press F8 for menu. foobar", 10
dhcp-match=set:efi-x86_64,option:client-arch,7
dhcp-match=set:efi-x86_64,option:client-arch,9
dhcp-match=set:efi-x86,option:client-arch,6
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:efi-x86_64,ipxe-own-build/ipxe.efi
```

Then run `systemctl restart dnsmasq.service` to apply and `journalctl -fu dnsmasq.service` to see what's going on.

### Installation of machines being able to run kexec

If a machine can directly execute "kexec", e.g. ppc64le machines running petitboot, it is possible to start a remote network installation following https://en.opensuse.org/SDB:Network_installation#Start_the_Installation . See #119008#note-6 for an example.
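
As a rough sketch of the general idea, assuming an x86_64 Leap installation source (the URLs are illustrative; adjust paths for other architectures such as ppc64le):

```
wget http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/x86_64/loader/linux
wget http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/x86_64/loader/initrd
kexec -l linux --initrd=initrd --command-line="install=http://download.opensuse.org/distribution/leap/15.4/repo/oss/ console=ttyS0,115200"
kexec -e  # immediately boots into the installer
```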

### Linux Endpoint Protection Agent
Ensure any non-test OS installations have the Linux Endpoint Protection Agent deployed, see https://progress.opensuse.org/issues/123094 and https://confluence.suse.com/display/CS/Sensor+-+Linux+Endpoint+Protection+Agent for details.

## Take machines out of salt-controlled production

E.g. for investigation, development or manual maintenance work:

```
ssh osd "sudo salt-key -y -d $hostname"
ssh $hostname "sudo systemctl disable --now telegraf $(systemctl list-units | grep openqa-worker-auto-restart | cut -d "." -f 1 | xargs)"
```

Check out [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples) for systemd commands to start and stop workers.

## "Staging" test instances

SUSE internally we have two virtual machines that can be used for testing, developing, showcasing, reachable under convenient URLs:
* https://openqa-staging-1.qe.nue2.suse.org
* https://openqa-staging-2.qe.nue2.suse.org

You can use those machines and apply changes as desired over ssh.

## How to use samba shares to mount ISOs as virtual CD drives with SuperMicro server/mainboards

SuperMicro based servers have the capability to mount smb shares containing ISOs as virtual CD drives to e.g. boot from them.
Install the samba package on any machine you control (this also works from your personal workstation if the server can access it, e.g. over VPN) and create the following `/etc/samba/smb.conf`:

~~~ text
[global]
   workgroup = MYGROUP
   server string = Samba Server
   log level = 3
   client min protocol = core
   server min protocol = core
   guest ok = yes

#============================ Share Definitions ==============================
[recovery]
   comment = recovery
   path = /home/you/recovery
   public = yes
~~~

Now start the samba service. Despite the share being accessible by everyone (be careful about this!), the SuperMicro machines still need a user on the Samba server as they don't support anonymous login. To create a user without requiring a local unix user, you can use the following command:

```samba-tool domain provision --use-rfc2307 --interactive```

Afterwards create a user in the samba database with:

```smbpasswd -a smbtest```

Now it should be possible to access the share. Place an ISO file into your folder configured above and use the following settings in the webui of the SuperMicro server:

"Share Host": IP of your machine running samba
"Path to Image": Path to your ISO inside the share, e.g. "\recovery\some_boot_medium.iso" (mind the backslashes!)
"Users": The username of the just created user
"Password": Its password - don't keep this empty as it will not work otherwise

After clicking on "mount" you should now see a connection to your samba server. The machine will try to mount the ISO and if everything goes well, will report "There is an iso file mounted." in the "Health Status" of the Devices.
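
Before trying the mount from the BMC it can help to verify the share locally with `smbclient` (a sketch; `smbtest` is the example user created above):

```
smbclient -L localhost -U smbtest  # list shares, should include "recovery"
smbclient //localhost/recovery -U smbtest -c ls  # should list the ISO file
```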

## Bring back machines into salt-controlled production

```
ssh osd "sudo salt-key -a $hostname && sudo salt --state-output=changes $hostname state.apply"
```

Depending on your actions further manual cleanup might be necessary, e.g. `ssh $hostname "sudo systemctl unmask telegraf salt-minion"`

## Access the BMC of machines in the new security zone

One can use ssh port forwarding to access the services of a BMC (e.g. web interface) for a machine in the new security zone. The host "qe-jumpy" can be used for that like this:

~~~
ssh -4 jumpy@qe-jumpy.suse.de -L 8443:openqaworker4-ipmi.qe-ipmi-ur:443 -L 8080:openqaworker4-ipmi.qe-ipmi-ur:80
~~~

While the ssh-session is running you can then use your local browser to access the remote host via e.g. "http://localhost:8080" or "https://localhost:8443".
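
To avoid retyping the forwardings, the same setup can be kept in `~/.ssh/config` as a host entry (a sketch; the host alias is arbitrary, target and ports are the example from above):

~~~
cat >> ~/.ssh/config << 'EOF'
Host qe-jumpy-bmc
    HostName qe-jumpy.suse.de
    User jumpy
    AddressFamily inet
    LocalForward 8443 openqaworker4-ipmi.qe-ipmi-ur:443
    LocalForward 8080 openqaworker4-ipmi.qe-ipmi-ur:80
EOF
~~~

Afterwards a plain `ssh qe-jumpy-bmc` sets up both forwardings.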

## Using the built-in java tools of BMCs to access machines in the security zone

*1.* Follow [Access the BMC of machines in the new security zone](#Access-the-BMC-of-machines-in-the-new-security-zone) to download the built-in java webstart file of the machine you want to control
*2.* Use nmap on qe-jumpy to scan for all ports of a machine's BMC. Example:

~~~
jumpy@qe-jumpy:~> nmap openqaworker4-ipmi.qe-ipmi-ur -p-
Starting Nmap 7.70 ( https://nmap.org ) at 2023-01-17 12:23 UTC
Nmap scan report for openqaworker4-ipmi.qe-ipmi-ur (192.168.133.4)
Host is up (0.0056s latency).
Not shown: 65525 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
199/tcp  open  smux
427/tcp  open  svrloc
443/tcp  open  https
623/tcp  open  oob-ws-http
5120/tcp open  barracuda-bbs
5122/tcp open  unknown
5123/tcp open  unknown
7578/tcp open  unknown
~~~

*3.* Forward all ports relevant for the java applet to your local machine:

~~~
sudo ssh -i /home/nicksinger/.ssh/id_rsa.SUSE -4 jumpy@qe-jumpy.suse.de -L 443:openqaworker4-ipmi.qe-ipmi-ur:443 -L 623:openqaworker4-ipmi.qe-ipmi-ur:623 -L 5120:openqaworker4-ipmi.qe-ipmi-ur:5120 -L 5122:openqaworker4-ipmi.qe-ipmi-ur:5122 -L 5123:openqaworker4-ipmi.qe-ipmi-ur:5123 -L 7578:openqaworker4-ipmi.qe-ipmi-ur:7578
~~~

**Note 1:** You have to use the exact same ports as shown by the port scan because you cannot instruct the applet to use different ports
**Note 2:** You have to execute your ssh client with root privileges for it to be able to bind to ports below 1024. These forwardings need to be present for the applet to be able to download additional files from the BMC
**Note 3:** Make sure to point to the right keyfile by using the -i parameter as ssh will scan different directories if run as root

*4.* Execute the previously downloaded applet. I use the following command to make it work with wayland:
~~~
LANG=C _JAVA_AWT_WM_NONREPARENTING=1 javaws -nosecurity -jnlp jviewer\ \(1\).jnlp
~~~
*5.* You should now be able to control the machine/BMC with all its features (e.g. mounting ISO images as virtual CD)

## Use a production host for testing backend changes locally, e.g. svirt, powerVM, IPMI bare-metal, s390x, etc.

0. Find out which type of worker slot you need for the specific job you want to run, e.g. by checking which worker slots were used for previous runs of the job on OSD or by looking for the job's worker class in the [workers table](https://openqa.suse.de/admin/workers).
1. Configure an additional worker slot in your local `workers.ini` using worker settings from the corresponding production worker (see the sketch after this list). The production worker config can be found in [workerconf.sls](https://gitlab.suse.de/openqa/salt-pillars-openqa/-/blob/master/openqa/workerconf.sls) or on the hosts themselves.
2. Take out the corresponding worker slot from production using the systemd commands mentioned in [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples). This is important to prevent multiple jobs from using the same svirt host.
3. Start the locally configured worker slot and clone/run some jobs.
4. When you're done, bring back the production worker slots using the systemd commands mentioned in [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples).
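
A sketch of what such an additional local worker slot could look like in `workers.ini`, here for an svirt worker (all values are placeholders; take the real ones from workerconf.sls):

```
cat >> /etc/openqa/workers.ini << 'EOF'
[101]
WORKER_CLASS = svirt-xen-hvm
VIRSH_HOSTNAME = production-svirt-host.example.suse.de
VIRSH_USERNAME = root
VIRSH_PASSWORD = secret
EOF
```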

### Alternatives
It is also possible to test svirt backend changes fully locally, at least when running tests via KVM is sufficient. Check out [os-autoinst's documentation](https://github.com/os-autoinst/os-autoinst/blob/master/doc/backends.md#svirt=) for further details.

## Dealing with PowerEdge SAP servers from Dell
### Restoring access to the iDRAC web interface
If iDRAC returns a 400 error it might be due to a wrong DNS setting. This is especially likely if you have just changed the DNS entry. Try to access iDRAC via its IP which should still work. Then go to iDRAC settings -> Network -> General settings and update the DNS iDRAC name to match the *not* fully qualified domain (e.g. `qesapworker-prg4-mgmt` for https://qesapworker-prg4-mgmt.qa.suse.cz).

### Accessing the management interface via SSH
It is possible to access the management interface via SSH as well (using the same user name and password as for the web interface). Check out further Wiki sections for useful commands or the [manual](https://dl.dell.com/content/manual65464730-integrated-dell-remote-access-controller-9-racadm-cli-guide.pdf?language=en-us).
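
For example, querying basic system information over SSH could look like this (a sketch; the hostname is the example from above and the user name is an assumption):

```
ssh root@qesapworker-prg4-mgmt.qa.suse.cz racadm getsysinfo
```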

### Recovering BIOS
If the BIOS appears completely broken (e.g. after a firmware update) you may try to invoke `racadm systemerase bios` after accessing the management interface via SSH. This will take a while and afterwards you'll have to redo settings (e.g. the bootmode).

### Cancel/delete stuck iDRAC jobs
Invoke `racadm jobqueue delete -i JID_CLEARALL_FORCE` after accessing the management interface via SSH.

### Check status of BOSS-S2 NVMe disks
Use the "MVCLI BOSS-S2" utility from Dell which you can download from their servers (see https://www.dell.com/support/manuals/de-de/poweredge-r6525/boss-s2_ug/run-boss-s2-cli-commands-on-poweredge-servers-running-the-linux-operating-system?guid=guid-c0f3bd0d-4725-4fed-8bc2-4aa872f3627f&lang=en-us).

### Firmware updates
The easiest way is to download the *Windows* installer (a file that ends with `.EXE`) and upload and install that via the iDRAC web interface. This works for updates of iDRAC itself as well as for BIOS updates and firmware of various components. Uploading the GNU/Linux version (a file that ends with `.BIN`) is *not* possible this way. One can track the progress of those updates via the iDRAC job queue. It is possible to schedule two updates that require a reboot at the same time (e.g. BIOS update and SAS-RAID firmware) and do them this way in one go.

## Backup

Both openqa.opensuse.org and openqa.suse.de run on virtual machine clusters that provide redundancy and differential backup using snapshotting of the involved storage. SUSE-IT currently provides backups going back up to 3 days with two daily backups conducted at 23:10Z and 11:00Z. With this it is possible in cases of catastrophic data loss to recover (raise a ticket over https://sd.suse.com in that case). Additionally, an automatic backup for the o3 webui host was introduced with https://gitlab.suse.de/okurz/backup-server-salt/tree/master/rsnapshot, so far covering /etc and the SQL database dumps. Fixed assets and testresults are backed up on storage.qa.suse.de (see https://gitlab.suse.de/openqa/salt-states-openqa/-/merge_requests/612).

### openQA database backups

Database backups of o3+osd are available on backup.qa.suse.de, accessible over ssh, same credentials as for the OSD infrastructure.

### Fallback deployment on AWS

To get an instance running from a backup in case of a disaster, one can be created on AWS with this configuration:

#### Launch instance

##### Web Interface, from scratch (only if necessary, otherwise just use the template below)

- Ensure your region is **Frankfurt, Germany**
- Pick a **t3.large** with `openSUSE Leap` on AWS Marketplace
- Add two disks
    - 10 GiB for the root filesystem should be sufficient (can be easily extended later if needed)
    - The OSD database alone needs > 30 GiB and results plus assets will also need a lot (e.g. > 4 GiB for TW snapshot ISO) so take at least 100 GiB for the 2nd drive
- The security group needs to include ssh and http
- Add `openqa_created_by`, `openqa_ttl` and `team:qa-tools` tags

##### Launch from a template

Note: When you modify the template (creating a new version), be sure to set the new version as the default.

- Go to the [openQA-webUI-openSUSE-Leap](https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#LaunchTemplateDetails:launchTemplateId=lt-002dfbcbd2f818e4c) Template
- Select "Actions - Launch instance from template"
- Choose your key pair
- Click "Launch instance"

###### Command line

For configuring aws cli, see [below](https://progress.opensuse.org/projects/openqav3/wiki/Wiki#Configure-aws-cli)

[aws run-instances docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html)

    aws ec2 run-instances --launch-template LaunchTemplateId=lt-002dfbcbd2f818e4c --key-name <your-keyname>
    # or
    aws ec2 run-instances --launch-template LaunchTemplateName=openQA-webUI-openSUSE-Leap --key-name <your-keyname>

For this you have to create a key pair first, if you haven't done so.
Save the result and look for the `InstanceId`.

#### Transfer keys

Since an instance is always created with a single key, public keys of all users need to be deployed by whoever owns that key.

**Note**: `osd2` refers to the instance created above. Replace with the instance IP or add an alias to your SSH config.

    ssh openqa.suse.de "sudo su -c 'cat /home/*/.ssh/authorized_keys'" | ssh ec2-user@osd2 "cat - >> ~/.ssh/authorized_keys"

#### Bootstrapping

```
ssh osd2
sudo su
parted --script /dev/nvme1n1 mklabel gpt && parted --script /dev/nvme1n1 mkpart ext4 4096s 100%
mkfs.ext4 /dev/nvme1n1p1
vim /etc/fstab # add mount to fstab
mkdir /space && mount /dev/nvme1n1p1 /space
mkdir -p /space/pgsql/data
mkdir -p /var/lib/pgsql
ln -s /space/pgsql/data /var/lib/pgsql/data
zypper in postgresql-server # needed for the postgres user/group
chown -R postgres.postgres /space/pgsql # without correct group postgresql.service fails
mkdir -p /space/openqa
mkdir -p /var/lib/openqa
mount /space/openqa /var/lib/openqa -o bind # openQA also requires a lot of space
curl -s https://raw.githubusercontent.com/os-autoinst/openQA/master/script/openqa-bootstrap | bash -x

ssh -A backup.qa.suse.de
rsync --progress /home/rsnapshot/alpha.0/openqa.suse.de/var/lib/openqa/SQL-DUMPS/2022-02-08.dump ec2-user@osd2:/tmp

ssh osd2
sudo -u postgres createdb -O geekotest openqa-osd # create pristine db for OSD import (to avoid conflicts with existing data)
sudo -u geekotest pg_restore -d openqa-osd /tmp/2022-02-08.dump # import data, will take a while (22m is a realistic time)
vim /etc/openqa/openqa.ini # change auth from Fake to OpenID
vim /etc/openqa/database.ini # change database to openqa-osd
vim /etc/openqa/client.conf # change key and secret to correct one
systemctl restart openqa-webui
```

##### Configure aws cli

You can use the command

    aws configure

but it doesn't actually help you with the possible values, so you can just create the file yourself like this:

    % cat ~/.aws/config
    [default]
    region = eu-central-1
    output = json
    % cat ~/.aws/credentials
    [default]
    aws_access_key_id = ABCDE
    aws_secret_access_key = FGHIJ

## Best practices for infrastructure work

* As in the OSD deployment, we should look for failed grafana alerts if users report something suspicious
* Collect all the information between "last good" and "first bad" and then also find the git diff in openqa/salt-states-openqa
* Apply proper "scientific method" with written down hypotheses, experiments and conclusions in tickets, follow https://progress.opensuse.org/projects/openqav3/wiki#Further-decision-steps-working-on-test-issues
* Keep salt states to describe what should *not* be there
* Try out older btrfs snapshots in systems for crosschecking and boot with disabled salt. In the kernel cmdline append `systemd.mask=salt-minion.service`
* Team should conduct a work backlog check on a daily basis, e.g. look for urgent tickets related to infrastructure problems
* For hardware component replacement, create an EngInfra ticket for coordination, order the replacement on private expenses and get reimbursed using https://intra.suse.net/company/company-news/department/finance/claim-expenses/claim-expenses/ or have the order placed with the help of line managers, let the components be delivered to the according place, e.g. SUSE Nuremberg datacenter, and inform EngInfra in the ticket to have them conduct the physical component replacement
* For ordering new machines follow https://mysuse.sharepoint.com/sites/SUSEBusinessCriticalLinux/Shared%20Documents/Hardware%20Order/E&I%20Hardware.pdf (get quotes from vendor, create ticket with procurement, CC osd-admins+mgriessmeier, wait for purchase order (PO) approval, order with vendor and ask them to include PO number in invoice)
* Prefer `reload` over `restart` where available, e.g. `systemctl reload postgres` - in general `systemctl cat postgres` will show available commands for any service
* Test reboot stability of machines with commands like in https://progress.opensuse.org/issues/78010#note-31 e.g.

```
for run in {01..30}; do for host in $hosts; do echo -n "run: $run, $host: ping .. " && timeout -k 5 600 sh -c "until ping -c30 $host >/dev/null; do :; done" && echo -n "ok, ssh .. " && timeout -k 5 600 sh -c "until nc -z -w 1 $host 22; do :; done" && echo -n "ok, uptime/reboot: " && ssh $host "uptime && sudo reboot" && sleep 120 || break; done || break; done
```

# Automatic submission of packages
Every commit to the master branch of the git repositories of https://github.com/os-autoinst/os-autoinst and https://github.com/os-autoinst/openQA is considered a stable release and triggers package builds within https://build.opensuse.org/project/show/devel:openQA, in particular https://build.opensuse.org/package/show/devel:openQA/os-autoinst and https://build.opensuse.org/package/show/devel:openQA/openQA.

http://jenkins.qa.suse.de/job/trigger-openQA_in_openQA-TW/ using https://github.com/os-autoinst/scripts/blob/master/trigger-openqa_in_openqa is monitoring the download repositories for new versions and triggers openQA-in-openQA tests as visible on https://openqa.opensuse.org/group_overview/24 . http://jenkins.qa.suse.de/job/monitor-openQA_in_openQA-TW/ monitors the test execution using https://github.com/os-autoinst/scripts/blob/master/monitor-openqa_job and on test success triggers http://jenkins.qa.suse.de/job/submit-openQA-TW-to-oS_Fctry/ periodically (with a build throttle as decided together with openSUSE reviewers) using https://github.com/os-autoinst/scripts/blob/master/os-autoinst-obs-auto-submit.

This step prepares openQA related packages for automatic submission into openSUSE:Factory in https://build.opensuse.org/project/show/devel:openQA:tested, awaits build+check results and then creates automatic submissions to openSUSE:Factory for inclusion of packages into openSUSE Tumbleweed. This approach could also be extended for automatic submission to openSUSE Leap, SLE PackageHub or directly to SLE using maintenance updates based on a configurable schedule with additional check steps as applicable. Given that openQA is developed based on a rolling-release model with no maintenance branches, any updates to base products supporting openQA would be new version updates along with dependency package updates as necessary.