# Introduction

This is the organisation wiki for the **openQA Project**.
The source code is hosted in the [os-autoinst github project](http://github.com/os-autoinst/), especially [openQA itself](http://github.com/os-autoinst/openQA) and the main backend [os-autoinst](http://github.com/os-autoinst/os-autoinst).

If you are interested in the tests for SUSE/openSUSE products, take a look into the [openqatests](https://progress.opensuse.org/projects/openqatests) project.

If you are looking for entry-level issues to contribute to, please look into the section [[Wiki#Where-to-contribute|Where to contribute]].

{{toc}}

# Organisational

## ticket workflow

The following ticket statuses are in use; their meanings are explained below:

* *New*: No one has worked on the ticket (e.g. the ticket has not been properly refined) or no one feels responsible for the work on this ticket.
* *Workable*: The ticket has been refined and is ready to be picked.
* *In Progress*: The assignee is actively working on the ticket.
* *Resolved*: The complete work on this issue is done and the underlying issue is considered fixed as observed (should be updated together with a link to a merged pull request or a link to a production openQA instance showing the effect).
* *Feedback*: Further work on the ticket needs clarification of open points within the ticket or is awaiting feedback from others or other systems (e.g. automated tests) to proceed. Sometimes also used to ask the assignee about progress when there has been no activity.
* *Blocked*: Further work on the ticket is blocked by some external dependency (e.g. bugs, not implemented features). There should be a link to another ticket, bug, trello card, etc. where it can be seen what the ticket is blocked by.
* *Rejected*: The issue is considered invalid, should not be done, or is considered out of scope.
* *Closed*: As this can only be set by administrators, it is suggested not to use this status.

It is good practice to update the status together with a comment about it, e.g. a link to a pull request or a reason for rejection.

## ticket categories

* *Regressions/Crashes*: Regressions, crashes, error messages
* *Feature requests*: Ideas or wishes for extension, enhancement, improvement
* *Organisational*: Organisational tasks within the project(s), not directly code related
* *Support*: Support of users, usage problems, questions

Please avoid the use of other, deprecated categories.

Suggestion by *okurz*: I recommend avoiding the word "bug" in our categories because of the usual "is it a bug or a feature" struggle. Instead I suggest to strictly define "Regressions & Crashes" to clearly separate "it used to work before" from "this was never part of the requirements" for features. Any ticket of this category also means that our project processes missed something, so we have points for improvement, e.g. extending the things to look out for in code review.

## Epics and Sagas

[epic]s and [saga]s belong to the "coordination" tracker; project contributors are not required to follow this convention but the tracker may be changed automagically in the future: http://mailman.suse.de/mailman/private/qa-sle/2020-October/002722.html

## ticket templates

You can use these templates to fill in tickets and further improve them with more detail over time. Copy the code block, paste it into a new issue, replace every block marked with "<…>" with your content or delete it if not applicable.

### Defects

Subject: `<Short description, example: "openQA dies when triggering any Windows ME tests">`

```
## Observation
<description of what can be observed and what the symptoms are, provide links to failing test results and/or put short blocks from the log output here to visualize what is happening>

## Steps to reproduce
* <do this>
* <do that>
* <observe result>

## Impact
<clearly state the impact of the issue to make sure it is prioritized accordingly and rollbacks/downgrades can be applied>

## Problem
<problem investigation, can also include different hypotheses, should be labeled as "H1" for the first hypothesis, etc.>

## Suggestions
* <what to do as a first step>
* <Fix the actual problem>
* <Consider fixing the design>
* <Consider fixing the team's process>
* <Consider to explore further>

## Workaround
<example: retrigger job>
```

example ticket: #10526

For tickets referencing "auto_review" see
https://github.com/os-autoinst/scripts/blob/master/README.md#auto-review---automatically-detect-known-issues-in-openqa-jobs-label-openqa-jobs-with-ticket-references-and-optionally-retrigger
for a suggested template snippet.
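
A hypothetical subject line following that scheme (the regex and the `:retry` suffix are placeholders; see the linked README for the authoritative format):

```
test fails in bootloader auto_review:"(?s)Error connecting to VNC server.*timed out":retry
```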

### Feature requests

Subject: `<Short description, example: "grub3 btrfs support" (feature)>`


```
## User story
<As a <role>, I want to <do an action>, to <achieve which goal> >

## Acceptance criteria
* <**AC1:** the first acceptance criterion that needs to be fulfilled to do this, example: Clicking "restart button" causes restart of the job>
* <**AC2:** also think about the "not-actions", example: other jobs are not affected>

## Suggestions
* <first task to do as an easy starting point>
* <what to do next, all tasks optionally with an effort estimation in hours, e.g. "(0.5-2h)">
* <optional: mark "optional" tasks>

## Further details
<everything that does not fit into the above sections>
```

example ticket: #10212

Other often-used sections that can be considered:

```
## Motivation
<Where this idea/request comes from, what the context is, etc.; can be used as an alternative to the user story section>

## Acceptance tests
* <**AT1-1:** the first acceptance test for AC1 (see "Acceptance criteria" above), example: "Go to https://openqa.opensuse.org/tests and confirm that the requested new button is visible">
* <**AT1-2:** the second acceptance test for AC1 (see "Acceptance criteria" above), often the counter-test, example: "Go to https://openqa.opensuse.org/tests and confirm that the requested new button is *not* visible if do_not_show_button=True is set in the server config">

## Rollback steps
* <What was implemented as a workaround or temporary measure and needs to be undone before the ticket is resolved. Often added retroactively>

## Out-of-scope
* <What is explicitly decided to be *not* covered within this ticket. Often used to limit the work effort and prevent conflicts by referring to other tickets covering those aspects>
```

## Further decision steps working on test issues

Test issues can come from one of the following sources. Feel free to use the following template in tickets as well:

```
## Problem
* **H1** The product has changed
 * **H1.1** product changed slightly but in an acceptable way without the need for communication with DEV+RM --> adapt test
 * **H1.2** product changed slightly but in an acceptable way found after feedback from RM --> adapt test
 * **H1.3** product changed significantly --> after approval by RM adapt test

* **H2** Fails because of changes in test setup
 * **H2.1** Our test hardware equipment behaves differently
 * **H2.2** The network behaves differently

* **H3** Fails because of changes in test infrastructure software, e.g. os-autoinst, openQA
* **H4** Fails because of changes in test management configuration, e.g. openQA database settings
* **H5** Fails because of changes in the test software itself (the test plan in source code as well as needles)
* **H6** Sporadic issue, i.e. the root problem has been hidden in the system for a long time but does not show symptoms every time
```

This follows the [scientific method](https://en.wikipedia.org/wiki/Scientific_method). Also read http://yellerapp.com/posts/2014-08-11-scientific-debugging.html and http://web.mit.edu/6.031/www/fa17/classes/13-debugging/. It is suggested to use the characters *H* (hypothesis), *E* (experiment), *O* (observation), e.g. like this:

```
* **H3** Fails because of changes in test infrastructure software, e.g. os-autoinst, openQA
  * **H3.1** **REJECTED** Fails because of changes in openQA itself
    * **E3.1-1** (First experiment for hypothesis 3.1) test on an openQA server with the openQA version of "last good"
      * **O3.1-1-1** (First observation for the first experiment for hypothesis 3.1) the test failed in the same way, reject *H3.1*
```

## Additional details needed for non-qemu issues

As the automatic integration tests of os-autoinst and openQA are based on qemu virtualization, please provide detailed manual reproduction steps for any non-qemu related requests; otherwise it is unlikely that the issue or feature request can be addressed.

## pull request handling on github

As a reviewer of pull requests on github for all related repositories, e.g. https://github.com/os-autoinst/os-autoinst-distri-opensuse/pulls, apply labels when PRs stay open for a longer time and cannot be merged, so that we keep our backlog clean and know why PRs are blocked.

* **notready**: Triaged as not ready yet for merging, no (immediate) reaction from the reviewee, e.g. when tests are missing, other scenarios break, or it was only tested for one of SLE/TW
* **wip**: Marked by the reviewee themselves as "[WIP]" or "[DO-NOT-MERGE]" or similar
* **question**: Questions to the reviewee, not answered yet

## Where to contribute?

If you want to help openQA development you can take a look into the existing [issues](https://progress.opensuse.org/projects/openqav3/issues).
You can start with

* [entrance level issues](https://progress.opensuse.org/projects/openqav3/search?q=entrance+level+issue&open_issues=1)
* issues tagged as [easy](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=issue_tags&op%5Bissue_tags%5D=%3D&v%5Bissue_tags%5D%5B%5D=easy&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=is_private&c%5B%5D=due_date&c%5B%5D=relations&group_by=&t%5B%5D=)
* issues tagged as [beginner](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=issue_tags&op%5Bissue_tags%5D=%3D&v%5Bissue_tags%5D%5B%5D=beginner&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=is_private&c%5B%5D=due_date&c%5B%5D=relations&group_by=&t%5B%5D=) - not necessarily "easy" but more suitable for someone coming to a project with little or no domain specific knowledge
* ideas from #65271

There are also some "always valid" tasks to work on:

* *improve test coverage*:
 * *user story*: As an openQA backend as well as test developer, I want better test coverage of our projects to reduce technical debt
 * *acceptance criteria*: test coverage is significantly higher than before
 * *suggestions*: check current coverage in each individual project (os-autoinst/openQA/os-autoinst-distri-opensuse) and add tests as necessary

# Use cases

The following use cases 1-6 have been defined within a SUSE workshop (others have been defined later) to clarify how different actors work with openQA. Some of them are already covered quite well within openQA; others are stated as motivation for further feature development.

## Use case 1
**User:** QA-Project Management
**primary actor:** QA Project Manager, QA Team Leads
**stakeholder:** Directors, VP
**trigger:** product milestones, providing a daily status
**user story:** „As a QA project manager I want to check on a daily basis the „openQA Dashboard“ to get a summary/an overall status of the „reviewers results“ in order to take the right actions and prioritize tasks in QA accordingly.“

## Use case 2
**User:** openQA-Admin
**primary actor:** Backend-Team
**stakeholder:** QA-Prjmgr, QA-TL, openQA Tech-Lead
**trigger:** Bugs, features, new testcases
**user story:** „As an openQA admin I constantly check in the web-UI the system health and I manage its configuration to ensure smooth operation of the tool.“

## Use case 3
**User:** QA-Reviewer
**primary actor:** QA-Team
**stakeholder:** QA-Prjmgr, Release-Mgmt, openQA-Admin
**trigger:** every new build
**user story:** „As an openQA-Reviewer at any point in time I review on the webpage of openQA the overall status of a build in order to track and find bugs, because I want to find bugs as early as possible and report them.“

## Use case 4
**User:** Testcase-Contributor
**primary actor:** All development teams, Maintenance QA
**stakeholder:** QA-Reviewer, openQA-Admin, openQA Tech-Lead
**trigger:** features, new functionality, bugs, new product/package
**user story:** „As a developer, when there are new features, new functionality, bugs, or a new product/package in git, I contribute my testcases because I want to ensure good quality submissions and smooth product integration.“

## Use case 5
**User:** Release-Mgmt
**primary actor:** Release Manager
**stakeholder:** Directors, VP, PM, TAMs, Partners
**trigger:** Milestones
**user story:** „As a Release-Manager on a daily basis I check on a dashboard for the product health/build status in order to act early in case of failures and have concrete and current reports.“

## Use case 6
**User:** Staging-Admin
**primary actor:** Staging-Manager for the products
**stakeholder:** Release-Mgmt, Build-Team
**trigger:** every single submission to projects
**user story:** „As a Staging-Manager I review the build status of packages with every staged submission to the „staging projects“ in the „staging dashboard“ and the test-status of the pre-integrated fixes, because I want to identify major breakage before integration to the products and provide fast feedback back to the development.“

## Use case 7
**User:** Bug investigator
**primary actor:** Any bug assignee for openQA observed bugs
**stakeholder:** Developer
**trigger:** bugs
**user story:** „As a developer that has been assigned a bug which has been observed in openQA I can review referenced tests, find a newer and the most recent job in the same scenario, understand what changed since the last successful job, and see what other jobs show the same symptoms, to investigate the root cause fast and use openQA for verification of a bug fix.“

# Thoughts about categorizing test results, issues, states within openQA
by okurz

When reviewing test results it is important to distinguish between different causes of "failed tests".

## Nomenclature

### Test status categories
A common definition about the status of a test regarding the product it tests is "false|true positive|negative" as described on https://en.wikipedia.org/wiki/False_positives_and_false_negatives. "positive|negative" describes the outcome of a test ("positive": the test signals the presence of an issue; "negative": no signal) whereas "false|true" describes the conclusion of the test regarding the presence of issues in the SUT or product in our case ("true": correct reporting; "false": incorrect reporting). For example "true negative" means the test was successful, no issues were detected and there are none, the product is working as expected by the customer. Another example: Think of testing as of a fire alarm. An alarm (event detector) should only go off (be "positive") *if* there is a fire (event to detect) --> "true positive", whereas *if* there is *no* fire there should be *no* alarm --> "true negative".

Another common but potentially ambiguous categorization:

* *broken*: the test is not behaving as expected (ambiguity: "as expected" by whom?) --> commonly a "false positive", can also be a "false negative" but that is hard to detect
* *failing*: the test is behaving as expected, but the test output is a fail --> "true positive"
* *working*: the test is behaving as expected (with no comment regarding the result, though some might ambiguously imply 'result is negative')
* *passing*: the test is behaving as expected, but the result is a success --> "true negative"

If in doubt declare a test as "broken". We should review the test and examine if it is behaving as expected.

Be careful about "positive/negative" as some might also use "positive" to incorrectly denote a passing test (and "negative" for a failing test), i.e. as an indicator of a "working product", not an indicator of an "issue present". If you argue what is "used in common speech", think about how "false positive" is used as in "false alarm" --> "positive" == "alarm raised"; also see https://narainko.wordpress.com/2012/08/26/understanding-false-positive-and-false-negative/

### Prioritization of work regarding categories
In this sense development+QA want to accomplish a "true negative" state whenever possible (no issues present, therefore none detected). As QA and test developers we want to prevent "false positives" ("false alarms" declaring a product as broken when it is not but the test failed for other reasons), also known as "type I errors", and "false negatives" (a product issue is not caught by tests and might "slip through" QA and at worst is only found by an external customer), also known as "type II errors". Also see https://en.wikipedia.org/wiki/Type_I_and_type_II_errors. In the context of openQA and system testing paired with screen matching a "false positive" is much more likely as the tests are very susceptible to subtle variations and changes even if they should be accepted. So when in doubt, it is better to create an issue in progress, look at it again, and find that it was a false alarm, than to waste more people's time with INVALID bug reports by believing the product to be broken when it isn't. To quote Richard Brown: "I […] believe this is the route to ongoing improvement - if we have tests which produce such false alarms, then that is a clear indicator that the test needs to be reworked to be less ambiguous, and that IS our job as openQA developers to deal with".

## Further categorization of statuses, issues and such in testing, especially automatic tests
By okurz

This categorization scheme is meant to help communication in written or spoken discussions by being simple, concise and easy to remember while unambiguous in every case.
While used for naming, it should also be used as a decision tree and can be followed from the top along each branch.

### Categorization scheme

To keep it simple I will try to go in steps of deciding if a potential issue is of one of two categories in every step (maybe three) and go further down from there. The degree of further detailing is not limited, i.e. it can be further extended. The naming scheme follows arabic numbers (for two levels just 1 and 2), with one digit added from the right for every additional level of decision step and detail, without any separation between the digits, e.g. "1111" for the first type in every level of detail up to level four. Also, I am thinking of giving the fully written form a phonetic name to unambiguously identify each one on every level as long as no more individual levels are necessary. The alphabet should be reserved for higher levels and higher priority types.
Every leaf of the tree must have an action assigned to it.

1 **failed** (ZULU)
11 new (passed->failed) (YANKEE)
111 product issue ("true positive") (WHISKEY)
1111 unfiled issue (SIERRA)
11111 hard issue (openqa *fail*) (KILO)
111111 critical / potential ship stopper (INDIA) --> immediately file bug report with "ship_stopper?" flag; opt. inform RM directly
111112 non-critical hard issue (HOTEL) --> file bug report
11112 soft issue (openqa *softfail* on job level, not on module level) (JULIETT) --> file bug report on failing test module
1112 bugzilla bug exists (ROMEO)
11121 bug was known to openqa / openqa developer --> cross-reference (bug->test, test->bug) AND raise review process issue, improve openqa process
11122 bug was filed by other sources (e.g. beta-tester) --> cross-reference (bug->test, test->bug)
112 test issue ("false positive") (VICTOR)
1121 progress issue exists (QUEBEC) --> cross-reference (issue->test, test->issue)
1122 unfiled test issue (PAPA)
11221 easy to do w/o progress issue
112211 need needles update --> re-needle if sure, TODO how to notify?
112212 pot. flaky, timeout
1122121 retrigger yields PASS --> comment in progress about flaky issue fixed
1122122 reproducible on retrigger --> file progress issue
11222 needs progress issue filed --> file progress issue
12 existing / still failing (failed->failed) (XRAY)
121 product issue (UNIFORM)
1211 unfiled issue (OSCAR) --> file bug report AND raise review process issue (why has it not been found and filed?)
1212 bugzilla bug exists (NOVEMBER) --> ensure cross-reference, also see rules for 1112 ROMEO
122 test issue (TANGO)
1221 progress issue exists (MIKE) --> monitor, if persisting reprioritize test development work
1222 needs progress issue filed (LIMA) --> file progress issue AND raise review process issue, see 1211 OSCAR
2 **passed** (ALFA)
21 stable (passed->passed) (BRAVO)
211 existing "true negative" (DELTA) --> monitor, maybe can be made stricter
212 existing "false negative" (ECHO) --> needs test improvement
22 fixed (failed->passed) (CHARLIE)
221 fixed but "false negative" (GOLF) --> potentially revert test fix, also see 212 ECHO
222 fixed "true negative" (FOXTROTT) --> TODO split monitor, see 211 DELTA
2221 was test issue --> close progress issue
2222 was product issue
22221 no bug report exists --> raise review process issue (why was it not filed?)
22222 bug report exists
222221 was marked as RESOLVED FIXED

Priority from high to low: INDIA->OSCAR->HOTEL->JULIETT->…

# Important ticket queries

* All auto-review tickets: https://progress.opensuse.org/projects/openqav3/issues?query_id=697 , see https://github.com/os-autoinst/scripts/blob/master/README.md#auto-review---automatically-detect-known-issues-in-openqa-jobs-label-openqa-jobs-with-ticket-references-and-optionally-retrigger for further information regarding auto-review
* All auto-review+force-result tickets: https://progress.opensuse.org/projects/openqav3/issues?query_id=700

# Proposals for uses of labels
With [Show bug or label icon on overview if labeled (gh#550)](https://github.com/os-autoinst/openQA/pull/550) it is possible to add custom labels just by writing them. Nevertheless, a convention should be found for a common benefit. <del>Beware that labels are also automatically carried over with [Carry over labels from previous jobs in same scenario if still failing (gh#564)](https://github.com/os-autoinst/openQA/pull/564) which might make consistent test failures less visible when reviewers only look for test results without labels or bugrefs.</del> Labels are not automatically carried over anymore ([gh#1071](https://github.com/os-autoinst/openQA/pull/1071)).

List of proposed labels with their meaning and where they could be applied.

* ***`fixed_<build_ref>`***: If a test failure is already fixed in a more recent build and no bug reference is known, use this label together with a reference to a more recent passed test run in the same scenario. Useful for reviewing older builds. Example (https://openqa.suse.de/tests/382518#comments):

```
label:fixed_Build1501

t#382919
```

* ***`needles_added`***: In case needles were missing for test changes or expected product changes caused needle matching to fail, use this label with a reference to the test PR or a proper reasoning why the needles were missing and how you added them. Example (https://openqa.suse.de/tests/388521#comments):

```
label:needles_added

needles for https://github.com/os-autoinst/os-autoinst-distri-opensuse/pull/1353 were missing, added by jpupava in the meantime.
```

# s390x Test Organisation

See the following picture for a graphical overview of the current s390x test infrastructure at SUSE:

![SUSE s390x test infrastructure](qa_sle_openqa_s390x_test_infrastructure.jpg)

## Upgrades

### on z/VM
#### Special requirements

Due to the lack of proper support for hdd-images on z/VM we need to work around this by having a dedicated worker_class, i.e. a dedicated host, where we run two jobs chained with START_AFTER_TEST:
the first one installs the base system we want to have upgraded and the second one does the actual upgrade (e.g. migration_offline_sle12sp2_zVM_preparation and migration_offline_sle12sp2_zVM).

Since we encountered issues with other preparation jobs randomly starting in between, we need to ensure that we have one complete chain for all migration jobs running on one worker, which means for example:

1. migration_offline_sle12sp2_zVM_preparation
1. migration_offline_sle12sp2_zVM (START_AFTER_TEST=#1)
1. migration_offline_sle12sp2_allpatterns_zVM_preparation (START_AFTER_TEST=#2)
1. migration_offline_sle12sp2_allpatterns_zVM
1. ...

This scheme ensures that all actual upgrade jobs find the prepared system and are able to upgrade it.

### on z/KVM

No special requirements anymore, see details in #18016

## Automated z/VM LPAR installation with openQA using qnipl

There is an ongoing effort to automate the LPAR creation and installation on z/VM. A first idea resulted in the creation of [qnipl](https://github.com/openSUSE/dracut-qnipl). `qnipl` enables one to boot a very slim initramfs from a shared medium (e.g. shared SCSI disks) and supply it with the needed parameters to chainload a "normal SLES installation" using kexec.
This method is required for z/VM because snipl (Simple network initial program loader) can only load/boot LPARs from specific disks, not network resources.

### Setup

1. Get a shared disk for all your LPARs
  * Normally this can easily be done by infra/gschlotter
  * Disks need to be connected to all guests which should be able to network-boot
1. Boot a fully installed SLES on one of the LPARs to start preparing the shared disk
1. Put a DOS partition table on the disk and create one single, large partition on there
1. Put a FS on there. Our first test was on ext2 and it worked flawlessly in our attempts
1. Install `zipl` (the s390x bootloader from IBM) on this partition
  * A simple and sufficient config can be found in [poo#33682](https://progress.opensuse.org/issues/33682)
1. Clone [`qnipl`](https://github.com/nicksinger/dracut-qnipl) to your dracut modules (e.g. /usr/lib/dracut/modules.d/95qnipl)
1. Include the module named `qnipl` in your dracut modules for initramfs generation
  * e.g. in /etc/dracut.conf.d/99-qnipl.conf add: `add_dracutmodules+=qnipl`
1. Generate your initramfs (e.g. `dracut -f -a "url-lib qnipl" --no-hostonly-cmdline /tmp/custom_initramfs`)
  * Put the initramfs next to your kernel binary on the partition you want to prepare
1. From now on you can use `snipl` to boot any LPAR connected with this shared disk from network
  * example: `snipl -f ./snipl.conf -s P0069A27-LP3 -A fa00 --wwpn_scsiload 500507630713d3b3 --lun_scsiload 4001401100000000 --ossparms_scsiload "install=http://openqa.suse.de/assets/repo/SLE-15-Installer-DVD-s390x-Build533.2-Media1 hostip=10.0.0.1/20 gateway=10.0.0.254 Nameserver=10.0.0.1 Domain=suse.de ssh=1"`
  * `--ossparms_scsiload` is then evaluated and used by `qnipl` to kexec into the installer with the (for the installer) needed parameters

### Further details

Further details can also be found in the [github repo](https://github.com/openSUSE/dracut-qnipl/blob/master/README.md). Pull requests, questions and ideas are always welcome!

# Infrastructure setup for o3 (openqa.opensuse.org) and osd (openqa.suse.de)

Both o3 and osd are hosted in SUSE data centers, mostly Nuremberg, Germany, and Prague, Czech Republic.

## o3 (openqa.opensuse.org)

o3 consists of a VM running the web UI and physical worker machines. The VM for o3 has netapp-backed storage on rotating disks, so it is less performant than SSD but cheaper. Eventually we might have the possibility to use SSD-based storage. Currently there are four virtual storage devices provided to o3, totalling more than 10 TB.

The o3 infrastructure is described in detail on https://github.com/os-autoinst/sync-and-trigger/blob/main/openqa-opensuse.md

### Temporary things regarding the move to PRG2

On new-ariel there is the service `autossh-old-ariel.service`. If we get an email `Problem: Interface tun5: Link down` from zabbix, this is the service we need to check.
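
A minimal check and restart sequence, assuming standard systemd tooling:

```
systemctl status autossh-old-ariel.service
journalctl -u autossh-old-ariel.service --since today
systemctl restart autossh-old-ariel.service
```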

### Accessing the o3 infrastructure

~~The o3 webui host as well as the workers within the o3 infrastructure can be accessed over ssh by using `ssh -p 2214 gate.opensuse.org` (and `ssh -p 2213 gate.opensuse.org` for old-ariel).~~
Currently, ariel can only be accessed from the internal SUSE network through `ariel.dmz-prg2.suse.org`.

Ask one of the existing admins within https://app.element.io/#/room/#openqa:opensuse.org or irc://irc.libera.chat/opensuse-factory (so that we know you can be reached over those channels when people have questions about what you did with the ssh access) to put your ssh key on the o3 webui host to be able to login.

To give access to a new user an existing admin can do the following:

```
sudo useradd -G users,trusted --create-home $user
# ensure the .ssh directory exists with proper ownership before appending the key
sudo install -d -m 700 -o $user /home/$user/.ssh
echo "$ssh_key_from_user" | sudo tee -a /home/$user/.ssh/authorized_keys
```

#### SSH configuration

To easily access all hosts behind the jump host you can use the following config for your ssh client (`~/.ssh/config`):

```
Host ariel
  HostName ariel.dmz-prg2.suse.org

# Note that %h as understood by -W needs the real host, aliases won't work:
# kex_exchange_identification: Connection closed by remote host
# Connection closed by UNKNOWN port 65535
Host *.opensuse.org
  ProxyCommand ssh -q -A -x ariel -W %h:%p
```

**A word of warning:** be aware that this enables agent-forwarding to at least the jump host. Please evaluate for yourself how bad you consider the security implications of doing so.

The workers can only be accessed from "ariel", not directly. One can use password authentication on the workers using the root account; ask existing admins for the root password. It is suggested that you use key-based authentication instead. For this, put your ssh keys on all the workers, e.g. using the above configuration and `ssh-copy-id`.
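
For example (hypothetical worker hostname; adjust to the actual machine):

```
ssh-copy-id root@openqaworker21.openqanet.opensuse.org
```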

**Notice:** Some machines are connected to the o3 openQA host from other networks and might need different ways of access, at the time of writing:

* Remote (owner: @ggardet_arm):
 * ip-10-0-0-58
 * oss-cobbler-03
 * siodtw01 (for tests on Raspberry Pi 2,3,4)
* Frankencampus (SUSE internal):
 * aarch64-o3
 * kerosene-8

### Manual command execution on o3 workers

To execute commands manually on all workers within the o3 infrastructure one can do for example the following:

```
hosts="openqaworker20 openqaworker21 openqaworker22 openqaworker23 openqaworker24 openqaworker25 openqaworker26 openqaworker27 openqaworker28 openqaworker-arm21 openqaworker-arm22 qa-power8-3"
for i in $hosts; do echo $i && ssh root@$i "zypper -n dup && reboot" ; done
```

```
for i in $hosts; do echo $i && ssh root@$i " echo 'ssh-rsa … …' >> /root/.ssh/authorized_keys " ; done
```

Mind the correct list of machines.

Formerly, for true transactional servers we used:

```
for i in $hosts; do echo $i && ssh root@$i "(transactional-update -n dup || zypper -n dup) && reboot" ; done
```

To execute commands on additional workers (not reachable from o3 directly):

```
hosts="kerosene.qe.nue2.suse.org aarch64-o3.qe.nue2.suse.org"
for i in $hosts; do echo $i && ssh root@$i "zypper -n dup && reboot" ; done
```

### Automatic update of o3

o3 is continuously deployed; this includes both the webUI host and the workers.

#### Automatic update of o3 webUI host

openqa.opensuse.org applies continuous updates of openQA related packages, conducts nightly updates of system packages and reboots automatically as required, see
http://open.qa/docs/#_automatic_system_upgrades_and_reboots_of_openqa_hosts
for details.

#### Recurring automatic update of openQA workers

Same as the o3 webUI, all o3 workers apply continuous updates of openQA related packages. Additionally, most apply a daily automatic system update and are "Transactional Servers" running openSUSE Leap. power8 is non-transactional with a weekly system update every Sunday.

This was done for a number of reasons including:

* Getting all the machines consistent after a few years of drift
* Making it easier to keep them consistent by leveraging a read-only root filesystem
* Guaranteeing rollbackability by using transactional updates

This was done by rbrown, also to fulfill the prerequisite of getting them viable for multi-machine testing.

These systems currently patch themselves and reboot automatically in the default maintenance window of 0330-0500 CET/CEST.

In case of problems this can be changed in the following way:

* Edit the maintenance window in /etc/rebootmgr.conf
* Disable the automatic reboot with `systemctl disable rebootmgr.service`
* Disable the automatic patching with `systemctl disable transactional-update.timer`
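
As a sketch, the maintenance window can also be adjusted with `rebootmgrctl` instead of editing the file directly (the times shown are examples):

```
rebootmgrctl set-window 03:30 1h30m
rebootmgrctl status
```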

EDIT: 2022-07-11: All o3 machines are effectively not "transactional workers" anymore as openqa-continuous-update.service does a complete `zypper dup` every couple of minutes. With `rebootmgr` triggered for reboot, automatic nightly reboots still happen as necessary. See #111989 for details.

SUSE employees have access to the boot menu of the openQA worker machines, e.g. openqaworker21 and so on. "snapper rollback" can be executed from a booted, functionally operative machine which one can ssh into.

For manual investigation https://github.com/kubic-project/microos-toolbox can be helpful.

#### Rollback of updates

Updates on workers can be rolled back using `transactional-update`, affecting the transactional workers (others are likely not updated that often):

```
for i in $hosts; do echo $i && ssh root@$i "transactional-update rollback last && reboot"; done
```

Updates on the central webUI host openqa.opensuse.org can be rolled back by using either older variants of packages that receive maintenance updates or the locally cached packages in e.g. /var/cache/zypp/packages/devel_openQA/noarch with `zypper in --oldpackage`, similar to https://github.com/os-autoinst/openQA/blob/master/script/openqa-rollback#L39
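
A sketch of such a downgrade (the package file name is a hypothetical placeholder; pick the last known-good version from the cache directory):

```
zypper in --oldpackage /var/cache/zypp/packages/devel_openQA/noarch/openQA-<last_good_version>.noarch.rpm
```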

#### Debugging qemu SUTs in openqa.opensuse.org

SUT: System Under Test

os-autoinst starts qemu with a network type that doesn't allow access from the outside, so ssh is not possible. But qemu is started with a VNC channel available from the host (the openQA worker).
Running vncviewer inside a headless server is useless, but it is possible to use ariel as a jump host with SSH port forwarding to start the vncviewer client from your desktop environment and connect to the VNC channel of the qemu SUT.

```
ssh -L LOCAL_PORT:WORKER_HOSTNAME:QEMU_VNC_PORT ariel
```

For example, assume user **bernhard** wants to connect to openqaworker7:11 using local port **43043**,
and the VNC channel port of openqa-worker@11 is **6001** (5990 + 11).

##### 1. Create SSH tunnel with port forwarding
* on laptop shell 1: `ssh -L 43043:openqaworker7:6001 ariel`
* Keep the shell open to keep the tunnel and the port forwarding alive

##### 2. Open vncviewer
* on laptop shell 2: `vncviewer -Shared localhost:43043`
* `-Shared` is needed to not kick the VNC connection of os-autoinst. If that connection is kicked, the job will terminate and the qemu process will be killed.

### AArch64 specific configurations on o3

On o3, the aarch64 workers need additional configuration.

#### Setup HugePages

You need to set up HugePages support to improve performance with qemu VMs and to match the current aarch64 `MACHINE` configuration.
For the D05 machine, the configuration is: `40` pages with a size of `1G`.
If there are permission issues on `/dev/hugepages/`, check https://progress.opensuse.org/issues/53234
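
A minimal sketch of such a configuration via kernel boot parameters (paths are the openSUSE defaults; adjust the values to the machine):

```
# /etc/default/grub: reserve 40 HugePages of 1G each at boot
GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=1G hugepagesz=1G hugepages=40"

# regenerate the bootloader configuration and reboot afterwards
grub2-mkconfig -o /boot/grub2/grub.cfg
```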

### o3 s390 and other (e.g. bare-metal) workers

`workers.ini`
```
[global]
HOST=http://openqa1-opensuse
WORKER_HOSTNAME = 192.168.112.6
CACHEDIRECTORY=/var/lib/openqa/cache
CACHESERVICEURL=http://10.88.0.1:9530/

[101]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-1-linux144
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux144
ZVM_PASSWORD=lin390
S390_HOST=144

[102]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-2-linux145
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux145
ZVM_PASSWORD=lin390
S390_HOST=145

[103]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-3-linux146
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux146
ZVM_PASSWORD=lin390
S390_HOST=146

[104]
WORKER_CLASS=s390x-zVM-vswitch-l2,s390x-rebel-4-linux147
BACKEND=s390x
ZVM_HOST=192.168.112.9
ZVM_GUEST=linux147
ZVM_PASSWORD=lin390
S390_HOST=147

[105]
WORKER_CLASS=64bit-ipmi,64bit-ipmi-large-mem,64bit-ipmi-amd,blackbauhinia
IPMI_HOSTNAME=blackbauhinia-ipmi.openqanet.opensuse.org
IPMI_USER=ADMIN
IPMI_PASSWORD=ADMIN
SUT_IP=blackbauhinia.openqanet.opensuse.org
SUT_NETDEVICE=em1
IPMI_SOL_PERSISTENT_CONSOLE=1
IPMI_BACKEND_MC_RESET=1

[http://openqa1-opensuse]
TESTPOOLSERVER=rsync://openqa1-opensuse/tests
```

Allow containers to access the cache service (`systemctl edit openqa-worker-cacheservice.service`):
```
# /etc/systemd/system/openqa-worker-cacheservice.service.d/override.conf
[Service]
Environment="MOJO_LISTEN=http://0.0.0.0:9530"
```

The s390 and ipmi workers for openQA are running within podman containers on openqaworker23.
The containers are started using systemd but the unit files are specific to the containers and will end up in a restart-loop if this fact is ignored. Whenever the containers are recreated, the systemd unit files have to be recreated as well.

The containers are started like this (for i=101…104):

```
i=101
podman run --pull=newer -d -h openqaworker23_container --name openqaworker23_container_$i -p $(python3 -c"p=${i}*10+20003;print(f'{p}:{p}')") -e OPENQA_WORKER_INSTANCE=$i -v /opt/s390x_opensuse:/etc/openqa -v /var/lib/openqa/share:/var/lib/openqa/share -v /var/lib/openqa/cache:/var/lib/openqa/cache registry.opensuse.org/devel/openqa/containers15.5/openqa_worker_os_autoinst_distri_opensuse:latest
(cd /etc/systemd/system/; podman generate systemd -f -n --new openqaworker23_container_$i --restart-policy always)
systemctl daemon-reload
systemctl enable container-openqaworker23_container_$i
```

To restart and permanently enable all workers at once:
```
for i in {101..104} ; do systemctl stop container-openqaworker23_container_$i ; done
podman rm -f openqaworker23_container_{101..104}
for i in {101..104} ; do podman run --pull=newer -d -h openqaworker23_container --name openqaworker23_container_$i -p $(python3 -c"p=${i}*10+20003;print(f'{p}:{p}')") -e OPENQA_WORKER_INSTANCE=$i -v /opt/s390x_opensuse:/etc/openqa -v /var/lib/openqa/share:/var/lib/openqa/share -v /var/lib/openqa/cache:/var/lib/openqa/cache registry.opensuse.org/devel/openqa/containers15.5/openqa_worker_os_autoinst_distri_opensuse:latest ; done
for i in {101..104} ; do (cd /etc/systemd/system/; podman generate systemd -f -n --new openqaworker23_container_$i --restart-policy always) ; done
systemctl daemon-reload
for i in {101..104} ; do systemctl reenable container-openqaworker23_container_$i && systemctl restart container-openqaworker23_container_$i ; done
```

Initial ticket when the setup was created: https://progress.opensuse.org/issues/97751
In addition, with https://progress.opensuse.org/issues/153706 we implemented IPMI workers in a similar manner, starting with worker slot 201. Their configuration can be found on worker23 in `/opt/ipmi_opensuse`.

As an alternative, s390x workers can run on the host "rebel" as well. Be aware that openQA workers accessing the same s390x instances must not run in parallel, so only enable one worker instance per s390x instance at a time (see https://progress.opensuse.org/issues/97658 for details).

### Monitoring

openqa.opensuse.org is monitored by SUSE over https://zabbix.suse.de/. There is a user group "Owners/O3" to which SUSE employees can be added. Alert notification is configured via a trigger action in a special Infra-owned RO bot account. E-mail notification is in place for average problems and higher.

There is also an internal munin instance on o3. Anyone wanting to look at the HTML pages can do this:
```
rsync -a o3:/srv/www/htdocs/munin ~/o3-munin/
```
(where "o3" is configured in your ssh config of course)

It's also possible to view the munin page via an ssh tunnel:
```
ssh -L 8080:127.0.0.1:80 o3
```
and then go to http://127.0.0.1:8080/munin/

Configuration of alerts is done in `/etc/munin/munin.conf`
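
A sketch of what an alert contact in `/etc/munin/munin.conf` can look like (contact name and address are hypothetical placeholders; see the munin documentation for the exact syntax):

```
# hypothetical contact; munin mails when a value crosses its warning/critical limits
contact.o3admins.command mail -s "Munin notification ${var:host}" o3-admins@example.com
```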

## Hotfixing

Hotfixes, e.g. patches from an os-autoinst pull request, can be applied to o3 workers like this for a pull request <pr_id>:

```
for i in $hosts; do echo $i && ssh root@$i "(transactional-update run /bin/sh -c \"curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst\" && reboot) || curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst" ; done
```

Hotpatching on all OSD workers works with the same <pr_id> as above with something like:

```
sudo salt --no-color --state-output=changes -C 'G@roles:worker' cmd.run "curl -sS https://patch-diff.githubusercontent.com/raw/os-autoinst/os-autoinst/pull/${pr_id}.patch | patch -p1 --directory=/usr/lib/os-autoinst"
```

## Mitigation of boot failure or disk issues

### Worker stuck in recovery

Check disk health and consider manual fixup of mount points, e.g.:

```
test -e /dev/md/openqa || lsblk -n | grep -v nvme | grep "/$" && mdadm --create /dev/md/openqa --level=0 --force --raid-devices=$(ls /dev/nvme?n1 | wc -l) --run /dev/nvme?n1 || mdadm --create /dev/md/openqa --level=0 --force --raid-devices=1 --run /dev/nvme0n1p3
```

## PPC specific configurations

In one case it was necessary to disable snapshots for petitboot with `nvram -p default --update-config "petitboot,snapshots?=false"` to prevent a race condition between dm_raid and btrfs trying to discover bootable devices (#68053#note-25). In another case https://bugzilla.opensuse.org/show_bug.cgi?id=1174166 caused the boot entries to not be properly discovered and it was necessary to prevent grub from trying to update the according sections (#68053#note-31).

## Moving worker from osd to o3

* Ensure system management, e.g. over IPMI, works. This is untouched by the following steps and can be used during the process for recovery and setup
* Ensure network is configured for DHCP
* Instruct SUSE-IT to change the VLAN for the machine from the oqa.suse.de VLAN to 662 (example: https://sd.suse.com/servicedesk/customer/portal/1/SD-124055, ~~https://infra.nue.suse.com/SelfService/Display.html?id=16458 (not available anymore)~~)
* Remove from osd:

```
salt-key -y -d worker7.oqa.suse.de
```

* On the worker:
 * Change the root password to the o3 one
 * Allow ssh password authentication: `sed -i 's/^PasswordAuthentication/#&/' /etc/ssh/sshd_config && systemctl restart sshd`
 * Ensure ssh based root login works with `zypper -n in openssh-server-config-rootlogin` or, if that is not available, change 'PermitRootLogin' to 'yes' in sshd_config
 * Add your personal ssh key to the machine, e.g. openqaworker7:/root/.ssh/authorized_keys

* Add an entry on o3 to `/etc/dnsmasq.d/openqa.conf` with the MAC address, e.g.

```
dhcp-host=54:ab:3a:24:34:b8,openqaworker7
```

* Add an entry to `/etc/hosts` which dnsmasq picks up to give out a DHCP lease, e.g.

```
192.168.112.12   openqaworker7.openqanet.opensuse.org openqaworker7
```

* Reload dnsmasq with `systemctl restart dnsmasq`

* Adapt the NFS mount point on the worker:

```
sed -i '/openqa\.suse\.de/d' /etc/fstab && echo 'openqa1-opensuse:/ /var/lib/openqa/share nfs4 noauto,nofail,retry=30,ro,x-systemd.automount,x-systemd.device-timeout=10m,x-systemd.mount-timeout=30m 0 0' >> /etc/fstab
```

* Restart the network on the machine (over IPMI) using `systemctl restart network` and monitor on o3 with `journalctl -f -u dnsmasq` until an address is assigned, e.g.:

```
Feb 29 10:48:30 ariel dnsmasq[28105]: read /etc/hosts - 30 addresses
Feb 29 10:48:54 ariel dnsmasq-dhcp[28105]: DHCPREQUEST(eth1) 10.160.1.101 54:ab:3a:24:34:b8
Feb 29 10:48:54 ariel dnsmasq-dhcp[28105]: DHCPNAK(eth1) 10.160.1.101 54:ab:3a:24:34:b8 wrong network
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPDISCOVER(eth1) 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPOFFER(eth1) 192.168.112.12 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPREQUEST(eth1) 192.168.112.12 54:ab:3a:24:34:b8
Feb 29 10:49:10 ariel dnsmasq-dhcp[28105]: DHCPACK(eth1) 192.168.112.12 54:ab:3a:24:34:b8 openqaworker7
```

* Ensure all mountpoints are up:

```
mount -a
```

* Update /etc/openqa/client.conf with the same key as used on other workers for "openqa1-opensuse"
* Update /etc/openqa/workers.ini with a similar config as used on other workers, e.g. based on openqaworker4, example:

```
# diff -Naur /etc/openqa/workers.ini{.osd,}
--- /etc/openqa/workers.ini.osd 2020-02-29 15:21:47.737998821 +0100
+++ /etc/openqa/workers.ini     2020-02-29 15:22:53.334464958 +0100
@@ -1,17 +1,10 @@
-# This file is generated by salt - don't touch
-# Hosted on https://gitlab.suse.de/openqa/salt-pillars-openqa
-# numofworkers: 10
-
 [global]
-HOST=openqa.suse.de
-CACHEDIRECTORY=/var/lib/openqa/cache
-LOG_LEVEL=debug
-WORKER_CLASS=qemu_x86_64,qemu_x86_64_staging,tap,openqaworker7
-WORKER_HOSTNAME=10.X.X.101
-
-[1]
-WORKER_CLASS=qemu_x86_64,qemu_x86_64_staging,tap,qemu_x86_64_ibft,openqaworker7
+HOST=http://openqa1-opensuse
+WORKER_HOSTNAME=192.168.112.12
+CACHEDIRECTORY = /var/lib/openqa/cache
+CACHELIMIT = 50
+WORKER_CLASS = openqaworker7,qemu_x86_64

-[openqa.suse.de]
-TESTPOOLSERVER = rsync://openqa.suse.de/tests
+[http://openqa1-opensuse]
+TESTPOOLSERVER = rsync://openqa1-opensuse/tests
```

* Remove OSD specifics:

```
systemctl disable --now auto-update.timer salt-minion telegraf
for i in NPI SUSE_CA telegraf-monitoring; do zypper rr $i; done
zypper -n dup --force-resolution --allow-vendor-change
```

* If the machine is not a transactional server one has the following options: keep it as is and handle it like power8 (also not transactional), enable transactional updates w/o root being r/o, change to root being r/o on-the-fly, or reinstall as transactional. At least option 2 is suggested, so enable transactional updates:

```
zypper -n in transactional-update
systemctl enable --now transactional-update.timer rebootmgr
```

* Enable apparmor:

```
zypper -n in apparmor-utils
systemctl unmask apparmor
systemctl enable --now apparmor
```

* Switch the firewall from SuSEfirewall2 to firewalld:

```
zypper -n in firewalld && zypper -n rm SuSEfirewall2
systemctl enable --now firewalld
firewall-cmd --zone=trusted --add-interface=br1
firewall-cmd --set-default-zone trusted
firewall-cmd --zone=trusted --add-masquerade
```

* Copy over special openSUSE UEFI staging images, see #63382
* For multi-machine configured workers make sure to have updated IPv4 entries in /etc/wicked/scripts/gre_tunnel_preup.sh, e.g. like the sketch below
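
A sketch of what such an entry can look like (bridge name and remote IP are hypothetical placeholders; the real file is managed via salt-states-openqa):

```
#!/bin/sh
# add a GRE tunnel port on the Open vSwitch bridge towards a peer multi-machine worker
# replace 10.0.0.2 with the peer worker's IPv4 address
ovs-vsctl --may-exist add-port br1 gre1 -- set interface gre1 type=gre options:remote_ip=10.0.0.2
```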

* Check operation with a single openQA worker instance:

```
systemctl enable --now openqa-worker.target openqa-worker@1
```

* Test with an openQA job cloned from a production job, e.g. for openqaworker7:

```
openqa-clone-job --within-instance https://openqa.opensuse.org/t${id} WORKER_CLASS=openqaworker7
```

* After the latest openQA job has finished successfully, enable more worker instances:

```
systemctl unmask openqa-worker@{2..14} && systemctl enable --now openqa-worker@{2..14}
```

* Monitor if the nightly update works, e.g. look for journal entries like:

```
Mar 01 00:08:26 openqaworker7 transactional-update[10933]: Calling zypper up
Mar 01 00:08:51 openqaworker7 transactional-update[10933]: transactional-update finished - informed rebootmgr
Mar 01 00:08:51 openqaworker7 systemd[1]: Started Update the system.
Mar 01 03:30:00 openqaworker7 rebootmgrd[40760]: rebootmgr: reboot triggered now!
Mar 01 03:36:32 openqaworker7 systemd[1]: Reached target openQA Worker.
```

## Distribution upgrades

**Note:** Performing the upgrade differs slightly depending on the host setup:
* On hosts with a writeable `/` you need to enter a root shell, i.e. `sudo bash`
* Transactional hosts require that you use `transactional-update shell`, thereby creating a snapshot which is applied after a reboot, optionally using `--continue` if you want to make further changes to an existing snapshot
* Depending on available space it might be necessary to clean up space before conducting the upgrade, e.g. use `snapper rm <N..M>` to delete older root btrfs snapshots and clean up unneeded packages, e.g. with https://github.com/okurz/scripts/blob/master/zypper-rm-orphaned and https://github.com/okurz/scripts/blob/master/zypper-rm-unneeded
* Upgrades might pull in too many new packages, so better crosscheck with `zypper … dup … --no-recommends`
* Consider using https://github.com/okurz/auto-upgrade/blob/master/auto-upgrade or do it manually (**Tip**: Run this in `screen -d -r || screen` and use e.g. `sudo bash`):

```
new_version=15.5 # Specify the target release

# Change the release via the special $releasever
. /etc/os-release
sed -i -e "s/${VERSION_ID}/\$releasever/g" /etc/zypp/repos.d/*
zypper --releasever=$new_version --gpg-auto-import-keys ref
test -f /etc/openqa/openqa.ini && sudo -u geekotest /opt/openqa-scripts/dump-psql
systemctl stop openqa-continuous-update.timer  # it would interfere, e.g. revert the previous zypper ref call
zypper -n --releasever=$new_version dup --auto-agree-with-licenses --replacefiles --download-in-advance

# Check config files for relevant changes
rpmconfigcheck
for i in $(cat /var/adm/rpmconfigcheck) ; do vimdiff ${i%.rpm*} $i ; done
rm $(cat /var/adm/rpmconfigcheck)

reboot
systemctl --failed
```

* Ensure that the upgrade was really successful, e.g. /etc/os-release should show the new version and the above `zypper dup` command should show no more pending actions
* Crosscheck for any obvious alerts, pipelines failing, user reports, etc.
* On any severe problems consider a complete rollback of the upgrade or a partial downgrade of packages, e.g. force-install older versions of packages and set zypper locks until the issue is fixed
* Monitor for successful openQA jobs on the host

## openQA infrastructure needs (o3 + osd)

TL;DR: new OSD ARM workers are needed, redundancy for o3-ppc is missing, and the rest needs replacement as nearly all current hardware is out of vendor-provided maintenance (as of 2021-05); SSD storage for o3 would be good.

2020-03: SUSE IT (EngInfra) provided us more space for O3 but we have only slow rotating-disk storage. Performance could be improved by providing SSD storage.

What currently costs us the most time and effort is storage space for OSD (openqa.suse.de) ~~both OSD (openqa.suse.de) as well as O3 (openqa.opensuse.org) (2020-03: Situation on o3 resolved with more storage provided by SUSE IT)~~. Both instances (OSD + O3) are using precious netapp storage but there is currently no better approach to use different, external storage. An increase of the available space would be appreciated, ~~o3 being more important right now than osd,~~ see https://progress.opensuse.org/issues/57494 for details. Graphs like
https://stats.openqa-monitor.qa.suse.de/d/nRDab3Jiz/openqa-jobs-test?orgId=1&from=1578343509900&to=1578653794173&fullscreen&panelId=12 show how the usual test backlogs are worked on within OSD by architecture. It can be seen that both the ppc64le and aarch64 backlogs are reduced fast, so we do not need more ppc64le or aarch64 machines. However, we have a stability problem with all three aarch64 workers. Potentially new machine(s) could help, see https://progress.opensuse.org/issues/41882 for details.

With the number of workers and parallel processed tests as well as the increased number of products tested on OSD and users using the system, the workload on OSD constantly increases. CPU load alerts have been seen recently in #96713 and the higher load is visible in https://monitor.qa.suse.de/d/WebuiDb/webui-summary?viewPanel=25 . From time to time we should increase the number of CPU cores on the OSD VM due to the higher usage.
891 117 okurz
## Setup guide for new machines
* Ensure the host has a proper DNS entry
    * The MAC address of new o3 workers generally needs to be added to `/etc/dnsmasq.d/openqa.conf` and an IP address needs to be configured in `/etc/hosts` (both files are on ariel).
    * Hosts located at Frankencampus need a DNS entry via the OPS-Service repo, e.g. https://gitlab.suse.de/OPS-Service/salt/-/merge_requests/3687.
* Change IPMI/BMC passwords to use our common passwords instead of the default IPMI ones
* OSD: Add to salt using https://gitlab.suse.de/openqa/salt-states-openqa
    * Make sure to set /etc/salt/minion_id to the FQDN (see #90875#note-2 for reference; see also the sketch after this list)
    * Check out the next section for details
* o3: Set up the worker manually, see the "Manual worker setup" section below

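A minimal sketch of the salt minion steps (the FQDN is a placeholder):

```
echo "worker42.oqa.prg2.suse.org" > /etc/salt/minion_id  # hypothetical FQDN
systemctl enable --now salt-minion
```
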
### Network (legacy) boot via PXE and OS/worker setup
One can make use of our existing PXE infrastructure (which only supports legacy boot) following these steps:

1. Ensure the boot mode allows legacy boot, e.g. select it in the machine's setup menu manually.
2. Connect via IPMI and select "Leap -> HTTP -> Console" in our PXE menu, append ` console=ttyS0,115200 autoyast=http://s.qa.suse.de/ay.xml.erb rootpassword=<passwd_to_login_as root>` to the command line and wait until the installation has finished.
    * The short link redirects to https://raw.githubusercontent.com/os-autoinst/openQA/master/contrib/ay-openqa-worker.xml.erb
    * There is no need to generate the xml profile; the `autoyast` parameter can work with the .erb extension directly
    * If nothing shows up in the serial console, try a different console parameter, e.g. `console=ttyS1,115200`.
3. Configure repos, e.g. via the line of the scriptlet in http://s.qa.suse.de/ay.xml.erb.
    * The scriptlet cannot be executed in the context of AutoYaST so this is a manual step at this point.
4. Enable SSH access via `systemctl enable --now sshd` and continue via SSH.
5. Install some basic software, e.g. `zypper in htop vim systemd-coredump`.
6. For OSD workers, set up `salt-minion` following the [documentation in our Salt states repository](https://github.com/os-autoinst/salt-states-openqa#setup-production-machine); otherwise set up the worker manually as explained in the next section.
7. Check whether the config looks good on the workers and whether jobs look good on the web UI host.

### Manual worker setup
You likely want to configure the [openQA development repository](https://open.qa/docs/#_development_version_repository).
Then set up the worker like this:

```
echo "requires:openQA-worker" > /etc/zypp/systemCheck.d/openqa.check
zypper -n in openQA-worker openQA-auto-update openQA-continuous-update os-autoinst-distri-opensuse-deps swtpm # openQA worker services plus dependencies for openSUSE distri or development repo if added previously
zypper -n in ffmpeg-4  # for using external video encoder as it is already configured on some machines like ow19, ow20 and power8
zypper -n in nfs-client  # for /var/lib/openqa/share
zypper -n in bash-completion vim htop strace systemd-coredump iputils tcpdump bind-utils  # for general tinkering

echo "openqa1-opensuse:/ /var/lib/openqa/share nfs4 noauto,nofail,retry=30,ro,x-systemd.automount,x-systemd.device-timeout=10m,x-systemd.mount-timeout=30m 0 0" >> /etc/fstab
sed -i 's/\(solver.dupAllowVendorChange = \)false/\1true/' /etc/zypp/zypp.conf

# configure /etc/openqa/client.conf and /etc/openqa/workers.ini, then enable the desired number of worker slots, e.g.:
systemctl enable --now openqa-worker-auto-restart@{1..30}.service openqa-reload-worker-auto-restart@{1..30}.path openqa-auto-update.timer openqa-continuous-update.timer openqa-worker-cacheservice.service openqa-worker-cacheservice-minion.service rebootmgr.service
```
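
For reference, the two config files mentioned above could look like this (a minimal sketch; host, worker class and keys are placeholders to adapt):

```
# /etc/openqa/workers.ini (sketch)
[global]
HOST = http://openqa.example.org
WORKER_CLASS = qemu_x86_64
CACHEDIRECTORY = /var/lib/openqa/cache

# /etc/openqa/client.conf (sketch)
[openqa.example.org]
key = 1234567890ABCDEF
secret = 1234567890ABCDEF
```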

Also copy the OVMF images for staging tests (`/usr/share/qemu/*staging*`) from other workers. Those files are from the `devel` flavor of the OVMF package built in stagings and rings, e.g. https://build.opensuse.org/package/show/openSUSE:Factory:Rings:1-MinimalX/ovmf, just renamed.

#### Optional: Transactional-server
You may choose the transactional server role but a normal server will do as well:

```
sed -i 's@/ btrfs ro@/ btrfs rw@' /etc/fstab
mount -o rw,remount /
btrfs property set -ts / ro false
```

### UEFI boot via iPXE

The following steps are for the o3 environment but can likely also be adapted for setting up OSD workers. This section skips the setup of the OS as it doesn't differ when using UEFI/iPXE. Check out the previous sections for the OS/worker setup.

Find the iPXE and dnsmasq network boot config at: https://github.com/os-autoinst/scripts/tree/master/ipxe
The `boot.ipxe` file contains instructions on how to build the required iPXE binaries for x86_64-BIOS, x86_64-UEFI and aarch64-UEFI that embed the `boot.ipxe` script, which will load `menu.ipxe` via TFTP or HTTP from the $next-server.

---

There's a PXE setup as part of `dnsmasq.service` running on ariel. It is currently configured to serve a legacy-only boot menu utilized by some tests. After following these steps, please restore this setup so tests can continue to use it.

First, make a file that contains the iPXE commands to boot available via some HTTP server. Here's how the file could look for installing Leap 15.4 with AutoYaST:
```
#!ipxe
kernel http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/x86_64/loader/linux initrd=initrd console=tty0 console=ttyS1,115200 install=http://download.opensuse.org/distribution/leap/15.4/repo/oss/ autoyast=http://martchus.no-ip.biz/ipxe/ay-openqa-worker.xml rootpassword=…
initrd http://download.opensuse.org/distribution/leap/15.4/repo/oss/boot/x86_64/loader/initrd
boot
```

Then, set up the build of an iPXE UEFI image as explained on https://en.opensuse.org/SDB:IPXE_booting#Setup:
```
git clone https://github.com/ipxe/ipxe.git
cd ipxe
echo "#!ipxe
dhcp
chain http://martchus.no-ip.biz/ipxe/leap-15.4" > myscript.ipxe
```

As you can see, this script contains the URL of the file set up previously. Of course, the commands could be built directly into the image but then you'd need to rebuild and redeploy the image every time you want to make a change (instead of just editing a file on your HTTP server).

To build the image, run:
```
cd src
make EMBED=../myscript.ipxe NO_WERROR=1 bin/ipxe.lkrn bin/ipxe.pxe bin-i386-efi/ipxe.efi bin-x86_64-efi/ipxe.efi
```

Note that these build options are taken from https://github.com/archlinux/svntogit-community/blob/packages/ipxe/trunk/PKGBUILD#L58 because when attempting to build on Tumbleweed I otherwise ran into build errors.

Then you can copy the files to ariel and move them to a location somewhere under `/srv/tftpboot`:
```
# on build host
rsync bin-x86_64-efi/ipxe.efi openqa.opensuse.org:/home/martchus/ipxe.efi
# on ariel
sudo cp /home/martchus/ipxe.efi /srv/tftpboot/ipxe-own-build/ipxe.efi
```

Then configure the use of the image in `/etc/dnsmasq.d/pxeboot.conf` on ariel. Temporarily comment out possibly disturbing lines and make sure the following lines are present:
```
enable-tftp
tftp-root=/srv/tftpboot
pxe-prompt="Press F8 for menu. foobar", 10
dhcp-match=set:efi-x86_64,option:client-arch,7
dhcp-match=set:efi-x86_64,option:client-arch,9
dhcp-match=set:efi-x86,option:client-arch,6
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:efi-x86_64,ipxe-own-build/ipxe.efi
```

Then run `systemctl restart dnsmasq.service` to apply and `journalctl -fu dnsmasq.service` to see what's going on.

### Installation of machines being able to run kexec

If it is possible to directly execute "kexec" on a machine, e.g. on ppc64le machines running petitboot, it is possible to start a remote network installation following https://en.opensuse.org/SDB:Network_installation#Start_the_Installation . See #119008#note-6 for an example.

### Linux Endpoint Protection Agent
Ensure any non-test OS installations have the Linux Endpoint Protection Agent deployed, see https://progress.opensuse.org/issues/123094 and https://confluence.suse.com/display/CS/Sensor+-+Linux+Endpoint+Protection+Agent for details.

### s390 LPAR setup

Originally from #51836-15. To be able to use s390x LPARs as KVM hypervisor hosts we followed these steps:
* Packages that need to be present:
 * multipath-tools
 * libvirt
* directories
 * /var/lib/openqa/share/factory
 * /var/lib/libvirt/images
* services
 * libvirtd
 * multipathd
* ZFCP disk for storing images (see the sketch after this section)
 * cio_ignore -r [fc00,fa00] to whitelist the channels
 * zfcp_host_configure [fa00,fc00] 1 to permanently enable the FCP devices
 * multipath -ll to check what devices are there
 * /usr/bin/rescan-iscsi-bus.sh to discover newly added zfcp disks
 * fdisk to create a new partition
 * mkfs.ext4 to create a file system
* /etc/fstab entries
 * NFS openQA: `openqa.suse.de:/var/lib/openqa/share/factory /var/lib/openqa/share/factory nfs ro 0 0`
 * ZFCP disk: `/dev/mapper/$ID /var/lib/libvirt/images ext4 nobarrier,data=writeback 1`

Additionally execute `echo 'roles: libvirt' >> /etc/salt/grains` and apply the state from https://github.com/os-autoinst/salt-states-openqa/tree/master/libvirt

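The ZFCP disk preparation could be scripted roughly like this (a sketch; the channel IDs 0.0.fa00/0.0.fc00 are placeholders for the machine's actual channels):

```
cio_ignore -r 0.0.fa00,0.0.fc00  # whitelist the channels
zfcp_host_configure 0.0.fa00 1   # permanently enable the FCP devices
zfcp_host_configure 0.0.fc00 1
multipath -ll                    # check which devices show up
```
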
## Take machines out of salt-controlled production

E.g. for investigation, development or manual maintenance work:

```
ssh osd "sudo salt-key -y -d $hostname"
ssh $hostname "sudo systemctl disable --now telegraf $(systemctl list-units | grep openqa-worker-auto-restart | cut -d . -f 1 | xargs)"
```

If you also want to remove all alerts related to that machine, consider executing https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/monitoring/grafana/cleanup_stale_alerts.sh on monitor.qa.suse.de like so (adjust the parameters at the end with appropriately privileged account credentials):

**Caution**: This will remove all alerts currently present in Grafana but not provisioned (e.g. manually created ones)

```
ssh -t root@monitor.qa.suse.de "curl https://gitlab.suse.de/openqa/salt-states-openqa/-/raw/master/monitoring/grafana/cleanup_stale_alerts.sh | bash -s -- USERNAME PASSWORD"
```

Check out [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples) for systemd commands to start and stop workers.

## How to use samba shares to mount ISOs as virtual CD drives with SuperMicro server/mainboards

SuperMicro based servers have the capability to mount SMB shares containing ISOs as virtual CD drives, e.g. to boot from them.
Install the samba package on any machine you control (this also works from your personal workstation if the server can access it, e.g. over VPN) and create the following `/etc/samba/smb.conf`:

~~~ text
[global]
   workgroup = MYGROUP
   server string = Samba Server
   log level = 3
   client min protocol = core
   server min protocol = core
   guest ok = yes

#============================ Share Definitions ==============================
[recovery]
	comment = recovery
	path = /home/you/recovery
	public = yes
~~~

Now start the samba service. Despite the share being accessible by everyone (be careful about this!), the SuperMicro machines still need a user on the Samba server as they don't support anonymous login. To create a user without requiring a local unix user, you can use the following command:

```samba-tool domain provision --use-rfc2307 --interactive```

Afterwards create a user in the samba database with:

```smbpasswd -a smbtest```
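
To verify the share works before involving the BMC you can query it locally (a sketch, assuming the user `smbtest` and the share `recovery` from above):

~~~
smbclient -L localhost -U smbtest                   # list the exported shares
smbclient //localhost/recovery -U smbtest -c 'ls'   # list files within the share
~~~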

Now it should be possible to access the share. Place an ISO file into the folder configured above and use the following settings in the web UI of the SuperMicro server:

"Share Host": IP of your machine running samba
"Path to Image": Path to your ISO inside the share, e.g. "\recovery\some_boot_medium.iso" (mind the backslashes!)
"Users": The username of the user you just created
"Password": Its password - don't leave this empty as it will not work otherwise

After clicking on "mount" you should now see a connection to your samba server. The machine will try to mount the ISO and if everything goes well, will report "There is an iso file mounted." in the "Health Status" of the Devices.

## "Staging" test instances

SUSE internally we have two virtual machines that can be used for testing, developing and showcasing, reachable under convenient URLs:
* https://openqa-staging-1.qe.nue2.suse.org
* https://openqa-staging-2.qe.nue2.suse.org

You can use those machines and apply changes as desired over ssh.

## Bring back machines into salt-controlled production

```
ssh osd "sudo salt-key -a $hostname && sudo salt --state-output=changes $hostname state.apply"
```

Depending on your actions further manual cleanup might be necessary, e.g. `ssh $hostname "sudo systemctl unmask telegraf salt-minion"`

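To verify the minion is indeed back under salt control, something like this could be used (a sketch):

```
ssh osd "sudo salt-run manage.up | grep $hostname"  # the minion should be listed again
ssh osd "sudo salt $hostname test.ping"             # should return True
```
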
## Access the BMC of machines in the SUSE network zones

One can use SSH port forwarding to access the services of a BMC (e.g. its web interface) for a machine in the "oqa" network security zone. The host "oqa-jumpy" can be used for that like this:

~~~
ssh -t jumpy@oqa-jumpy.dmz-prg2.suse.org -L 8443:openqaworker21.oqa-ipmi-ur:443 -L 8080:openqaworker21.oqa-ipmi-ur:80
~~~

While the SSH session is running you can then use your local browser to access the remote host via e.g. "http://localhost:8080" or "https://localhost:8443".

## Using the built-in Java tools of BMCs to access machines in the security zone

*1.* Follow [Access the BMC of machines in the SUSE network zones](#Access-the-BMC-of-machines-in-the-SUSE-network-zones) to download the built-in Java Web Start file of the machine you want to control
*2.* Use nmap on oqa-jumpy to scan all ports of a machine's BMC. Example:

~~~
jumpy@oqa-jumpy:~> nmap openqaworker21.oqa-ipmi-ur -p-
Starting Nmap 7.70 ( https://nmap.org ) at 2023-01-17 12:23 UTC
Nmap scan report for openqaworker21.oqa-ipmi-ur (…)
Host is up (0.0056s latency).
Not shown: 65525 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
199/tcp  open  smux
427/tcp  open  svrloc
443/tcp  open  https
623/tcp  open  oob-ws-http
5120/tcp open  barracuda-bbs
5122/tcp open  unknown
5123/tcp open  unknown
7578/tcp open  unknown
~~~

*3.* Forward all ports relevant for the Java applet to your local machine:

~~~
sudo ssh -i /home/nicksinger/.ssh/id_rsa.SUSE -4 jumpy@oqa-jumpy.suse.de -L 443:openqaworker21.oqa-ipmi-ur:443 -L 623:openqaworker21.oqa-ipmi-ur:623 -L 5120:openqaworker21.oqa-ipmi-ur:5120 -L 5122:openqaworker21.oqa-ipmi-ur:5122 -L 5123:openqaworker21.oqa-ipmi-ur:5123 -L 7578:openqaworker21.oqa-ipmi-ur:7578
~~~

**Note 1:** You have to use the exact same ports as shown by the port scan because you cannot instruct the applet to use different ports
**Note 2:** You have to execute your SSH client with root privileges for it to be able to bind to ports below 1024. These forwardings need to be present for the applet to be able to download additional files from the BMC
**Note 3:** Make sure to point to the right keyfile using the -i parameter as ssh will scan different directories if run as root

*4.* Execute the previously downloaded applet. I use the following command to make it work with Wayland:
~~~
LANG=C _JAVA_AWT_WM_NONREPARENTING=1 javaws -nosecurity -jnlp jviewer\ \(1\).jnlp
~~~
*5.* You should now be able to control the machine/BMC with all its features (e.g. mounting ISO images as virtual CD)

## Use a production host for testing backend changes locally, e.g. svirt, powerVM, IPMI bare-metal, s390x, etc.

0. Find out which type of worker slot you need for the specific job you want to run, e.g. by checking which worker slots were used for previous runs of the job on OSD or by looking for the job's worker class in the [workers table](https://openqa.suse.de/admin/workers).
1. Configure an additional worker slot in your local `workers.ini` using worker settings from the corresponding production worker. The production worker config can be found in [workerconf.sls](https://gitlab.suse.de/openqa/salt-pillars-openqa/-/blob/master/openqa/workerconf.sls) or on the hosts themselves. A sketch of such a configuration follows after this list.
2. Take out the corresponding worker slot from production using the systemd commands mentioned in [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples). This is important to prevent multiple jobs from using the same svirt host.
3. Start the locally configured worker slot and clone/run some jobs.
4. When you're done, bring back the production worker slots using the systemd commands mentioned in [salt-states-openqa's examples](https://gitlab.suse.de/openqa/salt-states-openqa/-/blob/master/README.md#examples).

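For illustration, a locally configured slot for an svirt production host might look like this (a sketch; the slot number, worker class, hostname and credentials are placeholders to be copied from the actual workerconf.sls entry):

```
# local /etc/openqa/workers.ini (sketch)
[5]
WORKER_CLASS = svirt-xen-hvm
VIRSH_HOSTNAME = openqaw5-xen.qa.suse.de  # hypothetical example host
VIRSH_USERNAME = root
VIRSH_PASSWORD = copied-from-workerconf
```
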
### Alternatives
It is also possible to test svirt backend changes fully locally, at least when running tests via KVM is sufficient. Check out [os-autoinst's documentation](https://github.com/os-autoinst/os-autoinst/blob/master/doc/backends.md#svirt=) for further details.

## Dealing with PowerEdge SAP servers from Dell
### Accessing the management interface via SSH
It is possible to access the management interface via SSH as well (using the same user name and password as for the web interface). Check out further wiki sections for useful commands or the [manual](https://dl.dell.com/content/manual65464730-integrated-dell-remote-access-controller-9-racadm-cli-guide.pdf?language=en-us) which is also available as a [web page](https://www.dell.com/support/manuals/de-de/integrated-dell-remote-access-cntrllr-8-with-lifecycle-controller-v2.00.00.00/racadm_idrac_pub-v1/racadm-subcommand-details?guid=guid-cd4e81e6-818c-44fb-9e7a-82950425fbbb&lang=en-us).

One very useful pair of commands is `racadm get` and `… set` which allow reading and writing configuration values, e.g. `racadm get iDRAC.NIC.DNSRacName` and `racadm set iDRAC.NIC.DNSRacName somevalue`.

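For example, invoked over SSH in one go (a sketch; user name and host are placeholders):

```
ssh root@qesapworker-prg4-mgmt.qa.suse.cz "racadm get iDRAC.NIC.DNSRacName"
```
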
### Restoring access to the iDRAC web interface
If iDRAC returns a 400 error it might be due to a wrong DNS setting. This is especially likely if you have just changed the DNS entry. Try to access iDRAC via its IP which should still work. Then go to iDRAC settings -> Network -> General settings and update the DNS iDRAC name to match the *not* fully qualified domain (e.g. `qesapworker-prg4-mgmt` for https://qesapworker-prg4-mgmt.qa.suse.cz).

You may also change this setting by accessing the management interface via SSH. The command would be `racadm set iDRAC.NIC.DNSRacName qesapworker-prg4-mgmt` in this case. You may also use `racadm set idrac.webserver.HostHeaderCheck 0` to get rid of this check completely. This is especially useful if you cannot conveniently put in a matching name, e.g. when accessing the web UI via SSH forwarding.

### Recovering BIOS
If the BIOS appears completely broken (e.g. after a firmware update) you may try to invoke `racadm systemerase bios` after accessing the management interface via SSH. This will take a while and afterwards you'll have to redo settings (e.g. the bootmode).

### Cancel/delete stuck iDRAC jobs
Invoke `racadm jobqueue delete -i JID_CLEARALL_FORCE` after accessing the management interface via SSH.

### Check status of BOSS-S2 NVMe disks
Use the "MVCLI BOSS-S2" utility from Dell which you can download from their servers (see https://www.dell.com/support/manuals/de-de/poweredge-r6525/boss-s2_ug/run-boss-s2-cli-commands-on-poweredge-servers-running-the-linux-operating-system?guid=guid-c0f3bd0d-4725-4fed-8bc2-4aa872f3627f&lang=en-us).

### Firmware updates
The easiest way is to download the *Windows* installer (a file that ends with `.EXE`) and upload and install that via the iDRAC web interface. This works not only for updates of iDRAC itself but also for BIOS updates and firmware of various components. Uploading the GNU/Linux version (a file that ends with `.BIN`) is *not* possible this way. One can track the progress of those updates via the iDRAC job queue. It is possible to schedule two updates that require a reboot at the same time (e.g. a BIOS update and SAS-RAID firmware) and do them this way in one go.

## Backup

Both openqa.opensuse.org and openqa.suse.de run on virtual machine clusters that provide redundancy and differential backup using snapshotting of the involved storage. SUSE-IT currently provides backups going back up to 3 days with two daily backups conducted at 23:10Z and 11:00Z. With this it is possible in cases of catastrophic data loss to recover (raise a ticket over https://sd.suse.com in that case). Additionally an automatic backup for the o3 webui host was introduced with https://gitlab.suse.de/okurz/backup-server-salt/tree/master/rsnapshot covering so far /etc and the SQL database dumps. Fixed assets and test results are backed up on storage.qa.suse.de (see https://gitlab.suse.de/openqa/salt-states-openqa/-/merge_requests/612)

### openQA database backups

Database backups of o3+osd are available on backup.qa.suse.de, accessible over ssh, with the same credentials as for the OSD infrastructure.
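For example, to list the available OSD dumps (a sketch, using the path also referenced in the bootstrapping steps below):

```
ssh backup.qa.suse.de "ls /home/rsnapshot/alpha.0/openqa.suse.de/var/lib/openqa/SQL-DUMPS/"
```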

### Fallback deployment on AWS

In case of a disaster, an instance can be created on AWS from a backup with the following configuration:

#### Launch instance

##### Web Interface, from scratch (only if necessary, otherwise just use the template below)

- Ensure your region is **Frankfurt, Germany**
- Pick a **t3.large** with `openSUSE Leap` on AWS Marketplace
- Add two disks
    - 10 GiB for the root filesystem should be sufficient (can be easily extended later if needed)
    - The OSD database alone needs > 30 GiB and results plus assets will also need a lot (e.g. > 4 GiB for a TW snapshot ISO) so take at least 100 GiB for the 2nd drive
- The security group needs to include ssh and http
- Add `openqa_created_by`, `openqa_ttl` and `team:qa-tools` tags

##### Launch from a template

Note: When you modify the template (creating a new version), be sure to set the new version as the default.

- Go to the [openQA-webUI-openSUSE-Leap](https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#LaunchTemplateDetails:launchTemplateId=lt-002dfbcbd2f818e4c) Template
- Select "Actions - Launch instance from template"
- Choose your key pair
- Click "Launch instance"

###### Command line

For configuring the aws cli, see [below](https://progress.opensuse.org/projects/openqav3/wiki/Wiki#Configure-aws-cli).

See the [aws run-instances docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html) for details:

    aws ec2 run-instances --launch-template LaunchTemplateId=lt-002dfbcbd2f818e4c --key-name <your-keyname>
    # or
    aws ec2 run-instances --launch-template LaunchTemplateName=openQA-webUI-openSUSE-Leap --key-name <your-keyname>

For this you have to create a key pair first, if you haven't done so already.
Save the result and look for the `InstanceId`.
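For example, to extract it directly (a sketch, assuming `jq` is installed):

    aws ec2 run-instances --launch-template LaunchTemplateName=openQA-webUI-openSUSE-Leap --key-name <your-keyname> | jq -r '.Instances[0].InstanceId'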

#### Transfer keys

Since an instance is always created with a single key, public keys of all users need to be deployed by whoever owns that key.

**Note**: `osd2` refers to the instance created above. Replace it with the instance IP or add an alias to your SSH config.

    ssh openqa.suse.de "sudo su -c 'cat /home/*/.ssh/authorized_keys'" | ssh ec2-user@osd2 "cat - >> ~/.ssh/authorized_keys"

#### Bootstrapping

```
ssh osd2
sudo su
parted --script /dev/nvme1n1 mklabel gpt && parted --script /dev/nvme1n1 mkpart ext4 4096s 100%
mkfs.ext4 /dev/nvme1n1p1
vim /etc/fstab # add mount to fstab
mkdir /space && mount /dev/nvme1n1p1 /space
mkdir -p /space/pgsql/data
mkdir -p /var/lib/pgsql
ln -s /space/pgsql/data /var/lib/pgsql/data
zypper in postgresql-server # needed for user/group
chown -R postgres.postgres /space/pgsql # without correct group postgresql.service fails
mkdir -p /space/openqa
mkdir -p /var/lib/openqa
mount /space/openqa /var/lib/openqa -o bind # openQA also requires a lot of space
curl -s https://raw.githubusercontent.com/os-autoinst/openQA/master/script/openqa-bootstrap | bash -x

ssh -A backup.qa.suse.de
rsync --progress /home/rsnapshot/alpha.0/openqa.suse.de/var/lib/openqa/SQL-DUMPS/2022-02-08.dump ec2-user@osd2:/tmp

ssh osd2
sudo -u postgres createdb -O geekotest openqa-osd # create pristine db for OSD import (to avoid conflicts with existing data)
sudo -u geekotest pg_restore -d openqa-osd /tmp/2022-02-08.dump # import data, will take a while (22m is a realistic time)
vim /etc/openqa/openqa.ini # change auth from Fake to OpenID
vim /etc/openqa/database.ini # change database to openqa-osd
vim /etc/openqa/client.conf # change key and secret to the correct ones
systemctl restart openqa-webui
```

##### Configure aws cli

You can use the command

    aws configure

but it doesn't actually help you with the possible values, so you can just create the file yourself like this:

    % cat ~/.aws/config
    [default]
    region = eu-central-1
    output = json
    % cat ~/.aws/credentials
    [default]
    aws_access_key_id = ABCDE
    aws_secret_access_key = FGHIJ

## Best practices for infrastructure work

* Same as in the OSD deployment we should look for failed grafana alerts if users report something suspicious
* Collect all the information between "last good" and "first bad" and then also find the git diff in openqa/salt-states-openqa
* Apply a proper "scientific method" with written down hypotheses, experiments and conclusions in tickets, follow https://progress.opensuse.org/projects/openqav3/wiki#Further-decision-steps-working-on-test-issues
* Use salt states to also describe what should *not* be there
* Try out older btrfs snapshots in systems for crosschecking and boot with salt disabled: in the kernel cmdline append `systemd.mask=salt-minion.service`
* The team should conduct a work backlog check on a daily basis, e.g. look for urgent tickets related to infrastructure problems
* For hardware component replacement, create an EngInfra ticket for coordination, order the replacement on private expenses and get reimbursed using https://intra.suse.net/company/company-news/department/finance/claim-expenses/claim-expenses/ or have the order placed with the help of line managers, let the components be delivered to the according place, e.g. the SUSE Nuremberg datacenter, and inform EngInfra in the ticket to have them conduct the physical component replacement
* For ordering new machines follow https://mysuse.sharepoint.com/sites/SUSEBusinessCriticalLinux/Shared%20Documents/Hardware%20Order/E&I%20Hardware.pdf (get quotes from the vendor, create a ticket with procurement, CC osd-admins+mgriessmeier, wait for purchase order (PO) approval, order with the vendor and ask them to include the PO number in the invoice)
* Prefer `reload` over `restart` where available, e.g. `systemctl reload postgres` - in general `systemctl cat postgres` will show available commands for any service
* Test reboot stability of machines with commands like https://github.com/os-autoinst/scripts/blob/master/reboot-stability-check

# Automatic submission of packages
Every commit to the master branch of the git repositories https://github.com/os-autoinst/os-autoinst and https://github.com/os-autoinst/openQA is considered a stable release and triggers package builds within https://build.opensuse.org/project/show/devel:openQA, in particular https://build.opensuse.org/package/show/devel:openQA/os-autoinst and https://build.opensuse.org/package/show/devel:openQA/openQA.

http://jenkins.qa.suse.de/job/trigger-openQA_in_openQA-TW/ using https://github.com/os-autoinst/scripts/blob/master/trigger-openqa_in_openqa is monitoring the download repositories for new versions and triggers openQA-in-openQA tests as visible on https://openqa.opensuse.org/group_overview/24 . http://jenkins.qa.suse.de/job/monitor-openQA_in_openQA-TW/ monitors the test execution using https://github.com/os-autoinst/scripts/blob/master/monitor-openqa_job and on test success periodically triggers http://jenkins.qa.suse.de/job/submit-openQA-TW-to-oS_Fctry/ (with a build throttle as decided together with openSUSE reviewers) using https://github.com/os-autoinst/scripts/blob/master/os-autoinst-obs-auto-submit. This step prepares openQA related packages for automatic submission into openSUSE:Factory in https://build.opensuse.org/project/show/devel:openQA:tested, awaits build+check results and then creates automatic submissions to openSUSE:Factory for inclusion of the packages in openSUSE Tumbleweed.

This approach could also be extended for automatic submission to openSUSE Leap, SLE PackageHub or directly to SLE using maintenance updates based on a configurable schedule with additional check steps as applicable. Given that openQA is developed based on a rolling-release model with no maintenance branches, any updates to base products supporting openQA would be new version updates along with dependency package updates as necessary.