# Introduction

{{toc}}

Also see https://progress.opensuse.org/projects/openqav3/wiki

# Organisational

## ticket workflow

This project adheres to the ticket workflow as described on the parent project: [ticket workflow](https://progress.opensuse.org/projects/openqav3/wiki/Wiki#ticket-workflow)

Also see the [[Wiki#Definition-of-DONE|Definition-of-DONE]] on the use of ticket status, especially when to set *Resolved*.

# test organization on https://openqa.suse.de/

## job group names

### Job group names should be consistent and structured for easy (daily) review of the current status

template:

```
<product_group_short_name> <order_nr>.<product_variant>
```

e.g. "SLE 12 SP1 1.Server". Keep the whitespace for separation consistent, also see https://progress.opensuse.org/issues/9916

### Released products should be named with a prefix 'x' to show up late in the overview page

This way we can keep track of whether tests fail even though the product does not produce new builds. This could help us crosscheck tests. E.g. "x-released SLE 12 SP1 1.Server".

Use a lowercase "x" as all our product names start with capital letters; sorting works regardless (or uppercase first?).

For now we do not retrigger tests on old builds automatically, but any test developer may retrigger them manually, e.g. if they suspect the tests broke and want to confirm that local changes are not at fault.
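
Such a manual retrigger can be done over the web UI or, as a minimal sketch, with the `client` script described in the tips section further below (the job id is an arbitrary example):

```
# restart an existing job on an old build, e.g. to check whether a failure is caused
# by test changes rather than the product; the job id is just an example
client jobs/463859/restart post
```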

# needling best practices

There are other locations where needling best practices can be found, but we should also have the possibility to keep something on the wiki. Feel free to contact me (okurz) and tell me where it should be instead if there is a better place. Also look into [openQA Pitfalls](https://github.com/os-autoinst/openQA/blob/master/docs/Pitfalls.asciidoc)

## applying "workaround" needles

If a test reveals a product issue of minor importance it can make sense to create a needle with the property "workaround" set. This way, if the needle is matched, the test records this as a "soft-fail". To be able to backtrack the product issue, follow up on it and eventually delete the workaround needle once the product issue is fixed, the product issue should be recorded in the needle name itself and at best also in the git commit message adding the needle. If test changes are necessary, the source code should have a corresponding comment referencing the issue as well as marking start and end of the test procedure that is necessary for applying the workaround. Example for a needle name: "gdm-workaround-bsc962806-20160125" referencing bsc#962806
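
For illustration, such a workaround needle's JSON file could look roughly like the following minimal sketch; the tag and coordinates are invented, the "workaround" entry in "properties" is what makes a match count as a soft-fail:

```
# minimal sketch of a workaround needle; tag and coordinates are made up
cat > gdm-workaround-bsc962806-20160125.json <<'EOF'
{
    "tags": [ "displaymanager" ],
    "properties": [ "workaround" ],
    "area": [
        { "xpos": 100, "ypos": 200, "width": 300, "height": 50, "type": "match" }
    ]
}
EOF
```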

*keep in mind:*
Since [gh-os-autoinst#532](https://github.com/os-autoinst/os-autoinst/pull/532) workaround needles are always preferred; otherwise, if two needles match, the first one in alphabetical order wins.

## do not overwrite old needles because the old date confuses people

The needle editor automatically adds a timestamp of the current day to new needle names. When updating a needle, do not overwrite the needle carrying the old date tag, as the stale date confuses people and looks really weird in the needle editor; create a new needle with the current date instead.
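
If you update a needle directly in the needles git repository instead of through the needle editor, renaming both files so that they carry the current date achieves the same; a minimal sketch with made-up file names:

```
# rename a needle to carry the current date instead of the stale one (file names are examples)
git mv inst-welcome-20150403.json inst-welcome-20171018.json
git mv inst-welcome-20150403.png inst-welcome-20171018.png
git commit -m "update inst-welcome needle for changed welcome screen"
```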

## needle individual column entries in tables

**Problem**: Tables might auto-adjust column sizes based on content. Therefore it is unsafe to create needles covering multiple columns in a row. Failing example: https://openqa.suse.de/tests/441169#step/yast2_snapper/23

**Solution**: Needles support multiple areas. Use them to needle the individual cells in this example, as in the sketch below.
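
A rough sketch of such a multi-area needle, with one match area per relevant table cell (file name, tag and coordinates are invented):

```
# sketch of a needle matching two individual table cells instead of a whole row
cat > yast2_snapper-snapshot-row-20171018.json <<'EOF'
{
    "tags": [ "yast2_snapper-snapshots" ],
    "area": [
        { "xpos": 105, "ypos": 310, "width": 120, "height": 18, "type": "match" },
        { "xpos": 420, "ypos": 310, "width": 90, "height": 18, "type": "match" }
    ]
}
EOF
```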

## don't include version-specific content in needles

**Problem**: A needle that covers the version number of an application or the product version often fails on every update, e.g. see [opensuse-42.2-DVD-x86_64-Build0112-xfce@64bit](https://openqa.opensuse.org/tests/228793#step/firefox/10). Obviously the needle does not match because no one so far created a needle for firefox 47 on Leap 42.2 on xfce.

**Solution**: openQA in general supports exclusion areas and even OCR, but they have their [flaws](https://progress.opensuse.org/issues/12858). For now, better carefully select the matching areas so that version numbers are not included, as in the following example:

![needling example](openQA_needle_firefox_wo_version_cropped.png)

# Definition of DONE/READY

Each of the following points has to be fulfilled to regard individual contributions as *DONE*. Not every step has to be done by the same person; the overall completion is the responsibility of the whole team.

## Definition of DONE

Also see http://www.allaboutagile.com/definition-of-done-10-point-checklist/ and https://www.scrumalliance.org/community/articles/2008/september/what-is-definition-of-done-%28dod%29

The following definitions are used to ensure that development on individual tests has been completed covering all the different existing workflows, e.g. covering "hot-fixes" on the productive instance as well as contributions by new contributors with no previous experience and no control over needle generation on productive instances.

* Code changes are made available via a pull request on the github repository
* New tests as individual test modules (i.e. files under `tests/`): They are loaded in main.pm of sle and/or opensuse
* `make test` works (e.g. the automatic travis CI check triggered on each github PR)
* [Guidelines for git commits](http://chris.beams.io/posts/git-commit/) have been followed
* Code has been reviewed (e.g. in the github PR)
* Favored, but depending on criticality/complexity/size: A local verification test has been run, e.g. post a link to a local openQA machine, a screenshot or a logfile (see the sketch after this list)
* Test modules that have been touched have updated metadata, e.g. "Maintainer" and "Summary" (#13034)
* Potentially impacted product variants have been considered, e.g. openSUSE, SLE, validation tests for new product versions currently in development, maintenance tests on older product versions
* Code has been merged (either by reviewer or reviewee after 'LGTM' from others)
* Code has been deployed to osd and o3 (automatic git sync every few minutes)
* If new variables are necessary (feature toggles): A test_suite is executing the test, e.g. the test_suite is created or the variable is added to an existing test_suite over the web interface configuration on osd and/or o3
* If a new test_suite has been created: The test_suite is added to at least one job_group
* Necessary needles are made available as a PR for sle and/or opensuse (depending on where the test is executed, see above for 'main.pm') or are created on the productive instance
* At least one successful test run has been observed on osd or o3 and referenced in the corresponding progress item or bugzilla bug report if one exists. There is one exception: if the test fails in a valid product bug and it is expected that a bug fix will be provided shortly, the test run may also fail when labeled accordingly.
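
For the local verification mentioned above, one way is to clone an existing production job onto a local openQA instance together with the test code from the pull request; a sketch, assuming a local instance on `localhost` and that the unmerged changes live in a fork and branch (options and the CASEDIR override need to be adapted to your setup):

```
# clone a production job to a local instance; job id, fork and branch are placeholders
clone_job --from https://openqa.opensuse.org --host localhost 462022 \
    CASEDIR=https://github.com/<your_fork>/os-autoinst-distri-opensuse.git#<your_branch>
```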

## Definition of READY for new tests

The following points should be considered before a new test is READY to be implemented:

* Either a product bug has been discovered for which there is no automated test in openQA or a FATE request for new features exists
* A test case description exists depicting the prerequisites of the test, the steps to conduct and the expected result
* The impact and applicability for both SLE and openSUSE products has been considered

## ticket backlog refinement

Also see https://progress.opensuse.org/projects/suseqa/wiki#ticket-refinement-and-cleanup-workflow

1. [**Categorize**](https://progress.opensuse.org/projects/openqatests/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=category_id&op%5Bcategory_id%5D=%21*&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=relations&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&c%5B%5D=created_on&group_by=): Goal -> No ticket without category
2. [**Tag**](https://progress.opensuse.org/projects/openqatests/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=subject&op%5Bsubject%5D=%21%7E&v%5Bsubject%5D%5B%5D=%5B&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=relations&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=): Goal -> No ticket without component or responsibility tags

# code contribution review checklist

Check each pull request on https://github.com/os-autoinst/os-autoinst-distri-opensuse against the following rules:

* https://github.com/os-autoinst/os-autoinst-distri-opensuse#coding-style
* DoD is adhered to
* SLE staging impact has been considered (be careful accepting changes during working days when a stable SLE staging project is expected by release managers)

# Test development instances (staging openQA instances)

Contributors cannot afford to verify a newly developed test in all scenarios run by o3 or osd, so tests will break sometimes. It would be useful to use a machine that runs a subset of the scenarios of the official instance(s) to make sure new tests can be deployed with some degree of confidence. But: any "staging openQA instance" would not be able to run everything that is run in production; it just does not scale. So only a subset can be run anyway and something can always be missing. Also, we do not have the hardware capacity to cover everything twice, for both SLE and openSUSE.

Our [DOD](https://progress.opensuse.org/projects/openqatests/wiki/Wiki#Definition-of-DONEREADY) should cover the important steps so that external contributors are motivated to test something locally first. We have a good test review process, and it has to be decided by the reviewer whether to accept the risk of a new test with or without a local verification and covering which scenarios. Depending on the contributors it might make sense to set up a staging server with a subset of tests which is used by multiple test developers to share the burden of openQA setup and administration. For example the YaST team has one available: https://wiki.microfocus.net/index.php/YAST/openQA

If you want to follow this model you can watch [this talk by Christopher Hofmann from oSC16](https://events.opensuse.org/conference/oSC16/program/proposal/986) or ask the YaST team for their experiences.

# Tips for test development and issue investigation

The examples mentioned here write `clone_job` and `client`. Replace these with calls to the scripts of the same name within the openQA installation, with proper arguments to provide your API key as well as the host selection, e.g. `/usr/share/openqa/script/client --host https://openqa.opensuse.org` with your API key configured in `~/.config/openqa/client.conf`.
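
A minimal `~/.config/openqa/client.conf` could look like the following sketch; key and secret are placeholders, generate your own in the web UI under your user's API keys:

```
# ~/.config/openqa/client.conf -- key and secret below are placeholders
[openqa.opensuse.org]
key = 0123456789ABCDEF
secret = FEDCBA9876543210
```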

## Uploading image files to the openQA server and running a test on them

You can manually trigger a one-shot test job with an explicit name, overriding variables as necessary, for example:

as geekotest@openqa:

```
cd /var/lib/openqa/factory/hdd
wget http://<my_host>/<path>.qcow2 -O <new_image_name>.qcow2
cd /var/lib/openqa/factory/iso
/usr/share/openqa/script/client isos post --params SLE-12-SP2-Server-DVD-ppc64le-Build1651-Media1.iso.6.json HDD_1=SLE-12-Server-ppc64le-GM-gnome_with_snapper.qcow2 TEST=migration_offline_sle12_ppc BUILD=1651_<your_short_name>
```

Why `SLE-12-SP2-Server-DVD-ppc64le-Build1651-Media1.iso.6.json`? I checked `SLE-12-SP2-Server-DVD-ppc64le-Build1651-Media1.iso.?.json`: there are `…5…` and `…6…`; `…5…` is for *HA*, so I chose 6.
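
If you are unsure which parameter file to pick you can list and inspect them on the server first, for example (assuming `jq` is installed, otherwise any pager will do):

```
# list the available parameter files for this medium and peek inside to pick the right one
ls SLE-12-SP2-Server-DVD-ppc64le-Build1651-Media1.iso.*.json
jq . SLE-12-SP2-Server-DVD-ppc64le-Build1651-Media1.iso.5.json | less
```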

The job can be cleaned afterwards to tidy up the build history with:

```
client jobs/463859 delete
```

## Create new HDD image with openQA

```
client jobs post DISTRI=sle VERSION=12 FLAVOR=Server-DVD ARCH=ppc64le BACKEND=qemu \
    NOVIDEO=1 OFW=1 QEMUCPU=host SERIALDEV=hvc0 BUILD=okurz_poo9714 \
    ISO=SLE-12-Server-DVD-ppc64le-GM-DVD1.iso INSTALLONLY=1 QEMU_COMPRESS_QCOW2=1 \
    PUBLISH_HDD_1=SLES-12-GM-gnome-ppc64le_snapper_20g.qcow2 TEST=create_gm_ppc_image \
    MACHINE=ppc64le WORKER_CLASS=qemu_ppc64le HDDSIZEGB=20 MAX_JOB_TIME=86400 TIMEOUT_SCALE=10
```

The `MAX_JOB_TIME=86400 TIMEOUT_SCALE=10` allows for interactive login during the process in case you want to manually adjust or debug. Beware though that `TIMEOUT_SCALE=10` also scales the waiting time of `check_screen`, so the whole job might take longer to execute.

To run a test based on the new HDD image, search for a good example job and clone it with the adjusted parameter:

```
clone_job 462022 HDD_1=SLES-12-GM-gnome-ppc64le_snapper_20g.qcow2
```

## Interactive investigation

While a job is running one can connect to the worker (if network access is possible) using VNC. One challenge is that the test is still running and manual interaction with the system interferes with the test and vice versa.

### Making the test stop for long enough to be able to connect

If you can change the test code, i.e. if running on a development machine, you can for example add a `sleep 3600;` or `wait_serial 'CONTINUE';` at the point in the test where you want to connect to the system and interact with it, e.g. to gather additional logs. In case of `wait_serial 'CONTINUE';` you can echo 'CONTINUE' to the serial device to let the test continue, e.g. call `echo 'CONTINUE' > /dev/ttyS0;`.

In case you can not or do not want to change the test code, or your test run is stopping anyway at a certain point with a long enough timeout, you can also increase timeouts with `TIMEOUT_SCALE`, e.g. trigger the job with the variable `TIMEOUT_SCALE=10`. For example a `script_run` with the default timeout of 90 seconds will then wait for 900 seconds (= 15 minutes), which should already give enough time in most cases.
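
For example, to re-run an existing job with all timeouts scaled (the job id is an arbitrary example):

```
# clone an existing job with all timeouts scaled by a factor of 10
clone_job 462022 TIMEOUT_SCALE=10
```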

Another possibility is to enter the interactive mode using the "Interactive mode" button on the "Live view" tab of the running job and then stop the execution. After that the qemu VM will enter debug mode.

### Making the VM active again

In case of interactive mode usage, as mentioned above, the VM will go into debug mode and freeze. To make the VM interactive again, we need to send the 'cont' command over the qemu HMP (human monitor protocol). To perform this within the o3 infrastructure, multiple steps are required:

1) Request adding your ssh public key to access o3
2) Connect to o3 using the following command (this assumes an `o3` host alias in your ssh configuration, see the sketch at the end of this section):

```
ssh o3
```

3) Now you will be able to connect as root to the worker of your choice using ssh
4) Use 'ps' to find the relevant qemu VM instance and get its telnet monitor port. Hint: you can use the VNC port shown when the cursor hovers over the worker's name on the job page, e.g.:

```
ps aux | grep :91
```

5) Connect to the VM using VNC (see next section)
6) Connect to the VM monitor using telnet:

```
telnet localhost 20072
```

7) Type the `cont` command to continue:

```
cont
```

NOTE: please use '^]' as the escape character, detaching will stop the VM.
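
The `ssh o3` shortcut used in step 2 assumes a matching host alias in your ssh client configuration, e.g. the following minimal sketch; hostname and user name are placeholders, use the values you get when your key is added:

```
# ~/.ssh/config -- hostname and user below are placeholders
Host o3
    HostName <o3_gateway_hostname>
    User <your_username>
```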

### VNC port forwarding

After configuring the ssh profile for the connection to o3 it is possible to forward the VNC port over ssh with the following command:

```
ssh -L <local_port_number>:<worker_hostname>:<vnc_port_on_remote_host> -NT4f o3
```

For example:

```
ssh -L 5997:openqa-worker:5997 -NT4f o3
```

After that you can connect to this port using VNC.
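
With the forwarding from the example above in place the connection goes to the local end of the tunnel, for example:

```
# connect through the local end of the ssh tunnel; 5997 matches the forwarded port above
vncviewer -Shared localhost:5997
```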

### Connecting over VNC

The VNC port is shown on the job live view as a hover text on the instance name. Make sure to use a "shared" connection in your vncviewer. `krdc`, the default KDE VNC viewer, as well as `vinagre`, the default GNOME VNC viewer, do this already. For TigerVNC use for example:

```
vncviewer -Shared malbec.arch:91
```

### Forwarding of special shortcuts

The default `vncviewer` in openSUSE/SUSE systems is recommended as it can also be used to forward special keyboard shortcuts, e.g. to change to a text console:
Press *F8* in vncviewer, select *ctrl* and *alt* in the menu, exit the menu, press *F2*.

### Requesting video when by default you do not have video in your environment

Example:

```
clone_job 464665 NOVIDEO=0
```

## Structured test issue investigation

In the case of non-trivial issues it makes sense to use the "scientific method", especially because openQA tests, being system tests, are under the influence of many moving parts. Also see https://progress.opensuse.org/projects/openqav3/wiki#Further-decision-steps-working-on-test-issues about this.

[Bug Hunting and the Scientific Method](https://accu.org/index.php/journals/1714) is a suggested read as well as [How to Fix the Hardest Bug You've Ever Seen: The Scientific Method](http://yellerapp.com/posts/2014-08-11-scientific-debugging.html). It is suggested to note down in tickets the hypotheses about all potentially relevant problem sources, design experiments (which can be as simple as checking a logfile), collect observations, accept or reject the hypotheses and thereby derive a better understanding of what is happening, to eventually come to a conclusion. [s390 dasdfmt fails even though command looks complete in screenshot](https://progress.opensuse.org/issues/12410) can serve as a real-world example ticket of how this can look.