Wiki » History » Version 23

okurz, 2017-03-29 13:45
add code contribution review checklist

# Introduction

{{toc}}

Also see https://progress.opensuse.org/projects/openqav3/wiki

# Organisational

## ticket workflow

This project adheres to the ticket workflow as described on the parent project: [ticket workflow](https://progress.opensuse.org/projects/openqav3/wiki/Wiki#ticket-workflow)

Also see the [[Wiki#Definition-of-DONE|Definition-of-DONE]] on the use of ticket status, especially when to set *Resolved*.
# test organization on https://openqa.suse.de/

## job group names

### Job group names should be consistent and structured for easy (daily) review of the current status

template:

```
<product_group_short_name> <order_nr>.<product_variant>
```

e.g. "SLE 12 SP1 1.Server". Keep the whitespace for separation consistent; also see https://progress.opensuse.org/issues/9916

### Released products should be named with a prefix 'x' to show up late in the overview page

This way we can keep track of whether tests fail even though the product does not produce new builds. This could help us crosscheck tests, e.g. "x-released SLE 12 SP1 1.Server".

Use a lowercase "x" as all our product names start with capital letters; with plain byte-order (ASCII) sorting, uppercase letters sort before lowercase ones, so the prefix ends up last either way.
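
A quick sketch of the effect with plain byte-order sorting (this assumes the overview page sorts lexically, which is not guaranteed):

```shell
# With LC_ALL=C all uppercase letters sort before lowercase ones, and
# "x" sorts after "o", so the "x-released" entry ends up last
printf '%s\n' \
    "x-released SLE 12 SP1 1.Server" \
    "SLE 12 SP1 1.Server" \
    "openSUSE Tumbleweed" | LC_ALL=C sort
```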

For now we do not retrigger tests on old builds automatically, but any test developer may retrigger them manually, e.g. when suspecting that the tests broke and wanting to confirm that local changes are not at fault.

# needling best practices

There are other locations where "needling best practices" can be found, but we should also have the possibility to keep something on the wiki. Feel free to contact me (okurz) and tell me where it should go instead if there is a better place. Also look into [openQA Pitfalls](https://github.com/os-autoinst/openQA/blob/master/docs/Pitfalls.asciidoc).

## applying "workaround" needles

If a test reveals a product issue of minor importance it can make sense to create a needle with the property "workaround" set. This way, if the needle matches, the test records a "soft-fail". To be able to track the product issue and eventually delete the workaround needle once the issue is fixed, record the product issue in the needle name itself and, at best, also in the git commit message adding the needle. If test changes are necessary, the source code should have a corresponding comment referencing the issue as well as marking the start and end of the steps that are only necessary for the workaround. Example needle name: "gdm-workaround-bsc962806-20160125", referencing bsc#962806.

*keep in mind:*
Since [gh-os-autoinst#532](https://github.com/os-autoinst/os-autoinst/pull/532) workaround needles are always preferred; otherwise, if two needles match, the first one in alphabetical order wins.
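
For reference, a needle is stored as a PNG screenshot plus a JSON file next to it. A minimal sketch of such a JSON with the "workaround" property set (the tag, coordinates and exact property syntax here are illustrative assumptions; check an existing needle in the repository for the authoritative format):

```json
{
    "tags": [ "gdm-workaround-bsc962806-20160125" ],
    "properties": [ "workaround" ],
    "area": [
        { "x": 100, "y": 100, "width": 120, "height": 40, "type": "match" }
    ]
}
```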

## do not overwrite old needles because old date confuses people

The needle editor automatically adds a timestamp of the current day to new needles. When updating a needle, do not overwrite the needle carrying the old date tag: that confuses people because it looks really weird in the needle editor.

## needle individual column entries in tables

**Problem**: Tables might auto-adjust column sizes based on content, so it is unsafe to create needles covering multiple columns of a row. Failing example: https://openqa.suse.de/tests/441169#step/yast2_snapper/23

**Solution**: Needles support multiple areas. Use them to needle individual cells in this example.
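
The multi-area approach can be sketched as a needle JSON with one match area per cell (the tag and coordinates are made up for illustration):

```json
{
    "tags": [ "yast2_snapper-snapshot-row" ],
    "area": [
        { "x": 30,  "y": 200, "width": 80, "height": 20, "type": "match" },
        { "x": 300, "y": 200, "width": 60, "height": 20, "type": "match" }
    ]
}
```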

## don't include version-specific content in needles

**Problem**: A needle that covers the version number of an application or product often fails on every update, e.g. see [opensuse-42.2-DVD-x86_64-Build0112-xfce@64bit](https://openqa.opensuse.org/tests/228793#step/firefox/10). The needle does not match because no one had yet created a needle for Firefox 47 on Leap 42.2 on Xfce.

**Solution**: openQA in general supports exclusion areas and even OCR, but these have their [flaws](https://progress.opensuse.org/issues/12858). For now, better carefully select the matching areas so that version numbers are not included, as in the following example:

![needling example](openQA_needle_firefox_wo_version_cropped.png)

# Definition of DONE/READY

Each of the following points has to be fulfilled to regard individual contributions as *DONE*. Not every step has to be done by the same person; overall completion is the responsibility of the whole team.

## Definition of DONE

Also see http://www.allaboutagile.com/definition-of-done-10-point-checklist/ and https://www.scrumalliance.org/community/articles/2008/september/what-is-definition-of-done-%28dod%29

The following definitions ensure that development on individual tests covers all existing workflows, e.g. "hot-fixes" on the productive instance as well as contributions by new contributors with no previous experience and no control over needle generation on productive instances.

* Code changes are made available via a pull request on the github repository
* New tests as individual test modules (i.e. files under `tests/`): they are loaded in main.pm of sle and/or opensuse
* `make test` works (e.g. the automatic travis CI check triggered on each github PR)
* [Guidelines for git commits](http://chris.beams.io/posts/git-commit/) have been followed
* Code has been reviewed (e.g. in the github PR)
* Favored, but depending on criticality/complexity/size: a local verification test has been run, e.g. post a link to a local openQA machine or a screenshot or logfile
* Test modules that have been touched have updated metadata, e.g. "Maintainer" and "Summary" (#13034)
* Code has been merged (either by reviewer or reviewee after 'LGTM' from others)
* Code has been deployed to osd and o3 (automatic git sync every few minutes)
* If new variables are necessary (feature toggles): a test_suite is executing the test, e.g. the test_suite is created or the variable is added to an existing test_suite over the web interface configuration on osd and/or o3
* If a new test_suite has been created: the test_suite is added to at least one job_group
* Necessary needles are made available as PR for sle and/or opensuse (depending on where the test is executed, see above for 'main.pm') or are created on the productive instance
* At least one successful test run has been observed on osd or o3 and referenced in the corresponding progress item or bugzilla bug report if one exists

## Definition of READY for new tests

The following points should be considered before a new test is READY to be implemented:

* Either a product bug has been discovered for which there is no automated test in openQA or a FATE request for new features exists
* A test case description exists depicting the prerequisites of the test, the steps to conduct and the expected result
* The impact and applicability for both SLE and openSUSE products has been considered

# code contribution review checklist

Check each pull request on https://github.com/os-autoinst/os-autoinst-distri-opensuse against the following rules:

* https://github.com/os-autoinst/os-autoinst-distri-opensuse#coding-style
* The DoD is adhered to
* SLE staging impact has been considered (be careful accepting changes during working days when release managers expect a stable SLE staging project)

# Test development instances (staging openQA instances)

Contributors cannot afford to verify a newly developed test in all scenarios run by o3 or osd, so tests will break sometimes. It would be useful to have a machine running a subset of the scenarios of the official instance(s) to make sure new tests can be deployed with some degree of confidence. But any "staging openQA instance" would not be able to run everything that is run in production; it just does not scale. Only a subset can ever be run, and something can always be missing. Also, we do not have the hardware capacity to cover everything twice, considering both SLE and openSUSE. Our [DOD](https://progress.opensuse.org/projects/openqatests/wiki/Wiki#Definition-of-DONEREADY) covers some important steps so that external contributors are motivated to test something locally first. We have a good test review process, and the reviewer has to decide whether to accept the risk of a new test with or without a local verification, and covering which scenarios. Depending on the contributors it might make sense to set up a staging server with a subset of tests, shared by multiple test developers to split the burden of openQA setup and administration. For example the YaST team has one available: https://wiki.microfocus.net/index.php/YAST/openQA

If you want to follow this model you can watch [this talk by Christopher Hofmann from the OSC16](https://events.opensuse.org/conference/oSC16/program/proposal/986) or ask the YaST team for their experiences.

# Tips for test development and issue investigation

Examples mentioned here write `clone_job` and `client`. Replace these by a call to the script of the corresponding name within the openQA installation, with proper arguments for host selection, e.g. `/usr/share/openqa/script/client --host https://openqa.opensuse.org`, and with your API key configured in `~/.config/openqa/client.conf`.
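
These shorthands can be defined as small shell wrappers. A sketch, assuming the script paths of a default openQA package installation and o3 as host (adjust both as needed; the scripts read the API key and secret from `~/.config/openqa/client.conf` themselves):

```shell
# Hypothetical wrappers matching the shorthand used in the examples below
client() {
    /usr/share/openqa/script/client --host https://openqa.opensuse.org "$@"
}

clone_job() {
    /usr/share/openqa/script/clone_job --from https://openqa.opensuse.org "$@"
}
```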

## Uploading image files to the openQA server and running a test on them

You can manually trigger a test job with an explicit name as a one-shot, overriding the variables as necessary, for example:

as geekotest@openqa:

```
cd /var/lib/openqa/factory/hdd
wget http://<my_host>/<path>.qcow2 -O <new_image_name>.qcow2
cd /var/lib/openqa/factory/iso
/usr/share/openqa/script/client isos post --params SLE-12-SP2-Server-DVD-ppc64le-Build1651-Media1.iso.6.json HDD_1=SLE-12-Server-ppc64le-GM-gnome_with_snapper.qcow2 TEST=migration_offline_sle12_ppc BUILD=1651_<your_short_name>
```

Why `SLE-12-SP2-Server-DVD-ppc64le-Build1651-Media1.iso.6.json`? Checking `SLE-12-SP2-Server-DVD-ppc64le-Build1651-Media1.iso.?.json` shows both `…5…` and `…6…`; `…5…` is for *HA*, so 6 was chosen.

The job can be cleaned up afterwards to tidy the build history:

```
client jobs/463859 delete
```

## Create new HDD image with openQA

```
client jobs post DISTRI=sle VERSION=12 FLAVOR=Server-DVD ARCH=ppc64le BACKEND=qemu \
NOVIDEO=1 OFW=1 QEMUCPU=host SERIALDEV=hvc0 BUILD=okurz_poo9714 \
ISO=SLE-12-Server-DVD-ppc64le-GM-DVD1.iso INSTALLONLY=1 QEMU_COMPRESS_QCOW2=1 \
PUBLISH_HDD_1=SLES-12-GM-gnome-ppc64le_snapper_20g.qcow2 TEST=create_gm_ppc_image \
MACHINE=ppc64le WORKER_CLASS=qemu_ppc64le HDDSIZEGB=20 MAX_JOB_TIME=86400 TIMEOUT_SCALE=10
```

`MAX_JOB_TIME=86400 TIMEOUT_SCALE=10` allows for interactive login during the process in case you want to manually adjust or debug something. Beware though that `TIMEOUT_SCALE=10` also scales the waiting time of `check_screen`, so the whole job might take longer to execute.

To run a test based on the new HDD image, search for a good example job and clone it with adjusted parameters:

```
clone_job 462022 HDD_1=SLES-12-GM-gnome-ppc64le_snapper_20g.qcow2
```

## Interactive investigation

While a job is running one can connect to the worker (if network access is possible) using VNC. One challenge is that the test is still running, so manual interaction with the system interferes with the test and vice versa.

### Making the test stop for long enough to be able to connect

If you can change the test code, i.e. if running on a development machine, you can for example add a `sleep 3600;` or `wait_serial 'CONTINUE';` at the point in the test where you want to connect to the system and interact with it, e.g. to gather additional logs. In the case of `wait_serial 'CONTINUE';` you can write 'CONTINUE' to the serial port to let the test continue, e.g. call `echo 'CONTINUE' > /dev/ttyS0;` within the system under test.

In case you cannot or do not want to change the test code, or your test run stops anyway at a certain point with a long enough timeout, you can also increase timeouts with `TIMEOUT_SCALE`, e.g. trigger the job with the variable `TIMEOUT_SCALE=10`. For example, a `script_run` with its default timeout of 90 seconds will then wait for 900 seconds (= 15 minutes), which should already give enough time in most cases.

### Connecting over VNC

Then connect to the instance over VNC once it has stalled. The VNC port is shown on the job live view as hover text on the instance name. Make sure to use a "shared" connection in your VNC viewer. `krdc`, the default KDE VNC viewer, as well as `vinagre`, the default GNOME VNC viewer, do this already. For TigerVNC use for example:

```
vncviewer -Shared malbec.arch:91
```

### Forwarding of special shortcuts

The default `vncviewer` on openSUSE/SUSE systems is recommended as it can also forward special keyboard shortcuts, e.g. to change to a text console:
Press *F8* in vncviewer, select *ctrl* and *alt* in the menu, exit the menu, press *F2*.

### Requesting video when by default you do not have video in your environment

Example:

```
clone_job 464665 NOVIDEO=0
```

## Structured test issue investigation

For non-trivial issues it makes sense to use the "scientific method", especially because openQA tests, being system tests, are under the influence of many moving parts. Also see https://progress.opensuse.org/projects/openqav3/wiki#Further-decision-steps-working-on-test-issues about this.
[Bug Hunting and the Scientific Method](https://accu.org/index.php/journals/1714) is a suggested read, as well as [How to Fix the Hardest Bug You've Ever Seen: The Scientific Method](http://yellerapp.com/posts/2014-08-11-scientific-debugging.html). It is suggested to note down in tickets the hypotheses about all potentially relevant problem sources, design experiments (which can be as simple as checking a logfile), collect observations, accept or reject the hypotheses and thereby derive a better understanding of what is happening, to eventually come to a conclusion. [s390 dasdfmt fails even though command looks complete in screenshot](https://progress.opensuse.org/issues/12410) can serve as a real-world example ticket of how this can look.