{{toc}}

# Test results overview

* Latest report based on openQA test results: http://s.qa.suse.de/test-status , SLE12: http://s.qa.suse.de/test-status-sle12 , SLE15: http://s.qa.suse.de/test-status-sle15
* Only "blocker" or "shipstopper" bugs on "interesting products" for SLE: http://s.qa.suse.de/qa_sle_bugs_sle , SLE15: http://s.qa.suse.de/qa_sle_bugs_sle15_all , SLE12: http://s.qa/qa_sle_bugs_sle12_2

# QE tools - Team description

"The easiest way to provide complete quality for your software"

We provide the most complete free-software system-level testing solution to ensure high quality of operating systems, complete software stacks and multi-machine services for software distribution builders, system integration engineers and release teams. We continuously develop, maintain and release our software so that it can readily be used by anyone, and we offer a friendly community to support you in your needs. We maintain the public openQA server openqa.opensuse.org and the SUSE-internal openQA server openqa.suse.de as well as supporting tools in the surrounding ecosystem.

## Team responsibilities

* Develop and maintain upstream openQA
* Administer openqa.suse.de and its workers (but not the physical hardware, which belongs to the departments that purchased it; we merely facilitate its use)
* Help administer and maintain openqa.opensuse.org, including coordinating efforts to solve problems affecting o3
* Develop and maintain internal maintenance QA tools (SMELT, template generator, MTUI, openQA QAM bot, etc.; see e.g. https://confluence.suse.com/display/maintenanceqa/QAM+Toolchain)
* Support colleagues, team members and the open source community

## Out of scope

* Maintenance of individual tests
* Maintenance of physical hardware
* Maintenance of special worker additions needed for tests, e.g. external hypervisor hosts for s390x or PowerVM
* Ticket triaging of http://progress.opensuse.org/projects/openqatests/
* Feature development within the backend for single teams (commonly provided by the teams themselves)

## Our common userbase

Known users of our products:

* Most SUSE QA engineers
* SUSE SLE release managers and release engineers
* Every SLE developer submitting "submit requests" in OBS/IBS, where product changes are tested as part of the "staging" process before changes are accepted in either SLE or openSUSE (staging tests must be green before packages are accepted)
* All openSUSE contributors submitting to either openSUSE:Factory (for Tumbleweed, SLE, future Leap versions) or Leap
* Other GNU/Linux distributions like Fedora https://openqa.fedoraproject.org/ , Debian ( https://openqa.debian.net/ , unavailable at the time of writing) and Qubes OS https://openqa.qubes-os.org/
* openSUSE KDE contributors (with their own workflows, https://openqa.opensuse.org/group_overview/23 )
* openSUSE GNOME contributors ( https://openqa.opensuse.org/group_overview/35 )
* OBS developers ( https://openqa.opensuse.org/parent_group_overview/7#grouped_by_build )
* wicked developers ( https://gitlab.suse.de/wicked-maintainers/wicked-ci#openqa )
* and of course our team itself for the "openQA-in-openQA tests" :) https://openqa.opensuse.org/group_overview/24

Keep in mind: "users of openQA", for example "openSUSE release managers and engineers", includes SUSE employees as well as employees of other companies and development partners of SUSE.

In summary, our products, for example openQA, are a critical part of many development processes, so outages and regressions are disruptive and costly. We therefore need to ensure high quality in production, which is why we practice DevOps with a slight tendency towards a conservative approach for introducing changes while still maintaining a high development velocity.

## How we work

The QE tools team follows the DevOps approach combined with a lightweight Agile workflow. We plan and track our work using tickets on https://progress.opensuse.org . We pick tickets based on priority and planning decisions. We use weekly meetings as checkpoints for progress and also track cycle and lead times to cross-check progress against expectations.

* [tools team - backlog](https://progress.opensuse.org/issues?query_id=230): The complete backlog of the team
* [tools team - backlog, high-level view](https://progress.opensuse.org/issues?query_id=526): A high-level view of the backlog, all epics and higher (an "epic" includes multiple stories)
* [tools team - backlog, top-level view](https://progress.opensuse.org/issues?query_id=524): A top-level view of the backlog, only sagas and higher (a "saga" is bigger than an epic and can include multiple epics, i.e. an "epic of epics")
* [tools team - what members of the team are working on](https://progress.opensuse.org/issues?query_id=400): To check progress and know what the team is currently occupied with

Be aware: Custom queries in the right-hand sidebar of individual projects, e.g. https://progress.opensuse.org/projects/openqav3/issues , show queries with the same name but limited to the scope of the specific project, so they can show only a subset of all relevant tickets.

### Common tasks for team members

This is a list of common tasks that we follow, e.g. for daily review, based on the individual steps of the DevOps process ![DevOps Process](devops-process_25p.png)

* **Plan**:
 * State daily learning and planned tasks in the internal chat room
 * Review the backlog for time-critical tickets, triage new tickets, pick tickets from the backlog; see https://progress.opensuse.org/projects/qa/wiki#How-we-work-on-our-backlog
* **Code**:
 * See project specific contribution instructions
 * Provide peer review based on https://github.com/notifications for projects within the scope of https://github.com/os-autoinst/ (with the exception of test code repositories), especially https://github.com/os-autoinst/openQA, https://github.com/os-autoinst/os-autoinst, https://github.com/os-autoinst/scripts, https://github.com/os-autoinst/os-autoinst-distri-openQA, https://github.com/os-autoinst/openqa-trigger-from-obs, https://github.com/os-autoinst/openqa_review as well as other projects like https://gitlab.suse.de/qa-maintenance/openQABot/
* **Build**:
 * See project specific contribution instructions
* **Test**:
 * Monitor failures on https://travis-ci.org/ , relying on https://build.opensuse.org/package/show/devel:openQA/os-autoinst_dev , for os-autoinst (email notifications)
 * Monitor failures on https://app.circleci.com/pipelines/github/os-autoinst/openQA?branch=master , relying on https://build.opensuse.org/project/show/devel:openQA:ci , for openQA (email notifications)
* **Release**:
 * By default we use the rolling-release model for all projects unless specified otherwise
 * Monitor [devel:openQA on OBS](https://build.opensuse.org/project/show/devel:openQA) (all packages and all subprojects) for failures, ensure packages are published on http://download.opensuse.org/repositories/devel:/openQA/ (members need to be added individually, you can ask existing team members, e.g. the SM)
 * Monitor http://jenkins.qa.suse.de/view/openQA-in-openQA/ for the openQA-in-openQA tests and automatic submissions of os-autoinst and openQA to openSUSE:Factory through https://build.opensuse.org/project/show/devel:openQA:tested
* **Deploy**:
 * o3 is automatically deployed (daily), see https://progress.opensuse.org/projects/openqav3/wiki/Wiki#Automatic-update-of-o3
 * osd is automatically deployed (weekly), monitor https://gitlab.suse.de/openqa/osd-deployment/pipelines and watch for the notification email to openqa@suse.de
* **Operate**:
 * Apply infrastructure changes from https://gitlab.suse.de/openqa/salt-states-openqa (osd) or manually over ssh (o3)
 * Monitor the backup, see https://gitlab.suse.de/qa-sle/backup-server-salt
 * Handle config changes in salt (osd), backups, job group configuration changes
 * Ensure old unused/non-matching needles are cleaned up (osd+o3), see #73387
* **Monitor**:
 * React to alerts from [stats.openqa-monitor.qa.suse.de](https://stats.openqa-monitor.qa.suse.de/alerting/list?state=not_ok) (emails on [osd-admins@suse.de](http://mailman.suse.de/mailman/listinfo/osd-admins); log in via LDAP credentials, you must be an *editor* to edit panels and hooks via the web UI)
 * Look for incomplete jobs or scheduled jobs not being picked up on o3 and osd (via API or web UI, see the sketch after this list) - see also #81058 for *power*
 * React to alerts from https://gitlab.suse.de/openqa/auto-review/, https://gitlab.suse.de/openqa/openqa-review/, https://gitlab.suse.de/openqa/monitor-o3 (subscribe to the projects for notifications)
 * Be responsive on #opensuse-factory (irc://chat.freenode.net/opensuse-factory) for help, support and collaboration (unless you have a better solution it is suggested to use [Element.io](https://app.element.io/#/room/%23freenode_%23opensuse-factory:matrix.org) for a sustainable presence; you also need a [registered IRC account](https://freenode.net/kb/answer/registration))
 * Be responsive on [#qa-tools](https://chat.suse.de/channel/qa-tools) for internal coordination and alarm handling; fall back to #opensuse-factory (irc://chat.freenode.net/opensuse-factory) if [#qa-tools](https://chat.suse.de/channel/qa-tools) is not available, e.g. if chat.suse.de is down
 * Be responsive on [#testing](https://chat.suse.de/channel/testing) for help, support and collaboration
 * Be responsive on the mailing lists opensuse-factory@opensuse.org and openqa@suse.de (see https://en.opensuse.org/openSUSE:Mailing_lists_subscription)
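
To make the "look for incomplete or stuck scheduled jobs" step above concrete, here is a minimal sketch of how this could be checked over the openQA REST API with curl and jq. It is an illustrative assumption, not an established team script; hosts, parameters and limits can be adapted as needed.

```
#!/bin/bash
# Minimal sketch (assumption, not an official team tool): report how many jobs
# are scheduled, running or recently incomplete on o3 and osd via the openQA
# REST API. Requires curl and jq; osd is only reachable from the internal network.
set -euo pipefail

for host in https://openqa.opensuse.org https://openqa.suse.de; do
    for state in scheduled running; do
        count=$(curl -s "$host/api/v1/jobs?state=$state" | jq '.jobs | length')
        echo "$host: $count jobs in state '$state'"
    done
    # incomplete jobs show up as "done" jobs with result=incomplete
    incompletes=$(curl -s "$host/api/v1/jobs?state=done&result=incomplete&limit=100" | jq '.jobs | length')
    echo "$host: $incompletes of the last 100 done jobs are incomplete"
done
```

A "scheduled" count that keeps growing or a large number of incompletes is a hint to check the workers and the corresponding Grafana panels.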

### How we work on our backlog

* "Due dates" are only used as exceptions or reminders
* Every team member can pick up tickets themselves
* Everybody can set priority; the PO can help to resolve conflicts
* Consider the [ready, not assigned/blocked/low](https://progress.opensuse.org/issues?query_id=490) query as the preferred source of tickets to pick
* Ask questions in tickets, even potentially "stupid" questions; oftentimes descriptions are unclear and should be improved
* There are "low-level infrastructure tasks" only conducted by some team members; the "DevOps" aspect does not include those but focuses on the joint development and operation of our main products
* Consider tickets with the subject keyword or tag "learning" as good learning opportunities for people new to a certain area. Experts in the specific area should prefer to help others rather than work on the ticket themselves
* For tickets which are out of the scope of the team, remove them from the backlog and delegate to the corresponding teams or persons, but be nice and supportive, e.g. [SUSE-IT](https://sd.suse.com/), [EngInfra](https://infra.nue.suse.com/) (also see the [SLA](https://confluence.suse.com/display/qasle/Service+Level+Agreements)), [test maintainers](https://progress.opensuse.org/projects/openqatests/), QE-LSG PrjMgr/mgmt
 * For [EngInfra](https://infra.nue.suse.com/) tickets first create a tracker ticket in https://progress.opensuse.org/projects/openqa-infrastructure/issues/ , then create the EngInfra ticket with "[openqa] …" (optionally "[openqa][urgent] …") in the subject, reference the progress ticket and CC osd-admins@suse.de (see the sketch after this list). Whenever creating any external ticket, e.g. for EngInfra, create an internal tracker ticket, because there might be additional internal notes
* Whenever we apply changes to the infrastructure we should have a ticket
* Refactoring and general improvements are conducted while we work on features or regression fixes
* For every regression or bigger issue that we encounter try to come up with at least two improvements, e.g. the actual issue is fixed and similar cases are prevented in the future with better tests, and optionally the monitoring is improved as well
* For critical issues and very big problems collect "lessons learned", e.g. in notes in the ticket or in a meeting with minutes in the ticket, consider https://en.wikipedia.org/wiki/Five_whys and answer at least the following questions: user impact, outwards-facing communication and mitigation, upstream improvement ideas, why did the issue appear, can we reduce our detection time, can we prevent similar issues in the future, what can we improve technically, what can we improve in our processes
* okurz proposes to use "#NoEstimates"; that topic is controversial and often misunderstood, but https://ronjeffries.com/xprog/articles/the-noestimates-movement/ describes it nicely :)
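
As a hedged illustration of the EngInfra workflow above, the internal tracker ticket could also be created via the Redmine REST API of progress.opensuse.org. This is only a sketch: the API key, the project identifier and the placeholder texts are assumptions and would need to be adapted.

```
#!/bin/bash
# Minimal sketch (assumption): file the internal tracker ticket before creating
# the external EngInfra ticket. REDMINE_API_KEY, the project identifier and the
# placeholder subject/description need to be replaced with real values.
curl -s -X POST "https://progress.opensuse.org/issues.json" \
    -H "X-Redmine-API-Key: $REDMINE_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
          "issue": {
            "project_id": "openqa-infrastructure",
            "subject": "[openqa] <short problem description>",
            "description": "Tracker ticket for the corresponding EngInfra ticket, see <EngInfra ticket URL>"
          }
        }'
```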

#### Definition of DONE

Also see http://www.allaboutagile.com/definition-of-done-10-point-checklist/ and https://www.scrumalliance.org/community/articles/2008/september/what-is-definition-of-done-%28dod%29

* Code changes are made available via a pull request on a version control repository, e.g. GitHub for openQA
* [Guidelines for git commits](http://chris.beams.io/posts/git-commit/) have been followed
* Code has been reviewed (e.g. in the GitHub PR)
* Depending on criticality/complexity/size of the feature: a local verification test has been run, e.g. post a link to a local openQA machine, a screenshot or a log file
* Potentially impacted package builds have been considered, e.g. openSUSE Tumbleweed and Leap, Fedora, etc.
* Code has been merged (either by the reviewer, the "mergify" bot or the reviewee after an 'LGTM' from others)
* Code has been deployed to osd and o3 (monitor the automatic deployment, apply necessary config or infrastructure changes)

#### Definition of READY for new features

The following points should be considered before a new feature ticket is READY to be implemented:

* The ticket template from https://progress.opensuse.org/projects/openqav3/wiki/#Feature-requests is followed
* A clear motivation or a user expressing a wish is available
* Acceptance criteria are stated (see ticket template)
* Tasks are added as a hint where to start

#### WIP-limits (reference "Kanban development")

* Global limit of 10 tickets overall and 3 tickets per person in [In Progress](https://progress.opensuse.org/issues?query_id=505)
* Limit of 20 tickets per person in [Feedback](https://progress.opensuse.org/issues?query_id=520)

#### Target numbers or "guideline", "should be", in priorities

1. *New, untriaged openQA:* [0 (daily)](https://progress.opensuse.org/projects/openqav3/issues?query_id=475) . Every ticket should have a target version, e.g. "Ready" for the QE tools team, "future" if unplanned, others for other teams
1. *Untriaged "tools" tagged:* [0 (daily)](https://progress.opensuse.org/issues?query_id=481) . Every ticket should have a target version, e.g. "Ready" for the QE tools team, "future" if unplanned, others for other teams
1. *Workable (properly defined):* [~40 (20-50)](https://progress.opensuse.org/issues?query_id=478) . Enough tickets to reflect a proper plan but not too many, to limit unfinished work (see "waste")
1. *Overall backlog length:* [ideally less than 100](https://progress.opensuse.org/issues?query_id=230) . Similar to "Workable": enough tickets to reflect a proper roadmap as well as give enough flexibility for all unfinished work, but limited to a feasible number that the team can still oversee without losing the overview. One more reason for a maximum of 100 is that pagination in the Redmine UI allows showing only up to 100 issues on one page at a time, and the same applies to Redmine API access (see the example after this list)
1. *Within due-date:* [0 (daily/weekly)](https://progress.opensuse.org/issues?query_id=514) . We should take due dates seriously, finish tickets fast and at the very least update tickets with an explanation why the due date could not be held, setting a new, reasonable date in the future based on usual cycle time expectations
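
As a small illustration of that API limit (assuming the public backlog query id 230 from above), fetching a backlog of more than 100 tickets requires paging with `limit` and `offset`:

```
# Redmine returns at most 100 issues per request, so longer backlogs need paging
curl -s "https://progress.opensuse.org/issues.json?query_id=230&limit=100&offset=0"
curl -s "https://progress.opensuse.org/issues.json?query_id=230&limit=100&offset=100"
```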

#### SLOs (service level objectives)

* For picking up tickets based on priority the first goal is "urgency removal":
 * **immediate openQA**: [<1 day](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=priority_id&op%5Bpriority_id%5D=%3D&v%5Bpriority_id%5D%5B%5D=7&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=subproject_id&op%5Bsubproject_id%5D=%3D&v%5Bsubproject_id%5D%5B%5D=125&f%5B%5D=updated_on&op%5Bupdated_on%5D=%3Ct-&v%5Bupdated_on%5D%5B%5D=1&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=due_date&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=priority)
 * **urgent openQA**: [<1 week](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=priority_id&op%5Bpriority_id%5D=%3D&v%5Bpriority_id%5D%5B%5D=6&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=subproject_id&op%5Bsubproject_id%5D=%3D&v%5Bsubproject_id%5D%5B%5D=125&f%5B%5D=updated_on&op%5Bupdated_on%5D=%3Ct-&v%5Bupdated_on%5D%5B%5D=7&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=due_date&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=status)
 * **high openQA**: [<1 month](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=priority_id&op%5Bpriority_id%5D=%3D&v%5Bpriority_id%5D%5B%5D=5&f%5B%5D=subproject_id&op%5Bsubproject_id%5D=%3D&v%5Bsubproject_id%5D%5B%5D=125&f%5B%5D=updated_on&op%5Bupdated_on%5D=%3Ct-&v%5Bupdated_on%5D%5B%5D=30&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=due_date&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=status)
 * **normal openQA**: [<1 year](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=priority_id&op%5Bpriority_id%5D=%3D&v%5Bpriority_id%5D%5B%5D=4&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=subproject_id&op%5Bsubproject_id%5D=%3D&v%5Bsubproject_id%5D%5B%5D=125&f%5B%5D=updated_on&op%5Bupdated_on%5D=%3Ct-&v%5Bupdated_on%5D%5B%5D=365&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=due_date&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=status)
 * **low openQA**: undefined

* Aim for a cycle time of individual tickets (not epics or sagas) of 1h-2w

#### Backlog prioritization

When we prioritize tickets we assess:

1. What the main use cases of openQA are among all users, be it SUSE QA engineers, other SUSE employees, openSUSE contributors as well as any other outside users of openQA
2. How many persons and products are affected by feature requests as well as regressions (or "concrete bugs", as the ticket category is called within the openQA project); we prioritize issues affecting more persons, products and use cases over more limited issues
3. We prioritize regressions higher than work on (new) feature requests
4. If a workaround or alternative exists then this lowers the priority. We prioritize tasks that need a deep understanding of the architecture and an efficient low-level implementation over convenience additions that other contributors are more likely to be able to implement themselves.

### Team meetings

* **Daily:** Use the (internal) chat actively, e.g. formulate your findings or achievements and plans for the day, "think out loud" while working on individual problems. Optionally join the call every day 1030-1045 CET/CEST, with an optional extension for selected topics
  * *Goal*: Quick support on problems, feedback on plans, collaboration and self-reflection (compare to [Daily Scrum](https://www.scrumguides.org/scrum-guide.html#events-daily))
* **Weekly coordination:** Every Friday 1115-1145 CET/CEST in [m.o.o/suse_qa_tools](https://meet.opensuse.org/suse_qa_tools) ([fallback](https://meet.jit.si/suse_qa_tools)). Community members and guests are particularly welcome to join this meeting.
  * *Goal*: Team backlog coordination and design decisions on bigger topics (compare to [Sprint Planning](https://www.scrumguides.org/scrum-guide.html#events-planning)).
* **Fortnightly retrospective:** Friday 1145-1215 CET/CEST every even week, in the same room as the weekly meeting. On these days the weekly has a hard time limit of 1115-1145.
  * *Goal*: Inspect and adapt, learn and improve (compare to [Sprint Retrospective](https://www.scrumguides.org/scrum-guide.html#events-retro))
  * *Announcements*: Create a new *discussion* with all team members in Rocket.Chat and a new [retrospected game](retrospected.com) which can be filled in throughout the week. Specific actions will be recorded as tickets.
* **Virtual coffee talk:** Weekly every Thursday 1100-1120 CET/CEST, in the same room as the weekly.
  * *Goal*: Connect and bond as a team, understand each other (compare to [Informal Communication in an all-remote environment](https://about.gitlab.com/company/culture/all-remote/informal-communication))
* **Extension on demand:** Optional meeting on invitation in the suggested time slot Thursday 1000-1200 CET/CEST, in the same room as the weekly, on demand or replacing the *Virtual coffee talk*.
  * *Goal*: Introduce, research and discuss bigger topics, e.g. backlog overview, processes and workflows
* **Workshop:** Friday 0800-0900 UTC (!) every week in [m.o.o/suse_qa_tools](https://meet.opensuse.org/suse_qa_tools), especially for community members and users! We will run this every week with the plan to move to a fortnightly cadence every even week.
  * *Goal*: Demonstrate new and important features, explain already existing but less well-known features, and discuss questions from the user community. All your questions are welcome!
  * *Announcements*: Drop a reminder with a teaser in [#testing](https://chat.suse.de/channel/testing).

Note: Meetings concerning the whole team are moderated by the scrum master by default, who should join the call early and verify that the meeting itself and any tools used are working, or e.g. advise using the fallback option.

#### Announcements

- For every meeting, regular or one-off, the desired attendees should be invited to make sure a slot is blocked in their calendars and reminders with the correct local time show up when it is time to join the meeting
  - Create a new event, for example in Thunderbird via the *Calendar* tab or `New > Event` via the menu.
  - Pick your audience, for example `qa-team@suse.de` will reach test developers and reviewers, or you can select individual attendees via their respective email addresses.
  - Add attendees accordingly.
  - Specify the time of the meeting.
  - Set a schedule to repeat the event if applicable.
  - Add a location, e.g. https://meet.opensuse.org/suse_qa_tools
  - Don't worry if any of the details might change - you can update the invitation later and participants will be notified.
- See the respective meeting for regular actions such as communication via chat

### Team

The team comprises engineers from different teams, some only partially available:

* Xiaojing Liu (Jane, [QA APAC 1](https://geekos.prv.suse.net/team/5b08104d7d795700204993df))
* Marius Kittler
* Nick Singer
* Sebastian Riedel (part-time contributions)
* Oliver Kurz (acting Product Owner)
* Tina Müller (part time, now mainly working for OBS, [QAM3](https://geekos.prv.suse.net/team/5b7d24a17cf60423d2523485))
* Christian Dywan (Scrum Master, [QEM1](https://geekos.prv.suse.net/team/5b08104b7d795700204993d1))
* Ivan Lausuch (QEM3)
* Ondřej Súkup
* Jan Baier (part time)
* Vasileios Anastasiadis (Bill)

### Onboarding for new joiners

* Request to get added to the [tools team on GitHub](https://github.com/orgs/os-autoinst/teams/tools-team)
* Log in at [stats.openqa-monitor.qa.suse.de](https://stats.openqa-monitor.qa.suse.de/alerting/list) with LDAP credentials and ask to be given the *editor* role
* [Watch](https://progress.opensuse.org/watchers/watch?object_id=347&object_type=wiki_page) this wiki page
* Subscribe to [osd-admins@suse.de](http://mailman.suse.de/mailman/listinfo/osd-admins), [openqa@suse.de](http://mailman.suse.de/mailman/listinfo/openqa) and [opensuse-factory@opensuse.org](https://lists.opensuse.org/archives/list/factory@lists.opensuse.org)
* Join [qa-tools on Rocket](https://chat.suse.de/channel/qa-tools)
* Request to join [devel:openQA on OBS](https://build.opensuse.org/project/show/devel:openQA)
* Set up an IRC bouncer or persistent client for `#opensuse-factory` on *Freenode*, such as [Element.io](https://app.element.io/#/room/%23freenode_%23opensuse-factory:matrix.org)
* Request admin access on [osd](http://openqa.suse.de/) and [o3](http://openqa.opensuse.org/)
* Request to get added to the [QA project](https://progress.opensuse.org/projects/qa/settings/members) and *enable notifications for the openQA project* in [your account settings](https://progress.opensuse.org/my/account)

### Alert handling

#### Best practices

* "If it hurts, do it more often": https://www.martinfowler.com/bliki/FrequencyReducesDifficulty.html
* Reduce [Mean-time-to-Detect (MTTD)](https://searchitoperations.techtarget.com/definition/mean-time-to-detect-MTTD) and [Mean-time-to-Recovery (MTTR)](https://raygun.com/blog/what-is-mttr/)

#### Process

* React to any alert
* For each failing Grafana alert:
 * Create a ticket for the issue (with the tag "alert"); create a ticket unless the alert is trivial to resolve and needs no improvement, and create a ticket even if the alert turns back to "ok", to prevent these issues in the future and to improve the alert
 * Link the corresponding Grafana panel in the ticket
 * Respond to the notification email with a link to the ticket
 * Optional: Inform in chat
 * Optional: Add an "annotation" to the corresponding Grafana panel with a link to the ticket (see the sketch after this list)
 * Pause the alert if you think further alerting the team does not help (e.g. you can work on fixing the problem yourself, or the alert is non-critical but the problem cannot be fixed within minutes)
* If you consider an alert non-actionable then change it accordingly
* If you do not know how to handle an alert ask the team for help
* After resolving the issue add an explanation in the ticket, unpause the alert, verify that it turns back to "ok" and resolve the ticket
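
The optional annotation step can also be done over the Grafana HTTP API. The following is only a minimal sketch under the assumption that an API token with editor rights is available; the dashboard id, panel id and ticket number are placeholders.

```
# Minimal sketch (assumption): attach an annotation with a link to the progress
# ticket to the affected panel. GRAFANA_TOKEN, dashboardId, panelId and the
# ticket number are placeholders that need to be replaced.
curl -s -X POST "https://stats.openqa-monitor.qa.suse.de/api/annotations" \
    -H "Authorization: Bearer $GRAFANA_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{
          "dashboardId": 1,
          "panelId": 2,
          "text": "Alert handled in https://progress.opensuse.org/issues/<ticket id>"
        }'
```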

#### References

* https://nl.devoteam.com/en/blog-post/monitoring-reduce-mean-time-recovery-mttr/

### Extra-ordinary "hack-week" 2020-W51

SUSE QE Tools plans to have an internal "hack-week". Condition: We close 30 tickets from our backlog within the time frame from 2020-12-03 until the start of the weekly meeting on 2020-12-11. No cheating! :) See [this query](https://progress.opensuse.org/issues?utf8=%E2%9C%93&set_filter=1&sort=priority%3Adesc%2Cid%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=c&f%5B%5D=fixed_version_id&op%5Bfixed_version_id%5D=%3D&v%5Bfixed_version_id%5D%5B%5D=418&f%5B%5D=closed_on&op%5Bclosed_on%5D=%3E%3C&v%5Bclosed_on%5D%5B%5D=2020-12-03&v%5Bclosed_on%5D%5B%5D=2020-12-11&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=relations&c%5B%5D=priority&c%5B%5D=category&c%5B%5D=cf_16&group_by=status&t%5B%5D=). During week 2020-W51 everyone is allowed to work on any hack-week project; it should just have a reasonable, "explainable" connection to our normal work. okurz volunteers to take over ops duty for the week.

Result during the meeting on 2020-12-11: We missed the goal (by a slight amount) but we are motivated to try again next year :) Everybody, put some easy tickets aside for the next time!

### Historical

Previously the former QA tools team used the target versions "Ready" (to be planned into individual milestone periods or sprints), "Current Sprint" and "Done". However, the team never really used proper time-limited sprints, so the distinction was rather vague. After tickets had been "Resolved" for some time the PO or someone else would also update the target version to "Done" to signal that the result had been reviewed. This caused a lot of ticket update noise for not much value, considering that the [Definition-of-Done](https://progress.opensuse.org/projects/openqav3/wiki/#ticket-workflow), when properly followed, already has rather strict requirements on when something can be considered really "Resolved", hence the team eventually decided to not use the "Done" target version anymore. Since about 2019-05 (and since okurz is doing more backlog management) the team uses priorities more, as well as the status "Workable" together with an explicit team member list for "What the team is working on", to better visualize what is keeping team members busy regardless of what was "officially" planned to be part of the team's work. So we closed that target version. On 2020-07-03 okurz subsequently closed "Current Sprint" as this one, too, was in most cases equivalent to just picking an assignee for a ticket or setting it to "In Progress". We can now simply distinguish between "(no version)" meaning untriaged, "Ready" meaning the tools team should consider picking up these issues, and "future" meaning that there is no plan for this to be picked up. Everything else is defined by status and priority.

On 2020-10-27 we discussed the history of the team together. We clarified that the team started out as a not well defined "Dev+Ops" team. The "team responsibilities" have been mainly unchanged since at least the beginning of 2019. We agreed that learning from users and production about our "Dev" contributions is good, so this part of "Ops" is the responsibility of everyone.

## Change announcements

For new, cool features or disruptive changes consider notifying our common userbase as well as potential future users accordingly, for example: create a post on opensuse-factory@opensuse.org , link to that post on openqa@suse.de , invite people to the workshop, post on one.suse.com, #opensuse-factory (IRC) (irc://chat.freenode.net/opensuse-factory), [#testing (RC)](https://chat.suse.de/testing)

# QE Core and QE Yast - Team descriptions

(This chapter was updated in 2020-11 to reflect the QSF -> QE Core / QE Yast reorganization.)

**QE Core** (formerly QSF, QA SLE Functional) and **QE Yast** are squads focusing on quality engineering of the core and YaST functionality of the SUSE SLE products. The squads are comprised of members of QE Integration - [SUSE QA SLE Nbg](https://wiki.suse.net/index.php/SUSE-Quality_Assurance/Organization/Members_and_Responsibilities#QA_SLE_NBG_Team), including [SUSE QA SLE Prg](https://wiki.suse.net/index.php/SUSE-Quality_Assurance/Organization/Members_and_Responsibilities#QA_SLE_PRG_Team) - and QE Maintenance people (formerly "QAM"). The [SLE Department](https://wiki.suse.net/index.php/SUSE-Quality_Assurance/SLE_Department#QSF_.28QA_SLE_Functional.29) page describes our QA responsibilities. We focus on our automatic tests running in [openQA](https://openqa.suse.de) under the job groups "Functional" as well as "Autoyast" for the respective products, for example [SLE 15 / Functional](https://openqa.suse.de/group_overview/110) and [SLE 15 / Autoyast](https://openqa.suse.de/group_overview/129). We back our automatic tests with exploratory manual tests, especially for the product milestone builds. Additionally we care about the corresponding openSUSE openQA tests (see https://openqa.opensuse.org).

* Long-term roadmap: http://s.qa.suse.de/qa-long-term
* Overview of current openQA SLE12SP5 tests with progress ticket references: https://openqa.suse.de/tests/overview?distri=sle&version=12-SP5&groupid=139&groupid=142
* Fate tickets for SLE12SP5 feature testing: based on http://s.qa.suse.de/qa_sle_functional_feature_tests_sle12sp5 , a new report based on all tickets with a milestone before SLE12SP5 GM; http://s.qa.suse.de/qa_sle_functional_feature_tests_sle15sp1 for SLE15SP1
* Only "blocker" or "shipstopper" bugs on "interesting products": http://s.qa.suse.de/qa_sle_functional_bug_query_sle15_2 for SLE15, http://s.qa/qa_sle_bugs_sle12_2 for SLE12
* A better organization of planned work can be seen in the [SUSE QA](https://progress.opensuse.org/projects/suseqa) project (which is not public).

## Test plan

When looking for coverage of certain components or use cases keep the [openQA glossary](http://open.qa/docs/#concept) in mind. It is important to understand that "tests in openQA" could mean a scenario, for example a "textmode installation run" or a combined multi-machine scenario such as "a remote ssh based installation using X-forwarding", or a test module, for example "vim", which checks that the vim editor is correctly installed, renders correctly and provides basic functionality. You are welcome to contact any member of the team to ask for more clarification about this.

In detail the following areas are tested as part of "SLE functional":

* different hardware setups (UEFI, ACPI)
* support for localization
* openSUSE: virtualization - some "virtualization" tests are active on o3 with a reduced set compared to SLE coverage (on behalf of QA SLE virtualization due to team capacity constraints, clarified in the QA SLE coordination meeting 2018-03-28)
* openSUSE: migration - comparable to "virtualization", a reduced set compared to SLE coverage is active on o3 (on behalf of QA SLE migration due to team capacity constraints, clarified in the QA SLE coordination meeting 2018-04)

### QE Yast

The squad focuses on testing YaST components, including the installer and snapper.

A detailed test plan for SLES can be found here: [SLES_Integration_Level_Testplan.md](https://gitlab.suse.de/qsf-y/qa-sle-functional-y/blob/master/SLES_Integration_Level_Testplan.md)

* Latest report based on openQA test results, SLE12: http://s.qa.suse.de/test-status-sle12-yast , SLE15: http://s.qa.suse.de/test-status-sle15-yast

### QE Core

"Testing is the future, and the future starts with you"

* basic operations (firefox, zypper, logout/reboot/shutdown)
* boot_to_snapshot
* functional application tests (kdump, gpg, ipv6, java, git, openssl, openvswitch, VNC)
* NIS (server, client)
* toolchain (development module)
* systemd
* "transactional-updates" as part of the corresponding SLE server role, not CaaSP

* Latest report based on openQA test results, SLE12: http://s.qa.suse.de/test-status-sle12-functional , SLE15: http://s.qa.suse.de/test-status-sle15-functional

## In the new organization also covered by QE Core and others

* Quarterly updated (QU) media: the former QA Maintenance (QAM) is now part of the various QE squads. However, QU media work happens together with Maintenance Coordination, which is not part of these squads.

## What we do

We collected opinions, personal experiences and preferences starting with the following four topics: what the fun tasks are ("new tests", "collaborate", "do it right"), what parts are annoying ("old & sporadic issues"), what we think is expected from qsf-u ("be quick", "keep stuff running", "assess quality") and what we should definitely keep doing to prevent stakeholders from becoming disappointed ("build validation", "communication & support").

### How we work on our backlog

* no "due dates"
* we pick up tickets that have not been previously discussed
* more flexible choice of tickets
* WIP-limits:
 * global limit of 10 tickets "In Progress"

* target numbers or "guideline", "should be", in priorities:
 1. New, untriaged: 0
 2. Workable: 40
 3. New, assigned to [qe-core] or [qe-yast]: ideally less than 200 (should not stop you from triaging)

* SLAs for priority tickets - how do we ensure we work on tickets which are more urgent?
 * "taken": <1d: immediate -> looking daily
 * 2-3d: urgent
 * first goal is "urgency removal": <1d: immediate, 1w: urgent

* our current "cycle time" is 1h - 1y (maximum, with interruptions)

* everybody should set priority + milestone in obvious cases, e.g. new reproducible test failures in multiple critical scenarios; in the general case the PO decides

### How we like to choose our battles

We self-assessed our tasks on a scale from "administrative" to "creative" and found the following descending order: daily test review (very "administrative"), ticket triaging, milestone validation, code review, creating needles, infrastructure issues, fixing and cleaning up tests, finding bugs while fixing failing tests, finding bugs while designing new tests, new automated tests (very "creative"). We found that we appreciate it if our work has a fair share of both sides. Probably a good ratio is 60% creative plus 40% administrative tasks. Both types have their advantages and we should try to keep a healthy balance.

### What "product(s)" do we (really) *care* about?

Brainstorming results:

* openSUSE Krypton -> a good example of something that we care about only remotely or not at all even though we see the connection points, e.g. testing Plasma changes early before they reach TW or Leap as operating systems we rely on, or SLE+PackageHub from which SUSE does not receive direct revenue but an indirect benefit. Should be "community only", which includes members from QSF though
* openQA -> (like OBS) helps to provide ROI for SUSE
* SLE(S) (in development versions)
* Tumbleweed
* Leap, because we use it
* SLES HA
* SLE migration
* os-autoinst-distri-opensuse+backend+needles

Strictly speaking no "product" from this list gives us direct revenue, however SLE(S) (as well as SLES HA and SLE migration) is a good example of a direct connection to revenue (based on SLE subscriptions). A poll in the team revealed that 3 persons see "SLE(S)" as our main product and 3 see "os-autoinst-distri-opensuse+backend+needles" as the main product. We mainly agreed, however, that we cannot *own* a product like "SLE" because that product is mostly not under our control.

Visualizing "cost of testing" vs. "risk of business impact" showed that both metrics have an inverse dependency: on a range from "upstream source code" over "package self-tests", "openSUSE Factory staging" and "Tumbleweed" to "SLE", we consider SLE to have the highest business risk attached, which therefore defines our priority, while testing at the upstream source level is considered most effective to prevent the higher cost of bugs and issues found later. Our conclusion is that we must ensure that the high-risk SLE base has its quality assured while supporting a quality assurance process as early as possible in the development process. Package self-tests as well as the openQA staging tests are seen as useful approaches in that direction, as are "domain-specific specialist QA engineers" working closely together with the corresponding in-house development parties.

## Documentation

This documentation should only be interesting for the team QA SLE functional. If you find that some of the following topics are interesting for other people, please extract those topics to another wiki section.

### QA SLE functional Dashboards

In room 3.2.15 of the Nuremberg office there are two dedicated laptops, each with a monitor attached, showing a selected overview of openQA test results with important builds from SLE and openSUSE.
These laptops are configured with a root account using the default password for production machines. First points of contact: [slindomansilla@suse.com](mailto:slindomansilla@suse.com), [okurz@suse.de](mailto:okurz@suse.de)

* `dashboard-osd-3215.suse.de`: Shows the current view of openqa.suse.de filtered for some job group results, e.g. "Functional"
* `dashboard-o3-3215.suse.de`: Shows the current view of openqa.opensuse.org filtered for some job group results which we took responsibility to review and are mostly interested in

### dashboard-osd-3215

* OS: openSUSE Tumbleweed
* Services: ssh, mosh, vnc, x2x
* Users:
 * root
 * dashboard
* VNC: `vncviewer dashboard-osd-3215`
* X2X: `ssh -XC dashboard@dashboard-osd-3215 x2x -west -to :0.0`
 * (attaches the dashboard monitor as an extra display to the left of your screens; then move the mouse over and the attached X11 server will capture mouse and keyboard)

#### Content of /home/dashboard/.xinitrc

```
#
# Source common code shared between the
# X session and X init scripts
#
. /etc/X11/xinit/xinitrc.common

xset -dpms
xset s off
xset s noblank
[...]
#
# Add your own lines here...
#
$HOME/bin/osd_dashboard &
```

#### Content of /home/dashboard/bin/osd_dashboard

```
#!/bin/bash

DISPLAY=:0 unclutter &

DISPLAY=:0 xset -dpms
DISPLAY=:0 xset s off
DISPLAY=:0 xset s noblank

url="${url:-"https://openqa.suse.de/?group=SLE+15+%2F+%28Functional%7CAutoyast%29&default_expanded=1&limit_builds=3&time_limit_days=14&show_tags=1&fullscreen=1#"}"
DISPLAY=:0 chromium --kiosk "$url"
```

#### Cron job:

```
# Min	H	DoM	Mo	DoW	Command
*	*	*	*	*	/home/dashboard/bin/reload_chromium
```

#### Content of /home/dashboard/bin/reload_chromium

```
#!/bin/bash

DISPLAY=:0 xset -dpms
DISPLAY=:0 xset s off
DISPLAY=:0 xset s noblank

DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool search --class Chromium)
DISPLAY=:0 xdotool key F5
DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool getactivewindow)
```

#### Issues:

* *When the screen shows a different part of the web page*
 * a simple mouse scroll through vnc or x2x may suffice.
* *When the displayed builds freeze without showing a new build, it usually means that the browser displaying the info on the screen (chromium, formerly midori) crashed.*
 * you can try to restart the browser this way:
  * `ps aux | grep chromium`
  * `kill $pid`
  * `/home/dashboard/bin/osd_dashboard`
 * If this also doesn't work, restart the machine.

### dashboard-o3

* Raspberry Pi 3B+
* IP: `10.160.65.207`

#### Content of /home/tux/.xinitrc
```
#!/bin/bash

unclutter &
openbox &
xset s off
xset -dpms
sleep 5
url="https://openqa.opensuse.org?group=openSUSE Tumbleweed\$|openSUSE Leap [0-9]{2}.?[0-9]*\$|openSUSE Leap.\*JeOS\$|openSUSE Krypton|openQA|GNOME Next&limit_builds=2&time_limit_days=14&&show_tags=1&fullscreen=1#build-results"
chromium --kiosk "$url" &

while sleep 300 ; do
        xdotool windowactivate $(xdotool search --class Chromium)
        xdotool key F5
        xdotool windowactivate $(xdotool getactivewindow)
done
```

#### Content of /usr/share/lightdm/lightdm.conf.d/50-suse-defaults.conf
```
[Seat:*]
pam-service = lightdm
pam-autologin-service = lightdm-autologin
pam-greeter-service = lightdm-greeter
xserver-command=/usr/bin/X
session-wrapper=/etc/X11/xdm/Xsession
greeter-setup-script=/etc/X11/xdm/Xsetup
session-setup-script=/etc/X11/xdm/Xstartup
session-cleanup-script=/etc/X11/xdm/Xreset
autologin-user=tux
autologin-timeout=0
```