Wiki » History » Version 65

VANASTASIADIS, 2020-11-03 09:11

{{toc}}

# Test results overview
* Latest report based on openQA test results http://s.qa.suse.de/test-status , SLE12: http://s.qa.suse.de/test-status-sle12 , SLE15: http://s.qa.suse.de/test-status-sle15
* only "blocker" or "shipstopper" bugs on "interesting products" for SLE: http://s.qa.suse.de/qa_sle_bugs_sle , SLE15: http://s.qa.suse.de/qa_sle_bugs_sle15_all, SLE12: http://s.qa/qa_sle_bugs_sle12_2

# QE tools - Team description

## Team responsibilities

* Develop and maintain upstream openQA
* Administration of openqa.suse.de and workers (but not the physical hardware, which belongs to the departments that purchased it; we merely facilitate)
* Help administrating and maintaining openqa.opensuse.org, including coordinating efforts to solve problems affecting o3
* Develop and maintain internal maintenance QA tools
* Support colleagues, team members and the open source community

## Out of scope

* Maintenance of individual tests
* Maintenance of physical hardware
* Maintenance of special worker addendums needed for tests, e.g. external hypervisor hosts for s390x, PowerVM
* Ticket triaging of http://progress.opensuse.org/projects/openqatests/
* Feature development within the backend for single teams (commonly provided by the teams themselves)

## How we work

The QE Tools team follows a DevOps approach combined with a lightweight Agile workflow. We plan and track our work using tickets on https://progress.opensuse.org . We pick tickets based on priority and planning decisions. We use weekly meetings as checkpoints for progress and also track cycle and lead times to crosscheck progress against expectations.

* [tools team - ready issues](https://progress.opensuse.org/projects/openqav3/issues?query_id=230): The complete backlog of the team
* [tools team - what members of the team are working on](https://progress.opensuse.org/projects/openqav3/issues?query_id=400): To check progress and know what the team is currently occupied with

Also see the custom queries in the right-hand sidebar of https://progress.opensuse.org/projects/openqav3/issues for tickets and their plans.

### Common tasks for team members

This is a list of common tasks that we follow, e.g. daily reviews, based on the individual steps in the DevOps process ![DevOps Process](devops-process_25p.png)

* **Plan**:
 * State daily learnings and planned tasks in the internal chat room
 * Review the backlog for time-critical tickets, triage new tickets, pick tickets from the backlog; see https://progress.opensuse.org/projects/qa/wiki#How-we-work-on-our-backlog
* **Code**:
 * See project specific contribution instructions
 * Provide peer review following https://github.com/notifications for projects within the scope of https://github.com/os-autoinst/ (with the exception of test code repositories), especially https://github.com/os-autoinst/openQA, https://github.com/os-autoinst/os-autoinst, https://github.com/os-autoinst/scripts, https://github.com/os-autoinst/os-autoinst-distri-openQA, https://github.com/os-autoinst/openqa-trigger-from-obs, https://github.com/os-autoinst/openqa_review
* **Build**:
 * See project specific contribution instructions
* **Test**:
 * Monitor failures on https://travis-ci.org/ relying on https://build.opensuse.org/package/show/devel:openQA/os-autoinst_dev for os-autoinst (email notifications)
 * Monitor failures on https://app.circleci.com/pipelines/github/os-autoinst/openQA?branch=master relying on https://build.opensuse.org/project/show/devel:openQA:ci for openQA (email notifications)
* **Release**:
 * By default we use the rolling-release model for all projects unless specified otherwise
 * Monitor https://build.opensuse.org/project/show/devel:openQA (all packages and all subprojects) for failures, ensure packages are published on http://download.opensuse.org/repositories/devel:/openQA/
 * Monitor http://jenkins.qa.suse.de/view/openQA-in-openQA/ for the openQA-in-openQA tests and the automatic submissions of os-autoinst and openQA to openSUSE:Factory through https://build.opensuse.org/project/show/devel:openQA:tested
* **Deploy**:
 * o3 is automatically deployed (daily), see https://progress.opensuse.org/projects/openqav3/wiki/Wiki#Automatic-update-of-o3
 * osd is automatically deployed (weekly), monitor https://gitlab.suse.de/openqa/osd-deployment/pipelines and watch for the notification email to openqa@suse.de
* **Operate**:
 * Apply infrastructure changes from https://gitlab.suse.de/openqa/salt-states-openqa (osd) or manually over ssh (o3)
 * Monitor backups, see https://gitlab.suse.de/qa-sle/backup-server-salt
 * Track config changes in salt (osd), backups, job group configuration changes
 * Ensure old unused/non-matching needles are cleaned up (osd+o3), see #73387
* **Monitor**:
 * React on alerts from https://stats.openqa-monitor.qa.suse.de/alerting/list?state=not_ok (emails go to osd-admins@suse.de; you need to be logged in to reach the alert list via the provided URL)
 * Look for incomplete jobs or scheduled jobs not being worked on, on o3 and osd (API or webUI)
 * React on alerts from https://gitlab.suse.de/openqa/auto-review/, https://gitlab.suse.de/openqa/openqa-review/, https://gitlab.suse.de/openqa/monitor-o3 (subscribe to the projects for notifications)
 * Be responsive on #opensuse-factory (irc://chat.freenode.net/opensuse-factory) for help, support and collaboration (unless you have a better solution it is suggested to use [Element.io](https://app.element.io/#/room/%23freenode_%23opensuse-factory:matrix.org) for a sustainable presence; you also need a [registered IRC account](https://freenode.net/kb/answer/registration))
 * Be responsive on [#testing](https://chat.suse.de/channel/testing) for help, support and collaboration
 * Be responsive on the mailing lists opensuse-factory@opensuse.org and openqa@suse.de (see https://en.opensuse.org/openSUSE:Mailing_lists_subscription)

### How we work on our backlog

* "due dates" are only used as exceptions or reminders
* every team member can pick up tickets themselves
* everybody can set priority, the PO can help to resolve conflicts
* consider the [ready, not assigned/blocked/low](https://progress.opensuse.org/projects/openqav3/issues?query_id=490) query as the preferred source
* ask questions in tickets, even potentially "stupid" ones; oftentimes descriptions are unclear and should be improved
* there are "low-level infrastructure tasks" that are only conducted by some team members; the "DevOps" aspect does not include those but focuses on the joint development and operation of our main products

#### Definition of DONE

Also see http://www.allaboutagile.com/definition-of-done-10-point-checklist/ and https://www.scrumalliance.org/community/articles/2008/september/what-is-definition-of-done-%28dod%29

* Code changes are made available via a pull request on a version control repository, e.g. GitHub for openQA
* [Guidelines for git commits](http://chris.beams.io/posts/git-commit/) have been followed
* Code has been reviewed (e.g. in the GitHub PR)
* Depending on criticality/complexity/size/feature: a local verification test has been run, e.g. post a link to a local openQA machine, a screenshot or a logfile
* Potentially impacted package builds have been considered, e.g. openSUSE Tumbleweed and Leap, Fedora, etc.
* Code has been merged (either by the reviewer, the "mergify" bot, or the reviewee after an 'LGTM' from others)
* Code has been deployed to osd and o3 (monitor the automatic deployment, apply necessary config or infrastructure changes)

#### Definition of READY for new features

The following points should be considered before a new feature ticket is READY to be implemented:

* Follow the ticket template from https://progress.opensuse.org/projects/openqav3/wiki/#Feature-requests
* A clear motivation or a user expressing a wish is available
* Acceptance criteria are stated (see ticket template)
* Add tasks as a hint on where to start

#### WIP-limits (reference "Kanban development")

* global limit of 10 tickets "In Progress"
* personal limit of 3 tickets "In Progress"

To check: open the [query](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&type=IssueQuery&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=%3D&v%5Bstatus_id%5D%5B%5D=2&f%5B%5D=assigned_to_id&op%5Bassigned_to_id%5D=%3D&v%5Bassigned_to_id%5D%5B%5D=32300&v%5Bassigned_to_id%5D%5B%5D=15&v%5Bassigned_to_id%5D%5B%5D=34361&v%5Bassigned_to_id%5D%5B%5D=23018&v%5Bassigned_to_id%5D%5B%5D=22072&v%5Bassigned_to_id%5D%5B%5D=24624&v%5Bassigned_to_id%5D%5B%5D=17668&v%5Bassigned_to_id%5D%5B%5D=33482&v%5Bassigned_to_id%5D%5B%5D=32669&f%5B%5D=subproject_id&op%5Bsubproject_id%5D=*&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=due_date&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=assigned_to&t%5B%5D=) and look at the total number of tickets as well as the number per person
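
A minimal bash sketch of checking these limits mechanically, assuming the query result has been exported as a plain text list with one assignee name per "In Progress" ticket (the function name and output format below are our own invention, not an existing tool):

```shell
#!/bin/bash
# Sketch only: check the WIP limits stated above against a list of
# assignees, one line per ticket currently "In Progress".
# The limits (10 global, 3 personal) are the ones from this wiki section.
global_limit=10
personal_limit=3

check_wip() {  # reads assignee lines on stdin, prints any limit violations
    local total=0
    declare -A per_person
    while read -r assignee; do
        [ -z "$assignee" ] && continue
        total=$((total + 1))
        per_person[$assignee]=$((${per_person[$assignee]:-0} + 1))
    done
    [ "$total" -gt "$global_limit" ] && echo "global limit exceeded: $total"
    for p in "${!per_person[@]}"; do
        [ "${per_person[$p]}" -gt "$personal_limit" ] \
            && echo "personal limit exceeded: $p (${per_person[$p]})"
    done
    return 0
}
```

For example, `printf 'alice\nalice\nalice\nalice\nbob\n' | check_wip` would report that alice exceeds the personal limit.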

#### Target numbers or "guideline", "should be", in priorities

1. *New, untriaged:* [0 (daily)](https://progress.opensuse.org/projects/openqav3/issues?query_id=475). Every ticket should have a target version, e.g. "Ready" for the QE tools team, "future" if unplanned, others for other teams
1. *Untriaged "tools" tagged:* [0 (daily)](https://progress.opensuse.org/issues?query_id=481). Every ticket should have a target version, e.g. "Ready" for the QE tools team, "future" if unplanned, others for other teams
1. *Workable (properly defined):* [~40 (20-50)](https://progress.opensuse.org/projects/openqav3/issues?query_id=478). Enough tickets to reflect a proper plan but not too many, to limit unfinished data (see "waste")
1. *Overall backlog length:* [ideally less than 100](https://progress.opensuse.org/projects/openqav3/issues?query_id=230). Similar as for "Workable"

#### SLOs (service level objectives)

* for picking up tickets based on priority, the first goal is "urgency removal":
 * **immediate**: [<1 day](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=priority_id&op%5Bpriority_id%5D=%3D&v%5Bpriority_id%5D%5B%5D=7&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=subproject_id&op%5Bsubproject_id%5D=%3D&v%5Bsubproject_id%5D%5B%5D=125&f%5B%5D=updated_on&op%5Bupdated_on%5D=%3Ct-&v%5Bupdated_on%5D%5B%5D=1&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=due_date&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=priority)
 * **urgent**: [<1 week](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=priority_id&op%5Bpriority_id%5D=%3D&v%5Bpriority_id%5D%5B%5D=6&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=subproject_id&op%5Bsubproject_id%5D=%3D&v%5Bsubproject_id%5D%5B%5D=125&f%5B%5D=updated_on&op%5Bupdated_on%5D=%3Ct-&v%5Bupdated_on%5D%5B%5D=7&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=due_date&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=status)
 * **high**: [<1 month](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=priority_id&op%5Bpriority_id%5D=%3D&v%5Bpriority_id%5D%5B%5D=5&f%5B%5D=subproject_id&op%5Bsubproject_id%5D=%3D&v%5Bsubproject_id%5D%5B%5D=125&f%5B%5D=updated_on&op%5Bupdated_on%5D=%3Ct-&v%5Bupdated_on%5D%5B%5D=30&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=due_date&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=status)
 * **normal**: [<1 year](https://progress.opensuse.org/projects/openqav3/issues?utf8=%E2%9C%93&set_filter=1&f%5B%5D=priority_id&op%5Bpriority_id%5D=%3D&v%5Bpriority_id%5D%5B%5D=4&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=subproject_id&op%5Bsubproject_id%5D=%3D&v%5Bsubproject_id%5D%5B%5D=125&f%5B%5D=updated_on&op%5Bupdated_on%5D=%3Ct-&v%5Bupdated_on%5D%5B%5D=365&f%5B%5D=&c%5B%5D=subject&c%5B%5D=project&c%5B%5D=status&c%5B%5D=assigned_to&c%5B%5D=fixed_version&c%5B%5D=due_date&c%5B%5D=priority&c%5B%5D=updated_on&c%5B%5D=category&group_by=status)
 * **low**: undefined

* aim for a cycle time of individual tickets (not epics or sagas) of 1h-2w
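
These SLOs can be encoded as a small lookup, e.g. for a report script that flags tickets which exceeded their pick-up window. A minimal bash sketch; the function names and the yes/no output are invented here, only the day thresholds are the ones listed above ("low" stays undefined):

```shell
#!/bin/bash
# Sketch only: encode the SLO table above as a lookup.
slo_days() {  # maximum age in days before a ticket of this priority breaks its SLO
    case "$1" in
        immediate) echo 1 ;;
        urgent)    echo 7 ;;
        high)      echo 30 ;;
        normal)    echo 365 ;;
        *)         echo "" ;;  # "low" and unknown priorities: undefined, never violated
    esac
}

slo_violated() {  # usage: slo_violated <priority> <age_in_days> -> yes/no
    local limit
    limit=$(slo_days "$1")
    if [ -n "$limit" ] && [ "$2" -ge "$limit" ]; then
        echo yes
    else
        echo no
    fi
}
```

For example `slo_violated urgent 10` prints `yes`, since an urgent ticket should be picked up within a week.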

#### Backlog prioritization

When we prioritize tickets we assess:

1. What the main use cases of openQA are among all users, be it SUSE QA engineers, other SUSE employees, openSUSE contributors or any other outside user of openQA
2. How many persons and products are affected by feature requests as well as regressions (or "concrete bugs", as the ticket category is called within the openQA project); issues affecting more persons, products and use cases are prioritized over limited issues
3. Regressions are prioritized higher than work on (new) feature requests
4. If a workaround or alternative exists this lowers the priority. We prioritize tasks that need deep understanding of the architecture and an efficient low-level implementation over convenience additions that other contributors are more likely to be able to implement themselves.

### Team meetings

* **Daily:** Use the (internal) chat actively, e.g. formulate your findings or achievements and plans for the day, "think out loud" while working on individual problems.
  * *Goal*: Quick support on problems, feedback on plans, collaboration and self-reflection (compare to [Daily Scrum](https://www.scrumguides.org/scrum-guide.html#events-daily))
* **Weekly coordination:** Every Friday 1115-1145 CET/CEST in [m.o.o/suse_qa_tools](https://meet.opensuse.org/suse_qa_tools) ([fallback](https://meet.jit.si/suse_qa_tools)). Community members and guests are particularly welcome to join this meeting.
  * *Goal*: Team backlog coordination and design decisions on bigger topics (compare to [Sprint Planning](https://www.scrumguides.org/scrum-guide.html#events-planning)).
* **Fortnightly retrospective:** Friday 1145-1215 CET/CEST every even week, in the same room as the weekly meeting. On these days the weekly has a hard time limit of 1115-1145. At the start of the week a game on retrospected.com is started which can be filled in all week. Specific actions are recorded as tickets at the end of the week.
  * *Goal*: Inspect and adapt, learn and improve (compare to [Sprint Retrospective](https://www.scrumguides.org/scrum-guide.html#events-retro))
* **Virtual coffee talk:** Weekly every Thursday 1100-1120 CET/CEST, in the same room as the weekly.
  * *Goal*: Connect and bond as a team, understand each other (compare to [Informal Communication in an all-remote environment](https://about.gitlab.com/company/culture/all-remote/informal-communication))
* **Extension on demand:** Optional meeting on invitation in the suggested time slot Thursday 1000-1200 CET/CEST, in the same room as the weekly, on demand or replacing the *Virtual coffee talk*.
  * *Goal*: Introduce, research and discuss bigger topics, e.g. backlog overview, processes and workflows

Note: Meetings concerning the whole team are moderated by the scrum master by default, who should join the call early and verify that the meeting itself and any tools used are working, or e.g. advise using the fallback option.

### Alert handling

#### Best practices

* "If it hurts, do it more often": https://www.martinfowler.com/bliki/FrequencyReducesDifficulty.html
* Reduce [Mean-time-to-Detect (MTTD)](https://searchitoperations.techtarget.com/definition/mean-time-to-detect-MTTD) and [Mean-time-to-Recovery (MTTR)](https://raygun.com/blog/what-is-mttr/)

#### Process

* React on any alert
* For each failing grafana alert
 * Create a ticket for the issue with a tag "alert" (unless the alert is trivial to resolve and needs no improvement)
 * Link the corresponding grafana panel in the ticket
 * Respond to the notification email with a link to the ticket
 * Optional: Inform in chat
 * Optional: Add an "annotation" in the corresponding grafana panel with a link to the corresponding ticket
 * Pause the alert if you think further alerting the team does not help (e.g. you can work on fixing the problem, or the alert is non-critical but the problem can not be fixed within minutes)
* If you consider an alert non-actionable then change it accordingly
* If you do not know how to handle an alert ask the team for help
* After resolving the issue add an explanation in the ticket, unpause the alert, verify that it returns to "ok" again and resolve the ticket

#### References

* https://nl.devoteam.com/en/blog-post/monitoring-reduce-mean-time-recovery-mttr/


### Historical

Previously the former QA tools team used the target versions "Ready" (to be planned into individual milestone periods or sprints), "Current Sprint" and "Done". However, the team never really used proper time-limited sprints, so the distinction was rather vague. After tickets had been "Resolved" for some time the PO or someone else would also update the target version to "Done" to signal that the result had been reviewed. This caused a lot of ticket update noise for not much value, considering that the [Definition-of-Done](https://progress.opensuse.org/projects/openqav3/wiki/#ticket-workflow), when properly followed, already has rather strict requirements on when something can be considered really "Resolved". Hence the team eventually decided to not use the "Done" target version anymore. Since about 2019-05 (and since okurz is doing more backlog management) the team uses priorities more, as well as the status "Workable" together with an explicit team member list for "What the team is working on", to better visualize what is keeping team members busy regardless of what was "officially" planned to be part of the team's work. So we closed that target version. On 2020-07-03 okurz subsequently closed "Current Sprint" as in most cases it was equivalent to just picking an assignee for a ticket or setting it to "In Progress". We now distinguish between "(no version)" meaning untriaged, "Ready" meaning the tools team should consider picking up these issues, and "future" meaning that there is no plan for this to be picked up. Everything else is defined by status and priority.

On 2020-10-27 we discussed together to find out the history of the team. We clarified that the team started out as a not well defined "Dev+Ops" team. The "team responsibilities" have been mainly unchanged since at least the beginning of 2019. We agreed that learning from users and production about our "Dev" contributions is good, so this part of "Ops" is the responsibility of everyone.

# QA SLE Functional - Team description

**QSF (QA SLE Functional)** is a virtual team focusing on QA of the "functional" domain of the SUSE SLE products. The virtual team is mainly comprised of members of [SUSE QA SLE Nbg](https://wiki.suse.net/index.php/SUSE-Quality_Assurance/Organization/Members_and_Responsibilities#QA_SLE_NBG_Team) including members from [SUSE QA SLE Prg](https://wiki.suse.net/index.php/SUSE-Quality_Assurance/Organization/Members_and_Responsibilities#QA_SLE_PRG_Team). The [SLE Department](https://wiki.suse.net/index.php/SUSE-Quality_Assurance/SLE_Department#QSF_.28QA_SLE_Functional.29) page describes our QA responsibilities. We focus on our automatic tests running in [openQA](https://openqa.suse.de) under the job groups "Functional" as well as "Autoyast" for the respective products, for example [SLE 15 / Functional](https://openqa.suse.de/group_overview/110) and [SLE 15 / Autoyast](https://openqa.suse.de/group_overview/129). We back our automatic tests with exploratory manual tests, especially for the product milestone builds. Additionally we care about the corresponding openSUSE openQA tests (see https://openqa.opensuse.org).

* long-term roadmap: http://s.qa.suse.de/qa-long-term
* overview of current openQA SLE12SP5 tests with progress ticket references: https://openqa.suse.de/tests/overview?distri=sle&version=12-SP5&groupid=139&groupid=142
* fate tickets for SLE12SP5 feature testing: based on http://s.qa.suse.de/qa_sle_functional_feature_tests_sle12sp5 a new report based on all tickets with a milestone before SLE12SP5 GM; http://s.qa.suse.de/qa_sle_functional_feature_tests_sle15sp1 for SLE15SP1
* only "blocker" or "shipstopper" bugs on "interesting products" for SLE15 http://s.qa.suse.de/qa_sle_functional_bug_query_sle15_2, http://s.qa/qa_sle_bugs_sle12_2 for SLE12
* Better organization of planned work can be seen in the [SUSE QA](https://progress.opensuse.org/projects/suseqa) project (which is not public).

## Test plan

When looking for coverage of certain components or use cases keep the [openQA glossary](http://open.qa/docs/#concept) in mind. It is important to understand that "tests in openQA" could be a scenario, for example a "textmode installation run" or a combined multi-machine scenario such as "a remote ssh based installation using X-forwarding", or a test module, for example "vim", which checks that the vim editor is correctly installed, renders correctly and provides basic functionality. You are welcome to contact any member of the team to ask for more clarification about this.

In detail the following areas are tested as part of "SLE functional":

* different hardware setups (UEFI, acpi)
* support for localization
* openSUSE: virtualization - some "virtualization" tests are active on o3 with a reduced set compared to the SLE coverage (on behalf of QA SLE virtualization due to team capacity constraints, clarified in the QA SLE coordination meeting 2018-03-28)
* openSUSE: migration - comparable to "virtualization", a reduced set compared to the SLE coverage is active on o3 (on behalf of QA SLE migration due to team capacity constraints, clarified in the QA SLE coordination meeting 2018-04)

### QSF-y

This virtual team focuses on testing YaST components, including the installer and snapper.

A detailed test plan for SLES can be found here: [SLES_Integration_Level_Testplan.md](https://gitlab.suse.de/qsf-y/qa-sle-functional-y/blob/master/SLES_Integration_Level_Testplan.md)

* Latest report based on openQA test results SLE12: http://s.qa.suse.de/test-status-sle12-yast , SLE15: http://s.qa.suse.de/test-status-sle15-yast

### QSF-u

"Testing is the future, and the future starts with you"

* basic operations (firefox, zypper, logout/reboot/shutdown)
* boot_to_snapshot
* functional application tests (kdump, gpg, ipv6, java, git, openssl, openvswitch, VNC)
* NIS (server, client)
* toolchain (development module)
* systemd
* "transactional-updates" as part of the corresponding SLE server role, not CaaSP

* Latest report based on openQA test results SLE12: http://s.qa.suse.de/test-status-sle12-functional , SLE15: http://s.qa.suse.de/test-status-sle15-functional

## Explicitly not covered by QSF

* quarterly updated media: Expected to be covered by Maintenance + QAM

## What we do

We collected opinions, personal experiences and preferences starting with the following four topics: what are fun tasks ("new tests", "collaborate", "do it right"), what parts are annoying ("old & sporadic issues"), what do we think is expected from qsf-u ("be quick", "keep stuff running", "assess quality") and what we should definitely keep doing to prevent stakeholders from becoming disappointed ("build validation", "communication & support").

### How we work on our backlog

* no "due dates"
* we pick up tickets that have not been previously discussed
* more flexible choice
* WIP-limits:
 * global limit of 10 tickets "In Progress"

* target numbers or "guideline", "should be", in priorities:
 1. New, untriaged: 0
 2. Workable: 40
 3. New, assigned to [u]: ideally less than 200 (should not stop you from triaging)

* SLAs for priority tickets - how to ensure that we work on tickets which are more urgent?
 * "taken": <1d: immediate -> looking daily
 * 2-3d: urgent
 * first goal is "urgency removal": <1d: immediate, 1w: urgent

* our current "cycle time" is 1h - 1y (maximum, with interruptions)

* everybody should set priority + milestone in obvious cases, e.g. new reproducible test failures in multiple critical scenarios; in the general case the PO decides


### How we like to choose our battles

We self-assessed our tasks on a scale from "administrative" to "creative" and found, in the following descending order: daily test review (very "administrative"), ticket triaging, milestone validation, code review, creating needles, infrastructure issues, fixing and cleaning up tests, finding bugs while fixing failing tests, finding bugs while designing new tests, new automated tests (very "creative"). We found that we appreciate it if our work has a fair share of both sides. Probably a good ratio is 60% creative plus 40% administrative tasks. Both types have their advantages and we should try to keep a healthy balance.


### What "product(s)" do we (really) *care* about?

Brainstorming results:

* openSUSE Krypton -> a good example of something that we only care about remotely or not at all, even though we see the connection points, e.g. testing plasma changes early before they reach TW or Leap as operating systems we rely on, or SLE+PackageHub from which SUSE does not receive direct revenue but indirect benefit. Should be "community only", though that includes members from QSF
* openQA -> (like OBS), helps to provide ROI for SUSE
* SLE(S) (in development versions)
* Tumbleweed
* Leap, because we use it
* SLES HA
* SLE migration
* os-autoinst-distri-opensuse+backend+needles

From this list strictly no "product" gives us direct revenue, however most likely SLE(S) (as well as SLES HA and SLE migration) are good examples of a direct connection to revenue (based on SLE subscriptions). A poll in the team revealed that 3 persons see "SLE(S)" as our main product and 3 see "os-autoinst-distri-opensuse+backend+needles" as the main product. We mostly agreed, however, that we can not *own* a product like "SLE" because that product is largely not under our control.

Visualizing "cost of testing" vs. "risk of business impact" showed that both metrics have an inverse dependency: on a range from "upstream source code" over "package self-tests", "openSUSE Factory staging" and "Tumbleweed" to "SLE", we consider SLE to have the highest business risk attached, which therefore defines our priority, while testing at the upstream source level is considered most effective to prevent the higher cost of bugs or issues. Our conclusion is that we must ensure that the high-risk SLE base has its quality assured while supporting a quality assurance process as early as possible in the development process. Package self-tests as well as the openQA staging tests are seen as useful approaches in that direction, as are "domain specific specialist QA engineers" working closely together with the according in-house development parties.

## Documentation

This documentation should only be interesting for the team QA SLE functional. If you find that some of the following topics are interesting for other people, please extract those topics to another wiki section.

### QA SLE functional Dashboards

In room 3.2.15 of the Nuremberg office there are two dedicated laptops, each with a monitor attached, showing a selected overview of openQA test results with important builds from SLE and openSUSE.
These laptops are configured with a root account with the default password for production machines. First point of contact: [slindomansilla@suse.com](mailto:slindomansilla@suse.com), [okurz@suse.de](mailto:okurz@suse.de)

* `dashboard-osd-3215.suse.de`: Showing a current view of openqa.suse.de filtered for some job group results, e.g. "Functional"
* `dashboard-o3-3215.suse.de`: Showing a current view of openqa.opensuse.org filtered for some job group results which we took responsibility to review and are mostly interested in

### dashboard-osd-3215

* OS: openSUSE Tumbleweed
* Services: ssh, mosh, vnc, x2x
* Users:
 * root
 * dashboard
* VNC: `vncviewer dashboard-osd-3215`
* X2X: `ssh -XC dashboard@dashboard-osd-3215 x2x -west -to :0.0`
 * (attaches the dashboard monitor as an extra display to the left of your screens; move the mouse over and the attached X11 server will capture mouse and keyboard)

#### Content of /home/dashboard/.xinitrc

```
#
# Source common code shared between the
# X session and X init scripts
#
. /etc/X11/xinit/xinitrc.common

xset -dpms
xset s off
xset s noblank
[...]
#
# Add your own lines here...
#
$HOME/bin/osd_dashboard &
```

#### Content of /home/dashboard/bin/osd_dashboard

```
#!/bin/bash

# hide the mouse cursor and disable screen blanking/power management
DISPLAY=:0 unclutter &

DISPLAY=:0 xset -dpms
DISPLAY=:0 xset s off
DISPLAY=:0 xset s noblank

url="${url:-"https://openqa.suse.de/?group=SLE+15+%2F+%28Functional%7CAutoyast%29&default_expanded=1&limit_builds=3&time_limit_days=14&show_tags=1&fullscreen=1#"}"
DISPLAY=:0 chromium --kiosk "$url"
```

#### Cron job:

```
Min     H       DoM     Mo      DoW     Command
*	*	*	*	*	/home/dashboard/bin/reload_chromium
```

#### Content of /home/dashboard/bin/reload_chromium

```
#!/bin/bash

DISPLAY=:0 xset -dpms
DISPLAY=:0 xset s off
DISPLAY=:0 xset s noblank

# focus the Chromium window and reload the page with F5
DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool search --class Chromium)
DISPLAY=:0 xdotool key F5
DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool getactivewindow)
```

#### Issues:

* *When the screen shows a different part of the web page*
 * a simple mouse scroll through vnc or x2x may suffice.
* *When the builds displayed freeze without showing a new build, it usually means that chromium, the browser displaying the info on the screen, crashed.*
 * you can try to restart the dashboard browser this way:
  * `ps aux | grep chromium`
  * `kill $pid`
  * `/home/dashboard/bin/osd_dashboard`
 * If this also doesn't work, restart the machine.


### dashboard-o3

* Raspberry Pi 3B+
* IP: `10.160.65.207`

#### Content of /home/tux/.xinitrc
```
#!/bin/bash

unclutter &
openbox &
xset s off
xset -dpms
sleep 5
url="https://openqa.opensuse.org?group=openSUSE Tumbleweed\$|openSUSE Leap [0-9]{2}.?[0-9]*\$|openSUSE Leap.\*JeOS\$|openSUSE Krypton|openQA|GNOME Next&limit_builds=2&time_limit_days=14&&show_tags=1&fullscreen=1#build-results"
chromium --kiosk "$url" &

# reload the page every 5 minutes
while sleep 300 ; do
        xdotool windowactivate $(xdotool search --class Chromium)
        xdotool key F5
        xdotool windowactivate $(xdotool getactivewindow)
done
```

#### Content of /usr/share/lightdm/lightdm.conf.d/50-suse-defaults.conf
```
[Seat:*]
pam-service = lightdm
pam-autologin-service = lightdm-autologin
pam-greeter-service = lightdm-greeter
xserver-command=/usr/bin/X
session-wrapper=/etc/X11/xdm/Xsession
greeter-setup-script=/etc/X11/xdm/Xsetup
session-setup-script=/etc/X11/xdm/Xstartup
session-cleanup-script=/etc/X11/xdm/Xreset
autologin-user=tux
autologin-timeout=0
```