
Revision 3 (tjyrinki_suse, 2022-02-16 08:57) → Revision 4/11 (szarate, 2023-03-01 18:12)

# QE Core 

 (this chapter has seen changes in 2020-11 regarding QSF -> QE Core / QE Yast change) 

 **QE Core** (formerly QSF, QA SLE Functional) takes care of testing the core functionality of the SUSE SLE products. The squad is comprised of members of QE Integration - [SUSE QA SLE Nbg](https://wiki.suse.net/index.php/SUSE-Quality_Assurance/Organization/Members_and_Responsibilities#QA_SLE_NBG_Team), including [SUSE QA SLE Prg](https://wiki.suse.net/index.php/SUSE-Quality_Assurance/Organization/Members_and_Responsibilities#QA_SLE_PRG_Team) - and QE Maintenance people (formerly "QAM"). The [SLE Department](https://wiki.suse.net/index.php/SUSE-Quality_Assurance/SLE_Department#QSF_.28QA_SLE_Functional.29) page describes our QA responsibilities. We focus on our automatic tests running in [openQA](https://openqa.suse.de) under the job groups "Functional" and "Core" (maintenance SLE releases), for example [SLE 15 / Functional](https://openqa.suse.de/group_overview/110). We back our automatic tests with exploratory manual tests, especially for the product milestone builds. Additionally we care about the corresponding openSUSE openQA tests (see also https://openqa.opensuse.org). 

 ## Scope of QE Core: 

 Responsibilities: 

 * Maintaining, as top-level authority, the os-autoinst-distri-opensuse git repository and the Core-maintained test suites 
 * Organizing overall configuration changes like new products/versions (but this can be delegated to other teams) 
 * Maintaining the Functional / Core job groups on OSD (see e.g. schedule/functional in os-autoinst-distri-opensuse) 
 * Maintaining modules where QE Core is the maintainer 
 * Validating install image builds (new product, QU) as far as the tests are in our domain 
 * Creating new tests for basic userspace packages if there is an important lack of test coverage (for example, a regression that slipped past) 
 * Creating new end-to-end test scenarios that combine pieces of software traditionally tested by single squads 
 * Doing more testing innovation (cross-squad) and driving changes; QE Core has some extra freedom here 
 * Triaging tickets with the [qe-core] tag 
 * Fixing problems with maintained test modules 

 ### TL;DR 

 All in all, if you want to file a ticket for QE Core, add "[qe-core]" to the subject line. 

 See [[Core#What we do|What we do]]. 

 * Full backlog and tickets to triage: https://is.gd/aLWTdv 

 Some older links: 
 * overview of current openQA SLE12SP5 tests with progress ticket references: https://openqa.suse.de/tests/overview?distri=sle&version=12-SP5&groupid=139&groupid=142 
 * fate tickets for more detailed information regarding our backlog; SLE12SP5 feature testing: http://s.qa.suse.de/qa_sle_functional_feature_tests_sle12sp5 (a report based on all tickets with milestone before SLE12SP5 GM), and http://s.qa.suse.de/qa_sle_functional_feature_tests_sle15sp1 for SLE15SP1 
 * only "blocker" or "shipstopper" bugs on "interesting products" for SLE15 http://s.qa.suse.de/qa_sle_functional_bug_query_sle15_2, http://s.qa/qa_sle_bugs_sle12_2 for SLE12 
 * Better organization of planned work can be seen at the [SUSE QA](https://progress.opensuse.org/projects/suseqa) project (which is not public). 

 ## Test plan 

 When looking for coverage of certain components or use cases keep the [openQA glossary](http://open.qa/docs/#concept) in mind. It is important to understand that "tests in openQA" could be a scenario, for example a "textmode installation run", a combined multi-machine scenario, for example "a remote ssh based installation using X-forwarding", or a test module, for example "vim", which checks if the vim editor is correctly installed, provides correct rendering and basic functionality. You are welcome to contact any member of the team to ask for more clarification about this. 
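 For illustration, the checks behind a package-level test module like "vim" boil down to steps such as the following. This is a sketch in plain shell; real os-autoinst test modules are written in Perl and also verify rendering via needles, and `check_pkg` is a hypothetical helper name, not part of any existing API:

 ```shell
 #!/bin/bash
 # check_pkg illustrates what a module like "vim" verifies at its core:
 # the binary is installed and provides basic functionality.
 check_pkg() {
     command -v "$1" >/dev/null 2>&1 || return 1   # binary present?
     "$1" --version >/dev/null 2>&1 || return 1    # runs and reports a version
 }

 if check_pkg vim; then
     echo "vim: basic checks passed"
 else
     echo "vim: not installed or not functional"
 fi
 ```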

 In detail the following areas are tested as part of "SLE functional": 

 * different hardware setups (UEFI, acpi) 
 * support for localization 

 Virtualization and Migration are covered by separate squads: 
 * openSUSE: virtualization - some "virtualization" tests are active on o3 with reduced set compared to SLE coverage (on behalf of QA SLE virtualization due to team capacity constraints, clarified in QA SLE coordination meeting 2018-03-28) 
 * openSUSE: migration - comparable to "virtualization", a reduced set compared to SLE coverage is active on o3 (on behalf of QA SLE migration due to team capacity constraints, clarified in QA SLE coordination meeting 2018-04) 

 ### QE Core 

 "Testing is the future, and the future starts with you" 

 * Current definitions can be found at https://confluence.suse.com/display/qasle/Tests+Maintained+by+QE+Core 

 Note: the link mentioned above is WIP. QE Core's work has an impact on the openSUSE community as well; to keep the community in sync, either https://progress.opensuse.org/projects/qa/wiki/Core or a better place has to be used to share the scope of work, always keeping a unique source of truth that is available to the community, while SLE-specific information stays available to SUSE employees only. 

 * Latest report based on openQA test results SLE12: http://s.qa.suse.de/test-status-sle12-functional , SLE15: http://s.qa.suse.de/test-status-sle15-functional 

 ## In new organization also covered by QE Core and others 

 * quarterly updated media: former QA Maintenance (QAM) is now part of the various QE squads. However, QU media validation happens together with Maintenance Coordination, which is not part of these squads. 

 **The rest of the page is possibly interesting, but has not been updated since QSF-U changed to QE Core and included development maintenance SLE release tests in the same categories as Functional job group** 

 ## What we do 

 We collected opinions, personal experiences and preferences starting with the following four topics: What are fun-tasks ("new tests", "collaborate", "do it right"), what parts are annoying ("old & sporadic issues"), what do we think is expected from qsf-u ("be quick", "keep stuff running", "assess quality") and what we should definitely keep doing to prevent stakeholders becoming disappointed ("build validation", "communication & support"). 

 ### How we work on our backlog 

 * Tickets have a due date only when it is required, e.g. there is a business need or other teams are impacted; otherwise no due date is set 
 * We pick up tickets that have not been previously discussed 
 * We are flexible in choosing what we work on 
 * WIP-limits: 
  * global limit of 10 tickets "In Progress" 

 * We strive for the following target numbers (guidelines, "should be") per priority: 
  1. New, untriaged: 0, i.e. no [[wiki#ticket-backlog-triaging|un-triaged]] tickets; see also [[Wiki#Guidelines-for-triaging|this wiki]] 
  2. Workable: 40 
  3. New, assigned [qe-core] or [qe-yast]: ideally less than 200 (this should not stop you from triaging) 

 * SLAs for priority tickets, to ensure the more urgent tickets are worked on first: 
  * First "taken": immediate: <1d (we look daily); urgent: 2-3d 
  * The first goal is "to remove the urgency": immediate: <1d; urgent: 1w 
  * See our current [[wiki#SLOs-service-level-objectives|Service Level Objectives]]. Removing the urgency often does not mean fully closing the ticket, but understanding what it is about and having a plan on how to tackle it. This is especially true for maintenance updates, as failing openQA tests imply that no updates are auto-approved. Cycle time: 1h - 1y (maximum, with interruptions) 

 * Everybody should set priority + milestone in obvious cases, e.g. new reproducible test failures in multiple critical scenarios; in the general case the PO decides 

 ### Where to find our backlog 

 * Full backlog and tickets to triage can be found at: https://is.gd/aLWTdv 
 * Example of a sprint: https://is.gd/qe_core_backlog_example 

 ### How we like to choose our battles 

 We self-assessed our tasks on a scale from "administrative" to "creative" and found in the following descending order: daily test review (very "administrative"), ticket triaging, milestone validation, code review, create needles, infrastructure issues, fix and cleanup tests, find bugs while fixing failing tests, find bugs while designing new tests, new automated tests (very "creative"). Then we found we appreciate if our work has a fair share of both sides. Probably a good ratio is 60% creative plus 40% administrative tasks. Both types have their advantages and we should try to keep the healthy balance. 


 ### What "product(s)" do we (really) *care* about? 

 Brainstorming results: 

 * openSUSE Krypton -> a good example of something we only remotely care about, or not at all, even though we see the connection points: e.g. testing plasma changes early before they reach TW or Leap (operating systems we rely on), or SLE+PackageHub, from which SUSE receives no direct revenue but an indirect benefit. Should be "community only", though that includes members from QSF 
 * openQA -> (like OBS), helps to provide ROI for SUSE 
 * SLE(S) (in development versions) 
 * Tumbleweed 
 * Leap, because we use it 
 * SLE migration 
 * os-autoinst-distri-opensuse+backend+needles 

 From this list, strictly speaking, no "product" gives us direct revenue; however, SLE(S) (as well as SLES HA and SLE migration) is a good example of a direct connection to revenue (based on SLE subscriptions). Conducting a poll in the team revealed that 3 persons see "SLE(S)" as our main product and 3 see "os-autoinst-distri-opensuse+backend+needles" as the main product. We mostly agreed, however, that we can not *own* a product like "SLE", because that product is largely not under our control. 

 Visualizing "cost of testing" vs. "risk of business impact" showed that both metrics have an inverse dependency: on a range from "upstream source code" over "package self-tests", "openSUSE Factory staging" and "Tumbleweed" to "SLE", we consider SLE to have the highest business risk attached, which therefore defines our priority; however, testing at the upstream source level is considered most effective to prevent the higher cost of bugs or issues. Our conclusion is that we must ensure that the high-risk SLE base has its quality assured while supporting a quality assurance process as early as possible in the development process. Package self-tests as well as the openQA staging tests are seen as useful approaches in that direction, as are "domain-specific specialist QA engineers" working closely together with the according in-house development parties. 

 ## Documentation 

 This documentation should only be interesting for the team QA SLE functional. If you find that some of the following topics are interesting for other people, please extract those topics to another wiki section. 

 ### QA SLE functional Dashboards 

 In room 3.2.15 of the Nuremberg office there are two dedicated laptops, each with a monitor attached, showing a selected overview of openQA test results with important builds from SLE and openSUSE. 
 These laptops are configured with a root account with the default password for production machines. First points of contact: [slindomansilla@suse.com](mailto:slindomansilla@suse.com), [okurz@suse.de](mailto:okurz@suse.de) 

 * `dashboard-osd-3215.suse.de`: shows the current view of openqa.suse.de filtered for some job group results, e.g. "Functional" 
 * `dashboard-o3-3215.suse.de`: shows the current view of openqa.opensuse.org filtered for some job group results which we took responsibility to review and are mostly interested in 

 ### dashboard-osd-3215 

 * OS: openSUSE Tumbleweed 
 * Services: ssh, mosh, vnc, x2x 
 * Users: 
 ** root 
 ** dashboard 
 * VNC: `vncviewer dashboard-osd-3215` 
 * X2X: `ssh -XC dashboard@dashboard-osd-3215 x2x -west -to :0.0` 
 ** (attaches the dashboard monitor as an extra display to the left of your screens. Then move the mouse over and the attached X11 server will capture mouse and keyboard) 

 #### Content of /home/dashboard/.xinitrc 

 ``` 
 # 
 # Source common code shared between the 
 # X session and X init scripts 
 # 
 . /etc/X11/xinit/xinitrc.common 

 xset -dpms 
 xset s off 
 xset s noblank 
 [...] 
 # 
 # Add your own lines here... 
 # 
 $HOME/bin/osd_dashboard & 
 ``` 

 #### Content of /home/dashboard/bin/osd_dashboard 

 ``` 
 #!/bin/bash 

 DISPLAY=:0 unclutter & 

 DISPLAY=:0 xset -dpms 
 DISPLAY=:0 xset s off 
 DISPLAY=:0 xset s noblank 

 url="${url:-"https://openqa.suse.de/?group=SLE+15+%2F+%28Functional%7CAutoyast%29&default_expanded=1&limit_builds=3&time_limit_days=14&show_tags=1&fullscreen=1#"}" 
 DISPLAY=:0 chromium --kiosk "$url" 
 ``` 
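 The `url="${url:-…}"` line above uses shell default-value expansion, so the kiosk URL can be overridden from the environment without editing the script. A minimal demonstration of the same pattern:

 ```shell
 #!/bin/bash
 # ${var:-default} expands to $var if it is set and non-empty, otherwise to "default".
 url="${url:-https://openqa.suse.de/?fullscreen=1}"   # same pattern as in osd_dashboard
 echo "$url"
 ```

 Running the dashboard script as `url=https://openqa.opensuse.org ./osd_dashboard` would therefore point it at a different openQA instance.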

 #### Cron job: 

 ``` 
 Min       H         DoM       Mo        DoW       Command 
 * 	 * 	 * 	 * 	 * 	 /home/dashboard/bin/reload_chromium 
 ``` 

 #### Content of /home/dashboard/bin/reload_chromium 

 ``` 
 #!/bin/bash 

 DISPLAY=:0 xset -dpms 
 DISPLAY=:0 xset s off 
 DISPLAY=:0 xset s noblank 

 DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool search --class Chromium) 
 DISPLAY=:0 xdotool key F5 
 DISPLAY=:0 xdotool windowactivate $(DISPLAY=:0 xdotool getactivewindow) 
 ``` 

 #### Issues: 

 * ''When the screen shows a different part of the web page'' 
 ** a simple mouse scroll through vnc or x2x may suffice. 
 * ''When the builds displayed freeze without showing a new build, it usually means that midori, the browser displaying the info on the screen, crashed.'' 
 ** You can try to restart midori this way: 
 *** `ps aux | grep midori` 
 *** `kill $pid` 
 *** `/home/dashboard/bin/osd_dashboard` 
 ** If this also doesn't work, restart the machine. 
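 The manual restart steps above can be wrapped in a small helper. This is only a sketch: `restart_dashboard_browser` is a hypothetical name, and the process name should be adjusted to whatever browser the dashboard actually runs:

 ```shell
 #!/bin/bash
 # Hypothetical helper combining the restart steps above:
 # 1. find the browser PID, 2. kill it, 3. relaunch the dashboard script.
 restart_dashboard_browser() {
     local pid
     pid=$(pgrep -x midori)                # adjust the process name if needed
     if [ -n "$pid" ]; then
         kill "$pid"
     fi
     /home/dashboard/bin/osd_dashboard &   # relaunch in the background
 }
 ```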


 ### dashboard-o3 

 * Raspberry Pi 3B+ 
 * IP: `10.160.65.207` 

 #### Content of /home/tux/.xinitrc 
 ``` 
 #!/bin/bash 

 unclutter & 
 openbox & 
 xset s off 
 xset -dpms 
 sleep 5 
 url="https://openqa.opensuse.org?group=openSUSE Tumbleweed\$|openSUSE Leap [0-9]{2}.?[0-9]*\$|openSUSE Leap.\*JeOS\$|openSUSE Krypton|openQA|GNOME Next&limit_builds=2&time_limit_days=14&&show_tags=1&fullscreen=1#build-results" 
 chromium --kiosk "$url" & 

 while sleep 300 ; do 
         xdotool windowactivate $(xdotool search --class Chromium) 
         xdotool key F5 
         xdotool windowactivate $(xdotool getactivewindow) 
 done 
 ``` 

 #### Content of /usr/share/lightdm/lightdm.conf.d/50-suse-defaults.conf 
 ``` 
 [Seat:*] 
 pam-service = lightdm 
 pam-autologin-service = lightdm-autologin 
 pam-greeter-service = lightdm-greeter 
 xserver-command=/usr/bin/X 
 session-wrapper=/etc/X11/xdm/Xsession 
 greeter-setup-script=/etc/X11/xdm/Xsetup 
 session-setup-script=/etc/X11/xdm/Xstartup 
 session-cleanup-script=/etc/X11/xdm/Xreset 
 autologin-user=tux 
 autologin-timeout=0 
 ``` 
