openSUSE Project Management Tool: Issues (https://progress.opensuse.org/, 2024-02-09T10:20:57Z)
QA - action #155245 (New): [mtui] Better openQA - MTUI cooperation (https://progress.opensuse.org/issues/155245, 2024-02-09T10:20:57Z, vpelcak, vpelcak@suse.com)
<a name="Motivation"></a>
<h2 >Motivation<a href="#Motivation" class="wiki-anchor">¶</a></h2>
<p>As of now, the Update Validation squad has to manually check the openQA results for a particular maintenance update and verify whether the test coverage is sufficient.</p>
<p>Having this data automatically exported into MTUI would be a significant improvement.</p>
<a name="Acceptance-Criteria"></a>
<h2 >Acceptance Criteria<a href="#Acceptance-Criteria" class="wiki-anchor">¶</a></h2>
<ul>
<li><strong>AC1:</strong> The test coverage description is properly exported to the MTUI regression tests section</li>
<li><strong>AC2:</strong> The status of the results is exported as well</li>
</ul>
<a name="Suggestions"></a>
<h2 >Suggestions<a href="#Suggestions" class="wiki-anchor">¶</a></h2>
<ul>
<li>Test coverage description of testcases</li>
<li>Mapping of the testcases to the particular update</li>
<li>A way to have the test coverage exported to the testreport template</li>
<li>Feedback loop from UV squad on the test coverage</li>
</ul>
QA - action #153478 (Blocked): [mtui] Prepare MTUI for ALP (https://progress.opensuse.org/issues/153478, 2024-01-12T11:48:27Z, vpelcak, vpelcak@suse.com)
<a name="Motivation"></a>
<h2 >Motivation<a href="#Motivation" class="wiki-anchor">¶</a></h2>
<p>With ALP approaching, we need to prepare our testing platform for its arrival so that we can continue delivering updates. Some weeks before the release of a new product we start testing fixes that came in after the deadline and did not make it into the release, and we can use that as an opportunity to exercise the workflow. The specific goal here is that MTUI is able to handle update testing for ALP.</p>
<a name="Acceptance-Criteria"></a>
<h2 >Acceptance Criteria<a href="#Acceptance-Criteria" class="wiki-anchor">¶</a></h2>
<ul>
<li><strong>AC1:</strong> mtui can be called successfully on ALP maintenance requests</li>
</ul>
<a name="Suggestions"></a>
<h2 >Suggestions<a href="#Suggestions" class="wiki-anchor">¶</a></h2>
<ul>
<li>Coordinate with the PO of UV (hrommel1) and leading engineers, e.g. mpluskal, to understand and crosscheck the specific requirements</li>
<li>If possible, actually join the UV squad for some time, e.g. days or weeks, to accomplish this task properly and efficiently</li>
<li>Find (test) maintenance requests for ALP to test mtui against. If no such test maintenance requests exist yet, request that they be created</li>
<li>Ensure you have a proper development environment for <a href="https://github.com/openSUSE/mtui/" class="external">https://github.com/openSUSE/mtui/</a>, i.e. at least <code>make test</code> works</li>
<li>Add missing support in config, e.g. in mtui/template/products/</li>
<li>Consider additional adaptions needed to handle any relevant ALP changes</li>
</ul>
QA - action #153352 (New): Test refhost maintenance automation (https://progress.opensuse.org/issues/153352, 2024-01-10T15:47:46Z, vpelcak, vpelcak@suse.com)
<a name="Problem-Statement"></a>
<h3 >Problem Statement<a href="#Problem-Statement" class="wiki-anchor">¶</a></h3>
<p>The Update Validation team uses reference hosts for testing maintenance updates.</p>
<p>These machines contain various combinations of add-ons and are often modified by testers to test some functionality (repositories are added, etc.), but the configuration changes are often not reverted.<br>
That leads to reference hosts being broken, with full disks ...</p>
<p>We also use metadata from GitLab that contains information about the reference host setup, which MTUI uses to find a suitable testing machine.</p>
<p>It would make sense to deploy automation able to take care of system functionality, cleanup, and proper add-on configuration.</p>
<p>For example, we could use Ansible to regularly check the refhost status and compare the installed add-ons with the information in the metadata; if there are any differences, the metadata would be used as the reference.</p>
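<p>The comparison step described above can be sketched in Python. The add-on names and the metadata shape below are hypothetical, since the real refhost metadata schema lives in the GitLab repo:</p>

```python
# Hedged sketch: compute the drift between a refhost's installed add-ons and
# the add-ons its metadata declares. An automation (e.g. an Ansible playbook)
# would then remove the extras and install the missing ones, treating the
# metadata as the single source of truth.

def addon_drift(installed: set, declared: set) -> dict:
    """Return add-ons to remove (installed but not declared) and to
    install (declared but missing)."""
    return {
        "remove": sorted(installed - declared),
        "install": sorted(declared - installed),
    }

# Illustrative example: a refhost that gained a stray add-on and lost a
# required one (these module names are made up for the sketch).
installed = {"sle-module-basesystem", "sle-module-desktop-applications"}
declared = {"sle-module-basesystem", "sle-module-server-applications"}
print(addon_drift(installed, declared))
# → {'remove': ['sle-module-desktop-applications'], 'install': ['sle-module-server-applications']}
```

<p>A real playbook would feed <code>zypper</code> (or SUSEConnect) output into <code>installed</code> and the parsed metadata YAML into <code>declared</code>.</p>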
<a name="Acceptance-Criteria"></a>
<h3 >Acceptance Criteria<a href="#Acceptance-Criteria" class="wiki-anchor">¶</a></h3>
<a name="AC-1"></a>
<h4 >AC 1<a href="#AC-1" class="wiki-anchor">¶</a></h4>
<p>Refhost add-ons are automatically synced to reflect the add-on setup specified in the metadata.</p>
<a name="AC-2"></a>
<h4 >AC 2<a href="#AC-2" class="wiki-anchor">¶</a></h4>
<p>Refhosts are regularly and automatically checked for health and functionality, and automatically cleaned up or reinstalled if needed.</p>
QA - action #134420 (New): [tools] If no refhost found in chosen location, try different ones in ... (https://progress.opensuse.org/issues/134420, 2023-08-18T09:54:30Z, vpelcak, vpelcak@suse.com)
<p>In the UV squad we use so-called locations as a form of load balancing across refhosts.</p>
<p>Each location has refhosts assigned, and based on the location set in .mtuirc, the appropriate group of refhosts is chosen.</p>
<p>Sometimes a location lacks the necessary refhosts. In such a case, other locations should be used as a fallback.</p>
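<p>A minimal sketch of the requested fallback behaviour, assuming a simple location-to-refhosts mapping (the mapping and hostnames are made up for illustration, not taken from the real metadata):</p>

```python
# Hedged sketch: if the configured location yields no refhost, fall back to
# the remaining locations in order instead of failing outright.

def pick_refhost(locations: dict, preferred: str):
    """Return a refhost from the preferred location, else from any other
    location that has one, else None."""
    order = [preferred] + [loc for loc in locations if loc != preferred]
    for loc in order:
        hosts = locations.get(loc, [])
        if hosts:
            return hosts[0]  # first available host in this location
    return None

# Illustrative mapping; real assignments come from the refhost metadata.
locations = {"prague": [], "nuremberg": ["ref1.example.suse.de"]}
print(pick_refhost(locations, "prague"))  # falls back to the nuremberg host
```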
QA - action #134417 (Workable): Merge prague and prague-2 refhosts locations size:M (https://progress.opensuse.org/issues/134417, 2023-08-18T09:50:54Z, vpelcak, vpelcak@suse.com)
<a name="Motivation"></a>
<h2 >Motivation<a href="#Motivation" class="wiki-anchor">¶</a></h2>
<p>Historically we were using <code>prague</code> and <code>prague-2</code> locations for the automatic connection of refhosts.</p>
<p>It is worth considering merging these 2 locations.</p>
<p>Also, it makes sense to notify users when they have an obsolete location in their config.</p>
<a name="Acceptance-criteria"></a>
<h2 >Acceptance criteria<a href="#Acceptance-criteria" class="wiki-anchor">¶</a></h2>
<ul>
<li><strong>AC1:</strong> Only <code>prague</code> is required to get all Prague locations for automatic connection of refhosts</li>
</ul>
<a name="Suggestions"></a>
<h2 >Suggestions<a href="#Suggestions" class="wiki-anchor">¶</a></h2>
<ul>
<li>Remind yourself of how mtui is used e.g. by reading the docs <a href="https://github.com/openSUSE/mtui/blob/69d4632d367b5074bc4aa5f831eb696fd50f057a/Documentation/faq.rst#L79" class="external">https://github.com/openSUSE/mtui/blob/69d4632d367b5074bc4aa5f831eb696fd50f057a/Documentation/faq.rst#L79</a></li>
<li>Review existing instances of prague/prague-2 in the <a href="https://gitlab.suse.de/qa-maintenance/metadata/-/tree/master/refhosts" class="external">metadata repo</a> e.g. <a href="https://gitlab.suse.de/qa-maintenance/metadata/-/blob/master/refhosts/15-SP5/freyr.qam.suse.cz.yml" class="external">https://gitlab.suse.de/qa-maintenance/metadata/-/blob/master/refhosts/15-SP5/freyr.qam.suse.cz.yml</a></li>
<li>Consider adding a simple warning to <a href="https://github.com/openSUSE/mtui" class="external">mtui</a></li>
<li>Maybe a message like "location must be one of ..." based on the metadata</li>
</ul>
QA - action #124473 (New): [tools] Automatic regression tests export from openQA (https://progress.opensuse.org/issues/124473, 2023-02-14T11:20:03Z, vpelcak, vpelcak@suse.com)
<a name="Motivation"></a>
<h2 >Motivation<a href="#Motivation" class="wiki-anchor">¶</a></h2>
<p>With our progress in automating regression tests, we have reached the state where many updates only need their results looked up in openQA and linked in our testreports.<br>
That is repetitive manual work which begs for automation.</p>
<a name="Acceptance-criteria"></a>
<h2 >Acceptance criteria<a href="#Acceptance-criteria" class="wiki-anchor">¶</a></h2>
<ul>
<li><strong>AC1:</strong> Automation fills the links to the appropriate regression tests in openQA and their result into the testreport of the update.</li>
</ul>
<a name="Suggestions"></a>
<h2 >Suggestions<a href="#Suggestions" class="wiki-anchor">¶</a></h2>
<ul>
<li>This will likely require knowledge of our test coverage</li>
<li>An extension of the openQA API may be needed</li>
</ul>
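<p>One possible shape for such automation, sketched in Python: filter a list of openQA jobs down to those belonging to an incident and turn them into (result, link) pairs for the testreport. The job-dictionary fields and the <code>INCIDENT_ID</code> setting name are assumptions about the openQA job JSON, not a confirmed API contract:</p>

```python
# Hedged sketch: given job records roughly as openQA might return them,
# collect the result and a clickable test URL for every job that belongs
# to a given incident. Field names here are assumptions for illustration.

OPENQA_URL = "https://openqa.suse.de"  # base URL of the openQA instance

def regression_links(jobs: list, incident_id: str) -> list:
    """Return (result, url) pairs for jobs whose settings reference the
    given incident."""
    links = []
    for job in jobs:
        if job.get("settings", {}).get("INCIDENT_ID") == incident_id:
            links.append((job["result"], f"{OPENQA_URL}/tests/{job['id']}"))
    return links

# Illustrative job records, not real openQA output:
jobs = [
    {"id": 101, "result": "passed", "settings": {"INCIDENT_ID": "12345"}},
    {"id": 102, "result": "failed", "settings": {"INCIDENT_ID": "99999"}},
]
print(regression_links(jobs, "12345"))
# → [('passed', 'https://openqa.suse.de/tests/101')]
```

<p>A real implementation would fetch the job list from the openQA REST API and write the pairs into the testreport template instead of printing them.</p>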
QA - action #115613 (New): [tools] dashboard.qam.suse.de/blocked to show updates by the priority ... (https://progress.opensuse.org/issues/115613, 2022-08-22T13:26:47Z, vpelcak, vpelcak@suse.com)
<p>When testing updates, we have them sorted by priority in SMELT: <a href="https://maintenance.suse.de/overview/" class="external">https://maintenance.suse.de/overview/</a></p>
<p>It would be nice if this priority were also reflected in the dashboard: <a href="https://dashboard.qam.suse.de/blocked" class="external">https://dashboard.qam.suse.de/blocked</a></p>
<p>That way, high-priority items at the top would be more visible and easier to prioritize.</p>
openQA Tests - action #104568 (New): [qe-core] QAM maintenance tests: openQA to be able to test u... (https://progress.opensuse.org/issues/104568, 2022-01-03T10:37:24Z, vpelcak, vpelcak@suse.com)
<a name="Motivation"></a>
<h2 >Motivation<a href="#Motivation" class="wiki-anchor">¶</a></h2>
<p>When testing of a maintenance update identifies a broken update and it is rejected, the tests start failing because the repository becomes empty.<br>
That kills the testing for the day.</p>
<a name="Acceptance-criteria"></a>
<h2 >Acceptance criteria<a href="#Acceptance-criteria" class="wiki-anchor">¶</a></h2>
<ul>
<li><strong>AC1:</strong> When an update is rejected, restarted tests proceed as if nothing happened and test the remaining maintenance updates.</li>
</ul>
<a name="Suggestions"></a>
<h2 >Suggestions<a href="#Suggestions" class="wiki-anchor">¶</a></h2>
<ul>
<li>Make openQA smarter itself to identify this kind of situation</li>
<li>Modify the initial tests to gracefully skip empty repositories</li>
</ul>
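<p>The second suggestion could look roughly like this in Python; the repository-to-packages mapping is a stand-in for whatever the test setup would actually query (e.g. repository metadata):</p>

```python
# Hedged sketch: split incident repositories into those still worth testing
# and those to skip. An empty repository means the update was rejected after
# scheduling, so the run should continue with the remaining updates instead
# of failing on the emptied one.

def plan_updates(repos: dict) -> tuple:
    """Return (test, skip) lists of repo names; empty repos are skipped."""
    test = [name for name, pkgs in sorted(repos.items()) if pkgs]
    skip = [name for name, pkgs in sorted(repos.items()) if not pkgs]
    return test, skip

# Illustrative repo names and package lists:
repos = {
    "SUSE:Maintenance:1111": ["curl", "libcurl4"],
    "SUSE:Maintenance:2222": [],  # rejected update, repo emptied
}
print(plan_updates(repos))
# → (['SUSE:Maintenance:1111'], ['SUSE:Maintenance:2222'])
```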
openQA Tests - action #103545 (Blocked): [qe-core] Kiwi tests for SLE 15 SP3 missing (https://progress.opensuse.org/issues/103545, 2021-12-06T12:02:56Z, vpelcak, vpelcak@suse.com)
<p>The Kiwi test for SLE 15 SP3 is missing.<br>
We need to make sure that the test is in place under Job Groups -> Maintenance: Kiwi.</p>
openQA Tests - action #103542 (New): [qe-core] Kiwi tests for SLE 15+ failing (https://progress.opensuse.org/issues/103542, 2021-12-06T11:58:01Z, vpelcak, vpelcak@suse.com)
<p>I noticed that Kiwi tests for SLE 15 and newer have started to fail, or have already been failing for some time.</p>
<p><a href="https://openqa.suse.de/parent_group_overview/24#grouped_by_build" class="external">https://openqa.suse.de/parent_group_overview/24#grouped_by_build</a> </p>
<p>Maybe we should consider reorganizing them so that we do not lose visibility of them.</p>
QA - action #100871 (New): Consider CI Dashboard integration to SMELT (https://progress.opensuse.org/issues/100871, 2021-10-12T13:53:52Z, vpelcak, vpelcak@suse.com)
<p><a href="http://dashboard.qam.suse.de/blocked" class="external">http://dashboard.qam.suse.de/blocked</a> contains information about tests status for individual updates.</p>
<p>That requires a lot of cross-checking between SMELT and the dashboard, which impacts usability.<br>
For example, the dashboard doesn't contain information about priority and deadlines; including that in the dashboard would effectively duplicate items from SMELT.</p>
<p>Perhaps it would be worth considering integrating the dashboard directly into SMELT and decommissioning <a href="http://dashboard.qam.suse.de" class="external">http://dashboard.qam.suse.de</a>.</p>
QA - action #98820 (New): Various requirements for qem-dashboard (was: Design document for openQA... (https://progress.opensuse.org/issues/98820, 2021-09-17T10:08:26Z, vpelcak, vpelcak@suse.com)
<p>The dashboard consists of 2 components: the dashboard itself and an openQA bot performing the operations underneath.</p>
<p>Dashboard</p>
<ul>
<li>Test state overview - The openQA reviewer can easily see which tests passed or failed for a particular update and spot commonalities, e.g. the same test modules failing in all codestreams and versions
<ul>
<li>Alternative: A test module centric openQA view</li>
</ul></li>
<li>Have a section for each squad (need to be able to assign tests/job groups to squads before)
<ul>
<li>It might be sufficient, or even better suited, to have that in openQA itself, unless there is a benefit in knowing which updates are blocked by tests maintained by particular squads</li>
</ul></li>
<li>History - be able to see which tests passed or failed for the updates in recent history, to avoid having to wait for all incident jobs to complete
<ul>
<li>alternative: seen as less helpful than automatic actions by any approving automation that looks into the history</li>
<li>further details: openQA knows the history for each scenario
<ul>
<li>Is there an example of how to verify this easily, like the old history from SMELT?
<ul>
<li>yes, openQA can provide test overviews for specific parameters, e.g. a certain incident, for a specific time</li>
<li>remark: Right now, maintenance openQA test results on openqa.suse.de are mostly stored for only about 2 weeks; storing results going back longer would need a storage investment</li>
</ul></li>
</ul></li>
</ul></li>
</ul>
<p>Bot</p>
<ul>
<li>Ensure that all to-be-scheduled tests are considered for any approval decision
<ul>
<li>rejection, especially manual rejection, can be done on individual, already finished test results</li>
<li>alternative: Only approve if there are at least as many passed results as in previous incident releases
<ul>
<li>This would need the possibility to mark a decrease in test numbers or test coverage as acceptable</li>
</ul></li>
</ul></li>
</ul>
QA - action #96314 (New): [mtui] Update MTUI to use new format for Testplatform (https://progress.opensuse.org/issues/96314, 2021-07-29T15:51:45Z, vpelcak, vpelcak@suse.com)
<p>Currently, templates contain metadata such as:</p>
<pre>
Testplatform: base=HPC(major=15,minor=);arch=[x86_64]
Testplatform: base=sap-aio(major=15,minor=);arch=[x86_64]
Testplatform: base=sles(major=15,minor=);arch=[s390x,x86_64]
Testplatform: base=SLE_HPC(major=15,minor=);arch=[aarch64,x86_64]
Testplatform: base=SLES(major=15,minor=);arch=[aarch64,ppc64le,s390x,x86_64];addon=SLES-LTSS(major=15,minor=)
Testplatform: base=SLES_SAP(major=15,minor=);arch=[ppc64le,x86_64]
</pre>
<p>Currently, only <code>Testplatform: base=sap-aio(major=15,minor=);arch=[x86_64]</code> and <code>Testplatform: base=sles(major=15,minor=);arch=[s390x,x86_64]</code> are read and followed by MTUI.<br>
This is the old format of the metadata.<br>
The lines in the new format are skipped automatically.</p>
<p>The problem is that, as you can see, the old format for example uses fewer architectures, as it is generated by an older mechanism with various quirks which are no longer needed.</p>
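<p>The grammar of these lines can be inferred from the examples above: semicolon-separated <code>key=value</code> pairs, architecture lists in brackets, and product attributes in parentheses. A hedged Python parsing sketch, based only on that inferred grammar rather than on MTUI's actual parser:</p>

```python
import re

# Hedged sketch: parse one Testplatform line into a dictionary. The grammar
# is inferred from the examples in this ticket, not from MTUI's own code.

PAIR_RE = re.compile(r"([\w-]+)=([^;]+)")       # key=value, ';'-separated
PRODUCT_RE = re.compile(r"([\w-]+)\((.*)\)")    # product(attr=val,...)

def parse_testplatform(line: str) -> dict:
    line = line.split(":", 1)[1].strip()  # drop the "Testplatform:" prefix
    result = {}
    for key, value in PAIR_RE.findall(line):
        if value.startswith("[") and value.endswith("]"):
            result[key] = value[1:-1].split(",")          # architecture list
        elif (m := PRODUCT_RE.match(value)):
            attrs = dict(p.split("=", 1) for p in m.group(2).split(",") if p)
            result[key] = {"name": m.group(1), **attrs}   # product + attrs
        else:
            result[key] = value
    return result

print(parse_testplatform("Testplatform: base=SLES(major=15,minor=);arch=[s390x,x86_64]"))
# → {'base': {'name': 'SLES', 'major': '15', 'minor': ''}, 'arch': ['s390x', 'x86_64']}
```

<p>With both formats parsed into the same structure, MTUI could treat old- and new-format lines uniformly instead of skipping the new ones.</p>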
<p>References:</p>
<p><a href="https://gitlab.suse.de/qa-maintenance/templates-management/-/issues/18" class="external">https://gitlab.suse.de/qa-maintenance/templates-management/-/issues/18</a><br>
<a href="https://gitlab.suse.de/qa-maintenance/mtui/-/issues/39" class="external">https://gitlab.suse.de/qa-maintenance/mtui/-/issues/39</a></p>
QA - action #94024 (New): MTUI unable to recover from failure to connect to the refhost (https://progress.opensuse.org/issues/94024, 2021-06-15T12:40:52Z, vpelcak, vpelcak@suse.com)
<a name="Steps-to-reproduce"></a>
<h2 >Steps to reproduce<a href="#Steps-to-reproduce" class="wiki-anchor">¶</a></h2>
<ol>
<li>Start <code>mtui -a RRID</code></li>
<li>Make it connect to a host whose ssh key has changed</li>
<li>You will be asked for a password</li>
<li>Press Ctrl+C to stop the login attempt</li>
<li>You will no longer be able to enter any command in mtui</li>
</ol>
<a name="Workaround"></a>
<h2 >Workaround<a href="#Workaround" class="wiki-anchor">¶</a></h2>
<p>Quit MTUI and start it again.</p>
openQA Tests - action #52679 (Workable): [qe-core][qem][functional][network] Update Travis checke... (https://progress.opensuse.org/issues/52679, 2019-06-06T11:44:59Z, vpelcak, vpelcak@suse.com)
<p>We should not use IPs and hostnames of machines that are not part of the on-site automation infrastructure.</p>
<p>We need to find the right balance between real world scenarios and prevention of unnecessary failures.</p>