action #120939
closed: [alert] Pipeline for scheduling incidents runs into timeout size:M
Description
Observation
This has already happened twice. The logs look like this:
…
INFO: Triggering {'api': 'api/incident_settings', 'qem': {'incident': 26739, 'arch': 'x86_64', 'flavor': 'Azure-SAP-BYOS-Incidents-saptune', 'version': '15-SP1', 'withAggregate': False, 'settings': {'DISTRI': 'sle', 'VERSION': '15-SP1', 'ARCH': 'x86_64', 'FLAVOR': 'Azure-SAP-BYOS-Incidents-saptune', '_ONLY_OBSOLETE_SAME_BUILD': '1', '_OBSOLETE': '1', 'INCIDENT_ID': 26739, '__CI_JOB_URL': 'https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1255284', 'BUILD': ':26739:nodejs10', 'RRID': 'SUSE:Maintenance:26739:284015', 'REPOHASH': 1667984242, 'OS_TEST_ISSUES': '26739', 'INCIDENT_REPO': 'http://download.suse.de/ibs/SUSE:/Maintenance:/26739/SUSE_Updates_SLE-Product-SLES_SAP_15-SP1_x86_64', '_PRIORITY': 60, '__SMELT_INCIDENT_URL': 'https://smelt.suse.de/incident/26739', '__DASHBOARD_INCIDENT_URL': 'https://dashboard.qam.suse.de/incident/26739'}}, 'openqa': {'DISTRI': 'sle', 'VERSION': '15-SP1', 'ARCH': 'x86_64', 'FLAVOR': 'Azure-SAP-BYOS-Incidents-saptune', '_ONLY_OBSOLETE_SAME_BUILD': '1', '_OBSOLETE': '1', 'INCIDENT_ID': 26739, '__CI_JOB_URL': 'https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1255284', 'BUILD': ':26739:nodejs10', 'RRID': 'SUSE:Maintenance:26739:284015', 'REPOHASH': 1667984242, 'OS_TEST_ISSUES': '26739', 'INCIDENT_REPO': 'http://download.suse.de/ibs/SUSE:/Maintenance:/26739/SUSE_Updates_SLE-Product-SLES_SAP_15-SP1_x86_64', '_PRIORITY': 60, '__SMELT_INCIDENT_URL': 'https://smelt.suse.de/incident/26739', '__DASHBOARD_INCIDENT_URL': 'https://dashboard.qam.suse.de/incident/26739'}}
INFO: openqa-cli api --host https://openqa.suse.de -X post isos DISTRI=sle VERSION=15-SP1 ARCH=x86_64 FLAVOR=Azure-SAP-BYOS-Incidents-saptune _ONLY_OBSOLETE_SAME_BUILD=1 _OBSOLETE=1 INCIDENT_ID=26739 __CI_JOB_URL=https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1255284 BUILD=:26739:nodejs10 RRID=SUSE:Maintenance:26739:284015 REPOHASH=1667984242 OS_TEST_ISSUES=26739 INCIDENT_REPO=http://download.suse.de/ibs/SUSE:/Maintenance:/26739/SUSE_Updates_SLE-Product-SLES_SAP_15-SP1_x86_64 _PRIORITY=60 __SMELT_INCIDENT_URL=https://smelt.suse.de/incident/26739 __DASHBOARD_INCIDENT_URL=https://dashboard.qam.suse.de/incident/26739
ERROR: Job failed: execution took longer than 1h0m0s seconds
(see https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1255284)
Acceptance criteria
- AC1: Pipeline does not fail anymore
Rollback steps
- Set Updates only schedule for Maintenance Incidents in openqa back to Active
Suggestions
- The issue is still occurring
- Output the response from openQA if we get a 404 from isos post
- Track down what is causing the delay, e.g. OBS or (perhaps less likely) openQA, which returns the 404
- Ensure we can see timestamps in the logs (see the sketch after this list)
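For the last two suggestions, a minimal sketch of what the bot side could look like; the logger name, format string and helper function below are illustrative assumptions and not taken from the qem-bot code:

import logging

# Prefix every record with a timestamp so delays between steps become visible.
logging.basicConfig(
    format="%(asctime)s %(levelname)-8s %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    level=logging.INFO,
)
log = logging.getLogger("bot.openqa")

# Hypothetical call site: also log the response body when the isos post returns 404.
def log_failed_isos_post(status_code: int, body: str) -> None:
    if status_code == 404:
        log.error("openQA returned 404, response body: %s", body)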
Updated by jbaier_cz almost 2 years ago
I see a lot of 404 failures from openQA:
INFO: Triggering {'api': 'api/incident_settings', 'qem': {'incident': 26726, 'arch': 'x86_64', 'flavor': 'Azure-SAP-BYOS-Incidents-saptune', 'version': '12-SP4', 'withAggregate': True, 'settings': {'DISTRI': 'sle', 'VERSION': '12-SP4', 'ARCH': 'x86_64', 'FLAVOR': 'Azure-SAP-BYOS-Incidents-saptune', '_ONLY_OBSOLETE_SAME_BUILD': '1', '_OBSOLETE': '1', 'INCIDENT_ID': 26726, '__CI_JOB_URL': 'https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1255284', 'BUILD': ':26726:krb5', 'RRID': 'SUSE:Maintenance:26726:283931', 'REPOHASH': 1667898735, 'OS_TEST_ISSUES': '26726', 'SLES4SAP_TEST_ISSUES': '26726', 'INCIDENT_REPO': 'http://download.suse.de/ibs/SUSE:/Maintenance:/26726/SUSE_Updates_SLE-SAP_12-SP4_x86_64,http://download.suse.de/ibs/SUSE:/Maintenance:/26726/SUSE_Updates_SLE-SERVER_12-SP4-LTSS_x86_64', '_PRIORITY': 60, '__SMELT_INCIDENT_URL': 'https://smelt.suse.de/incident/26726', '__DASHBOARD_INCIDENT_URL': 'https://dashboard.qam.suse.de/incident/26726'}}, 'openqa': {'DISTRI': 'sle', 'VERSION': '12-SP4', 'ARCH': 'x86_64', 'FLAVOR': 'Azure-SAP-BYOS-Incidents-saptune', '_ONLY_OBSOLETE_SAME_BUILD': '1', '_OBSOLETE': '1', 'INCIDENT_ID': 26726, '__CI_JOB_URL': 'https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1255284', 'BUILD': ':26726:krb5', 'RRID': 'SUSE:Maintenance:26726:283931', 'REPOHASH': 1667898735, 'OS_TEST_ISSUES': '26726', 'SLES4SAP_TEST_ISSUES': '26726', 'INCIDENT_REPO': 'http://download.suse.de/ibs/SUSE:/Maintenance:/26726/SUSE_Updates_SLE-SAP_12-SP4_x86_64,http://download.suse.de/ibs/SUSE:/Maintenance:/26726/SUSE_Updates_SLE-SERVER_12-SP4-LTSS_x86_64', '_PRIORITY': 60, '__SMELT_INCIDENT_URL': 'https://smelt.suse.de/incident/26726', '__DASHBOARD_INCIDENT_URL': 'https://dashboard.qam.suse.de/incident/26726'}}
INFO: openqa-cli api --host https://openqa.suse.de -X post isos DISTRI=sle VERSION=12-SP4 ARCH=x86_64 FLAVOR=Azure-SAP-BYOS-Incidents-saptune _ONLY_OBSOLETE_SAME_BUILD=1 _OBSOLETE=1 INCIDENT_ID=26726 __CI_JOB_URL=https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1255284 BUILD=:26726:krb5 RRID=SUSE:Maintenance:26726:283931 REPOHASH=1667898735 OS_TEST_ISSUES=26726 SLES4SAP_TEST_ISSUES=26726 INCIDENT_REPO=http://download.suse.de/ibs/SUSE:/Maintenance:/26726/SUSE_Updates_SLE-SAP_12-SP4_x86_64,http://download.suse.de/ibs/SUSE:/Maintenance:/26726/SUSE_Updates_SLE-SERVER_12-SP4-LTSS_x86_64 _PRIORITY=60 __SMELT_INCIDENT_URL=https://smelt.suse.de/incident/26726 __DASHBOARD_INCIDENT_URL=https://dashboard.qam.suse.de/incident/26726
ERROR: openQA returned 404
ERROR: Post failed with {'ARCH': 'x86_64',
'BUILD': ':26726:krb5',
'DISTRI': 'sle',
'FLAVOR': 'Azure-SAP-BYOS-Incidents-saptune',
'INCIDENT_ID': 26726,
'INCIDENT_REPO': 'http://download.suse.de/ibs/SUSE:/Maintenance:/26726/SUSE_Updates_SLE-SAP_12-SP4_x86_64,http://download.suse.de/ibs/SUSE:/Maintenance:/26726/SUSE_Updates_SLE-SERVER_12-SP4-LTSS_x86_64',
'OS_TEST_ISSUES': '26726',
'REPOHASH': 1667898735,
'RRID': 'SUSE:Maintenance:26726:283931',
'SLES4SAP_TEST_ISSUES': '26726',
'VERSION': '12-SP4',
'_OBSOLETE': '1',
'_ONLY_OBSOLETE_SAME_BUILD': '1',
'_PRIORITY': 60,
'__CI_JOB_URL': 'https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1255284',
'__DASHBOARD_INCIDENT_URL': 'https://dashboard.qam.suse.de/incident/26726',
'__SMELT_INCIDENT_URL': 'https://smelt.suse.de/incident/26726'}
INFO: POST failed, not updating dashboard
Do I remember correctly that there is now some internal retry in the openqa-client? Does it retry on 404 (it shouldn't)?
Updated by tinita almost 2 years ago
It does not retry on 404, and only retries if requested: https://github.com/os-autoinst/openQA-python-client/commit/dc939d215b1dc509bf8ab8a8eb189f27a5375845
AFAICS, the requests in the log are all different.
Updated by jbaier_cz almost 2 years ago
In that case, we are losing time somewhere else. I propose https://github.com/openSUSE/qem-bot/pull/86 to add timestamps to all log entries; then we will see where it hangs.
Updated by okurz almost 2 years ago
- Tags changed from alert to alert, reactive work
Updated by livdywan almost 2 years ago
- Tags changed from alert, reactive work to alert
- Subject changed from [alert] Pipeline for scheduling incidents runs into timeout to [alert] Pipeline for scheduling incidents runs into timeout size:M
- Description updated (diff)
- Status changed from New to Workable
Updated by jbaier_cz almost 2 years ago
The job with timestamps is running; we will see the results in an hour: https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1256445
Updated by jbaier_cz almost 2 years ago
So, it is a minute lost on every 404, which is really an issue:
2022-11-25 10:46:55 INFO openqa-cli api --host https://openqa.suse.de -X post isos DISTRI=sle VERSION=12-SP4 ARCH=x86_64 FLAVOR=Azure-SAP-BYOS-Incidents-saptune _ONLY_OBSOLETE_SAME_BUILD=1 _OBSOLETE=1 INCIDENT_ID=25111 __CI_JOB_URL=https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1256445 BUILD=:25111:gcc10 REPOHASH=1658498452 OS_TEST_ISSUES=25111 SLES4SAP_TEST_ISSUES=25111 INCIDENT_REPO=http://download.suse.de/ibs/SUSE:/Maintenance:/25111/SUSE_Updates_SLE-SAP_12-SP4_x86_64,http://download.suse.de/ibs/SUSE:/Maintenance:/25111/SUSE_Updates_SLE-SERVER_12-SP4-LTSS_x86_64 __SMELT_INCIDENT_URL=https://smelt.suse.de/incident/25111 __DASHBOARD_INCIDENT_URL=https://dashboard.qam.suse.de/incident/25111
2022-11-25 10:48:05 ERROR openQA returned 404
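For scale, a rough back-of-envelope calculation; only the 70-second delay is taken from the two timestamps above, while the number of failing posts per run is a made-up figure for illustration:

seconds_per_404 = 70           # 10:46:55 -> 10:48:05 above
failing_posts_per_run = 55     # hypothetical count of 404 responses in one run
print(f"{seconds_per_404 * failing_posts_per_run / 60:.0f} minutes")  # ~64, beyond the 1 h job timeout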
Updated by jbaier_cz almost 2 years ago
- Related to action #107923: qem-bot: Ignore not-ok openQA jobs for specific incident based on openQA job comment size:M added
Updated by jbaier_cz almost 2 years ago
tinita wrote:
It does not retry on 404, and only retries if requested: https://github.com/os-autoinst/openQA-python-client/commit/dc939d215b1dc509bf8ab8a8eb189f27a5375845
AFAICS, the requests in the log are all different.
I am aware of https://github.com/os-autoinst/openQA-python-client/pull/34, but do we use it in the CI? In other words, are those changes in the Leap repositories?
Updated by tinita almost 2 years ago
I think I confused it anyway. Our Perl tool openqa-cli is used here. I thought we were using openQA-python-client in qem-bot.
Or are we using both? (And why?) A bit off-topic, but that's confusing.
Updated by tinita almost 2 years ago
Btw, you can also do grep "POST.* 404 " /var/log/apache2/access_log on osd to see when the requests hit the server.
It seems the requests themselves are not very slow most of the time (< 100 ms).
Updated by jbaier_cz almost 2 years ago
tinita wrote:
I think I confused it anyway. Our Perl tool openqa-cli is used here. I thought we were using openQA-python-client in qem-bot.
Or are we using both? (And why?) A bit off-topic, but that's confusing.
It is using the Python library:
from openqa_client.client import OpenQA_Client
The log line with the openqa-cli call is there just so the user can run that particular job outside of qem-bot (it should be the exact equivalent of what qem-bot is calling via the Python binding).
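For context, a minimal sketch of that correspondence, assuming the client is constructed roughly like this in qem-bot (the settings dict is abbreviated; openqa_request is the request helper exposed by openQA-python-client):

from openqa_client.client import OpenQA_Client

client = OpenQA_Client(server="openqa.suse.de")
# Abbreviated settings; in qem-bot they are built from the incident metadata
# shown in the log lines above.
settings = {
    "DISTRI": "sle",
    "VERSION": "12-SP4",
    "ARCH": "x86_64",
    "FLAVOR": "Azure-SAP-BYOS-Incidents-saptune",
    "INCIDENT_ID": 26726,
}
# Equivalent of: openqa-cli api --host https://openqa.suse.de -X post isos KEY=VALUE ...
client.openqa_request("POST", "isos", settings)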
Updated by livdywan almost 2 years ago
schedule updates seems to be failing with a different error now:
2022-11-28 07:48:20 INFO Starting bot mainloop
Traceback (most recent call last):
  File "./qem-bot/bot-ng.py", line 7, in <module>
    main()
  File "/builds/qa-maintenance/bot-ng/qem-bot/openqabot/main.py", line 43, in main
    sys.exit(cfg.func(cfg))
  File "/builds/qa-maintenance/bot-ng/qem-bot/openqabot/args.py", line 35, in do_aggregate_schedule
    return bot()
  File "/builds/qa-maintenance/bot-ng/qem-bot/openqabot/openqabot.py", line 61, in __call__
    post += worker(self.incidents, self.token, self.ci, self.ignore_onetime)
  File "/builds/qa-maintenance/bot-ng/qem-bot/openqabot/types/aggregate.py", line 212, in __call__
    handle_arch(incidents, token, ci_url, ignore_onetime) for arch in self.archs
  File "/builds/qa-maintenance/bot-ng/qem-bot/openqabot/types/aggregate.py", line 212, in <listcomp>
    handle_arch(incidents, token, ci_url, ignore_onetime) for arch in self.archs
NameError: name 'handle_arch' is not defined
Execution failed! Will try again in 8 minutes...
Consequently I paused the pipeline.
Updated by livdywan almost 2 years ago
- Description updated (diff)
- Status changed from Workable to In Progress
- Assignee set to livdywan
Updated by jbaier_cz almost 2 years ago
cdywan wrote:
schedule updates seems to be failing with a different error now:
2022-11-28 07:48:20 INFO Starting bot mainloop
[...]
NameError: name 'handle_arch' is not defined
Execution failed! Will try again in 8 minutes...
Consequently I paused the pipeline.
handle_arch was introduced in https://github.com/openSUSE/qem-bot/commit/cfd40e1634775dd7b46c7bb05267c206be99f3d0; this failure might be just a good old regression.
Updated by livdywan almost 2 years ago
cdywan wrote:
NameError: name 'handle_arch' is not defined
Execution failed! Will try again in 8 minutes...
Consequently I paused the pipeline.
https://github.com/openSUSE/qem-bot/pull/89
I unpaused schedule updates, and also renamed it to match the name you see when it fails, because the mismatch caused me some confusion when trying to see which one was failing.
Updated by jbaier_cz almost 2 years ago
AFAIK the name reflected what the pipeline actually does. By the way, you should also include the cron timer information in the name, as the name is the only visible information for others and there is no other way to find out the scheduling info without edit rights.
Updated by mkittler almost 2 years ago
It looks like we get 404 responses from openQA very often during that pipeline run. When invoking such commands, the response is returned promptly by openQA and the error looks like this:
openqa-cli api --host https://openqa.suse.de -X post isos DISTRI=sle VERSION=15-SP1 ARCH=x86_64 FLAVOR=Azure-SAP-BYOS-Incidents-saptune _ONLY_OBSOLETE_SAME_BUILD=1 _OBSOLETE=1 INCIDENT_ID=26739 __CI_JOB_URL=https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs/1259318 BUILD=:26739:nodejs10 RRID=SUSE:Maintenance:26739:284015 REPOHASH=1667984242 OS_TEST_ISSUES=26739 INCIDENT_REPO=http://download.suse.de/ibs/SUSE:/Maintenance:/26739/SUSE_Updates_SLE-Product-SLES_SAP_15-SP1_x86_64 _PRIORITY=60 __SMELT_INCIDENT_URL=https://smelt.suse.de/incident/26739 __DASHBOARD_INCIDENT_URL=https://dashboard.qam.suse.de/incident/26739
404 Not Found
{"count":0,"error":"no templates found for sle-Azure-SAP-BYOS-Incidents-saptune-x86_64","error_status":404,"failed":{},"ids":[],"scheduled_product_id":1036435}
The retry behavior of https://github.com/os-autoinst/openQA-python-client is problematic, as it means wasting almost 2 minutes on every 404 error. That adds up and leads to the overall timeout of the pipeline.
If we don't already use the latest release of the Python module, we should upgrade, because https://github.com/os-autoinst/openQA-python-client/commit/dc939d215b1dc509bf8ab8a8eb189f27a5375845 would help. (And if we already use it, then this change might not work as intended and could be our regression.)
I'm also wondering whether the bot is expected to make requests for non-existing templates. Should it act more cleverly and avoid the 404 errors in the first place?
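If the lost time is indeed spent in client-side retries, one possible mitigation on the bot side would be to fail fast on the isos post and leave any retrying to the bot itself. A sketch under the assumption that the installed openQA-python-client still accepts the retries keyword argument on openqa_request (not verified against the container image):

from openqa_client.client import OpenQA_Client

client = OpenQA_Client(server="openqa.suse.de")

def post_isos(settings: dict):
    # retries=0 asks the client not to re-send the request itself, so a 404
    # ("no templates found ...") is reported immediately instead of after
    # several retry/wait cycles.
    return client.openqa_request("POST", "isos", settings, retries=0)

Whether the bot should additionally skip products without any job templates, and so avoid the 404s entirely, is a separate question that would need a lookup against openQA's job templates before posting.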
Updated by livdywan almost 2 years ago
jbaier_cz wrote:
AFAIK the name reflected what the pipeline actually does. By the way, you should also include the cron timer information into the name as the name is the only visible information for others and there is no other way to find out the scheduling info without edit rights.
I wasn't arguing about the name, just that it should match. An MR is probably required to change it the other way around (though it may not be so easy).
Updated by jbaier_cz almost 2 years ago
Thanks for the nice summary @mkittler; as I said in #120939#note-10, the container should use python3-openqa_client directly from Leap 15.4 (we can probably override it if needed by linking a newer package into the QA:Maintenance project in IBS).
Yes, the 404s are interesting; I am wondering if that could be because of different settings inside the openQA job and some bot metadata.
An MR is probably required to change it the other way around.
Yes, the name of the pipeline job is in the YAML and it has some naming restrictions.
Updated by livdywan almost 2 years ago
Apparently we can't easily surface the error message that openqa-cli shows, because openQA-python-client consumes it and never propagates it via the exception it emits. This will require an upstream change.
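For reference, a sketch of what is currently reachable from the bot when the post fails, assuming the bot catches openqa_client.exceptions.RequestError (the exception the client raises for failed requests); the JSON body with the "no templates found ..." message is consumed inside the client and is not attached to the exception, which is why the upstream change is needed:

from openqa_client.client import OpenQA_Client
from openqa_client.exceptions import RequestError

client = OpenQA_Client(server="openqa.suse.de")

try:
    client.openqa_request("POST", "isos", {"DISTRI": "sle"})  # a post that will fail
except RequestError as err:
    # In released versions err.args carries roughly (method, url, status code);
    # the response text is not included, so it cannot be logged from here.
    print("openQA returned", err.args)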
Updated by livdywan almost 2 years ago
jbaier_cz wrote:
tinita wrote:
I am aware of https://github.com/os-autoinst/openQA-python-client/pull/34, but do we use it in the CI? In other words, are those changes in the Leap repositories?
Checking the image here:
python3 -c 'import openqa_client; print(openqa_client.__version__)'
4.1.1
The latest is 4.2.1. Actually the request fix is not contained in any release so far, so we wouldn't benefit from it.
We could install straight from git now and see if that helps us.
Updated by kraih almost 2 years ago
cdywan wrote:
schedule updates seems to be failing with a different error now:
2022-11-28 07:48:20 INFO Starting bot mainloop
[...]
NameError: name 'handle_arch' is not defined
Execution failed! Will try again in 8 minutes...
Consequently I paused the pipeline.
Might be a good idea to check if this is related to the same refactoring that caused #120973. Maybe we should just revert that.
Updated by jbaier_cz almost 2 years ago
cdywan wrote:
jbaier_cz wrote:
tinita wrote:
I am aware of https://github.com/os-autoinst/openQA-python-client/pull/34, but do we use it in the CI? In other words, are those changes in the Leap repositories?
Checking the image here:
python3 -c 'import openqa_client; print(openqa_client.__version__)'
4.1.1
The latest is 4.2.1. Actually the request fix is not contained in any release so far, so we wouldn't benefit from it. We could install straight from git now and see if that helps us.
Yeah, it seems that we use the version in the Leap, which is 2 years old. I linked python-openqa_client from devel:languages:python into QA:Maintenance, so if someone updates the devel package, it can be used right away from the container (without waiting for maintenance update to Leap).
Updated by livdywan almost 2 years ago
jbaier_cz wrote:
Yeah, it seems that we use the version in the Leap, which is 2 years old. I linked python-openqa_client from devel:languages:python into QA:Maintenance, so if someone updates the devel package, it can be used right away from the container (without waiting for maintenance update to Leap).
https://build.opensuse.org/request/show/1038718
With that I'm retracting my proposal to install from git. This should be enough.
Updated by livdywan almost 2 years ago
cdywan wrote:
jbaier_cz wrote:
Yeah, it seems that we use the version in the Leap, which is 2 years old. I linked python-openqa_client from devel:languages:python into QA:Maintenance, so if someone updates the devel package, it can be used right away from the container (without waiting for maintenance update to Leap).
https://build.opensuse.org/request/show/1038718
With that I'm retracting my proposal to install from git. This should be enough.
Apparently OBS doesn't care for my update:
error: Bad source: /home/abuild/rpmbuild/SOURCES/python-openqa_client-4.2.1.tar.gz: No such file or directory
I checked that the git checksum was updated, the file is in the repo and the spec also seems to refer to the correct version. Not sure why this currently won't build.
Updated by openqa_review almost 2 years ago
- Due date set to 2022-12-13
Setting due date based on mean cycle time of SUSE QE Tools
Updated by livdywan almost 2 years ago
cdywan wrote:
Apparently OBS doesn't care for my update:
error: Bad source: /home/abuild/rpmbuild/SOURCES/python-openqa_client-4.2.1.tar.gz: No such file or directory
I checked that the git checksum was updated, the file is in the repo and the spec also seems to refer to the correct version. Not sure why this currently won't build.
Package is looking good now: https://build.opensuse.org/request/show/1038877
Updated by livdywan almost 2 years ago
Just to clarify the current state:
- schedule updates passes
- schedule incidents times out as before, pending the package update on OBS
- My upstream change to improve logging probably isn't crucial since we seem to know what the error is, but I'm still working on it on the side.
- I'm trying to confirm why we're still seeing the same "no templates found" errors
Updated by okurz almost 2 years ago
- Status changed from In Progress to Feedback
https://build.opensuse.org/request/show/1038877 was declined with "Please, make rpmlint happy.", replaced by https://build.opensuse.org/request/show/1039298.
https://gitlab.suse.de/qa-maintenance/bot-ng/-/jobs shows mostly green, so this already looks good.
Please crosscheck what you see as necessary to resolve.
Updated by livdywan almost 2 years ago
podman run --rm -it -v $(pwd):/pwd -w /pwd registry.suse.de/qa/maintenance/containers/qam-ci-leap:latest
python3 -c 'import openqa_client; print(openqa_client.__version__)'
4.2.1
The update seems to have made it into the container.
Updated by livdywan almost 2 years ago
- Tags changed from alert, reactive work to alert
- Status changed from Feedback to Resolved
cdywan wrote:
- I'm trying to confirm why we're still seeing the same "no templates found" errors
Apparently the pipeline no longer times out even though I can still see errors related to Azure tests. Filed #121447 to cover that.