action #164412
jenkins.qa.suse.de fails to run its jobs (closed)
Description
jenkins.qa.suse.de is used to schedule the tests for e.g. the GNOME:Next media:
http://jenkins.qa.suse.de/job/gnome_next-openqa/
There seem to be two stacked problems:
- For a while, the jobs did not auto-fire and only executed on manual runs (they did not detect new ISOs to post).
- Even worse: since Jul 23, all runs of the jobs fail with
FATAL: command execution failed
java.io.IOException: error=0, Failed to exec spawn helper: pid: 15861, exit value: 1
    at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
    at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:295)
    at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:225)
    at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1126)
Caused: java.io.IOException: Cannot run program "/bin/sh" (in directory "/var/lib/jenkins/workspace/gnome_next-openqa"): error=0, Failed to exec spawn helper: pid: 15861, exit value: 1
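
Note: this "Failed to exec spawn helper" error typically shows up on recent JDKs, which launch child processes via posix_spawn plus a jspawnhelper binary shipped with the JDK, when the JDK on disk was updated in place while the JVM kept running: the helper no longer matches the running JVM and every process launch fails until the JVM is restarted. A minimal diagnostic sketch, assuming java is at /usr/bin/java and Jenkins runs from jenkins.war (both paths are assumptions, not from this ticket):

# Compare when the Jenkins JVM started with when jspawnhelper on disk
# was last replaced; a helper newer than the JVM points at this issue.
JENKINS_PID=$(pgrep -o -f jenkins.war)
ps -o lstart= -p "$JENKINS_PID"
JAVA_HOME=$(dirname "$(dirname "$(readlink -f /usr/bin/java)")")
stat -c %y "$JAVA_HOME/lib/jspawnhelper"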
Updated by okurz 5 months ago
- Status changed from In Progress to Resolved
From http://jenkins.qa.suse.de/view/all/builds it looks like all builds are affected in the same way. I applied an unrelated update to the Jenkins plugins and restarted the Jenkins instance. Now http://jenkins.qa.suse.de/job/openqa-scripts_deploy/ passed a build. I triggered multiple builds; http://jenkins.qa.suse.de/job/gnome_next-openqa/8935/console now passed, which triggered https://openqa.opensuse.org/tests/4357179 . I also triggered other jobs. Thank you for reporting the problem. Without your report I would probably only have realized after some months that there are no more submissions of os-autoinst packages to Fctry ;)
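
For reference, a sketch of the kind of restart and re-trigger described above, assuming Jenkins runs as a systemd service named jenkins (the service name is an assumption) and using the Jenkins remote build API with a placeholder user and API token:

# Restart the JVM so process launching works again after the update
sudo systemctl restart jenkins
# Re-trigger one of the affected jobs to verify
curl -X POST -u okurz:APITOKEN http://jenkins.qa.suse.de/job/gnome_next-openqa/build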
Updated by okurz 5 months ago
echo "test" | mailx -s "test from jenkins" okurz@suse.de
works fine; I received the email. I put an exit 1 into http://jenkins.qa.suse.de/job/gnome_next-openqa/ and triggered it. That job does not have email notifications configured, so as expected no message was sent. We do receive emails to osd-admins@suse.de though, and I even reacted to one of them but assumed the error would just be the very same SUSE IT storage problem: https://mailman.suse.de/mlarch/SuSE/osd-admins/2024/osd-admins.2024.07/msg00166.html . I guess with everything back to green we will do better next time :)
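
Since mailx demonstrably works from the Jenkins host, a hedged sketch of a failure notification that could be added to the job's shell step, so that the next breakage mails osd-admins@suse.de directly (build_cmd is a hypothetical placeholder for the job's real command; the job currently has no such step):

build_cmd || {
    echo "gnome_next-openqa failed, see http://jenkins.qa.suse.de/job/gnome_next-openqa/" |
        mailx -s "[jenkins] gnome_next-openqa failed" osd-admins@suse.de
    exit 1
}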