action #62048

monitor incompletes

Added by okurz 8 months ago. Updated 8 months ago.



Last Wednesday after deployment we had, as multiple times in the past, many incompletes, and we did not learn about this until our users told us. I think our "time to recovery" was good, but the "time to detection" is something I would like to improve. I am thinking of monitoring and alerting on incompletes, more specifically on the "incompletion rate", which is a little more complex. We discussed having a mojo command that runs constantly to spit out the monitoring data we're interested in. It can be deployed by salt so that it stays in sync with our telegraf config, but it can also be run in polling fashion via the exec input. It remains to be seen how expensive the startup and DB connection are if done every 5s, but possibly it does not need to run every 5s to be useful.
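As a rough illustration of the polling idea, a small script could compute the incompletion rate over recent jobs and print it in InfluxDB line protocol, which is what telegraf's exec input consumes. This is only a sketch: the measurement and field names are made up, and a real version would query the openQA database instead of the in-memory sample used here.

```python
# Hypothetical sketch: compute an "incompletion rate" from recent job results
# and emit it in InfluxDB line protocol for telegraf's exec input.
# A real implementation would fetch `results` from the openQA database.

def incompletion_rate(results):
    """Fraction of finished jobs whose result was 'incomplete'."""
    finished = len(results)
    if finished == 0:
        return 0.0
    incompletes = sum(1 for r in results if r == "incomplete")
    return incompletes / finished

def line_protocol(rate, count):
    # Measurement and field names are invented for illustration only.
    return f"openqa_incompletes incompletion_rate={rate},incomplete_count={count}i"

# Sample data standing in for a DB query over the last N finished jobs.
recent = ["passed", "incomplete", "failed", "passed", "incomplete"]
rate = incompletion_rate(recent)
print(line_protocol(rate, sum(1 for r in recent if r == "incomplete")))
# prints: openqa_incompletes incompletion_rate=0.4,incomplete_count=2i
```

Run every few seconds as a short-lived process this pays the startup and DB-connect cost each time, which is the concern mentioned above; a long-running command printing periodically would avoid that.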


#1 Updated by okurz 8 months ago

  • Status changed from New to In Progress
  • Assignee set to okurz
  • Target version set to Current Sprint

#2 Updated by okurz 8 months ago

merged and active.

I added two panels, "Incomplete jobs of last 24h" and "New incompletes", with an alert.

Maybe we need to save the complete dashboard in the salt repo as well.

#3 Updated by okurz 8 months ago

  • Status changed from In Progress to Feedback

Yesterday there were many incompletes due to #62237, and initially our monitoring didn't alert because apparently I had applied too much smoothing in the alert condition. I changed this, and subsequently the alarms for both the number of incompletes and the rate triggered successfully. These changes are prepared to be persistent, and we also add dashboards that were so far not stored in git.
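The smoothing problem above can be demonstrated in a few lines (this is an illustration, not the actual alert configuration): averaging a spiky metric over too wide a window keeps the smoothed value below the alert threshold even while the raw value is far above it, so the alert fires late or not at all.

```python
# Illustration of over-smoothing an alert condition: a trailing moving
# average with too wide a window hides a genuine burst of incompletes.

def moving_avg(series, window):
    """Trailing moving average over the last `window` samples."""
    return [
        sum(series[max(0, i - window + 1):i + 1])
        / len(series[max(0, i - window + 1):i + 1])
        for i in range(len(series))
    ]

# 10 quiet samples, then a sudden burst of incompletes.
raw = [0] * 10 + [20, 25, 30]
threshold = 10  # made-up alert threshold

wide = moving_avg(raw, 10)   # too much smoothing
narrow = moving_avg(raw, 3)  # reacts within a few samples

print(any(v > threshold for v in wide[-3:]))    # prints: False (alert never fires)
print(any(v > threshold for v in narrow[-3:]))  # prints: True  (alert fires)
```

The fix described in this note amounts to reducing the smoothing so the averaged series crosses the threshold while the burst is still ongoing.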

#4 Updated by okurz 8 months ago

  • Due date set to 2020-01-22

#5 Updated by okurz 8 months ago

  • Status changed from Feedback to Resolved
  • Target version changed from Current Sprint to Done

Merged. The dashboard, including the alerts for incompletes, now lives in
