action #96713

Slow grep in openqa-label-known-issues leads to high CPU usage

Added by tinita 6 months ago. Updated 5 months ago.

Status: Resolved
Priority: Normal
Assignee: okurz
Target version: Ready
Start date:
Due date: 2021-09-10
% Done: 0%
Estimated time:
Description

Observation

On 2021-08-10 we experienced problems on osd, such as high load, and one of the causes seemed to be slow grep commands issued by openqa-label-known-issues.

For example, this regex seemed to be problematic:

grep -qPzo '(?s)2021-.*T.*Error connecting to <root@s390p.*.suse.de>: No route to host' /tmp/tmp.gy3iDKQDrV

It sometimes ran for over 4 minutes. (Removing the (?s) made it run in only a few milliseconds on a file of about 500 kB.)
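The slowdown is plausibly explained by the (?s) modifier: it makes . match newlines, so both .* wildcards can span the entire file and grep has to backtrack across every combination of split points. Keeping the gaps line-bound avoids that. A minimal sketch, assuming GNU grep; the log content and the tightened pattern are illustrative, not the production data:

```shell
# Illustrative only: reproduce the shape of the problematic match on a tiny log.
log=$(mktemp)
printf '2021-08-10T10:17:00 Error connecting to <root@s390worker.example>: No route to host\n' > "$log"

# With the gaps kept line-bound ([^\n]* instead of (?s) plus .*) the match is
# cheap; -z still feeds the file to grep as a single NUL-delimited record.
if grep -qPzo '2021-[0-9]{2}-[0-9]{2}T[^\n]*Error connecting to <[^\n]*>: No route to host' "$log"; then
    verdict=match
else
    verdict=no-match
fi
echo "$verdict"
rm -f "$log"
```
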

We are now lowering the timeout for the post fail hook
https://gitlab.suse.de/openqa/salt-states-openqa/-/merge_requests/538

But that means the whole process can time out because of one bad regex.

Suggestions

  • Study current grep logic in openqa-label-known-issues
  • The mentioned regex comes from this issue: #93119
  • Maybe the regex can be improved
  • Add a timeout to the grep call
  • Make sure we are informed in some way of timed out grep calls so we can improve the regexes
  • Reduce limit=200 (maximum number of jobs to check)
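The timeout and reporting suggestions above could be combined into a small wrapper around each grep call. A sketch assuming GNU coreutils timeout(1); the 5-second budget, the pattern, and the log file are made up for illustration, this is not the actual openqa-label-known-issues code:

```shell
# Hypothetical wrapper: bound each search and report timed-out patterns.
logfile=$(mktemp)
printf '2021-08-10T10:00:00 Error connecting to <root@host>: No route to host\n' > "$logfile"

pattern='2021-.*T.*Error connecting'
# timeout(1) exits with status 124 when it had to kill the command.
if timeout 5 grep -qPzo "$pattern" "$logfile"; then
    result=matched
elif [ $? -eq 124 ]; then
    result=timed-out
    echo "grep timed out on pattern: $pattern" >&2   # surface it so the regex can be fixed
else
    result=no-match
fi
echo "$result"
rm -f "$logfile"
```

Reporting on stderr means the timed-out pattern ends up in the journal, which would cover the "make sure we are informed" suggestion without failing the whole run.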

Related issues

Related to openQA Infrastructure - action #96807: Web UI is slow and Apache Response Time alert got triggered (Resolved, 2021-08-12 to 2021-10-01)

Copied from openQA Infrastructure - action #96710: Error `Can't call method "write" on an undefined value` shows up in worker log leading to incompletes (Resolved, 2021-08-10 to 2021-08-31)

Copied to openQA Infrastructure - action #97943: Increase number of CPU cores on OSD VM due to high usage size:S (Resolved)

History

#1 Updated by tinita 6 months ago

  • Copied from action #96710: Error `Can't call method "write" on an undefined value` shows up in worker log leading to incompletes added

#2 Updated by mkittler 6 months ago

The SR for the timeout unfortunately needs a follow-up: https://gitlab.suse.de/openqa/salt-states-openqa/-/merge_requests/539

#3 Updated by cdywan 6 months ago

  • Description updated (diff)

#4 Updated by cdywan 6 months ago

Add a timeout to the grep call

Thinking out loud here. The script has some hard-coded regexes where grep is called individually, but results from progress are passed in one go. So I assume that, to allow a timeout per title, we'd have to add the overhead of calling grep every single time.
It might be more beneficial to join the other grep calls and speed up the whole thing instead?

#5 Updated by tinita 6 months ago

cdywan wrote:

Thinking out loud here. The script has some hard-coded regexes where grep is called individually, but results from progress are passed in one go.

Not sure what you mean.
We currently call grep for every title.

#6 Updated by cdywan 6 months ago

https://gitlab.suse.de/openqa/salt-states-openqa/-/merge_requests/541

tinita wrote:

We currently call grep for every title.

Never mind, it seems I was misreading the code. label_on_issues_from_issue_tracker loops through the issues individually.

#7 Updated by tinita 6 months ago

  • Status changed from New to In Progress
  • Assignee set to tinita

#8 Updated by tinita 6 months ago

  • Status changed from In Progress to Feedback

#9 Updated by tinita 6 months ago

The PR was merged and the scripts on osd were updated.
I'll watch the logs.

#10 Updated by tinita 6 months ago

  • Target version set to Ready

#11 Updated by cdywan 6 months ago

  • Related to action #96807: Web UI is slow and Apache Response Time alert got triggered added

#12 Updated by okurz 5 months ago

  • Due date set to 2021-09-03
  • Priority changed from Urgent to High

The high load seems to be resolved in the meantime, but the current timeout of just 5 s is rather low. Please also see my related MR https://gitlab.suse.de/openqa/salt-states-openqa/-/merge_requests/550 where I want to configure a much longer timeout along with running the scripts with nice.

#13 Updated by okurz 5 months ago

  • Assignee changed from tinita to okurz
  • Priority changed from High to Normal

discussed with tinita and cdywan, noted down one idea for improvement in #65271#note-70

will merge https://gitlab.suse.de/openqa/salt-states-openqa/-/merge_requests/550 as soon as pipelines succeed.

Will watch the logs in a few days with ssh osd 'sudo journalctl -u openqa-gru | grep timed'

#14 Updated by okurz 5 months ago

  • Due date changed from 2021-09-03 to 2021-09-10
  • Status changed from Feedback to In Progress

Since the MR was merged I don't see problems. On OSD I could confirm that grep processes sometimes run with nice level 19 as expected. Monitoring in htop I see that telegraf pops up often, but only with a nice level of 10. Mostly it's "openqa prefork" processes, so that's fine. I don't think my change had a negative impact. However, we often see all 10 cores fully booked, so we should ask for an increase in the number of cores. We have also seen CPU load alerts lately; we could consider bumping the CPU load alert limits a bit as well.

#15 Updated by okurz 5 months ago

  • Copied to action #97943: Increase number of CPU cores on OSD VM due to high usage size:S added

#16 Updated by okurz 5 months ago

  • Status changed from In Progress to Feedback

#17 Updated by okurz 5 months ago

  • Status changed from Feedback to Resolved
