coordination #99579
closed
coordination #80142: [saga][epic] Scale out: Redundant/load-balancing deployments of openQA, easy containers, containers on kubernetes
[epic][retro] Follow-up to "Published QCOW images appear to be uncompressed"
Added by okurz about 3 years ago.
Updated almost 2 years ago.
Description
Motivation¶
In #99246 mdoucha thankfully identified a big performance regression caused by https://github.com/os-autoinst/os-autoinst/pull/1699/commits/eb207de0a372d832a60a081dd08dc674c90ef950 . After that very specific bug report we could deploy a fix to openqa.suse.de within 2 hours, so very quickly. But before that we had nearly two months of vague issues: user reports about reduced performance and multiple alerts related to high CPU time, high I/O pressure, long test runtimes and long test schedule queues.
For example:
Acceptance criteria¶
- AC1: A Five-Whys analysis has been conducted and results documented
- AC2: Improvements are planned
Suggestions¶
- Bring up in retro
- Conduct "Five-Whys" analysis for the topic
- Identify follow-up tasks in tickets
Five Whys¶
- Why did we not prevent the merge of the PR?
- Why could we not link problems to the code change immediately after deployment?
- Monitor mean value of asset sizes
- Monitor mean value of job completion times
- Deploy synchronously after every merged pull request to make alerts stricter and be more likely to link alerts to the smaller deployed changes
- Why did we not link the I/O alerts to the deployed change?
- We were distracted by multiple network problems and increased load due to crashing workers, hence:
- Deploying more often should help
- Network performance monitoring
- Have a "test-openQA-job" that we use as reference, e.g. the openQA-in-openQA tests or the os-autoinst full-stack test but check their runtime
- Why could we not link multiple user reports to the alerts (mentioned above)?
- User reports did not tell us more than what we should already have seen in the monitoring data, but they confirmed the presence of the issue, which we should have linked. Having tickets for such reports helps with that.
- Look up whether the alert levels for "Disk I/O time for /dev/vdc (/assets)" have been bumped to very high numbers for good reason, and lower them if possible. The GitHub PR was likely deployed on 2021-08-04, where we saw a first spike. There was an alert at "2021-08-04 13:01:35" (only for a short time).
- Why did we not look into the I/O alerts in more detail?
- Copied from action #99246: Published QCOW images appear to be uncompressed added
- Tracker changed from action to coordination
- Subject changed from [retro] Follow-up to "Published QCOW images appear to be uncompressed" to [epic][retro] Follow-up to "Published QCOW images appear to be uncompressed"
- Parent task set to #64746
Conducted five-why analysis during retro with the team. Thank you team!
- Related to action #96557: jobs run into MAX_SETUP_TIME, one hour between 'Downloading' and 'Download processed' and no useful output in between auto_review:"timeout: setup exceeded MAX_SETUP_TIME":retry added
okurz wrote:
Five Whys¶
- Why did we not prevent the merge of the PR?
I'm going to give you an answer which you won't like: The code review as currently practiced by the Tools team is a farce that only wastes everyone's time. You keep bikeshedding complete nonsense like comment wording while paying zero attention to potential bugs. And worse, some of the reviewers even prefer to needlessly increase the risk of errors in order to make the code "pretty". (I could link specific examples but I'll refrain from doing so to protect the guilty. You know who you are.)
The PR that introduced the bug was adding a new parameter to two existing functions. The logical thing to do during code review is to grep the codebase for any calls of those two functions that were not updated in the PR. The bug slipped through because nobody bothered to do that trivial check.
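The "trivial check" described here can be sketched as a one-liner a reviewer runs locally. The function name `compress_image` and the file layout below are made up purely for illustration; in practice you would grep the actual repository (e.g. with `git grep`) for each function whose signature the PR changed:

```shell
# Hypothetical demo: the PR changed the signature of compress_image().
# Set up two fake source files, one updated call site and one stale one.
mkdir -p demo
printf 'my $ok = compress_image($path, $new_flag);\n' > demo/updated.pm
printf 'my $old = compress_image($path);\n' > demo/stale.pm

# List every call site so the reviewer can confirm each one was touched
# by the PR; any hit outside the diff is a red flag.
grep -rn 'compress_image(' demo
```

In a real repository `git grep -n 'compress_image('` does the same job and respects `.gitignore`, which is usually what you want during review.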
MDoucha wrote:
okurz wrote:
Five Whys¶
- Why did we not prevent the merge of the PR?
I'm going to give you an answer which you won't like: The code review as currently practiced by the Tools team is a farce that only wastes everyone's time. You keep bikeshedding complete nonsense like comment wording while paying zero attention to potential bugs. And worse, some of the reviewers even prefer to needlessly increase the risk of errors in order to make the code "pretty".
Well, I don't dislike your comment, because you are often right, here as well. Most people would have phrased it differently, but what you describe can indeed happen. However, it is not an actionable suggestion we can follow up on. So unless you have a concrete suggestion I will just read it as "Be more diligent in code review".
(I could link specific examples but I'll refrain from doing so to protect the guilty. You know who you are.)
Well, I know that you mean me and I am not afraid :D
The PR that introduced the bug was adding a new parameter to two existing functions. The logical thing to do during code review is to grep the codebase for any calls of those two functions that were not updated in the PR. The bug slipped through because nobody bothered to do that trivial check.
Correct. Hence we are introducing signatures, alongside all the other suggestions. My experience was confirmed, although I forgot about it during review: a commit message starting with the word "Fix …" is often a cause of regressions, so we should be even more careful with those ;)
- Status changed from New to Workable
We consider it "Workable" as an epic, meaning that we can define further specific subtasks while others work on the existing subtasks.
okurz wrote:
MDoucha wrote:
I'm going to give you an answer which you won't like: The code review as currently practiced by the Tools team is a farce that only wastes everyone's time. You keep bikeshedding complete nonsense like comment wording while paying zero attention to potential bugs. And worse, some of the reviewers even prefer to needlessly increase the risk of errors in order to make the code "pretty".
Well, I don't dislike your comment, because you are often right, here as well. Most people would have phrased it differently, but what you describe can indeed happen. However, it is not an actionable suggestion we can follow up on. So unless you have a concrete suggestion I will just read it as "Be more diligent in code review".
I can be more specific and give you a code review checklist:
- How does this code interact with the rest of the codebase? What can break if this gets merged?
- Under which conditions will this code produce errors? Can any of those errors be avoided?
- Are there any structural issues, e.g. copy-pasted code, inefficient loops or function call patterns that would cause performance issues?
- Are there any obvious future changes to this code? Can it be improved now to make those future changes easier?
There's also the employee exchange program so people from your team could join Kernel QA for a week or two to learn how to do code review better. Or you can just read the LTP mailing list archives:
https://lists.linux.it/pipermail/ltp/
(I could link specific examples but I'll refrain from doing so to protect the guilty. You know who you are.)
Well, I know that you mean me and I am not afraid :D
No, I don't mean you.
And worse, some of the reviewers even prefer to needlessly increase the risk of errors in order to make the code "pretty".
I cannot imagine an example here, and if no one else can, this is pretty useless.
I can only speak for myself, if it's me, please give examples. Otherwise, maybe get in touch with the person(s) in question directly.
We want to improve.
Thanks.
tinita wrote:
And worse, some of the reviewers even prefer to needlessly increase the risk of errors in order to make the code "pretty".
I cannot imagine an example here, and if no one else can, this is pretty useless.
I can only speak for myself, if it's me, please give examples. Otherwise, maybe get in touch with the person(s) in question directly.
We want to improve.
Thanks.
If it were you, you'd remember a long argument with me about why a particular suggestion is wrong. I will at least describe the main example I had in mind when I wrote the sentence above:
The code is updating a file and needs to avoid corrupting it if the update fails halfway through for whatever reason. Solution: write to a temporary file and rename it. But the reviewer argued that rename() with manual cleanup in case of failure is ugly, and insisted on using Mojo::File::tempfile and renaming it using move_to(), which is not guaranteed to be atomic. (In addition to needing an unnecessary chmod() call, because the tempfile is always created with 0600 access permissions.)
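The safe-update pattern MDoucha argues for (the original discussion is about Perl; this is a sketch of the same idea in Python) writes to a temporary file in the same directory and then renames it over the target, so the old content survives any mid-write failure. `os.replace` is atomic on POSIX filesystems when source and destination are on the same filesystem:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Replace the file at `path` without ever exposing a partially
    written file: write to a temp file in the same directory, flush to
    disk, then atomically rename it over the target."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Same directory => same filesystem => the rename below is atomic.
    # Note: mkstemp creates the file with 0600 permissions, the same
    # caveat mentioned above; chmod here if the target needs otherwise.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_path, path)  # atomic on POSIX
    except BaseException:
        os.unlink(tmp_path)  # manual cleanup on failure, as argued above
        raise

atomic_write("example.conf", b"key=value\n")
```

If the process dies anywhere before `os.replace`, the original file is untouched; at worst a stray temp file remains, which is exactly the "manual cleanup" trade-off debated in this thread.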
MDoucha wrote:
tinita wrote:
And worse, some of the reviewers even prefer to needlessly increase the risk of errors in order to make the code "pretty".
I cannot imagine an example here, and if no one else can, this is pretty useless.
I can only speak for myself, if it's me, please give examples. Otherwise, maybe get in touch with the person(s) in question directly.
We want to improve.
Thanks.
If it were you, you'd remember a long argument with me about why a particular suggestion is wrong. I will at least describe the main example I had in mind when I wrote the sentence above:
The code is updating a file and needs to avoid corrupting it if the update fails halfway through for whatever reason. Solution: write to a temporary file and rename it. But the reviewer argued that rename() with manual cleanup in case of failure is ugly, and insisted on using Mojo::File::tempfile and renaming it using move_to(), which is not guaranteed to be atomic. (In addition to needing an unnecessary chmod() call, because the tempfile is always created with 0600 access permissions.)
I guess you mean https://github.com/os-autoinst/openQA/pull/3144#pullrequestreview-424274015 . I don't read the argument in there as "increase the risk of errors" but rather as avoiding overly defensive programming.
By the way, I like the code review checklist you posted above.
- Status changed from Workable to Blocked
- Assignee set to okurz
More subtasks could be defined, but right now we have enough in the backlog. I can track this based on what we have already defined, at least until the currently open subtasks are done.
- Parent task changed from #64746 to #80142
Subtasks are actively being worked on with low priority; no problems.
- Status changed from Blocked to Resolved
#99663 was fixed, which I consider the most critical part. With openQA we can continue improving as we go; I wouldn't restrict our work that strictly.