isotovideo: backend takes 100% of CPU when driving svirt job
isotovideo: the backend takes 100% of the CPU when it drives an svirt job. It does not when it drives a qemu job.
#3 Updated by dasantiago about 2 years ago
On both Xen HVM and Hyper-V it happens at the end of bootloader_svirt / at the beginning of bootloader_uefi, i.e. on the boundary where the switch happens.
Then it looks like it's caused by the polling of the serial console... Don't you agree? Or does the CPU usage not stabilize after that?
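For context (a generic illustration, not the actual backend code, which is Perl): polling a non-blocking file descriptor in a tight loop burns a full core, whereas parking in select() until data arrives costs almost nothing. A minimal Python sketch using a socket pair as a stand-in for the serial-console connection:

```python
import select
import socket
import time

# A connected socket pair stands in for the serial-console connection.
reader, writer = socket.socketpair()
reader.setblocking(False)

# Busy polling: retry immediately after "would block", so the loop spins
# and the process sits at 100% CPU until data shows up.
def busy_poll(deadline):
    while time.monotonic() < deadline:
        try:
            return reader.recv(4096)
        except BlockingIOError:
            continue  # nothing yet; loop again right away
    return b""

# Event-driven waiting: select() sleeps in the kernel until the socket
# is actually readable, so the CPU stays idle in the meantime.
def wait_with_select(timeout):
    ready, _, _ = select.select([reader], [], [], timeout)
    return reader.recv(4096) if ready else b""

writer.send(b"login: ")
print(wait_with_select(1.0))  # b'login: ' — returned without spinning
```

The observed 100% CPU is consistent with the first pattern; the fix discussed below moves the code toward the second.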
#5 Updated by michalnowak almost 2 years ago
Perhaps the 100% CPU utilization undermines the shared belief that two svirt workers can replace one qemu worker? It should still be true for disk IO, but CPU time is probably affected greatly. Also, running more than two svirt jobs on a laptop makes the fan go crazy.
- Status changed from New to In Progress
- Assignee set to mkittler
- Target version changed from Ready to Current Sprint
@dasantiago is right. I've added some debug printing to the relevant functions in baseclass.pm to confirm the theory. There is also already a related warning visible in the log:
Calling Net::SSH2::Channel::readline in non-blocking mode is usually a programming error at /hdd/openqa-devel/repos/os-autoinst/backend/baseclass.pm line 1225.
It likely can't be made blocking without impairing the backend's responsiveness. I have to dig into the backend code to find a solution. It might not be trivial.
The code actually uses IO::Select to only read from the SSH channel when the underlying socket is ready to read. But apparently that's not sufficient: the socket appears to be always ready to read although reading from the SSH channel mostly results in the error "operation would block".
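That "operation would block" result is not necessarily fatal: with an encrypted transport, bytes on the raw socket (e.g. protocol packets) don't guarantee decrypted channel payload, so the read attempt can come up empty even after select() reported readiness. A hedged, generic Python sketch of the defensive pattern (a non-blocking socket stands in for the SSH channel; the actual backend uses Perl's Net::SSH2):

```python
import socket

# Hypothetical stand-in for the SSH channel: a plain non-blocking socket.
chan, peer = socket.socketpair()
chan.setblocking(False)

def try_read(sock, size=4096):
    """Attempt a non-blocking read; treat 'would block' as 'no payload
    yet' rather than an error, i.e. a spurious wakeup that should send
    us back to the select loop instead of aborting."""
    try:
        return sock.recv(size)
    except BlockingIOError:
        return None  # nothing decoded yet: return to the event loop

print(try_read(chan))         # None — nothing to read yet
peer.send(b"linux-host:~ # ")
print(try_read(chan))         # b'linux-host:~ # '
```

The key point is that the loop must go back to sleeping in select() after a would-block result, rather than retrying immediately.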
I changed the code from reading line by line to using Net::SSH2::Channel::read2 so the extended data would be consumed as well. However, that doesn't change a thing.
So I'm not sure how to integrate Net::SSH2::Channel into our async processing.
Apparently the SSH socket was just passed to the write FDs for IO::Select. This PR attempts to fix it: https://github.com/os-autoinst/os-autoinst/pull/1239
It actually decreases the CPU usage to almost nothing. However, it seems to break other things (or maybe it's just my local setup).
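Why passing the socket to the write FD set causes the spin (a generic Python illustration, assuming the same select() semantics as Perl's IO::Select): an idle socket with free buffer space is reported writable on every call, so select() returns immediately and the surrounding loop runs at full speed. With the socket only in the read set, select() actually sleeps until data arrives.

```python
import select
import socket

sock, peer = socket.socketpair()

# Socket in the WRITE set: an idle socket with free buffer space is
# always "ready to write", so select() returns immediately and a loop
# built around this call busy-spins.
_, writable, _ = select.select([], [sock], [], 5.0)
print(bool(writable))  # True — returned right away despite the 5 s timeout

# Socket only in the READ set: select() sleeps until the peer sends
# something or the timeout expires.
readable, _, _ = select.select([sock], [], [], 0.1)
print(bool(readable))  # False — timed out, no data pending

peer.send(b"serial output\n")
readable, _, _ = select.select([sock], [], [], 5.0)
print(bool(readable))  # True — woken up by actual data
```

This matches the fix in the linked PR: once the socket is no longer watched for writability, the backend sleeps instead of spinning.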
- Status changed from Feedback to Resolved
- Target version changed from Current Sprint to Done
I've just had a look at the CPU usage on openqaworker2. It runs a few svirt jobs but none of the cores is constantly busy.
The change likely caused a regression. There's another ticket for it so I'll close this one.