* Timeslice of vCPU thread in QEMU/KVM is not stable
@ 2016-12-08 20:14 Weiwei Jia
  2016-12-09  8:07 ` Pankaj Gupta
From: Weiwei Jia @ 2016-12-08 20:14 UTC
  To: qemu-devel, mingo, efault, dmitry.adamushko, vatsa, tglx, pzijlstr
  Cc: linux-kernel, kvm

Hi everyone,

I am measuring the timeslice of a vCPU thread in QEMU/KVM. In
principle, the timeslice should be stable under the following
workload, but in my experiments it is not. I would appreciate any
suggestions. Thanks in advance.

Workload settings:
The VMM has six pCPUs: pCPU0 through pCPU5. Two KVM virtual machines
(VM1 and VM2) run on the VMM, each with five virtual CPUs (vCPU0
through vCPU4). vCPU0 of VM1 and vCPU0 of VM2 are pinned to pCPU0 and
pCPU5 respectively and are dedicated to interrupt handling. The
remaining vCPUs are pinned pairwise: vCPU1 of VM1 and vCPU1 of VM2 to
pCPU1, vCPU2 of both VMs to pCPU2, vCPU3 of both VMs to pCPU3, and
vCPU4 of both VMs to pCPU4. One CPU-intensive thread (while(1){i++})
runs on each vCPU in VM1 and VM2 so that no vCPU ever goes idle. In
VM1, I additionally start one I/O thread on vCPU2, which reads 4 KB
from disk per request (8 GB in total). The I/O scheduler in VM1 and
VM2 is noop; the I/O scheduler in the VMM is CFQ. The I/O worker
threads started by QEMU are pinned to pCPU5, and all threads use the
CFS scheduling class.

Scheduler tunables in VM1 and VM2:
  /proc/sys/kernel/sched_min_granularity_ns    = 100 microseconds
  /proc/sys/kernel/sched_latency_ns            = 100 microseconds
  /proc/sys/kernel/sched_wakeup_granularity_ns = 0

Scheduler tunables in the VMM:
  /proc/sys/kernel/sched_min_granularity_ns    = 2.25 milliseconds
  /proc/sys/kernel/sched_latency_ns            = 18 milliseconds
  /proc/sys/kernel/sched_wakeup_granularity_ns = 0
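
For concreteness, below is a minimal sketch of the two guest-side load
generators, assuming pthreads, sched_setaffinity() for placement, and
a hypothetical test disk at /dev/vdb (the actual test programs are not
part of this thread):

  /* Hypothetical reconstruction of the guest-side load generators.
   * Build with: gcc -O2 -pthread loadgen.c -o loadgen */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <pthread.h>
  #include <sched.h>
  #include <unistd.h>

  static void pin_to_cpu(int cpu)    /* pin calling thread to one vCPU */
  {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(cpu, &set);
          sched_setaffinity(0, sizeof(set), &set);
  }

  static void *cpu_hog(void *arg)    /* the while(1){i++} thread */
  {
          volatile unsigned long i = 0;
          pin_to_cpu((int)(long)arg);
          for (;;)
                  i++;
  }

  static void *io_reader(void *arg)  /* 4 KB per read, 8 GB in total */
  {
          char buf[4096];
          long long done = 0, total = 8LL << 30;
          int fd = open("/dev/vdb", O_RDONLY); /* hypothetical disk */

          pin_to_cpu(2);                       /* runs on vCPU2 of VM1 */
          while (done < total && read(fd, buf, sizeof(buf)) > 0)
                  done += sizeof(buf);
          close(fd);
          return NULL;
  }

  int main(void)
  {
          pthread_t hog, rd;
          for (long c = 0; c <= 4; c++)  /* one hog per vCPU0..vCPU4 */
                  pthread_create(&hog, NULL, cpu_hog, (void *)c);
          pthread_create(&rd, NULL, io_reader, NULL);
          pthread_join(rd, NULL);   /* hogs keep spinning afterwards */
          for (;;)
                  pause();
  }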

Linux Kernel version for VMM is: 3.16.39
Linux Kernel version for VM1 and VM2 is: 4.7.4
QEMU emulator version is: 2.0.0

With these settings I measure the timeslice of the vCPU2 thread of
VM1 in the VMM, and the experiment shows that it is not stable. I
also find that once the I/O thread on vCPU2 in VM1 finishes, the
timeslice of the vCPU2 thread of VM1 becomes stable again. This
suggests that the unstable timeslice is caused by the I/O thread
inside VM1. However, I would not expect that I/O thread to affect the
timeslice, since every vCPU in VM1 and VM2 already runs one
CPU-intensive thread (while(1){i++}). Please give me some suggestions
if you have any. Thank you.

Best,
Harry


* Re: Timeslice of vCPU thread in QEMU/KVM is not stable
  2016-12-08 20:14 Timeslice of vCPU thread in QEMU/KVM is not stable Weiwei Jia
@ 2016-12-09  8:07 ` Pankaj Gupta
  2016-12-09 13:00   ` Weiwei Jia
From: Pankaj Gupta @ 2016-12-09  8:07 UTC
  To: Weiwei Jia
  Cc: qemu-devel, mingo, efault, dmitry.adamushko, vatsa, tglx,
	pzijlstr, linux-kernel, kvm

Hello,

> I am measuring the timeslice of a vCPU thread in QEMU/KVM. In
> principle, the timeslice should be stable under the following
> workload, but in my experiments it is not.
>
> [...]
>
> With these settings I measure the timeslice of the vCPU2 thread of
> VM1 in the VMM, and the experiment shows that it is not stable. I
> also find that once the I/O thread on vCPU2 in VM1 finishes, the
> timeslice of the vCPU2 thread of VM1 becomes stable again. This
> suggests that the unstable timeslice is caused by the I/O thread
> inside VM1. However, I would not expect that I/O thread to affect the
> timeslice, since every vCPU in VM1 and VM2 already runs one
> CPU-intensive thread (while(1){i++}). Please give me some suggestions
> if you have any. Thank you.

I think you need to check what else is scheduled on pCPU2 (the
physical CPU). If you want to prevent any other task from being
scheduled on pCPU2, you need to isolate that core so that the
scheduler cannot run anything else on it.
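
For example (assuming the host kernel supports the isolcpus boot
parameter; the exact bootloader syntax varies), core 2 can be isolated
at boot and the vCPU thread then pinned onto it explicitly:

  isolcpus=2            (added to the host kernel command line)
  taskset -pc 2 <tid>   (pin the vCPU2 thread, by its TID, to core 2)

Here <tid> stands for the thread ID of the QEMU vCPU2 thread; with
isolcpus the scheduler places no other tasks on that core.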


* Re: Timeslice of vCPU thread in QEMU/KVM is not stable
  2016-12-09  8:07 ` Pankaj Gupta
@ 2016-12-09 13:00   ` Weiwei Jia
From: Weiwei Jia @ 2016-12-09 13:00 UTC
  To: Pankaj Gupta
  Cc: qemu-devel, mingo, efault, dmitry.adamushko, vatsa, tglx,
	pzijlstr, linux-kernel, kvm

Hi Pankaj Gupta,

Thanks for your reply. I found the problem after debugging the Linux
kernel. Once the I/O thread runs on vCPU2 of VM1, it generates
mutex-related (synchronization) wakeups, which allow the vCPU2 thread
of VM2 to preempt the vCPU2 thread of VM1. After I set
"/proc/sys/kernel/sched_wakeup_granularity_ns" back to its default
value (3 milliseconds), the timeslice is stable again even with the
I/O thread running on vCPU2 of VM1. In other words, with a wakeup
granularity of 3 milliseconds, the vCPU2 thread of VM2 can no longer
preempt the vCPU2 thread of VM1 on every wakeup.
Thank you again :)

Best Regards,
Harry

On Fri, Dec 9, 2016 at 3:07 AM, Pankaj Gupta <pagupta@redhat.com> wrote:
> Hello,
>
> [...]
>
> I think you need to check what else is scheduled on pCPU2 (the
> physical CPU). If you want to prevent any other task from being
> scheduled on pCPU2, you need to isolate that core so that the
> scheduler cannot run anything else on it.

