From: Weiwei Jia <harrynjit@gmail.com>
To: Pankaj Gupta <pagupta@redhat.com>
Cc: qemu-devel@nongnu.org, mingo@redhat.com, efault@gmx.de,
	dmitry adamushko <dmitry.adamushko@gmail.com>,
	vatsa@linux.vnet.ibm.com, tglx@linutronix.de,
	pzijlstr@redhat.com, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org
Subject: Re: Timeslice of vCPU thread in QEMU/KVM is not stable
Date: Fri, 9 Dec 2016 08:00:25 -0500	[thread overview]
Message-ID: <CA+scX6mFHMwmFDFoStvghHf8piHB4LV+Zq7CiiuzRWUoSTKESA@mail.gmail.com> (raw)
In-Reply-To: <1560317871.2551833.1481270875346.JavaMail.zimbra@redhat.com>

Hi Pankaj Gupta,

Thanks for your reply. I found the problem after debugging the Linux
kernel. Once an I/O thread runs on vCPU2 of VM1, mutex
(synchronization) wakeups are produced, so the vCPU2 thread of VM1
gets preempted by the vCPU2 thread of VM2. After I set
"/proc/sys/kernel/sched_wakeup_granularity_ns" back to the default
value (3 milliseconds), the timeslice is stable again even with the
I/O thread running on vCPU2 of VM1. That is, the vCPU2 thread of VM2
can no longer preempt the vCPU2 thread of VM1 because
"/proc/sys/kernel/sched_wakeup_granularity_ns" is 3 milliseconds.
Thank you again :)

Best Regards,
Harry

On Fri, Dec 9, 2016 at 3:07 AM, Pankaj Gupta <pagupta@redhat.com> wrote:
> Hello,
>
>>
>> Hi everyone,
>>
>> I am testing the timeslice of a vCPU thread in QEMU/KVM. In
>> principle, the timeslice should be stable under the following
>> workload, but my experiments show that it is unstable. I would
>> appreciate any suggestions. Thanks in advance.
>>
>> Workload settings:
>> The VMM has 6 pCPUs: pCPU0 through pCPU5. Two kernel virtual
>> machines (VM1 and VM2) run on the VMM. Each VM has 5 virtual CPUs
>> (vCPU0, vCPU1, vCPU2, vCPU3, vCPU4). vCPU0 of VM1 and vCPU0 of VM2
>> are pinned to pCPU0 and pCPU5 respectively, dedicated to handling
>> interrupts. vCPU1 of VM1 and vCPU1 of VM2 are pinned to pCPU1;
>> vCPU2 of VM1 and vCPU2 of VM2 are pinned to pCPU2; vCPU3 of VM1 and
>> vCPU3 of VM2 are pinned to pCPU3; vCPU4 of VM1 and vCPU4 of VM2 are
>> pinned to pCPU4. One CPU-intensive thread (while(1){i++}) runs on
>> each vCPU in VM1 and VM2 to keep the vCPU from going idle. In VM1,
>> I start one I/O thread on vCPU2, which reads 4KB from disk per
>> request (8GB in total). The I/O scheduler in VM1 and VM2 is noop;
>> the I/O scheduler in the VMM is CFQ. In VM1 and VM2,
>> "/proc/sys/kernel/sched_min_granularity_ns" and
>> "/proc/sys/kernel/sched_latency_ns" are both set to 100
>> microseconds, and "/proc/sys/kernel/sched_wakeup_granularity_ns" is
>> set to 0. In the VMM, "/proc/sys/kernel/sched_min_granularity_ns"
>> is set to 2.25 milliseconds, "/proc/sys/kernel/sched_latency_ns" to
>> 18 milliseconds, and "/proc/sys/kernel/sched_wakeup_granularity_ns"
>> to 0. I also pinned the I/O worker threads started by QEMU to
>> pCPU5. The scheduling class I use is CFS.
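[For reference, the pinning layout described above could be sketched
roughly as below. The TID variables are placeholders: on a real host
the vCPU thread IDs come from QEMU's "info cpus" monitor command or
from /proc/<qemu-pid>/task/, so treat this as an illustrative fragment
rather than a drop-in script.]

```shell
# Sketch of the vCPU-to-pCPU pinning described above.
# taskset -pc <cpu> <tid> sets the CPU affinity of a running thread.
# All TID variables below are hypothetical placeholders.

taskset -pc 0 "$VM1_VCPU0_TID"   # VM1 vCPU0 -> pCPU0 (interrupt handling)
taskset -pc 5 "$VM2_VCPU0_TID"   # VM2 vCPU0 -> pCPU5 (interrupt handling)

taskset -pc 1 "$VM1_VCPU1_TID"; taskset -pc 1 "$VM2_VCPU1_TID"
taskset -pc 2 "$VM1_VCPU2_TID"; taskset -pc 2 "$VM2_VCPU2_TID"
taskset -pc 3 "$VM1_VCPU3_TID"; taskset -pc 3 "$VM2_VCPU3_TID"
taskset -pc 4 "$VM1_VCPU4_TID"; taskset -pc 4 "$VM2_VCPU4_TID"
```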
>>
>> Linux Kernel version for VMM is: 3.16.39
>> Linux Kernel version for VM1 and VM2 is: 4.7.4
>> QEMU emulator version is: 2.0.0
>>
>> I measured the timeslice of VM1's vCPU2 thread in the VMM under the
>> above workload settings, and the experiment shows that the
>> timeslice is not stable. I also find that once the I/O thread on
>> vCPU2 in VM1 finishes, the timeslice of VM1's vCPU2 thread becomes
>> stable again. From the experiment, it seems the unstable timeslice
>> of VM1's vCPU2 thread is caused by the I/O thread running on it
>> inside VM1. However, I think that I/O thread should not affect the
>> timeslice, since each vCPU in VM1 and VM2 already runs one
>> CPU-intensive thread (while(1){i++}). Please give me some
>> suggestions if you have any. Thank you.
>
> I think you need to check what else is being scheduled on pCPU2 (the
> physical CPU). If you want to prevent any other task from being
> scheduled on pCPU2, you need to isolate that core so the scheduler
> does not run anything else on it.
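
[One way to carry out this suggestion (a sketch, not the only option)
is to isolate the core at boot and then place only the vCPU thread on
it. Both steps require root, and the TID below is a placeholder.]

```shell
# Sketch: isolate pCPU2 from the general scheduler, then place only
# the vCPU thread there.

# 1. Boot the host with pCPU2 removed from the scheduler's default
#    set, e.g. by adding this to the kernel command line:
#      isolcpus=2

# 2. After boot, explicitly move the vCPU thread onto the isolated
#    core (TID is a hypothetical placeholder):
taskset -pc 2 "$VM1_VCPU2_TID"
```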
>
>>
>> Best,
>> Harry


Thread overview: 3 messages
2016-12-08 20:14 Timeslice of vCPU thread in QEMU/KVM is not stable Weiwei Jia
2016-12-09  8:07 ` Pankaj Gupta
2016-12-09 13:00   ` Weiwei Jia [this message]
