* xen-netback in linux 3.12
From: xu cong @ 2014-01-17 21:30 UTC
  To: xen-devel; +Cc: wei.liu2, ian.campbell


Hi all,

I noticed that xen-netback has changed a lot in recent kernels. The shared
xen-netback threads in the driver domain have been replaced by a per-vif
kernel thread. On my platform, the I/O throughput and scalability of the
per-vif netback are better than with the previous implementation. Another
advantage is that I can use cgroups to control the fair CPU share among
all VMs in the driver domain (I group the netback thread and the blkback
thread for each VM; a sketch follows below). Was there any other
motivation for this change? Thanks.
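
Roughly what I do, as a sketch only: the cgroup-v1 path and the thread IDs
below are placeholders, and in practice I first look up the PIDs of the
per-VM vifN.M and blkback kernel threads (e.g. via ps):

#include <stdio.h>
#include <sys/types.h>

/* Move one kernel thread into a VM's cpu cgroup (cgroup v1 layout). */
static int move_thread(const char *tasks_file, pid_t tid)
{
        FILE *f = fopen(tasks_file, "a");

        if (!f)
                return -1;
        fprintf(f, "%d\n", tid);        /* the kernel migrates the thread */
        return fclose(f);
}

int main(void)
{
        /* placeholder TIDs for one guest's netback and blkback threads */
        pid_t netback_tid = 1234;
        pid_t blkback_tid = 1235;
        const char *tasks = "/sys/fs/cgroup/cpu/vm1/tasks";

        if (move_thread(tasks, netback_tid) < 0 ||
            move_thread(tasks, blkback_tid) < 0)
                perror("cgroup attach");
        return 0;
}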

Regards,
Cong


* Re: xen-netback in linux 3.12
From: Zoltan Kiss @ 2014-01-20 10:28 UTC
  To: xu cong, xen-devel; +Cc: wei.liu2, ian.campbell

On 17/01/14 21:30, xu cong wrote:
> Hi all,
>
> I noticed that xen-netback has changed a lot in recent kernels. The
> shared xen-netback threads in the driver domain have been replaced by
> a per-vif kernel thread. On my platform, the I/O throughput and
> scalability of the per-vif netback are better than with the previous
> implementation. Another advantage is that I can use cgroups to control
> the fair CPU share among all VMs in the driver domain (I group the
> netback thread and the blkback thread for each VM). Was there any
> other motivation for this change? Thanks.

As far as I remember, scalability was the only motivation. Previously you 
could end up in a situation where one thread (pinned to a vCPU) did a lot 
of work while the others did nothing. In the thread-per-VIF model the 
kernel is free to schedule the workloads as it sees fit, and users can 
also tune the settings themselves; a rough sketch of the pattern follows.
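
(Not the actual xen-netback code; the struct and function names below are
made up just to show the shape of the thread-per-VIF pattern: each vif
gets its own kthread, which the scheduler can then place on any CPU.)

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

struct vif {                            /* stand-in for struct xenvif */
        struct task_struct *task;
        unsigned int id;
};

static int vif_kthread(void *data)
{
        struct vif *vif = data;

        while (!kthread_should_stop()) {
                /* drain this vif's tx/rx work here, then sleep */
                set_current_state(TASK_INTERRUPTIBLE);
                schedule();
        }
        return 0;
}

static int vif_connect(struct vif *vif)
{
        /* one thread per vif; the scheduler balances them across CPUs */
        vif->task = kthread_run(vif_kthread, vif, "vif%u", vif->id);
        return IS_ERR(vif->task) ? PTR_ERR(vif->task) : 0;
}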

Zoli
