* qemu/vhost tx data path
@ 2016-06-10 19:11 Ritun Patney
  2016-06-13  9:14 ` Stefan Hajnoczi
From: Ritun Patney @ 2016-06-10 19:11 UTC (permalink / raw)
  To: kvm

Hey,

I am studying the network performance between a pair of KVM workloads
on different hosts. I have been looking into the RX, TX data path when
a vhost-net driver is involved.

I am intrigued as to why a packet transmission from a guest runs
entirely in the context of the vhost thread. From what I could gather
from the source, the vhost driver seems to write packets to the tap
device using sendmsg. The tap device seems to enqueue it for NAPI to
take over. I would have assumed that after sending the packet over the
tap device, a softirq would have taken over, but that doesn't happen.

I am wondering how it is that the entire packet is processed in the
vhost kernel thread's context and would appreciate help!

Thanks,
Ritun


* Re: qemu/vhost tx data path
  2016-06-10 19:11 qemu/vhost tx data path Ritun Patney
@ 2016-06-13  9:14 ` Stefan Hajnoczi
From: Stefan Hajnoczi @ 2016-06-13  9:14 UTC (permalink / raw)
  To: Ritun Patney; +Cc: kvm


On Fri, Jun 10, 2016 at 12:11:58PM -0700, Ritun Patney wrote:
> I am studying the network performance between a pair of KVM workloads
> on different hosts. I have been looking into the RX, TX data path when
> a vhost-net driver is involved.
> 
> I am intrigued as to why a packet transmission from a guest runs
> entirely in the context of the vhost thread. From what I could gather
> from the source, the vhost driver seems to write packets to the tap
> device using sendmsg. The tap device seems to enqueue it for NAPI to
> take over. I would have assumed that after sending the packet over the
> tap device, a softirq would have taken over, but that doesn't happen.
> 
> I am wondering how it is that the entire packet is processed in the
> vhost kernel thread's context and would appreciate help!

The way I read the drivers/net/tun.c code, the local softirq
processing happens immediately in netif_rx_ni().  So progress should
be made immediately during ->sendmsg() rather than just marking the
softirq pending and returning back to vhost_net.ko.
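[Editor's note: a heavily abbreviated, non-compilable sketch of the TX
call chain described above, based on a 2016-era (~4.x) kernel source
tree; function names come from drivers/vhost/net.c, drivers/net/tun.c,
and net/core/dev.c, but the bodies are reduced to the relevant calls.]

```c
/* Sketch only -- everything below runs synchronously in the
 * vhost worker kernel thread, which is why no separate softirq
 * context is observed. */

/* drivers/vhost/net.c */
static void handle_tx(struct vhost_net *net)
{
	/* ... pop descriptors from the guest's TX virtqueue ... */
	sock->ops->sendmsg(sock, &msg, len);    /* -> tun_sendmsg() */
}

/* drivers/net/tun.c */
static int tun_sendmsg(struct socket *sock, struct msghdr *m,
		       size_t total_len)
{
	return tun_get_user(/* ... */);         /* copies the skb in */
}

static ssize_t tun_get_user(/* ... */)
{
	/* ... build the skb from the guest buffers ... */
	netif_rx_ni(skb);   /* net/core/dev.c: queues the skb, then
			     * runs do_softirq() right away if
			     * NET_RX_SOFTIRQ became pending, instead
			     * of deferring to ksoftirqd */
	/* ... */
}
```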

Stefan


