qemu-devel.nongnu.org archive mirror
* [Qemu-devel] high-level view of packet processing for virtio NIC?
@ 2019-07-23 16:18 Chris Friesen
  2019-07-24  0:27 ` Dongli Zhang
  2019-07-29 13:27 ` Stefan Hajnoczi
  0 siblings, 2 replies; 3+ messages in thread
From: Chris Friesen @ 2019-07-23 16:18 UTC (permalink / raw)
  To: qemu-devel

Hi,

I'm looking for information on what the qemu architecture looks like for 
processing virtio network packets in a two-vCPU guest.

It looks like there's an IO thread doing a decent fraction of the work, 
separate from the vCPU threads--is that correct?  There's no disk 
involved in this case, purely network packet processing.

Chris



* Re: [Qemu-devel] high-level view of packet processing for virtio NIC?
  2019-07-23 16:18 [Qemu-devel] high-level view of packet processing for virtio NIC? Chris Friesen
@ 2019-07-24  0:27 ` Dongli Zhang
  2019-07-29 13:27 ` Stefan Hajnoczi
  1 sibling, 0 replies; 3+ messages in thread
From: Dongli Zhang @ 2019-07-24  0:27 UTC (permalink / raw)
  To: Chris Friesen; +Cc: qemu-devel

Hi Chris,

On 7/24/19 12:18 AM, Chris Friesen wrote:
> Hi,
> 
> I'm looking for information on what the qemu architecture looks like for
> processing virtio network packets in a two-vCPU guest.
> 
> It looks like there's an IO thread doing a decent fraction of the work, separate
> from the vCPU threads--is that correct?  There's no disk involved in this case,
> purely network packet processing.
> 

I suggest using gdb to get a view of this.

To use gdb, build QEMU from source with the debug option enabled:

# ./configure --target-list=x86_64-softmmu --enable-debug
# make -j8 > /dev/null

# sudo gdb ./x86_64-softmmu/qemu-system-x86_64

(gdb) run -machine pc,accel=kvm -vnc :0 -smp 4 -m 4096M -hda /path/to/ubuntu1804.qcow2 -device virtio-net-pci,netdev=tapnet,mq=true,vectors=9 -netdev tap,id=tapnet,ifname=tap0,script=/path/to/qemu-ifup,downscript=no,queues=4,vhost=off
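
If the guest is already running, attaching should also work (provided the
binary was built with debug info):

# gdb -p $(pidof qemu-system-x86_64)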

gdb will show which thread is running which function. To get a view of the
RX path, set a breakpoint:

(gdb) b virtio_net_do_receive


Once the breakpoint is hit, the backtrace shows the full call path:

Thread 1 "qemu-system-x86" hit Breakpoint 2, virtio_net_do_receive
(nc=0x555557705d40, buf=0x55555698b2f4 "", size=72) at
/home/zhang/kvm/debug/qemu-4.0.0/hw/net/virtio-net.c:1370
1370	    rcu_read_lock();
(gdb) bt
#0  virtio_net_do_receive (nc=0x555557705d40, buf=0x55555698b2f4 "", size=72) at
/home/zhang/kvm/debug/qemu-4.0.0/hw/net/virtio-net.c:1370
#1  0x00005555558f6934 in virtio_net_receive (nc=0x555557705d40,
buf=0x55555698b2f4 "", size=72) at
/home/zhang/kvm/debug/qemu-4.0.0/hw/net/virtio-net.c:1978
#2  0x0000555555c4208b in nc_sendv_compat (nc=0x555557705d40,
iov=0x7fffffffddb0, iovcnt=1, flags=0) at net/net.c:706
#3  0x0000555555c4214d in qemu_deliver_packet_iov (sender=0x55555698ad00,
flags=0, iov=0x7fffffffddb0, iovcnt=1, opaque=0x555557705d40) at net/net.c:734
#4  0x0000555555c44bae in qemu_net_queue_deliver (queue=0x5555577060d0,
sender=0x55555698ad00, flags=0, data=0x55555698b2f4 "", size=72) at net/queue.c:164
#5  0x0000555555c44cca in qemu_net_queue_send (queue=0x5555577060d0,
sender=0x55555698ad00, flags=0, data=0x55555698b2f4 "", size=72,
sent_cb=0x555555c56829 <tap_send_completed>) at net/queue.c:199
#6  0x0000555555c41ef3 in qemu_send_packet_async_with_flags
(sender=0x55555698ad00, flags=0, buf=0x55555698b2f4 "", size=72,
sent_cb=0x555555c56829 <tap_send_completed>) at net/net.c:660
#7  0x0000555555c41f2b in qemu_send_packet_async (sender=0x55555698ad00,
buf=0x55555698b2f4 "", size=72, sent_cb=0x555555c56829 <tap_send_completed>) at
net/net.c:667
#8  0x0000555555c56938 in tap_send (opaque=0x55555698ad00) at net/tap.c:202
#9  0x0000555555dee376 in aio_dispatch_handlers (ctx=0x5555568770d0) at
util/aio-posix.c:430
#10 0x0000555555dee509 in aio_dispatch (ctx=0x5555568770d0) at util/aio-posix.c:461
#11 0x0000555555de9a20 in aio_ctx_dispatch (source=0x5555568770d0, callback=0x0,
user_data=0x0) at util/async.c:261
#12 0x00007ffff7570197 in g_main_context_dispatch () from
/lib/x86_64-linux-gnu/libglib-2.0.so.0
#13 0x0000555555decdfc in glib_pollfds_poll () at util/main-loop.c:213
#14 0x0000555555dece76 in os_host_main_loop_wait (timeout=198847612) at
util/main-loop.c:236
#15 0x0000555555decf7b in main_loop_wait (nonblocking=0) at util/main-loop.c:512
#16 0x0000555555a0beeb in main_loop () at vl.c:1970
#17 0x0000555555a13230 in main (argc=15, argv=0x7fffffffe3a8,
envp=0x7fffffffe428) at vl.c:4604
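
For the TX path, a similar breakpoint should work, e.g. on
virtio_net_handle_tx_bh or virtio_net_flush_tx (function names taken from
hw/net/virtio-net.c in qemu-4.0.0; they may differ in other versions):

(gdb) b virtio_net_handle_tx_bh
(gdb) b virtio_net_flush_tx

These will typically be hit in the main loop thread; without ioeventfd the
kick handler can instead run directly in the vcpu thread doing the MMIO
write.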

Dongli Zhang



* Re: [Qemu-devel] high-level view of packet processing for virtio NIC?
  2019-07-23 16:18 [Qemu-devel] high-level view of packet processing for virtio NIC? Chris Friesen
  2019-07-24  0:27 ` Dongli Zhang
@ 2019-07-29 13:27 ` Stefan Hajnoczi
  1 sibling, 0 replies; 3+ messages in thread
From: Stefan Hajnoczi @ 2019-07-29 13:27 UTC (permalink / raw)
  To: Chris Friesen; +Cc: qemu-devel


On Tue, Jul 23, 2019 at 10:18:01AM -0600, Chris Friesen wrote:
> I'm looking for information on what the qemu architecture looks like for
> processing virtio network packets in a two-vCPU guest.
> 
> It looks like there's an IO thread doing a decent fraction of the work,
> separate from the vCPU threads--is that correct?  There's no disk involved
> in this case, purely network packet processing.

Most production x86 KVM guests use vhost_net.ko to perform virtio-net
rx/tx virtqueue processing in the host kernel.  That means the QEMU code
isn't used and the code path is totally different.

Before spending too much time on this, check which code path you are
interested in.
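
One quick way to check (just a suggestion, assuming a typical tap +
vhost_net setup) is to look at the module, the QEMU command line, and the
vhost worker threads, which show up as kernel threads named vhost-<qemu pid>:

# lsmod | grep vhost_net
# ps -ef | grep qemu | grep -o 'vhost=[a-z]*'
# ps -ef | grep vhost-

If those worker threads exist, the rx/tx data path is in the host kernel and
QEMU's own virtio-net code is largely bypassed for the data path.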

If you are using QEMU's virtio-net without vhost then the main loop
thread processes rx/tx virtqueue kicks and packet rx/tx events.  The
vcpu threads are not directly involved because the ioeventfd feature is
used to direct virtqueue kicks to the main loop thread instead of
blocking vcpu threads.
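
You can confirm this with gdb:

(gdb) info threads

The main loop is normally Thread 1, named after the binary ("qemu-system-x86"
in the backtrace Dongli posted), while the vcpu threads show up as
"CPU 0/KVM", "CPU 1/KVM", and so on.  The breakpoint above being hit in
Thread 1 matches this: the main loop thread, not a vcpu thread, is doing the
packet processing.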

Stefan


