---------- Forwarded message ---------
From: Shreyansh Chouhan <chouhan.shreyansh2702@gmail.com>
Date: Mon, 11 Jan 2021 at 11:59
Subject: Re: VirtioSound device emulation implementation
To: Gerd Hoffmann <kraxel@redhat.com>




On Sun, 10 Jan 2021 at 13:55, Shreyansh Chouhan <chouhan.shreyansh2702@gmail.com> wrote:
Hi,

I have been reading about the virtio and vhost specifications, but I have a few doubts. I tried looking up the answers, yet I still
do not understand some things clearly enough. From what I understand, there are two protocols:

The virtio protocol: The one that specifies how we can have common emulation for virtual devices. The frontend drivers
in the guest interact with these devices, and the devices can then process the requests they receive either in QEMU,
or somewhere else. From what I understand, the frontend driver communicates with the virtio device through virtqueues
placed in shared, memory-mapped guest memory.
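
For my own reference, a simplified sketch of that shared memory: the split-virtqueue layout as I read it from the
virtio 1.1 spec (the real definitions live in linux/virtio_ring.h and use little-endian types):

    #include <stdint.h>

    struct vring_desc {            /* one guest buffer */
        uint64_t addr;             /* guest-physical address */
        uint32_t len;
        uint16_t flags;            /* NEXT / WRITE / INDIRECT */
        uint16_t next;             /* index of the next descriptor in a chain */
    };

    struct vring_avail {           /* driver -> device */
        uint16_t flags;
        uint16_t idx;
        uint16_t ring[];           /* heads of descriptor chains ready for the device */
    };

    struct vring_used_elem {
        uint32_t id;               /* head of the completed chain */
        uint32_t len;              /* bytes written by the device */
    };

    struct vring_used {            /* device -> driver */
        uint16_t flags;
        uint16_t idx;
        struct vring_used_elem ring[];
    };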

The vhost protocol: The one that specifies how we can _offload_ the processing from QEMU to a separate service. We
want to offload so that we do not have to stop the guest while we process the information passed to a virtio device. This
service can be implemented either in the host kernel or in host userspace. When we offload the processing, the guest memory
that backs the device's virtqueues is mapped into this vhost service, so that it has all the information it needs to process.
This service can also generate the vCPU interrupts (via irqfd) and responds to the guest's kick notifications (via ioeventfd).
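
To check my understanding of that handoff, here is a rough sketch for the in-kernel case (error handling, the vring
setup via VHOST_SET_VRING_ADDR/NUM/BASE, and the KVM-side ioeventfd/irqfd registration are all left out; guest_mem
and guest_size are just placeholders for the guest RAM mapping QEMU already has):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    void attach_vhost_queue(void *guest_mem, uint64_t guest_size)
    {
        int vhost_fd = open("/dev/vhost-net", O_RDWR);

        /* Claim the device; the kernel creates a worker for this process. */
        ioctl(vhost_fd, VHOST_SET_OWNER);

        /* Describe guest RAM so the worker can translate the guest-physical
         * addresses it finds in the virtqueue descriptors. */
        struct vhost_memory *mem =
            calloc(1, sizeof(*mem) + sizeof(struct vhost_memory_region));
        mem->nregions = 1;
        mem->regions[0].guest_phys_addr = 0;
        mem->regions[0].memory_size = guest_size;
        mem->regions[0].userspace_addr = (uint64_t)(uintptr_t)guest_mem;
        ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);

        /* "kick": the guest's doorbell write becomes a signal on this eventfd
         * (KVM ioeventfd), waking the vhost worker without going through QEMU. */
        struct vhost_vring_file kick = { .index = 0, .fd = eventfd(0, 0) };
        ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick);

        /* "call": the worker signals this eventfd when it has consumed buffers,
         * and KVM (irqfd) injects the vCPU interrupt, again bypassing QEMU. */
        struct vhost_vring_file call = { .index = 0, .fd = eventfd(0, 0) };
        ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call);

        free(mem);
    }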

What I do not understand is: once we have this vhost service, in userspace or in kernel space, doing the information processing,
why do we still need a virtio device emulated in QEMU? Is it there only to pass the configuration between the driver and the
vhost service? I know that the vhost service doesn't emulate anything, but then what is the difference between "processing" the
information and "emulating" a device?

Also, according to article [3], moving the vhost-net service to userspace was somehow faster. I am assuming this was only the
case for networking devices and would not be true in general, since there would be more context switches between user and kernel
space? (KVM receives the irq/ioeventfd notification and then has to transfer control back to userspace, as opposed to when the
vhost service was in kernel space.)

For context, I've been reading the following:


I found the answers in this blog post: http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
In short, yes, the configuration plane still remains with QEMU. The frontend driver interacts with the PCI
adapter emulated in QEMU for configuration and memory-map setup; only the data plane is forwarded
to the vhost service. This makes sense, since the device only needs to be configured once, so keeping
that part emulated in QEMU is not the performance problem that keeping the data plane there was.
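
To make the split concrete for myself: the registers the guest driver programs through that emulated PCI adapter are
(roughly) the common configuration structure from the virtio 1.0 spec, sketched below. QEMU handles all of these itself
and only passes the resulting queue addresses and eventfds down to the vhost service. (Simplified; the fields are
little-endian, and linux/virtio_pci.h splits the 64-bit ring addresses into lo/hi halves.)

    #include <stdint.h>

    struct virtio_pci_common_cfg {
        /* about the whole device */
        uint32_t device_feature_select;
        uint32_t device_feature;
        uint32_t driver_feature_select;
        uint32_t driver_feature;
        uint16_t msix_config;
        uint16_t num_queues;
        uint8_t  device_status;        /* ACKNOWLEDGE, DRIVER, DRIVER_OK, ... */
        uint8_t  config_generation;

        /* about the currently selected virtqueue */
        uint16_t queue_select;
        uint16_t queue_size;
        uint16_t queue_msix_vector;
        uint16_t queue_enable;
        uint16_t queue_notify_off;
        uint64_t queue_desc;           /* these ring addresses are what gets  */
        uint64_t queue_avail;          /* handed to the vhost backend once    */
        uint64_t queue_used;           /* the driver sets queue_enable to 1   */
    };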

There is still a little confusion in my mind about a few things, but I think looking at the source code
of the devices that have already been implemented will clear that up for me, so that is what I will do next.

I will start by looking at the source code of the in-QEMU and vhost implementations of other virtio devices, and then decide
which approach I'd like to take. I will probably follow that decision with an implementation plan/timeline so that everyone can
follow the progress on the development of this project.