From: Julien Grall <julien@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr <olekstysh@gmail.com>
Cc: "Oleksandr Andrushchenko" <andr2000@gmail.com>,
	"Andre Przywara" <andre.przywara@arm.com>,
	"Bertrand Marquis" <Bertrand.Marquis@arm.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	"Roger Pau Monné" <roger.pau@citrix.com>,
	alex.bennee@linaro.org, "Artem Mygaiev" <joculator@gmail.com>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Date: Tue, 21 Jul 2020 14:22:28 +0100
Message-ID: <56e512af-993b-1364-be56-fc4be5d88519@xen.org>
In-Reply-To: <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>

(+ Andre for the vGIC).

Hi Stefano,

On 20/07/2020 21:38, Stefano Stabellini wrote:
> On Fri, 17 Jul 2020, Oleksandr wrote:
>>>> *A few word about solution:*
>>>> As it was mentioned at [1], in order to implement virtio-mmio Xen on Arm
>>> Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
>>> it would be very interesting from a x86 PoV, as I don't think
>>> virtio-mmio is something that you can easily use on x86 (or even use
>>> at all).
>>
>> Being honest, I didn't consider virtio-pci so far. Julien's PoC (which
>> we are based on) provides support for the virtio-mmio transport, which
>> is enough to start working on VirtIO and is not as complex as
>> virtio-pci. But it doesn't mean there is no way for virtio-pci in Xen.
>>
>> I think this could be added in next steps. But the nearest target is
>> the virtio-mmio approach (of course, if the community agrees on that).

> Aside from complexity and ease of development, are there any other
> architectural reasons for using virtio-mmio?

From the hypervisor PoV, the main (and only) difference between
virtio-mmio and virtio-pci is that, in the latter, we need to forward
PCI config space accesses to the device emulator. IOW, we would need to
add support for vPCI. This shouldn't require much more work, but I
didn't want to invest in it for the PoC.
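
To make it concrete, "forwarding config space accesses" roughly means
decoding the trapped ECAM access into a BDF + register and handing it
to whichever emulator owns that device. A very rough sketch below (the
ECAM bit layout is the standard one, but all the types and names are
made up for illustration, this is not existing Xen code):

/* Sketch only: decode a trapped ECAM config-space access into
 * bus/device/function/register so it can be forwarded to whichever
 * device emulator registered for that BDF. */
#include <stdint.h>
#include <stdio.h>

struct pci_sbdf {
    uint8_t  bus;
    uint8_t  dev;
    uint8_t  fn;
    uint16_t reg;
};

/* Standard ECAM layout: bus[27:20], device[19:15], function[14:12],
 * register[11:0]. */
static struct pci_sbdf ecam_decode(uint64_t offset)
{
    return (struct pci_sbdf){
        .bus = (offset >> 20) & 0xff,
        .dev = (offset >> 15) & 0x1f,
        .fn  = (offset >> 12) & 0x07,
        .reg = offset & 0xfff,
    };
}

int main(void)
{
    /* e.g. an access trapped at offset 0x00100010 of the ECAM window */
    struct pci_sbdf sbdf = ecam_decode(0x00100010);

    /* This is the point where vPCI would either emulate the access
     * itself or forward it to the emulator owning the BDF. */
    printf("forward to emulator for %02x:%02x.%x reg 0x%03x\n",
           sbdf.bus, sbdf.dev, sbdf.fn, sbdf.reg);

    return 0;
}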

Long term, I don't think we should tie Xen to any of the virtio
protocols. We just need to offer facilities so users can easily build
virtio backends for Xen.

> 
> I am not asking because I intend to suggest doing something different
> (virtio-mmio is fine as far as I can tell). I am asking because there
> was a virtio-pci/virtio-mmio discussion in Linaro recently and I would
> like to understand if there are any implications from a Xen point of
> view that I don't yet know.

virtio-mmio is going to require more work in the toolstack because we
would need to do the memory/interrupt allocation ourselves. In the case
of virtio-pci, we only need to pass a range of memory/interrupts to the
guest and let it decide the allocation.
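
To give an idea of what that allocation looks like, here is a sketch of
what the toolstack could do per virtio-mmio device. The base address,
SPI number and the 0x200 region size below are placeholders I made up
for the example, not values from libxl:

/* Sketch of a per-domain allocator for virtio-mmio devices: each
 * device gets a fixed-size MMIO region and one SPI. All constants are
 * placeholders. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_MMIO_REGION_SIZE  0x200          /* placeholder */
#define VIRTIO_MMIO_BASE         0x02000000UL   /* placeholder */
#define VIRTIO_MMIO_FIRST_SPI    33             /* placeholder */
#define VIRTIO_MMIO_NUM_DEVICES  8

struct virtio_mmio_params {
    uint64_t base;
    uint32_t irq;   /* SPI number */
};

static int alloc_virtio_mmio(unsigned int idx,
                             struct virtio_mmio_params *out)
{
    if (idx >= VIRTIO_MMIO_NUM_DEVICES)
        return -1;

    out->base = VIRTIO_MMIO_BASE + (uint64_t)idx * VIRTIO_MMIO_REGION_SIZE;
    out->irq  = VIRTIO_MMIO_FIRST_SPI + idx;

    return 0;
}

int main(void)
{
    struct virtio_mmio_params p;

    /* The toolstack would then describe each device in the guest
     * device tree (reg = <base size>; interrupts = <SPI>) and tell
     * the backend which region/SPI it owns. */
    for (unsigned int i = 0; i < 2; i++) {
        alloc_virtio_mmio(i, &p);
        printf("virtio-mmio %u: base 0x%" PRIx64 ", SPI %" PRIu32 "\n",
               i, p.base, p.irq);
    }

    return 0;
}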

Regarding virtio-pci vs virtio-mmio:
      - flexibility: virtio-mmio is a good fit when you know all your
devices at boot. If you want to hotplug disk/network devices, then
virtio-pci is going to be a better fit.
      - interrupts: I would expect each virtio-mmio device to have its
own SPI. In the case of virtio-pci, legacy interrupts would be shared
between all the PCI devices on the same host controller. This could
possibly lead to performance issues if you have many devices. So for
virtio-pci, we should consider MSIs.

> 
> For instance, what's your take on notifications with virtio-mmio? How
> are they modelled today?

The backend will notify the frontend using an SPI. The other way around 
(frontend -> backend) is based on an MMIO write.
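
To illustrate both directions: the frontend kicks the backend by
writing the queue index to the QueueNotify register (offset 0x50 in the
virtio-mmio register layout); the write is trapped and forwarded to the
backend, which processes the ring and then raises the SPI. A sketch of
the backend side (the register offset comes from the virtio spec; the
helpers and types are made up):

/* Sketch of the backend side of a virtio-mmio notification. */
#include <stdint.h>

#define VIRTIO_MMIO_QUEUE_NOTIFY 0x050

struct virtio_mmio_dev {
    uint32_t spi;   /* SPI allocated for this device by the toolstack */
};

/* Hypothetical helpers provided by the surrounding backend; stubbed. */
static void process_virtqueue(struct virtio_mmio_dev *dev, uint32_t q)
{
    (void)dev; (void)q;   /* pop descriptors, do the I/O, push used */
}

static void inject_spi(uint32_t spi)
{
    (void)spi;            /* ask Xen to raise the guest SPI */
}

/* Frontend -> backend: the guest write to QueueNotify is trapped and
 * forwarded to the backend (e.g. via an IOREQ). */
void handle_mmio_write(struct virtio_mmio_dev *dev,
                       uint64_t offset, uint32_t value)
{
    if (offset == VIRTIO_MMIO_QUEUE_NOTIFY) {
        process_virtqueue(dev, value);

        /* Backend -> frontend: notify completion with the SPI. */
        inject_spi(dev->spi);
    }
}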

We have an interface to allow the backend to control the interrupt
level (i.e. low, high). However, the "old" vGIC doesn't handle level
interrupts properly. So we would end up treating level interrupts as
edge.
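
For reference, the natural level-triggered model on the backend side
would be to keep the line asserted while InterruptStatus is non-zero
and deassert it when the guest writes InterruptACK, along these lines
(offsets and the VRING bit come from the virtio-mmio layout;
set_irq_level() is a stand-in for whatever interface the backend ends
up using):

/* Sketch of level-triggered interrupt modelling for virtio-mmio: the
 * virtual line simply follows InterruptStatus. */
#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_MMIO_INTERRUPT_STATUS 0x060
#define VIRTIO_MMIO_INTERRUPT_ACK    0x064
#define VIRTIO_MMIO_INT_VRING        (1u << 0)

struct virtio_mmio_irq {
    uint32_t interrupt_status;
    uint32_t spi;
};

/* Hypothetical: asserts/deasserts the guest SPI line. */
static void set_irq_level(uint32_t spi, bool level)
{
    (void)spi; (void)level;
}

static void update_line(struct virtio_mmio_irq *irq)
{
    /* Level semantics: line is high as long as any status bit is set. */
    set_irq_level(irq->spi, irq->interrupt_status != 0);
}

/* Called by the backend after it pushed entries to the used ring. */
void notify_used_ring(struct virtio_mmio_irq *irq)
{
    irq->interrupt_status |= VIRTIO_MMIO_INT_VRING;
    update_line(irq);
}

/* Called when the guest writes InterruptACK. */
void handle_interrupt_ack(struct virtio_mmio_irq *irq, uint32_t value)
{
    irq->interrupt_status &= ~value;
    update_line(irq);
}

With the old vGIC, the line transitions above would effectively be
treated as edges, hence the concern below.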

Technically, the problem already exists with HW interrupts, but the HW
should fire again if the interrupt line is still asserted. Another
issue is that the interrupt may fire even if the interrupt line was
deasserted (IIRC this caused some interesting problems with the Arch
timer).

I am a bit concerned that the issue will be more prominent for virtual
interrupts. I know that we have some gross hack in the vpl011 to handle
level interrupts. So maybe it is time to switch to the new vGIC?

> Are they good enough or do we need MSIs?

I am not sure whether virtio-mmio supports MSIs. However, for
virtio-pci, MSIs are going to be useful to improve performance. This
may mean exposing an ITS, so we would need to add support for it in
guests.

Cheers,

-- 
Julien Grall

