From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Jan Beulich" <jbeulich@suse.com>
Subject: Re: SR-IOV: do we need to virtualize in Xen or rely on Dom0?
Date: Thu, 10 Jun 2021 09:54:40 +0200
Message-ID: <YMHFQA1L61ntKNRq@Air-de-Roger>
In-Reply-To: <c10e16c9-ec42-336f-e838-caca49b39723@epam.com>

On Fri, Jun 04, 2021 at 06:37:27AM +0000, Oleksandr Andrushchenko wrote:
> Hi, all!
> 
> While working on PCI SR-IOV support for ARM I started porting [1] on top
> of the current PCI on ARM support [2]. The question I have for this series
> is whether we really need SR-IOV emulation code in Xen.
> 
> I have implemented a PoC for SR-IOV on ARM [3] (please see the top 2
> patches) and it "works for me": MSI support is still WIP, but I was able
> to see that VFs are properly seen in the guest and BARs are properly
> programmed in the p2m.
> 
> What I can't fully understand is whether we can live with this approach,
> or whether there are use-cases I can't see.
> 
> Previously I've been told that this approach might not work on FreeBSD
> running as Domain-0, but it seems that "PCI Passthrough is not supported
> (Xen/FreeBSD)" anyway [4].

PCI passthrough is not supported on a FreeBSD dom0 because PCI
passthrough is not supported by Xen itself when using a PVH dom0, and
that's the only mode a FreeBSD dom0 can use.

PHYSDEVOP_pci_device_add support can be added to FreeBSD, so it could
be made to work. However, I think this is not the proper way to
implement SR-IOV support.
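
For reference, the interface in question is small. Below is a minimal
sketch of how a dom0 kernel could report a newly created VF to Xen via
PHYSDEVOP_pci_device_add; the field and flag names follow my reading of
xen/include/public/physdev.h, and the HYPERVISOR_physdev_op() wrapper
is the Linux-style one, so treat the details as assumptions rather than
a reference:

    /* Sketch: report a VF to Xen from dom0, pointing back at its PF. */
    #include <xen/interface/physdev.h>  /* struct physdev_pci_device_add */

    static int report_vf_to_xen(uint16_t seg, uint8_t bus, uint8_t devfn,
                                uint8_t pf_bus, uint8_t pf_devfn)
    {
        struct physdev_pci_device_add add = {
            .seg = seg,
            .bus = bus,
            .devfn = devfn,
            .flags = XEN_PCI_DEV_VIRTFN,  /* this is a VF... */
            .physfn.bus = pf_bus,         /* ...belonging to this PF */
            .physfn.devfn = pf_devfn,
        };

        return HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add);
    }

That's exactly the cost I mean below: every OS that wants to run as the
hardware domain needs an equivalent of the above wired into its SR-IOV
enable path.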

> 
> I also see that the ACRN hypervisor [5] implements SR-IOV inside the
> hypervisor, which makes me think I'm missing some important use-case
> on x86.
> 
> I would like to ask for any advice with respect to SR-IOV in the
> hypervisor: any pointers to documentation or any other source which
> might be handy in deciding whether we need the SR-IOV complexity
> in Xen.
> 
> And it does bring complexity, if you compare [1] and [3]...
> 
> A bit of technical detail on the approach implemented in [3]:
> 1. We rely on PHYSDEVOP_pci_device_add.
> 2. We rely on Domain-0 SR-IOV drivers to instantiate VFs.
> 3. BARs are programmed in the p2m, implementing a guest view of them
> (we have extended the vPCI code for that, and this path is used for
> both "normal" devices and VFs in the same way).
> 4. No need to trap PCI_SRIOV_CTRL.
> 5. No need to wait 100ms in Xen before attempting to access VF
> registers when enabling virtual functions on the PF - this is
> handled by Domain-0 itself.
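
Regarding item 3: note that VFs don't implement BARs of their own; their
MMIO resources are slices of the VF BARn entries in the PF's SR-IOV
capability, one fixed-size slice per VF. A rough sketch of the address
computation that any component programming the p2m has to do, whether
that's your vPCI extension in [3] or the in-Xen code from [1] (the
helper name is made up for illustration; the layout is per the SR-IOV
spec):

    /*
     * Sketch: base address of BARn for VF number 'vf' (0-based).  VF BARn
     * in the PF's SR-IOV capability holds the base for VF 0, and each
     * further VF's slice follows contiguously, vf_bar_size bytes apart.
     */
    static uint64_t vf_bar_addr(uint64_t pf_vf_bar_base, /* from PF's capability */
                                uint64_t vf_bar_size,    /* size of one VF's BARn */
                                unsigned int vf)
    {
        return pf_vf_bar_base + (uint64_t)vf * vf_bar_size;
    }

The math is the same either way; the question is only which component
traps what.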

I think the SR-IOV capability should be handled like any other PCI
capability, i.e. like we currently handle MSI or MSI-X in vPCI.

It's likely that using some kind of hypercall in order to deal with
SR-IOV could make this easier to implement in Xen, but that just adds
more code to all OSes that want to run as the hardware domain.

OTOH if we properly trap accesses to the SR-IOV capability (as was
proposed in [1] from your references) we won't have to modify OSes that
want to run as hardware domains in order to handle SR-IOV devices.
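
Concretely, trapping the capability would mean registering vPCI handlers
over the PF's SR-IOV registers and reacting to VF Enable. A minimal
sketch, assuming vPCI's usual write-handler shape and inventing the
num_vfs()/first_vf_offset()/vf_stride()/add_vf() helpers purely for
illustration (VF routing IDs per the SR-IOV spec are PF RID +
First VF Offset + n * VF Stride):

    /* Sketch: intercept writes to PCI_SRIOV_CTRL in the PF's capability. */
    static void sriov_ctrl_write(const struct pci_dev *pdev, unsigned int reg,
                                 uint32_t val, void *data)
    {
        unsigned int i;

        pci_conf_write16(pdev->sbdf, reg, val);  /* forward write to hardware */

        if ( !(val & PCI_SRIOV_CTRL_VFE) )       /* only care about VF Enable */
            return;

        for ( i = 0; i < num_vfs(pdev); i++ )
        {
            /* RID = PF RID + First VF Offset + i * VF Stride. */
            unsigned int rid = pdev->sbdf.bdf + first_vf_offset(pdev) +
                               i * vf_stride(pdev);

            add_vf(pdev, rid);  /* hypothetical: size BARs, set up vPCI */
        }
    }

That's more code in Xen than the hypercall route, but it keeps the
hardware domain completely unmodified, which is the point.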

IMO going for the hypercall option seems easier now, but it adds a
burden to every OS that wants to manage SR-IOV devices, which will hurt
us long term.

Thanks, Roger.

> Thank you in advance,
> Oleksandr
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01494.html
> [2] https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough
> [3] https://github.com/xen-troops/xen/commits/pci_phase2
> [4] https://wiki.freebsd.org/Xen
> [5] https://projectacrn.github.io/latest/tutorials/sriov_virtualization.html

