From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Cc: Stewart Hildebrand <stewart.hildebrand@amd.com>,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Artem Mygaiev <artem_mygaiev@epam.com>
Subject: Re: [PATCH v2 3/3] [FUTURE] xen/arm: enable vPCI for domUs
Date: Fri, 7 Jul 2023 15:40:43 +0200
Message-ID: <ZKgV2x3PqUJUHdic@MacBook-Air-de-Roger.local>
In-Reply-To: <78c0f004-4bdb-6f86-e42c-ec3f64d25dd3@xen.org>

On Fri, Jul 07, 2023 at 02:27:17PM +0100, Julien Grall wrote:
> Hi,
> 
> On 07/07/2023 14:13, Roger Pau Monné wrote:
> > On Fri, Jul 07, 2023 at 01:09:40PM +0100, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 07/07/2023 12:34, Roger Pau Monné wrote:
> > > > On Fri, Jul 07, 2023 at 12:16:46PM +0100, Julien Grall wrote:
> > > > > 
> > > > > 
> > > > > On 07/07/2023 11:47, Roger Pau Monné wrote:
> > > > > > On Fri, Jul 07, 2023 at 11:33:14AM +0100, Julien Grall wrote:
> > > > > > > Hi,
> > > > > > > 
> > > > > > > On 07/07/2023 11:06, Roger Pau Monné wrote:
> > > > > > > > On Fri, Jul 07, 2023 at 10:00:51AM +0100, Julien Grall wrote:
> > > > > > > > > On 07/07/2023 02:47, Stewart Hildebrand wrote:
> > > > > > > > > > Note that CONFIG_HAS_VPCI_GUEST_SUPPORT is not currently used in the upstream
> > > > > > > > > > code base. It will be used by the vPCI series [1]. This patch is intended to be
> > > > > > > > > > merged as part of the vPCI series.
> > > > > > > > > > 
> > > > > > > > > > v1->v2:
> > > > > > > > > > * new patch
> > > > > > > > > > ---
> > > > > > > > > >       xen/arch/arm/Kconfig              | 1 +
> > > > > > > > > >       xen/arch/arm/include/asm/domain.h | 2 +-
> > > > > > > > > >       2 files changed, 2 insertions(+), 1 deletion(-)
> > > > > > > > > > 
> > > > > > > > > > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> > > > > > > > > > index 4e0cc421ad48..75dfa2f5a82d 100644
> > > > > > > > > > --- a/xen/arch/arm/Kconfig
> > > > > > > > > > +++ b/xen/arch/arm/Kconfig
> > > > > > > > > > @@ -195,6 +195,7 @@ config PCI_PASSTHROUGH
> > > > > > > > > >       	depends on ARM_64
> > > > > > > > > >       	select HAS_PCI
> > > > > > > > > >       	select HAS_VPCI
> > > > > > > > > > +	select HAS_VPCI_GUEST_SUPPORT
> > > > > > > > > >       	default n
> > > > > > > > > >       	help
> > > > > > > > > >       	  This option enables PCI device passthrough
> > > > > > > > > > diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> > > > > > > > > > index 1a13965a26b8..6e016b00bae1 100644
> > > > > > > > > > --- a/xen/arch/arm/include/asm/domain.h
> > > > > > > > > > +++ b/xen/arch/arm/include/asm/domain.h
> > > > > > > > > > @@ -298,7 +298,7 @@ static inline void arch_vcpu_block(struct vcpu *v) {}
> > > > > > > > > >       #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag)
> > > > > > > > > > -#define has_vpci(d) ({ IS_ENABLED(CONFIG_HAS_VPCI) && is_hardware_domain(d); })
> > > > > > > > > > +#define has_vpci(d)    ({ (void)(d); IS_ENABLED(CONFIG_HAS_VPCI); })
> > > > > > > > > 
> > > > > > > > > As I mentioned in the previous patch, wouldn't this enable vPCI
> > > > > > > > > unconditionally for all domains? Shouldn't this instead be an optional
> > > > > > > > > feature selected by the toolstack?
> > > > > > > > 
> > > > > > > > I do think so, at least on x86 we signal whether vPCI should be
> > > > > > > > enabled for a domain using xen_arch_domainconfig at domain creation.
> > > > > > > > 
> > > > > > > > Ideally we would like to do this on a per-device basis for domUs, so
> > > > > > > > we should consider adding a new flag to xen_domctl_assign_device in
> > > > > > > > order to signal whether the assigned device should use vPCI.
> > > > > > > 
> > > > > > > I am a bit confused by this paragraph. If the device is not using vPCI,
> > > > > > > how will it be exposed to the domain? Are you planning to support both vPCI
> > > > > > > and PV PCI passthrough for the same domain?
> > > > > > 
> > > > > > You could have an external device model handling it using the ioreq
> > > > > > interface, like we currently do passthrough for HVM guests.
> > > > > 
> > > > > IMHO, if one decides to use QEMU to emulate the host bridge, then there is
> > > > > limited point in also asking Xen to emulate the hostbridge for some other
> > > > > device. So what would be the use case where you would want this to be a
> > > > > per-device decision?
> > > > 
> > > > You could also emulate the bridge in Xen and then have QEMU and
> > > > vPCI handle accesses to the PCI config space for different devices.
> > > > The ioreq interface already allows registering for config space
> > > > accesses on a per SBDF basis.
> > > > 
> > > > XenServer currently has a use-case where generic PCI device
> > > > passthrough is handled by QEMU, while some GPUs are passed through
> > > > using a custom emulator.  So some domains effectively end up with a QEMU
> > > > instance and a custom emulator; I don't see why you couldn't
> > > > technically replace QEMU with vPCI in this scenario.
> > > > 
> > > > The PCI root complex might be emulated by QEMU, or ideally by Xen.
> > > > That shouldn't prevent other device models from handling accesses for
> > > > devices, as long as accesses to the ECAM region(s) are trapped and
> > > > decoded by Xen.  IOW: if we want bridges to be emulated by ioreq
> > > > servers, we need to introduce a hypercall to register ECAM regions
> > > > with Xen so that it can decode accesses and forward them
> > > > appropriately.
> > > 
> > > Thanks for the clarification. Going back to the original discussion: even
> > > with this setup, I think we still need to tell Xen at domain creation whether
> > > vPCI will be used (think PCI hotplug).
> > 
> > Well, for PCI hotplug you will still need to execute a
> > XEN_DOMCTL_assign_device hypercall in order to assign the device, at
> > which point you could pass the vPCI flag.
> 
> I am probably missing something here. If you don't pass the vPCI flag at
> domain creation, wouldn't it mean that the hostbridge would not be created
> until later? Are you thinking of creating it unconditionally, or of
> hotplugging it (if that's even possible)?

I think that at domain creation, more than a vPCI flag, you want an 'emulate
a PCI bridge' flag.  Such a flag will also be needed if in the future you
want to support virtio-pci devices for example, and those have nothing
to do with vPCI.
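
Just to sketch what I have in mind (all names below are made up for
illustration, this is not a concrete interface proposal), something along
the lines of the x86 emulation_flags could work for Arm:

    #include <stdint.h>   /* for uint32_t */

    /* Hypothetical emulation flag: ask Xen to emulate a PCI host bridge. */
    #define XEN_ARM_EMU_PCI_BRIDGE  (1U << 0)

    struct xen_arch_domainconfig {
        /* ... existing Arm fields ... */
        uint32_t emulation_flags;   /* XEN_ARM_EMU_* */
    };

    /* Toolstack side, at domain creation (illustrative only):
     * config.arch.emulation_flags |= XEN_ARM_EMU_PCI_BRIDGE;
     */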

> > 
> > What you likely want at domain create is whether the IOMMU should be
> > enabled or not, as we no longer allow late enabling of the IOMMU once
> > the domain has been created.
> > 
> > One question I have is whether Arm plans to allow exposing fully
> > emulated devices on the PCI config space, or that would be limited to
> > PCI device passthrough?
> 
> In the longer term, I would expect to have a mix of physical and emulated
> devices (e.g. virtio).

That's what I would expect.

> > 
> > IOW: should an emulated PCI root complex be unconditionally exposed to
> > guests so that random ioreq servers can register for SBDF slots?
> 
> I would say no. vPCI should only be added when the configuration
> requests it. This is to avoid exposing unnecessary emulation to a domain
> (not everyone will want to use a PCI hostbridge).

Right. Then, as replied above, you might want a domain create flag to
signal whether to emulate a PCI bridge for the domain.

> > 
> > > After that, the device assignment hypercall could have a way to say whether
> > > the device will be emulated by vPCI. But I don't think this is necessary to
> > > have from day one as the ABI will not be stable (this is a DOMCTL).
> > 
> > Indeed, it's not a stable interface, but we might as well get
> > something sane if we have to plumb it through the tools.  Whether it's
> > a domain create flag or a device attach flag, you will need to do some
> > plumbing at the toolstack level, at which point we might as well
> > use an interface that doesn't have arbitrary limits.
> 
> I think we need both flags. In your approach you seem to want to either have
> the hostbridge created unconditionally or hotplug it (if that's even
> possible).

You could in theory have hotpluggable MCFG (ECAM) regions in ACPI, though
I'm unsure how many OSes support that.  But no, I don't think we should try
to hotplug PCI bridges.

I was thinking that for x86 PVH we might want to unconditionally
expose a PCI bridge, but it might be better to signal that from the
domain configuration and not make it mandatory.

> However, I don't think we should have vPCI unconditionally created; we
> should still allow the toolstack to say at domain creation that PCI will
> be used.

Indeed.  I think both a domain create 'emulate a PCI bridge' flag and a vPCI
flag for XEN_DOMCTL_assign_device are required.
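
For the latter, a rough sketch of what I mean (the new flag name and its bit
value are invented for illustration; the rest is from memory of the current
domctl layout, so take it as an approximation rather than a proposal):

    /* Existing flag (assign only). */
    #define XEN_DOMCTL_DEV_RDM_RELAXED  (1U << 0)
    /* New, hypothetical: handle the assigned device with vPCI. */
    #define XEN_DOMCTL_DEV_USE_VPCI     (1U << 1)

    /* Toolstack side, when assigning a PCI device to domain 'domid': */
    struct xen_domctl domctl = { 0 };

    domctl.cmd = XEN_DOMCTL_assign_device;
    domctl.domain = domid;
    domctl.u.assign_device.dev = XEN_DOMCTL_DEV_PCI;
    domctl.u.assign_device.u.pci.machine_sbdf = 0x18;   /* 0000:00:03.0, example */
    domctl.u.assign_device.flags |= XEN_DOMCTL_DEV_USE_VPCI;  /* hypothetical */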

Thanks, Roger.

