From: Stefano Stabellini <sstabellini@kernel.org>
To: Rahul Singh <rahul.singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Julien Grall <julien@xen.org>,
andrew.cooper3@citrix.com, Bertrand.Marquis@arm.com,
jbeulich@suse.com, xen-devel@lists.xenproject.org, nd@arm.com,
Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
roger.pau@citrix.com
Subject: Re: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the PCI devices in XEN.
Date: Thu, 23 Jul 2020 13:44:03 -0700 (PDT) [thread overview]
Message-ID: <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s> (raw)
In-Reply-To: <666df0147054dda8af13ae74a89be44c69984295.1595511416.git.rahul.singh@arm.com>
On Thu, 23 Jul 2020, Rahul Singh wrote:
> The hardware domain is in charge of PCI enumeration: it discovers the
> PCI devices and then reports them to Xen via the
> PHYSDEVOP_pci_device_add hypercall so that Xen can add them.
>
> Change-Id: Ie87e19741689503b4b62da911c8dc2ee318584ac
Same question about Change-Id
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
> xen/arch/arm/physdev.c | 42 +++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 39 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
> index e91355fe22..274720f98a 100644
> --- a/xen/arch/arm/physdev.c
> +++ b/xen/arch/arm/physdev.c
> @@ -9,12 +9,48 @@
> #include <xen/errno.h>
> #include <xen/sched.h>
> #include <asm/hypercall.h>
> -
> +#include <xen/guest_access.h>
> +#include <xsm/xsm.h>
>
> int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> {
> - gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
> - return -ENOSYS;
> + int ret = 0;
> +
> + switch ( cmd )
> + {
> +#ifdef CONFIG_HAS_PCI
> + case PHYSDEVOP_pci_device_add:
> + {
> + struct physdev_pci_device_add add;
> + struct pci_dev_info pdev_info;
> + nodeid_t node = NUMA_NO_NODE;
> +
> + ret = -EFAULT;
> + if ( copy_from_guest(&add, arg, 1) != 0 )
> + break;
> +
> + pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
> + if ( add.flags & XEN_PCI_DEV_VIRTFN )
> + {
> + pdev_info.is_virtfn = 1;
> + pdev_info.physfn.bus = add.physfn.bus;
> + pdev_info.physfn.devfn = add.physfn.devfn;
> + }
> + else
> + pdev_info.is_virtfn = 0;
> +
> + ret = pci_add_device(add.seg, add.bus, add.devfn,
> + &pdev_info, node);
> +
> + break;
> + }
> +#endif
> + default:
> + gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
> + ret = -ENOSYS;
> + }
I think we should make the implementation common between arm and x86 by
creating xen/common/physdev.c:do_physdev_op as a shared entry point for
the PHYSDEVOP hypercall implementations. See for instance:
xen/common/sysctl.c:do_sysctl
and
xen/arch/arm/sysctl.c:arch_do_sysctl
xen/arch/x86/sysctl.c:arch_do_sysctl
Jan, Andrew, Roger, any opinions?