From: Wei Chen <Wei.Chen@arm.com>
To: "Oleksandr Andrushchenko" <Oleksandr_Andrushchenko@epam.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>
Cc: Julien Grall <julien@xen.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	Rahul Singh <Rahul.Singh@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH] xen/arm: fix SBDF calculation for vPCI MMIO handlers
Date: Tue, 2 Nov 2021 08:12:36 +0000
Message-ID: <DB9PR08MB68573DA7157BD153B4C51A7D9E8B9@DB9PR08MB6857.eurprd08.prod.outlook.com>
In-Reply-To: <eae1776e-79ea-e219-741d-632aeceeeb12@epam.com>

Hi Oleksandr,

> -----Original Message-----
> From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
> Sent: 2 November 2021 15:47
> To: Wei Chen <Wei.Chen@arm.com>; Roger Pau Monné <roger.pau@citrix.com>
> Cc: Julien Grall <julien@xen.org>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; sstabellini@kernel.org; Rahul Singh
> <Rahul.Singh@arm.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH] xen/arm: fix SBDF calculation for vPCI MMIO handlers
> 
> Hi,
> 
> On 02.11.21 09:37, Wei Chen wrote:
> > Hi Oleksandr,
> >
> > On 2021/11/1 14:14, Oleksandr Andrushchenko wrote:
> >>
> >>
> >> On 29.10.21 10:33, Roger Pau Monné wrote:
> >>> On Thu, Oct 28, 2021 at 05:55:25PM +0000, Oleksandr Andrushchenko
> wrote:
> >>>>
> >>>> On 28.10.21 19:03, Roger Pau Monné wrote:
> >>>>> On Thu, Oct 28, 2021 at 02:23:34PM +0000, Oleksandr Andrushchenko
> wrote:
> >>>>>> On 28.10.21 16:36, Roger Pau Monné wrote:
> >>>>>>> On Thu, Oct 28, 2021 at 12:09:23PM +0000, Oleksandr Andrushchenko
> wrote:
> >>>>>>>> Hi, Julien!
> >>>>>>>>
> >>>>>>>> On 27.10.21 20:35, Julien Grall wrote:
> >>>>>>>>> Hi Oleksandr,
> >>>>>>>>>
> >>>>>>>>> On 27/10/2021 09:25, Oleksandr Andrushchenko wrote:
> >>>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >>>>>>>>>>
> >>>>>>>>>> While in the vPCI MMIO trap handlers for the guest PCI host bridge
> >>>>>>>>>> it is not enough for the SBDF translation to simply call
> >>>>>>>>>> VPCI_ECAM_BDF(info->gpa), as the base address may not be aligned in
> >>>>>>>>>> a way that makes the translation always work. If the address is not
> >>>>>>>>>> adjusted with respect to the base address, the SBDF may not be
> >>>>>>>>>> converted properly and Xen crashes:
> >>>>>>>>>>
> >>>>>>>>>> (XEN) vpci_mmio_read 0000:65:1a.0 reg 8bc gpa e65d08bc
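
For context, the fix being discussed boils down to subtracting the emulated
ECAM window base from the faulting guest address before decoding the SBDF.
A rough sketch of the idea (not the exact patch; the helper name and types are
illustrative, while VPCI_ECAM_BDF and GUEST_VPCI_ECAM_BASE are the macros
referenced in this thread):

    /* Decode the SBDF relative to the emulated ECAM window rather than
     * from the raw guest physical address. */
    static pci_sbdf_t vpci_sbdf_from_gpa(paddr_t gpa)
    {
        pci_sbdf_t sbdf;

        /* VPCI_ECAM_BDF(gpa) alone is only correct for a zero-based window,
         * so subtract the window base first. */
        sbdf.sbdf = VPCI_ECAM_BDF(gpa - GUEST_VPCI_ECAM_BASE);
        sbdf.seg = 0; /* a single segment/host bridge is assumed in this sketch */

        return sbdf;
    }
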
> >>>>>>>>> I can't find a printk() that may output this message. Where does
> >>>>>>>>> this come from?
> >>>>>>>> That was a debug print. I shouldn't have used it in the patch
> >>>>>>>> description; it should probably go after "---" to better explain
> >>>>>>>> what's happening.
> >>>>>>>>> Anyway, IIUC the guest physical address is 0xe65d08bc which, if
> I am not mistaken, doesn't belong to the range advertised for
> GUEST_VPCI_ECAM.
> >>>>>>>> This is from dom0 I am working on now.
> >>>>>>>>> IMHO, the stack trace should come from upstream Xen, or we need
> >>>>>>>>> some information to explain how this was reproduced.
> >>>>>>>>>
> >>>>>>>>>> (XEN) Data Abort Trap. Syndrome=0x6
> >>>>>>>>>> (XEN) Walking Hypervisor VA 0x467a28bc on CPU0 via TTBR 0x00000000481d5000
> >>>>>>>>> I can understand that if we don't subtract GUEST_VPCI_ECAM, we
> >>>>>>>>> would (in theory) not get the correct BDF. But... I don't understand
> >>>>>>>>> how this would result in a data abort in the hypervisor.
> >>>>>>>>>
> >>>>>>>>> In fact, I think the vPCI code should be resilient enough to not
> crash if we pass the wrong BDF.
> >>>>>>>> Well, there is no (?) easy way to validate the SBDF. And this could
> >>>>>>>> be a problem if we have a misbehaving guest which may force Xen to
> >>>>>>>> access memory beyond that of the PCI host bridge.
> >>>>>>> How could that be? The ECAM region exposed to the guest should be
> >>>>>>> the same as the physical one for dom0?
> >>>>>> Ok, I have a Designware PCI host which has 2 ECAM regions (I am
> >>>>>> starting to implement the driver for it, so I can be wrong here):
> >>>>>> - Root Complex ECAM area ("dbi"), which is something like 0x3000 bytes
> >>>>>> long
> >>>>>> - "Client" ECAM area ("config")
> >>>>>> So from Dom0 POV we have 2 ECAM regions and for the guest
> >>>>>> we always emulate a single big region:
> >>>>> You need support for multiple ECAM regions. That's how we do it on
> >>>>> x86 PVH dom0. See register_vpci_mmcfg_handler and related machinery.
> >>>> Is it common for a PCI host bridge to have multiple ECAM regions?
> >>>> Currently on Arm we were about to support "pci-host-ecam-generic" [1],
> >>>> e.g. generic ECAM host bridge which normally (?) has a single ECAM
> >>>> region [2]. But the host bridge I want to support has multiple, so
> >>>> strictly speaking it is not the one that we implement.
> >>> It's possible on x86 to have multiple ECAM regions, whether that means
> >>> multiple host bridges, or host bridges having multiple ECAM regions is
> >>> unknown to me. It's all reported in the MCFG ACPI table (see PCI
> >>> Firmware document for the detailed description of MCFG) using the
> >>> "Configuration Space Base Address Allocation Structure", and there can
> >>> be multiple of those structures.
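
For reference, the x86 PVH dom0 path mentioned above registers one vPCI MMCFG
handler per MCFG allocation structure, roughly as in the sketch below (names
and argument order recalled from memory; see register_vpci_mmcfg_handler() and
the dom0 build code in Xen for the authoritative version):

    /* One trap region per "Configuration Space Base Address Allocation
     * Structure" parsed from the MCFG table. */
    unsigned int i;

    for ( i = 0; i < pci_mmcfg_config_num; i++ )
    {
        const struct acpi_mcfg_allocation *cfg = &pci_mmcfg_config[i];

        register_vpci_mmcfg_handler(d, cfg->address, cfg->start_bus_number,
                                    cfg->end_bus_number, cfg->pci_segment);
    }
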
> >> As we are currently supporting the generic ECAM host bridge, which
> >> has a single ECAM region, I think the existing code we have and are
> >> about to upstream is ok as-is for now.
> >> I own a bridge which has 2 ECAM regions, so I will work towards
> >> adding its support soon.
> >>>
> >>>> Arm folks, do we want this generalization at this moment to align
> >>>> with x86 in this respect?
> >>>>
> >>>> We can live with the current approach and when I have my driver
> implemented
> >>>> I can send patches to make that generalization.
> >>>>>> /*
> >>>>>>      * 256 MB is reserved for VPCI configuration space based on calculation
> >>>>>>      * 256 buses x 32 devices x 8 functions x 4 KB = 256 MB
> >>>>>>      */
> >>>>>> #define GUEST_VPCI_ECAM_BASE xen_mk_ullong(0x10000000)
> >>>>>> #define GUEST_VPCI_ECAM_SIZE xen_mk_ullong(0x10000000)
> >>>>>>
> >>>>>> So, we have the base address and size of the emulated ECAM space,
> >>>>>> which are not connected to the real host bridge.
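
As a side note, the 256 MB figure follows directly from the standard PCIe ECAM
offset layout; a small illustration (standard ECAM decode, with macro names
made up for this example):

    /* ECAM offset layout:
     *   bits [27:20] bus, [19:15] device, [14:12] function, [11:0] register
     * hence 256 buses x 32 devices x 8 functions x 4 KB = 0x10000000 (256 MB). */
    #define ECAM_BUS(off)   (((off) >> 20) & 0xff)
    #define ECAM_DEV(off)   (((off) >> 15) & 0x1f)
    #define ECAM_FN(off)    (((off) >> 12) & 0x07)
    #define ECAM_REG(off)   ((off) & 0xfff)
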
> >>>>>>> And for domUs you really need to fix vpci_{read,write} to not
> >>>>>>> pass through accesses that are not explicitly handled.
> >>>>>> Do you mean that we need to validate SBDFs there?
> >>>>>> This can be tricky if we have a use-case where a PCI device being
> >>>>>> passed through is not put at 0000:00:0.0, but is requested to be at,
> >>>>>> for example, 0000:0d:0.0. So, we need to go over the list of virtual
> >>>>>> devices and see if the SBDF the guest is trying to access is a valid
> >>>>>> SBDF.
> >>>>>> Is this what you mean?
> >>>>> No, you need to prevent accesses to registers not explicitly handled
> >>>>> by vpci. I.e.: do not forward unhandled accesses to
> >>>>> vpci_{read,write}_hw().
> >>>> I see, so those which have no handlers are not passed to the hardware.
> >>>> I need to see how to do that
> >>> Indeed. Without fixing that, passthrough to domUs is completely unsafe,
> >>> as you allow domUs full access to registers not explicitly handled by
> >>> the current vPCI code.
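
A minimal sketch of what "not forwarding unhandled accesses" could look like
(the helper below is hypothetical; vpci_read_hw()/vpci_write_hw() are the
existing fall-back paths being discussed and is_hardware_domain() is the usual
Xen predicate):

    /* When no vPCI handler covers an access, only dom0 reaches the device;
     * guests read back all ones (size masking omitted for brevity). */
    static uint32_t unhandled_cfg_read(const struct domain *d, pci_sbdf_t sbdf,
                                       unsigned int reg, unsigned int size)
    {
        if ( is_hardware_domain(d) )
            return vpci_read_hw(sbdf, reg, size); /* dom0: forward to device */

        return ~(uint32_t)0;                      /* domU: do not forward */
    }
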
> >> Well, my understanding is: we can let the guest access whatever
> >> registers it wants with the following exceptions:
> >> - "special" registers we already trap in vPCI, e.g. command, BARs
> >> - we must not let the guest go out of the configuration space of a
> >> specific PCI device, e.g. prevent it from accessing configuration
> >> spaces of other devices.
> >> The rest of the accesses seem ok to me, as we do not really want to:
> >> - have handlers for and emulate all possible registers
> >> - make the guest fail if it accesses a valid register which
> >> we do not emulate.
> >
> > I am trying to review your patch, please point out if there is anything
> > wrong. IIUC, vPCI only emulates some registers, and forwards unhandled
> > accesses to the physical device configuration space (if the accesses pass
> > validation)?
> Right
> > Does that make the context inconsistent in the physical device's
> > configuration space?
> It is always consistent for the hardware domain, and some parts of it are
> emulated for guests.
> > For example, one register in the physical device
> > config space is related to another register, but we emulate
> > only one of them in vPCI?
> So, we trap for all domains and emulate for guests, i.e. the hardware domain's
> view on the registers is consistent. For guests we emulate:
> - PCI_COMMAND - not to allow INTx, as we do not support that on Arm
> - BARs - we emulate the guest's view on these according to the memory spaces
> of the emulated host bridge, so the real BARs still have their physical
> values, but guests see emulated ones
> 
> Hope this helps

Thanks, it's very helpful!
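
For anyone following along, a minimal sketch of the PCI_COMMAND handling
described above (the handler name is hypothetical; PCI_COMMAND_INTX_DISABLE and
pci_conf_write16() are the usual Xen/PCI definitions). BARs are handled in a
similar spirit: the guest is shown addresses from the emulated host bridge's
memory spaces while the physical BARs keep their real values.

    /* Guests must not enable INTx (not supported by vPCI on Arm), so the
     * disable bit is forced on before the write reaches the device. */
    static void guest_cmd_write(const struct pci_dev *pdev, unsigned int reg,
                                uint32_t cmd, void *data)
    {
        cmd |= PCI_COMMAND_INTX_DISABLE; /* keep INTx disabled for domUs */
        pci_conf_write16(pdev->sbdf, reg, cmd);
    }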

> >
> >
> >>>
> >>> Regards, Roger.
> >>>
> >> Thanks,
> >> Oleksandr
> >>
