From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	Rahul Singh <rahul.singh@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Subject: Re: [PATCH] xen/arm: fix SBDF calculation for vPCI MMIO handlers
Date: Mon, 1 Nov 2021 06:14:40 +0000
Message-ID: <0ba7aa82-40bc-c0d3-38a2-8c4d141d0afd@epam.com>
In-Reply-To: <YXuj4frtHIRuSgOO@Air-de-Roger>



On 29.10.21 10:33, Roger Pau Monné wrote:
> On Thu, Oct 28, 2021 at 05:55:25PM +0000, Oleksandr Andrushchenko wrote:
>>
>> On 28.10.21 19:03, Roger Pau Monné wrote:
>>> On Thu, Oct 28, 2021 at 02:23:34PM +0000, Oleksandr Andrushchenko wrote:
>>>> On 28.10.21 16:36, Roger Pau Monné wrote:
>>>>> On Thu, Oct 28, 2021 at 12:09:23PM +0000, Oleksandr Andrushchenko wrote:
>>>>>> Hi, Julien!
>>>>>>
>>>>>> On 27.10.21 20:35, Julien Grall wrote:
>>>>>>> Hi Oleksandr,
>>>>>>>
>>>>>>> On 27/10/2021 09:25, Oleksandr Andrushchenko wrote:
>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>>
>>>>>>>> While in the vPCI MMIO trap handlers for the guest PCI host bridge it is
>>>>>>>> not enough for the SBDF translation to simply call VPCI_ECAM_BDF(info->gpa),
>>>>>>>> as the base address may not be aligned in a way that makes the translation
>>>>>>>> always work. If the address is not adjusted with respect to the base, the
>>>>>>>> handler may fail to properly convert the SBDF and crash:
>>>>>>>>
>>>>>>>> (XEN) vpci_mmio_read 0000:65:1a.0 reg 8bc gpa e65d08bc
>>>>>>> I can't find a printk() that may output this message. Where does this come from?
>>>>>> That was a debug print. I shouldn't have used it in the patch description;
>>>>>> it probably belongs after the "---" marker, to better explain what's happening.
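(As an aside, the adjustment under discussion is roughly the following; a
minimal sketch for illustration, not the exact hunk from the patch:)

    /* Make the gpa relative to the emulated ECAM window before extracting
     * the BDF, instead of using the absolute guest physical address. */
    pci_sbdf_t sbdf;

    sbdf.sbdf = VPCI_ECAM_BDF(info->gpa - GUEST_VPCI_ECAM_BASE);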
>>>>>>> Anyway, IIUC the guest physical address is 0xe65d08bc which, if I am not mistaken, doesn't belong to the range advertised for GUEST_VPCI_ECAM.
>>>>>> This is from the dom0 I am working on now.
>>>>>>> IMHO, the stack trace should come from upstream Xen, or needs some information to explain how this was reproduced.
>>>>>>>
>>>>>>>> (XEN) Data Abort Trap. Syndrome=0x6
>>>>>>>> (XEN) Walking Hypervisor VA 0x467a28bc on CPU0 via TTBR 0x00000000481d5000
>>>>>>> I can understand that if we don't subtract GUEST_VPCI_ECAM, we would (in theory) not get the correct BDF. But... I don't understand how this would result in a data abort in the hypervisor.
>>>>>>>
>>>>>>> In fact, I think the vPCI code should be resilient enough to not crash if we pass the wrong BDF.
>>>>>> Well, there is no (?) easy way to validate an SBDF. And this could be a problem if we have a misbehaving
>>>>>> guest which may force Xen to access memory beyond that of the PCI host bridge.
>>>>> How could that be? The ECAM region exposed to the guest should be
>>>>> the same as the physical one for dom0?
>>>> Ok, I have a Designware PCI host which has 2 ECAM regions (I am starting to
>>>> implement the driver for it, so I can be wrong here):
>>>> - Root Complex ECAM area ("dbi"), which is something like 0x3000 bytes long
>>>> - "Client" ECAM area ("config")
>>>> So from Dom0's POV we have 2 ECAM regions, and for the guest
>>>> we always emulate a single big region:
>>> You need support for multiple ECAM regions. That's how we do it on x86
>>> PVH dom0. See register_vpci_mmcfg_handler and related machinery.
>> Is it common for a PCI host bridge to have multiple ECAM regions?
>> Currently on Arm we are about to support "pci-host-ecam-generic" [1],
>> i.e. a generic ECAM host bridge which normally (?) has a single ECAM
>> region [2]. But the host bridge I want to support has multiple, so
>> strictly speaking it is not the one that we implement.
> It's possible on x86 to have multiple ECAM regions; whether that means
> multiple host bridges, or host bridges having multiple ECAM regions, is
> unknown to me. It's all reported in the MCFG ACPI table (see the PCI
> Firmware document for the detailed description of MCFG) using the
> "Configuration Space Base Address Allocation Structure", and there can
> be multiple of those structures.
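(For reference, each of those MCFG entries looks roughly like this; the
layout follows the PCI Firmware Specification, but the struct and field
names here are mine:)

    /* One "Configuration Space Base Address Allocation Structure" from
     * the ACPI MCFG table. */
    struct mcfg_entry {
        uint64_t base_address;  /* base of the ECAM window */
        uint16_t segment;       /* PCI segment group number */
        uint8_t  start_bus;     /* first bus number decoded */
        uint8_t  end_bus;       /* last bus number decoded */
        uint32_t reserved;
    } __packed;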
As we are currently supporting the generic ECAM host bridge, which
has a single ECAM region, I think the existing code we have and are
about to upstream is OK as is for now.
I own a bridge which has 2 ECAM regions, so I will work towards
adding support for it soon; a rough sketch of what I have in mind follows.
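(Hypothetical only: all names here are made up, loosely modeled on what
x86 does with register_vpci_mmcfg_handler.)

    /* Describe each ECAM window of a host bridge, so that a trap handler
     * can locate the window covering a given guest physical address. */
    struct ecam_window {
        paddr_t base;       /* guest physical base of the window */
        paddr_t size;       /* size of the window */
        uint8_t bus_start;  /* first bus number decoded by the window */
    };

    static const struct ecam_window *ecam_find_window(
        const struct ecam_window *wins, unsigned int nr, paddr_t gpa)
    {
        unsigned int i;

        for ( i = 0; i < nr; i++ )
            if ( gpa >= wins[i].base && gpa - wins[i].base < wins[i].size )
                return &wins[i];

        return NULL;
    }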
>
>> Arm folks, do we want this generalization at this moment, to align with
>> x86 in this respect?
>>
>> We can live with the current approach, and when I have my driver implemented
>> I can send patches to make that generalization.
>>>> /*
>>>>     * 256 MB is reserved for VPCI configuration space based on calculation
>>>>     * 256 buses x 32 devices x 8 functions x 4 KB = 256 MB
>>>>     */
>>>> #define GUEST_VPCI_ECAM_BASE    xen_mk_ullong(0x10000000)
>>>> #define GUEST_VPCI_ECAM_SIZE    xen_mk_ullong(0x10000000)
>>>>
>>>> So, we have the base address and size of the emulated ECAM space,
>>>> which are not connected to the real host bridge.
>>>>> And for domUs you really need to fix vpci_{read,write} to not
>>>>> pass through accesses that are not explicitly handled.
>>>> Do you mean that we need to validate SBDFs there?
>>>> This can be tricky if we have a use case where a PCI device being
>>>> passed through is not put at 0000:00:0.0, but is requested to be at,
>>>> for example, 0000:0d:0.0. So, we need to go over the list of virtual
>>>> devices and see whether the SBDF the guest is trying to access is a
>>>> valid SBDF. Is this what you mean?
>>> No, you need to prevent accesses to registers not explicitly handled
>>> by vPCI, i.e. do not forward unhandled accesses to
>>> vpci_{read,write}_hw().
>> I see, so accesses which have no handlers are not passed to the hardware.
>> I need to see how to do that.
> Indeed. Without fixing that, passthrough to domUs is completely unsafe,
> as you allow domUs full access to registers not explicitly handled by
> the current vPCI code.
Well, my understanding is: we can let the guest access whatever
registers it wants, with the following exceptions:
- "special" registers we already trap in vPCI, e.g. command, BARs;
- we must not let the guest go outside the configuration space of a
specific PCI device, e.g. we must prevent it from accessing the
configuration spaces of other devices.
The rest of the accesses seem OK to me, as we do not really want to:
- have handlers for, and emulate, all possible registers;
- make the guest fail when it accesses a valid register which we do
not emulate.
A rough sketch of the kind of check I have in mind follows.
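(Hypothetical code; exact struct fields and locking are glossed over, and
this is not a proposal for the final implementation:)

    /* Only forward an access if the SBDF belongs to a device assigned to
     * the domain; otherwise treat the access as unhandled instead of
     * calling vpci_{read,write}_hw(). */
    static bool vpci_sbdf_valid(const struct domain *d, pci_sbdf_t sbdf)
    {
        const struct pci_dev *pdev;

        list_for_each_entry ( pdev, &d->pdev_list, domain_list )
            if ( pdev->sbdf.sbdf == sbdf.sbdf )
                return true;

        return false;
    }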
>
> Regards, Roger.
>
Thanks,
Oleksandr
