From: Julien Grall <julien.grall@linaro.org>
To: Manish Jaggi <mjaggi@caviumnetworks.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: edgar.iglesias@xilinx.com, okaya@qti.qualcomm.com,
	Wei Chen <Wei.Chen@arm.com>, Steve Capper <Steve.Capper@arm.com>,
	Andre Przywara <andre.przywara@arm.com>,
	manish.jaggi@caviumnetworks.com, punit.agrawal@arm.com,
	vikrams@qti.qualcomm.com, "Goel, Sameer" <sgoel@qti.qualcomm.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Dave P Martin <Dave.Martin@arm.com>,
	Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>,
	roger.pau@citrix.com
Subject: Re: [RFC] ARM PCI Passthrough design document
Date: Tue, 30 May 2017 10:33:48 +0100
Message-ID: <718fea24-4ee0-0828-fcf9-063e2e4a5f37@linaro.org>
In-Reply-To: <bf0d5f38-73de-dd1c-e17e-5c06faf197a3@caviumnetworks.com>



On 30/05/17 06:53, Manish Jaggi wrote:
> Hi Julien,
>
> On 5/29/2017 11:44 PM, Julien Grall wrote:
>>
>>
>> On 05/29/2017 03:30 AM, Manish Jaggi wrote:
>>> Hi Julien,
>>
>> Hello Manish,
>>
>>> On 5/26/2017 10:44 PM, Julien Grall wrote:
>>>> PCI pass-through allows the guest to receive full control of
>>>> physical PCI devices. This means the guest will have full and
>>>> direct access to the PCI device.
>>>>
>>>> ARM supports a kind of guest that exploits the virtualization
>>>> support in hardware as much as possible. The guest relies on PV
>>>> drivers only for I/O (e.g. block, network), and interrupts come
>>>> through the virtualized interrupt controller, therefore no big
>>>> changes are required within the kernel.
>>>>
>>>> As a consequence, it would be possible to replace PV drivers by
>>>> assigning real devices to the guest for I/O access. Xen on ARM
>>>> would therefore be able to run unmodified operating systems.
>>>>
>>>> To achieve this goal, it looks more sensible to go towards
>>>> emulating the host bridge (there will be more details later).
>>> IIUC this means that domU would have an emulated host bridge and dom0
>>> would see the actual host bridge?
>>
>> You don't want the hardware domain and Xen accessing the configuration
>> space at the same time. So if Xen is in charge of the host bridge,
>> then an emulated host bridge should be exposed to the hardware domain.
> I believe that in the x86 case dom0 and Xen both access the config
> space, in the context of the PCI device add hypercall. That's when the
> pci_config_XXX functions in Xen are called.

I don't understand how this is related to what I said... If DOM0 has an
emulated host bridge, it will not be possible for both to poke the real
hardware at the same time, as only Xen would do hardware access.

>>
>> Although, this depends on who is in charge of the host bridge. As you
>> may have noticed, this design document is proposing two
>> ways to handle configuration space access. At the moment any generic
>> host bridge (see the definition in the design document) will be
>> handled in Xen and the hardware domain will have an emulated host bridge.
>>
> So in the case of a generic host bridge, Xen will manage the config
> space and provide an emulated interface to dom0, and accesses would be
> trapped by Xen.
> Essentially the goal is to scan all PCI devices and register them with
> Xen (which in turn will configure the SMMU).
> For a generic host bridge, this can be done either in dom0 or Xen. The
> only doubt here is what extra benefit the emulated host bridge gives in
> the dom0 case.

Because then you don't have two entities accessing the hardware at the
same time; with two, you don't know how it will behave. You may also
want to trap some registers for configuration. Note that this is what is
already done on x86.
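
On the registration point: the dom0 side already exists on x86 in
linux/drivers/xen/pci.c via the PHYSDEVOP_pci_device_add hypercall; the
ARM plumbing is what this thread is about. A rough sketch of that path,
modelled on xen_add_device() (field names from the public physdev
interface; error handling and SR-IOV/PXM flags omitted):

    #include <linux/pci.h>
    #include <xen/interface/physdev.h>
    #include <asm/xen/hypercall.h>

    /* Sketch of dom0 registering a discovered device with Xen. Xen can
     * then probe the device's config space itself and program the SMMU
     * for the new device. */
    static int register_pci_device_with_xen(struct pci_dev *pci_dev)
    {
        struct physdev_pci_device_add add = {
            .seg   = pci_domain_nr(pci_dev->bus),
            .bus   = pci_dev->bus->number,
            .devfn = pci_dev->devfn,
        };

        return HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add);
    }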

[...]

>>> For mapping the MMIO space of the device in stage-2, we need to add
>>> support in Xen, or via a map hypercall in linux/drivers/xen/pci.c.
>>
>> Mapping MMIO space in stage-2 is not PCI specific and is already
>> addressed in Xen 4.9 (see commit 80f9c31 "xen/arm: acpi: Map MMIO on
>> fault in stage-2 page table for the hardware domain"). So I don't
>> understand why we should care about that here...
>>
> This approach is OK.
> But we could have a more granular approach than trapping, IMHO.
> For ACPI:
>   - Xen parses the MCFG and can map the PCI host bridge (emulated or
>     original) in stage-2 for dom0.
>   - Device MMIO can be mapped in stage-2 alongside the pci_device_add
>     call.
> What do you think?

There are plenty of ways to map MMIO today, and again this is not
related to this design document. It does not matter how you are going to
map (trapping, XENMEM_add_to_physmap, parsing MCFG, reading BARs...) at
this stage.
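
For reference, whichever policy selects the region (a fault, the MCFG,
BARs, or a hypercall), on ARM the stage-2 mapping itself funnels into
map_mmio_regions(). A minimal sketch against the Xen 4.9-era prototype,
assuming an identity (gfn == mfn) mapping for the hardware domain; the
helper name is made up for illustration:

    /* Sketch only; the real callers live in e.g. the traps/ACPI code. */
    static int hwdom_map_device_mmio(struct domain *d, paddr_t addr,
                                     paddr_t size)
    {
        /* The hardware domain is identity mapped: gfn == mfn. */
        return map_mmio_regions(d, _gfn(paddr_to_pfn(addr)),
                                PFN_UP(size), _mfn(paddr_to_pfn(addr)));
    }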

-- 
Julien Grall

