* [RFC + Queries] Flow of PCI passthrough in ARM
@ 2014-09-18 11:34 manish jaggi
  2014-09-22 10:45 ` Stefano Stabellini
  0 siblings, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-09-18 11:34 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Stefano Stabellini, Ian Campbell,
	manish.jaggi, Prasun Kapoor, Vijay Kilari

Hi,
Below is the flow I am working on. Please provide your comments; I
have a couple of queries as well.

a) The device tree has smmu nodes, and each smmu node has the mmu-master property.
In our SoC DT the mmu-master is a pcie node in the device tree.

b) Xen parses the device tree and prepares a list which stores the pci
device tree node pointers. The order in the device tree is mapped to the
segment number in subsequent calls. For example, the 1st pci node found is
segment 0, the 2nd is segment 1.

c) During SMMU init the pcie nodes in DT are saved as smmu masters.

d) Dom0 enumerates PCI devices and calls the hypercall PHYSDEVOP_pci_device_add.
 - In Xen the SMMU iommu_ops add_device is called; I have implemented
the add_device function.
 - In the add_device function the segment number is used to locate the
device tree node pointer of the pcie node, which helps to find the
corresponding SMMU.
 - In the same PHYSDEVOP the BAR regions are mapped to Dom0.

Note: the current SMMU driver maps the complete domain address space
for the device in the SMMU hardware.

The above flow works currently for us.
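
For reference, a rough sketch of what my add_device hook does (the helper
names and the exact signature below are only illustrative, not the real Xen
functions):

/*
 * Sketch only: pci_seg_to_dt_node(), find_smmu_for_master() and
 * arm_smmu_attach_master() are placeholder names, not real Xen APIs.
 */
static int arm_smmu_add_device(u8 devfn, struct pci_dev *pdev)
{
    /* pdev->seg was derived from the order of the pci nodes in the DT */
    struct dt_device_node *pcie_node = pci_seg_to_dt_node(pdev->seg);
    struct arm_smmu_device *smmu;

    if ( !pcie_node )
        return -ENODEV;

    /* find the SMMU that lists this pcie node as its mmu-master */
    smmu = find_smmu_for_master(pcie_node);
    if ( !smmu )
        return -ENODEV;

    /* map the whole of the domain's address space for this master */
    return arm_smmu_attach_master(smmu, pcie_node, pdev->domain);
}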

Now, when I call pci-assignable-add, I see that the iommu_ops
remove_device in the SMMU driver is not called. If that is not called, the
SMMU would still have the dom0 address space mappings for that device.

Can you please suggest the best place (kernel / xl tools) to put the
code which would call remove_device in iommu_ops in the control
flow from pci-assignable-add?

One way I see is to introduce a DOMCTL_iommu_remove_device in
pci-assignable-add / pci-detach and a DOMCTL_iommu_add_device in
pci-attach. Is that a valid approach?
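
For illustration, roughly what I have in mind (only a sketch; neither sub-op
nor the structure below exists in Xen today):

/* Hypothetical DOMCTL sub-ops -- nothing like this exists in Xen yet. */
#define XEN_DOMCTL_iommu_remove_device  1001  /* pci-assignable-add / pci-detach */
#define XEN_DOMCTL_iommu_add_device     1002  /* pci-attach */

struct xen_domctl_iommu_device {
    /* IN: device to add to / remove from the domain's IOMMU context */
    uint32_t seg;    /* PCI segment (root complex index) */
    uint8_t  bus;
    uint8_t  devfn;
};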

-Manish


* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-09-18 11:34 [RFC + Queries] Flow of PCI passthrough in ARM manish jaggi
@ 2014-09-22 10:45 ` Stefano Stabellini
  2014-09-22 11:09   ` Ian Campbell
  2014-09-24 10:53   ` manish jaggi
  0 siblings, 2 replies; 38+ messages in thread
From: Stefano Stabellini @ 2014-09-22 10:45 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ian Campbell, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, psawargaonkar, anup.patel

On Thu, 18 Sep 2014, manish jaggi wrote:
> Hi,
> Below is the flow I am working on, Please provide your comments, I
> have a couple of queries as well..
> 
> a) Device tree has smmu nodes and each smmu node has the mmu-master property.
> In our Soc DT the mmu-master is a pcie node in device tree.

Do you mean that both the smmu nodes and the pcie node have the
mmu-master property? The pcie node is the pcie root complex, right?


> b) Xen parses the device tree and prepares a list which stores the pci
> device tree node pointers. The order in device tree is mapped to
> segment number in subsequent calls. For eg 1st pci node found is
> segment 0, 2nd segment 1

What's a segment number? Something from the PCI spec?
If you have several pci nodes on device tree, does that mean that you
have several different pcie root complexes?


> c) During SMMU init the pcie nodes in DT are saved as smmu masters.

At this point you should also be able to find via DT the stream-id range
supported by each SMMU and program the SMMU with them, assigning
everything to dom0.


> d) Dom0 Enumerates PCI devices, calls hypercall PHYSDEVOP_pci_device_add.
>  - In Xen the SMMU iommu_ops add_device is called. I have implemented
> the add_device function.
> - In the add_device function
>  the segment number is used to locate the device tree node pointer of
> the pcie node which helps to find out the corresponding smmu.
> - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
> 
> Note: The current SMMU driver maps the complete Domain's Address space
> for the device in SMMU hardware.
> 
> The above flow works currently for us.

It would be nice to be able to skip d): in a system where all dma capable
devices are behind smmus, we should be capable of booting dom0 without
the 1:1 mapping hack. If we do that, it would be better to program the
smmus before booting dom0. Otherwise there is a risk that dom0 is going
to start using these devices and doing dma before we manage to secure
the devices via smmus.
                          
Of course we can do that if there are no alternatives. But in our case
we should be able to extract the stream-ids from device tree and program
the smmus right away, right?  Do we really need to wait for dom0 to call
PHYSDEVOP_pci_device_add? We could just assign everything to dom0 for a
start.

I would like to know from the x86 guys, if this is really how it is
supposed to work on PVH too. Do we rely on PHYSDEVOP_pci_device_add to
program the IOMMU?


> Now when I call pci-assignable-add I see that the iommu_ops
> remove_device in smmu driver is not called. If that is not called the
> SMMU would still have the dom0 address space mappings for that device
> 
> Can you please suggest the best place (kernel / xl-tools) to put the
> code which would call the remove_device in iommu_opps in the control
> flow from pci-assignable-add.
> 
> One way I see is to introduce a DOMCTL_iommu_remove_device in
> pci-assignable-add / pci-detach and DOMCTL_iommu_add_device in
> pci-attach. Is that a valid approach  ?

I am not 100% sure, but I think that before assigning a PCI device to
another guest, you are supposed to bind the device to xen-pciback (see
drivers/xen/xen-pciback, also see
http://wiki.xen.org/wiki/Xen_PCI_Passthrough). The pciback driver is
going to hide the device from dom0 and, as a consequence,
drivers/xen/pci.c:xen_remove_device ends up being called, which issues a
PHYSDEVOP_pci_device_remove hypercall.


* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-09-22 10:45 ` Stefano Stabellini
@ 2014-09-22 11:09   ` Ian Campbell
  2014-09-24 10:56     ` manish jaggi
  2014-09-24 10:53   ` manish jaggi
  1 sibling, 1 reply; 38+ messages in thread
From: Ian Campbell @ 2014-09-22 11:09 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: anup.patel, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, manish jaggi

On Mon, 2014-09-22 at 11:45 +0100, Stefano Stabellini wrote:
> On Thu, 18 Sep 2014, manish jaggi wrote:
> > Hi,
> > Below is the flow I am working on, Please provide your comments, I
> > have a couple of queries as well..
> > 
> > a) Device tree has smmu nodes and each smmu node has the mmu-master property.
> > In our Soc DT the mmu-master is a pcie node in device tree.
> 
> Do you mean that both the smmu nodes and the pcie node have the
> mmu-master property? The pcie node is the pcie root complex, right?
> 
> 
> > b) Xen parses the device tree and prepares a list which stores the pci
> > device tree node pointers. The order in device tree is mapped to
> > segment number in subsequent calls. For eg 1st pci node found is
> > segment 0, 2nd segment 1
> 
> What's a segment number? Something from the PCI spec?
> If you have several pci nodes on device tree, does that mean that you
> have several different pcie root complexes?
> 
> 
> > c) During SMMU init the pcie nodes in DT are saved as smmu masters.
> 
> At this point you should also be able to find via DT the stream-id range
> supported by each SMMU and program the SMMU with them, assigning
> everything to dom0.
> 
> 
> > d) Dom0 Enumerates PCI devices, calls hypercall PHYSDEVOP_pci_device_add.
> >  - In Xen the SMMU iommu_ops add_device is called. I have implemented
> > the add_device function.
> > - In the add_device function
> >  the segment number is used to locate the device tree node pointer of
> > the pcie node which helps to find out the corresponding smmu.
> > - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
> > 
> > Note: The current SMMU driver maps the complete Domain's Address space
> > for the device in SMMU hardware.
> > 
> > The above flow works currently for us.
> 
> It would be nice to be able to skip d): in a system where all dma capable
> devices are behind smmus, we should be capable of booting dom0 without
> the 1:1 mapping hack. If we do that, it would be better to program the
> smmus before booting dom0. Otherwise there is a risk that dom0 is going
> to start using these devices and doing dma before we manage to secure
> the devices via smmus.

Without commenting on whether we should take this approach or not, note
that the default before the call to pci_device_add should be to deny, so
there is no risk from a security perspective of doing things this way.

If the dom0 kernel tries to use a PCI device which it hasn't registered
then that would be a dom0 bug under this model. Probably an easily
avoided one since you would naturally want to call pci_device_add during
bus enumeration, I'd imagine, and a dom0 kernel which uses a device
before enumerating it would be pretty broken I think.

Ian.


* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-09-22 10:45 ` Stefano Stabellini
  2014-09-22 11:09   ` Ian Campbell
@ 2014-09-24 10:53   ` manish jaggi
  2014-09-24 12:13     ` Jan Beulich
  2014-09-24 14:10     ` Stefano Stabellini
  1 sibling, 2 replies; 38+ messages in thread
From: manish jaggi @ 2014-09-24 10:53 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, anup.patel

On 22 September 2014 16:15, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Thu, 18 Sep 2014, manish jaggi wrote:
>> Hi,
>> Below is the flow I am working on, Please provide your comments, I
>> have a couple of queries as well..
>>
>> a) Device tree has smmu nodes and each smmu node has the mmu-master property.
>> In our Soc DT the mmu-master is a pcie node in device tree.
>
> Do you mean that both the smmu nodes and the pcie node have the
> mmu-master property? The pcie node is the pcie root complex, right?
>
The pci node is the pcie root complex, and it is the mmu-master in the smmu node:

  smmu1@0x8310,00000000 {
          ...
          mmu-masters = <&pcie1 0x100>;
  };

>> b) Xen parses the device tree and prepares a list which stores the pci
>> device tree node pointers. The order in device tree is mapped to
>> segment number in subsequent calls. For eg 1st pci node found is
>> segment 0, 2nd segment 1
>
> What's a segment number? Something from the PCI spec?
> If you have several pci nodes on device tree, does that mean that you
> have several different pcie root complexes?
>
Yes.
The segment is the pci root complex (RC) number.
>
>> c) During SMMU init the pcie nodes in DT are saved as smmu masters.
>
> At this point you should also be able to find via DT the stream-id range
> supported by each SMMU and program the SMMU with them, assigning
> everything to dom0.
Currently PCIe enumeration is not done in Xen; it is done in dom0.
>
>
>> d) Dom0 Enumerates PCI devices, calls hypercall PHYSDEVOP_pci_device_add.
>>  - In Xen the SMMU iommu_ops add_device is called. I have implemented
>> the add_device function.
>> - In the add_device function
>>  the segment number is used to locate the device tree node pointer of
>> the pcie node which helps to find out the corresponding smmu.
>> - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
>>
>> Note: The current SMMU driver maps the complete Domain's Address space
>> for the device in SMMU hardware.
>>
>> The above flow works currently for us.
>
> It would be nice to be able to skip d): in a system where all dma capable
> devices are behind smmus, we should be capable of booting dom0 without
> the 1:1 mapping hack. If we do that, it would be better to program the
> smmus before booting dom0. Otherwise there is a risk that dom0 is going
> to start using these devices and doing dma before we manage to secure
> the devices via smmus.
>
In our current case we program the SMMU in the
PHYSDEVOP_pci_device_add flow, so the device is mapped before dom0
accesses it; otherwise Xen gets an SMMU fault.

> Of course we can do that if there are no alternatives. But in our case
> we should be able to extract the stream-ids from device tree and program
> the smmus right away, right?  Do we really need to wait for dom0 to call
> PHYSDEVOP_pci_device_add? We could just assign everything to dom0 for a
> start.
>
We cannot get the stream IDs from the device tree, as they are only known after enumeration.

> I would like to know from the x86 guys, if this is really how it is
> supposed to work on PVH too. Do we rely on PHYSDEVOP_pci_device_add to
> program the IOMMU?
>
>
I was waiting, but no one has commented.

>> Now when I call pci-assignable-add I see that the iommu_ops
>> remove_device in smmu driver is not called. If that is not called the
>> SMMU would still have the dom0 address space mappings for that device
>>
>> Can you please suggest the best place (kernel / xl-tools) to put the
>> code which would call the remove_device in iommu_opps in the control
>> flow from pci-assignable-add.
>>
>> One way I see is to introduce a DOMCTL_iommu_remove_device in
>> pci-assignable-add / pci-detach and DOMCTL_iommu_add_device in
>> pci-attach. Is that a valid approach  ?
>
> I am not 100% sure, but I think that before assigning a PCI device to
> another guest, you are supposed to bind the device to xen-pciback (see
> drivers/xen/xen-pciback, also see
> http://wiki.xen.org/wiki/Xen_PCI_Passthrough). The pciback driver is
> going hide the device from dom0 and as a consequence
> drivers/xen/pci.c:xen_remove_device ends up being called, that issues a
> PHYSDEVOP_pci_device_remove hypercall.

xen_remove_device is not called at all. In pci-attach,
iommu_ops->assign_device is called.
In Xen the nomenclature is confusing and there are no comments in iommu.h:
iommu_ops.add_device is called when dom0 issues the hypercall
iommu_ops.assign_dt_device is called when Xen attaches a device tree device to dom0
iommu_ops.assign_device is called when xl pci-attach is called
iommu_ops.reassign_device is called when xl pci-detach is called

As of now we are able to assign devices to domU and the driver in domU is
running; we did some hacks, like:
a) in the xen pcifront driver, bus->msi is assigned to the ITS msi_chip

---- pcifront_scan_root()
...
b = pci_scan_bus_parented(&pdev->xdev->dev, bus,
                  &pcifront_bus_ops, sd);
    if (!b) {
        dev_err(&pdev->xdev->dev,
            "Error creating PCI Frontend Bus!\n");
        err = -ENOMEM;
        pci_unlock_rescan_remove();
        goto err_out;
    }

    bus_entry->bus = b;
+        /* msi_node is a struct device_node *, declared earlier in the
+         * function (declaration not shown in this fragment) */
+        msi_node = of_find_compatible_node(NULL, NULL, "arm,gic-v3-its");
+        if (msi_node) {
+            b->msi = of_pci_find_msi_chip_by_node(msi_node);
+            if (!b->msi) {
+                printk(KERN_ERR "Unable to find bus->msi node\n");
+                goto err_out;
+            }
+        } else {
+            printk(KERN_ERR "Unable to find arm,gic-v3-its compatible node\n");
+            goto err_out;
+        }

----

Using this, the ITS emulation code in Xen is able to trap ITS command
writes by the driver.
But we are facing a problem now, where your help is needed.

The StreamID is generated from the segment:bus:device:function number, which is
fed as the DevID in ITS commands. In Dom0 the StreamID is correctly
generated, but in domU the StreamID for a passthrough device is
0:0:0:0. When emulating this in Xen this is a problem, as Xen does not
know how to get the physical StreamID.

(E.g. xl pci-attach 1 0001:00:05.0
DomU has the device, but in DomU the id is 0000:00:00.0.)

Could you suggest how to go about this?

-Regards
Manish


* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-09-22 11:09   ` Ian Campbell
@ 2014-09-24 10:56     ` manish jaggi
  0 siblings, 0 replies; 38+ messages in thread
From: manish jaggi @ 2014-09-24 10:56 UTC (permalink / raw)
  To: Ian Campbell
  Cc: anup.patel, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, psawargaonkar

On 22 September 2014 16:39, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-09-22 at 11:45 +0100, Stefano Stabellini wrote:
>> On Thu, 18 Sep 2014, manish jaggi wrote:
>> > Hi,
>> > Below is the flow I am working on, Please provide your comments, I
>> > have a couple of queries as well..
>> >
>> > a) Device tree has smmu nodes and each smmu node has the mmu-master property.
>> > In our Soc DT the mmu-master is a pcie node in device tree.
>>
>> Do you mean that both the smmu nodes and the pcie node have the
>> mmu-master property? The pcie node is the pcie root complex, right?
>>
>>
>> > b) Xen parses the device tree and prepares a list which stores the pci
>> > device tree node pointers. The order in device tree is mapped to
>> > segment number in subsequent calls. For eg 1st pci node found is
>> > segment 0, 2nd segment 1
>>
>> What's a segment number? Something from the PCI spec?
>> If you have several pci nodes on device tree, does that mean that you
>> have several different pcie root complexes?
>>
>>
>> > c) During SMMU init the pcie nodes in DT are saved as smmu masters.
>>
>> At this point you should also be able to find via DT the stream-id range
>> supported by each SMMU and program the SMMU with them, assigning
>> everything to dom0.
>>
>>
>> > d) Dom0 Enumerates PCI devices, calls hypercall PHYSDEVOP_pci_device_add.
>> >  - In Xen the SMMU iommu_ops add_device is called. I have implemented
>> > the add_device function.
>> > - In the add_device function
>> >  the segment number is used to locate the device tree node pointer of
>> > the pcie node which helps to find out the corresponding smmu.
>> > - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
>> >
>> > Note: The current SMMU driver maps the complete Domain's Address space
>> > for the device in SMMU hardware.
>> >
>> > The above flow works currently for us.
>>
>> It would be nice to be able to skip d): in a system where all dma capable
>> devices are behind smmus, we should be capable of booting dom0 without
>> the 1:1 mapping hack. If we do that, it would be better to program the
>> smmus before booting dom0. Otherwise there is a risk that dom0 is going
>> to start using these devices and doing dma before we manage to secure
>> the devices via smmus.
>
> Without commenting on whether we should take this approach or not note
> that the default before the call to pci_device_add should be to deny, so
> there is no risk from a security perspective of doing things this way.
>
Yes it is done that way only.

> If the dom0 kernel tries to use a PCI device which it hasn't registered
> then that would be a dom0 bug under this model.
The SMMU would generate a fault to Xen.

> Probably an easily
> avoided one since you would naturally want to call pci_device_add during
> bus enumeration, I'd imagine, and a dom0 kernel which uses a device
> before enumerating it would be pretty broken I think.
>
I am not sure if that is the case here.


> Ian.
>


* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-09-24 10:53   ` manish jaggi
@ 2014-09-24 12:13     ` Jan Beulich
  2014-09-24 14:10     ` Stefano Stabellini
  1 sibling, 0 replies; 38+ messages in thread
From: Jan Beulich @ 2014-09-24 12:13 UTC (permalink / raw)
  To: Stefano Stabellini, manish jaggi
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, anup.patel

>>> On 24.09.14 at 12:53, <manishjaggi.oss@gmail.com> wrote:
> On 22 September 2014 16:15, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>> I would like to know from the x86 guys, if this is really how it is
>> supposed to work on PVH too. Do we rely on PHYSDEVOP_pci_device_add to
>> program the IOMMU?
>>
>>
> I was waiting but no one has commented

I don't think I saw this (and it being an ARM thread it easily could have
slipped my attention). The answer is yes, we do expect Dom0 to report
all PCI devices (irrespective of the bus scan we do on bus zero before
launching Dom0).

Jan


* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-09-24 10:53   ` manish jaggi
  2014-09-24 12:13     ` Jan Beulich
@ 2014-09-24 14:10     ` Stefano Stabellini
  2014-09-24 18:32       ` manish jaggi
  1 sibling, 1 reply; 38+ messages in thread
From: Stefano Stabellini @ 2014-09-24 14:10 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ian Campbell, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, Matt.Evans, psawargaonkar,
	Dave.Martin, anup.patel

CC'ing Matt and Dave at ARM for their opinions about device tree, SMMUs and
stream ids. See below.

On Wed, 24 Sep 2014, manish jaggi wrote:
> On 22 September 2014 16:15, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Thu, 18 Sep 2014, manish jaggi wrote:
> >> Hi,
> >> Below is the flow I am working on, Please provide your comments, I
> >> have a couple of queries as well..
> >>
> >> a) Device tree has smmu nodes and each smmu node has the mmu-master property.
> >> In our Soc DT the mmu-master is a pcie node in device tree.
> >
> > Do you mean that both the smmu nodes and the pcie node have the
> > mmu-master property? The pcie node is the pcie root complex, right?
> >
> pci-node is the pcie root complex. pci node is the mmu master in smmu node.
> 
>   smmu1@0x8310,00000000 {
> ...
> 
>                  mmu-masters = <&pcie1 0x100>;
>          };
> 
> >> b) Xen parses the device tree and prepares a list which stores the pci
> >> device tree node pointers. The order in device tree is mapped to
> >> segment number in subsequent calls. For eg 1st pci node found is
> >> segment 0, 2nd segment 1
> >
> > What's a segment number? Something from the PCI spec?
> > If you have several pci nodes on device tree, does that mean that you
> > have several different pcie root complexes?
> >
> yes.
> segment is the pci rc number.
> >
> >> c) During SMMU init the pcie nodes in DT are saved as smmu masters.
> >
> > At this point you should also be able to find via DT the stream-id range
> > supported by each SMMU and program the SMMU with them, assigning
> > everything to dom0.
> Currently pcie enumeration is not done in xen, it is done in dom0.

Yes, but we don't really need to walk any PCIe busses in order to
program the SMMU, right? We only need the requestor id and the stream id
ranges. We should be able to get them via device tree.


> >> d) Dom0 Enumerates PCI devices, calls hypercall PHYSDEVOP_pci_device_add.
> >>  - In Xen the SMMU iommu_ops add_device is called. I have implemented
> >> the add_device function.
> >> - In the add_device function
> >>  the segment number is used to locate the device tree node pointer of
> >> the pcie node which helps to find out the corresponding smmu.
> >> - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
> >>
> >> Note: The current SMMU driver maps the complete Domain's Address space
> >> for the device in SMMU hardware.
> >>
> >> The above flow works currently for us.
> >
> > It would be nice to be able to skip d): in a system where all dma capable
> > devices are behind smmus, we should be capable of booting dom0 without
> > the 1:1 mapping hack. If we do that, it would be better to program the
> > smmus before booting dom0. Otherwise there is a risk that dom0 is going
> > to start using these devices and doing dma before we manage to secure
> > the devices via smmus.
> >
> In our current case we are programming smmu in
> PHYSDEVOP_pci_device_add flow so before the domain 0 accesses the
> device it is mapped, otherwise xen gets a SMMU fault.

Good.


> > Of course we can do that if there are no alternatives. But in our case
> > we should be able to extract the stream-ids from device tree and program
> > the smmus right away, right?  Do we really need to wait for dom0 to call
> > PHYSDEVOP_pci_device_add? We could just assign everything to dom0 for a
> > start.
> >
> We cannot get streamid from device tree as enumeration is done for the same.

I am not sure what the current state of the device tree spec is, but I
am pretty sure that the intention is to express stream id and requestor
id ranges directly in the dts so that the SMMU can be programmed right
away without walking the PCI bus.


> > I would like to know from the x86 guys, if this is really how it is
> > supposed to work on PVH too. Do we rely on PHYSDEVOP_pci_device_add to
> > program the IOMMU?
> >
> >
> I was waiting but no one has commented

Me too. Everybody is very busy at the moment with the 4.5 release.


> >> Now when I call pci-assignable-add I see that the iommu_ops
> >> remove_device in smmu driver is not called. If that is not called the
> >> SMMU would still have the dom0 address space mappings for that device
> >>
> >> Can you please suggest the best place (kernel / xl-tools) to put the
> >> code which would call the remove_device in iommu_opps in the control
> >> flow from pci-assignable-add.
> >>
> >> One way I see is to introduce a DOMCTL_iommu_remove_device in
> >> pci-assignable-add / pci-detach and DOMCTL_iommu_add_device in
> >> pci-attach. Is that a valid approach  ?
> >
> > I am not 100% sure, but I think that before assigning a PCI device to
> > another guest, you are supposed to bind the device to xen-pciback (see
> > drivers/xen/xen-pciback, also see
> > http://wiki.xen.org/wiki/Xen_PCI_Passthrough). The pciback driver is
> > going hide the device from dom0 and as a consequence
> > drivers/xen/pci.c:xen_remove_device ends up being called, that issues a
> > PHYSDEVOP_pci_device_remove hypercall.
> 
> xen_remove_device is not called at all. in pci-attach
> iommu_ops->assign_device is called.
> In Xen the nomenclature is confusing and no comments are there is iommu.h
> iommu_ops.add_device is when dom0 issues hypercall
> iommu_ops.assign_dt_device is when xen attaches a device tree device to dom0
> iommu_ops.assign_device is when xl pci-attach is called
> iommu_ops.reassign_device is called when xl pci-detach is called
> 
> As of now we are able to assign devices to domU and driver in domU is
> running, we did some hacks like
> a) in xen pci front driver bus->msi is assigned to its msi_chip
> 
> ---- pcifront_scan_root()
> ...
> b = pci_scan_bus_parented(&pdev->xdev->dev, bus,
>                   &pcifront_bus_ops, sd);
>     if (!b) {
>         dev_err(&pdev->xdev->dev,
>             "Error creating PCI Frontend Bus!\n");
>         err = -ENOMEM;
>         pci_unlock_rescan_remove();
>         goto err_out;
>     }
> 
>     bus_entry->bus = b;
> +        msi_node = of_find_compatible_node(NULL,NULL, "arm,gic-v3-its");
> +        if(msi_node) {
> +            b->msi = of_pci_find_msi_chip_by_node(msi_node);
> +            if(!b->msi) {
> +               printk(KERN_ERR"Unable to find bus->msi node \r\n");
> +               goto err_out;
> +            }
> +        }else {
> +               printk(KERN_ERR"Unable to find arm,gic-v3-its
> compatible node \r\n");
> +               goto err_out;
> +        }

It seems to me that of_pci_find_msi_chip_by_node should be called by
common code somewhere else. Maybe people at linux-arm could suggest
where this initialization should go.



> ----
> 
> using this the ITS emulation code in xen is able to trap ITS command
> writes by driver.
> But we are facing a problem now, where your help is needed
> 
> The StreamID is generated by segment: bus : device: number which is
> fed as DevID in ITS commands. In Dom0 the streamID is correctly
> generated but in domU the Stream ID for a passthrough device is
> 0:0:0:0 now when emulating this in Xen it is a problem as xen does not
> know how to get the physical stream id.
> 
> (Eg: xl pci-attach 1 0001:00:05.0
> DomU has the device but in DomU the id is 0000:00:00.0.)
> 
> Could you suggest how to go about this.

I don't think that the ITS patches have been posted yet, so it is
difficult for me to understand the problem and propose a solution.


* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-09-24 14:10     ` Stefano Stabellini
@ 2014-09-24 18:32       ` manish jaggi
  2014-09-25 10:27         ` Stefano Stabellini
  0 siblings, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-09-24 18:32 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, Matt.Evans, Dave.Martin,
	Anup Patel

On 24 September 2014 19:40, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> CC'ing Matt and Dave at ARM for an opinions about device tree, SMMUs and
> stream ids. See below.
>
> On Wed, 24 Sep 2014, manish jaggi wrote:
>> On 22 September 2014 16:15, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > On Thu, 18 Sep 2014, manish jaggi wrote:
>> >> Hi,
>> >> Below is the flow I am working on, Please provide your comments, I
>> >> have a couple of queries as well..
>> >>
>> >> a) Device tree has smmu nodes and each smmu node has the mmu-master property.
>> >> In our Soc DT the mmu-master is a pcie node in device tree.
>> >
>> > Do you mean that both the smmu nodes and the pcie node have the
>> > mmu-master property? The pcie node is the pcie root complex, right?
>> >
>> pci-node is the pcie root complex. pci node is the mmu master in smmu node.
>>
>>   smmu1@0x8310,00000000 {
>> ...
>>
>>                  mmu-masters = <&pcie1 0x100>;
>>          };
>>
>> >> b) Xen parses the device tree and prepares a list which stores the pci
>> >> device tree node pointers. The order in device tree is mapped to
>> >> segment number in subsequent calls. For eg 1st pci node found is
>> >> segment 0, 2nd segment 1
>> >
>> > What's a segment number? Something from the PCI spec?
>> > If you have several pci nodes on device tree, does that mean that you
>> > have several different pcie root complexes?
>> >
>> yes.
>> segment is the pci rc number.
>> >
>> >> c) During SMMU init the pcie nodes in DT are saved as smmu masters.
>> >
>> > At this point you should also be able to find via DT the stream-id range
>> > supported by each SMMU and program the SMMU with them, assigning
>> > everything to dom0.
>> Currently pcie enumeration is not done in xen, it is done in dom0.
>
> Yes, but we don't really need to walk any PCIe busses in order to
> program the SMMU, right? We only need the requestor id and the stream id
> ranges. We should be able to get them via device tree.
>
Yes, but I have a doubt here.
Before booting dom0, for each SMMU the mask in the SMR can be set to enable
stream IDs for dom0.
This can be fixed or read from the device tree.
There are two points here:
a) PCI bus enumeration
b) Programming the SMMU for dom0
For (b) the enumeration is not required, provided we set the mask (see the sketch below).
So are you also saying that (a) should be done in Xen and not in dom0?
If yes, how would dom0 get to know about PCIe EPs, from its device tree?
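
For (b), a minimal sketch of the kind of catch-all match I mean (based on the
SMMUv2 SMR layout; the register offset, the macro names and the S2CR comment
are my assumptions, not code we have):

/*
 * Sketch, untested: an SMMUv2 SMR is VALID[31] | MASK[30:16] | ID[14:0].
 * A MASK bit set to 1 means that ID bit is ignored while matching, so
 * MASK = 0x7fff makes this SMR match every stream ID behind the SMMU.
 */
#define SMR_VALID        (1u << 31)
#define SMR_MASK_SHIFT   16

static void smmu_route_all_streams_to_dom0(void __iomem *gr0_base, int smr_idx)
{
    u32 smr = SMR_VALID | (0x7fff << SMR_MASK_SHIFT) | 0 /* ID */;

    /* 0x800 is the usual SMMU_SMRn base in GR0; the offset is assumed here */
    writel(smr, gr0_base + 0x800 + smr_idx * 4);
    /* the matching S2CR would then point at dom0's stage-2 context bank */
}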

>
>> >> d) Dom0 Enumerates PCI devices, calls hypercall PHYSDEVOP_pci_device_add.
>> >>  - In Xen the SMMU iommu_ops add_device is called. I have implemented
>> >> the add_device function.
>> >> - In the add_device function
>> >>  the segment number is used to locate the device tree node pointer of
>> >> the pcie node which helps to find out the corresponding smmu.
>> >> - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
>> >>
>> >> Note: The current SMMU driver maps the complete Domain's Address space
>> >> for the device in SMMU hardware.
>> >>
>> >> The above flow works currently for us.
>> >
>> > It would be nice to be able to skip d): in a system where all dma capable
>> > devices are behind smmus, we should be capable of booting dom0 without
>> > the 1:1 mapping hack. If we do that, it would be better to program the
>> > smmus before booting dom0. Otherwise there is a risk that dom0 is going
>> > to start using these devices and doing dma before we manage to secure
>> > the devices via smmus.
>> >
>> In our current case we are programming smmu in
>> PHYSDEVOP_pci_device_add flow so before the domain 0 accesses the
>> device it is mapped, otherwise xen gets a SMMU fault.
>
> Good.
>
>
>> > Of course we can do that if there are no alternatives. But in our case
>> > we should be able to extract the stream-ids from device tree and program
>> > the smmus right away, right?  Do we really need to wait for dom0 to call
>> > PHYSDEVOP_pci_device_add? We could just assign everything to dom0 for a
>> > start.
>> >
>> We cannot get streamid from device tree as enumeration is done for the same.
>
> I am not sure what the current state of the device tree spec is, but I
> am pretty sure that the intention is to express stream id and requestor
> id ranges directly in the dts so that the SMMU can be programmed right
> away without walking the PCI bus.
>
>
>> > I would like to know from the x86 guys, if this is really how it is
>> > supposed to work on PVH too. Do we rely on PHYSDEVOP_pci_device_add to
>> > program the IOMMU?
>> >
>> >
>> I was waiting but no one has commented
>
> Me too. Everybody is very busy at the moment with the 4.5 release.
>
>
>> >> Now when I call pci-assignable-add I see that the iommu_ops
>> >> remove_device in smmu driver is not called. If that is not called the
>> >> SMMU would still have the dom0 address space mappings for that device
>> >>
>> >> Can you please suggest the best place (kernel / xl-tools) to put the
>> >> code which would call the remove_device in iommu_opps in the control
>> >> flow from pci-assignable-add.
>> >>
>> >> One way I see is to introduce a DOMCTL_iommu_remove_device in
>> >> pci-assignable-add / pci-detach and DOMCTL_iommu_add_device in
>> >> pci-attach. Is that a valid approach  ?
>> >
>> > I am not 100% sure, but I think that before assigning a PCI device to
>> > another guest, you are supposed to bind the device to xen-pciback (see
>> > drivers/xen/xen-pciback, also see
>> > http://wiki.xen.org/wiki/Xen_PCI_Passthrough). The pciback driver is
>> > going hide the device from dom0 and as a consequence
>> > drivers/xen/pci.c:xen_remove_device ends up being called, that issues a
>> > PHYSDEVOP_pci_device_remove hypercall.
>>
>> xen_remove_device is not called at all. in pci-attach
>> iommu_ops->assign_device is called.
>> In Xen the nomenclature is confusing and no comments are there is iommu.h
>> iommu_ops.add_device is when dom0 issues hypercall
>> iommu_ops.assign_dt_device is when xen attaches a device tree device to dom0
>> iommu_ops.assign_device is when xl pci-attach is called
>> iommu_ops.reassign_device is called when xl pci-detach is called
>>
>> As of now we are able to assign devices to domU and driver in domU is
>> running, we did some hacks like
>> a) in xen pci front driver bus->msi is assigned to its msi_chip
>>
>> ---- pcifront_scan_root()
>> ...
>> b = pci_scan_bus_parented(&pdev->xdev->dev, bus,
>>                   &pcifront_bus_ops, sd);
>>     if (!b) {
>>         dev_err(&pdev->xdev->dev,
>>             "Error creating PCI Frontend Bus!\n");
>>         err = -ENOMEM;
>>         pci_unlock_rescan_remove();
>>         goto err_out;
>>     }
>>
>>     bus_entry->bus = b;
>> +        msi_node = of_find_compatible_node(NULL,NULL, "arm,gic-v3-its");
>> +        if(msi_node) {
>> +            b->msi = of_pci_find_msi_chip_by_node(msi_node);
>> +            if(!b->msi) {
>> +               printk(KERN_ERR"Unable to find bus->msi node \r\n");
>> +               goto err_out;
>> +            }
>> +        }else {
>> +               printk(KERN_ERR"Unable to find arm,gic-v3-its
>> compatible node \r\n");
>> +               goto err_out;
>> +        }
>
> It seems to be that of_pci_find_msi_chip_by_node should be called by
> common code somewhere else. Maybe people at linux-arm would know where
> to suggest this initialization should go.
>
This is a workaround to attach an msi-controller to the xen pcifront bus.
We are avoiding the xen frontend ops for MSI.
>
>
>> ----
>>
>> using this the ITS emulation code in xen is able to trap ITS command
>> writes by driver.
>> But we are facing a problem now, where your help is needed
>>
>> The StreamID is generated by segment: bus : device: number which is
>> fed as DevID in ITS commands. In Dom0 the streamID is correctly
>> generated but in domU the Stream ID for a passthrough device is
>> 0:0:0:0 now when emulating this in Xen it is a problem as xen does not
>> know how to get the physical stream id.
>>
>> (Eg: xl pci-attach 1 0001:00:05.0
>> DomU has the device but in DomU the id is 0000:00:00.0.)
>>
>> Could you suggest how to go about this.
>
> I don't think that the ITS patches have been posted yet, so it is
> difficult for me to understand the problem and propose a solution.

To put it more simply: it is about what StreamID a driver running in
domU sees, which is programmed into the ITS commands,
and how to map the domU StreamID to the actual StreamID in Xen when the
ITS command write traps.


* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-09-24 18:32       ` manish jaggi
@ 2014-09-25 10:27         ` Stefano Stabellini
  2014-10-01 10:37           ` manish jaggi
  0 siblings, 1 reply; 38+ messages in thread
From: Stefano Stabellini @ 2014-09-25 10:27 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ian Campbell, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, Matt.Evans, psawargaonkar,
	Dave.Martin, Anup Patel

On Thu, 25 Sep 2014, manish jaggi wrote:
> On 24 September 2014 19:40, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > CC'ing Matt and Dave at ARM for an opinions about device tree, SMMUs and
> > stream ids. See below.
> >
> > On Wed, 24 Sep 2014, manish jaggi wrote:
> >> On 22 September 2014 16:15, Stefano Stabellini
> >> <stefano.stabellini@eu.citrix.com> wrote:
> >> > On Thu, 18 Sep 2014, manish jaggi wrote:
> >> >> Hi,
> >> >> Below is the flow I am working on, Please provide your comments, I
> >> >> have a couple of queries as well..
> >> >>
> >> >> a) Device tree has smmu nodes and each smmu node has the mmu-master property.
> >> >> In our Soc DT the mmu-master is a pcie node in device tree.
> >> >
> >> > Do you mean that both the smmu nodes and the pcie node have the
> >> > mmu-master property? The pcie node is the pcie root complex, right?
> >> >
> >> pci-node is the pcie root complex. pci node is the mmu master in smmu node.
> >>
> >>   smmu1@0x8310,00000000 {
> >> ...
> >>
> >>                  mmu-masters = <&pcie1 0x100>;
> >>          };
> >>
> >> >> b) Xen parses the device tree and prepares a list which stores the pci
> >> >> device tree node pointers. The order in device tree is mapped to
> >> >> segment number in subsequent calls. For eg 1st pci node found is
> >> >> segment 0, 2nd segment 1
> >> >
> >> > What's a segment number? Something from the PCI spec?
> >> > If you have several pci nodes on device tree, does that mean that you
> >> > have several different pcie root complexes?
> >> >
> >> yes.
> >> segment is the pci rc number.
> >> >
> >> >> c) During SMMU init the pcie nodes in DT are saved as smmu masters.
> >> >
> >> > At this point you should also be able to find via DT the stream-id range
> >> > supported by each SMMU and program the SMMU with them, assigning
> >> > everything to dom0.
> >> Currently pcie enumeration is not done in xen, it is done in dom0.
> >
> > Yes, but we don't really need to walk any PCIe busses in order to
> > program the SMMU, right? We only need the requestor id and the stream id
> > ranges. We should be able to get them via device tree.
> >
> Yes, but i have a doubt here
> Before booting dom0 for each smmu the mask in SMR can be set to enable
> stream ids to dom0.
> This can be fixed or read from device tree.
> There are 2 points here
> a) PCI bus enumeration
> b) Programming SMMU for dom0
> For (b) the enumeration is not required provided we set the mask
> So are you also saying that (a) should be done in Xen and not in dom0 ?
> If yes how would dom0 get to know about PCIe Eps , from its Device tree ?

No, I think that doing (a) via PHYSDEVOP_pci_device_add is OK. 
I am saying that we should consider doing (b) in Xen before booting
dom0.


> >> >> d) Dom0 Enumerates PCI devices, calls hypercall PHYSDEVOP_pci_device_add.
> >> >>  - In Xen the SMMU iommu_ops add_device is called. I have implemented
> >> >> the add_device function.
> >> >> - In the add_device function
> >> >>  the segment number is used to locate the device tree node pointer of
> >> >> the pcie node which helps to find out the corresponding smmu.
> >> >> - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
> >> >>
> >> >> Note: The current SMMU driver maps the complete Domain's Address space
> >> >> for the device in SMMU hardware.
> >> >>
> >> >> The above flow works currently for us.
> >> >
> >> > It would be nice to be able to skip d): in a system where all dma capable
> >> > devices are behind smmus, we should be capable of booting dom0 without
> >> > the 1:1 mapping hack. If we do that, it would be better to program the
> >> > smmus before booting dom0. Otherwise there is a risk that dom0 is going
> >> > to start using these devices and doing dma before we manage to secure
> >> > the devices via smmus.
> >> >
> >> In our current case we are programming smmu in
> >> PHYSDEVOP_pci_device_add flow so before the domain 0 accesses the
> >> device it is mapped, otherwise xen gets a SMMU fault.
> >
> > Good.
> >
> >
> >> > Of course we can do that if there are no alternatives. But in our case
> >> > we should be able to extract the stream-ids from device tree and program
> >> > the smmus right away, right?  Do we really need to wait for dom0 to call
> >> > PHYSDEVOP_pci_device_add? We could just assign everything to dom0 for a
> >> > start.
> >> >
> >> We cannot get streamid from device tree as enumeration is done for the same.
> >
> > I am not sure what the current state of the device tree spec is, but I
> > am pretty sure that the intention is to express stream id and requestor
> > id ranges directly in the dts so that the SMMU can be programmed right
> > away without walking the PCI bus.
> >
> >
> >> > I would like to know from the x86 guys, if this is really how it is
> >> > supposed to work on PVH too. Do we rely on PHYSDEVOP_pci_device_add to
> >> > program the IOMMU?
> >> >
> >> >
> >> I was waiting but no one has commented
> >
> > Me too. Everybody is very busy at the moment with the 4.5 release.
> >
> >
> >> >> Now when I call pci-assignable-add I see that the iommu_ops
> >> >> remove_device in smmu driver is not called. If that is not called the
> >> >> SMMU would still have the dom0 address space mappings for that device
> >> >>
> >> >> Can you please suggest the best place (kernel / xl-tools) to put the
> >> >> code which would call the remove_device in iommu_opps in the control
> >> >> flow from pci-assignable-add.
> >> >>
> >> >> One way I see is to introduce a DOMCTL_iommu_remove_device in
> >> >> pci-assignable-add / pci-detach and DOMCTL_iommu_add_device in
> >> >> pci-attach. Is that a valid approach  ?
> >> >
> >> > I am not 100% sure, but I think that before assigning a PCI device to
> >> > another guest, you are supposed to bind the device to xen-pciback (see
> >> > drivers/xen/xen-pciback, also see
> >> > http://wiki.xen.org/wiki/Xen_PCI_Passthrough). The pciback driver is
> >> > going hide the device from dom0 and as a consequence
> >> > drivers/xen/pci.c:xen_remove_device ends up being called, that issues a
> >> > PHYSDEVOP_pci_device_remove hypercall.
> >>
> >> xen_remove_device is not called at all. in pci-attach
> >> iommu_ops->assign_device is called.
> >> In Xen the nomenclature is confusing and no comments are there is iommu.h
> >> iommu_ops.add_device is when dom0 issues hypercall
> >> iommu_ops.assign_dt_device is when xen attaches a device tree device to dom0
> >> iommu_ops.assign_device is when xl pci-attach is called
> >> iommu_ops.reassign_device is called when xl pci-detach is called
> >>
> >> As of now we are able to assign devices to domU and driver in domU is
> >> running, we did some hacks like
> >> a) in xen pci front driver bus->msi is assigned to its msi_chip
> >>
> >> ---- pcifront_scan_root()
> >> ...
> >> b = pci_scan_bus_parented(&pdev->xdev->dev, bus,
> >>                   &pcifront_bus_ops, sd);
> >>     if (!b) {
> >>         dev_err(&pdev->xdev->dev,
> >>             "Error creating PCI Frontend Bus!\n");
> >>         err = -ENOMEM;
> >>         pci_unlock_rescan_remove();
> >>         goto err_out;
> >>     }
> >>
> >>     bus_entry->bus = b;
> >> +        msi_node = of_find_compatible_node(NULL,NULL, "arm,gic-v3-its");
> >> +        if(msi_node) {
> >> +            b->msi = of_pci_find_msi_chip_by_node(msi_node);
> >> +            if(!b->msi) {
> >> +               printk(KERN_ERR"Unable to find bus->msi node \r\n");
> >> +               goto err_out;
> >> +            }
> >> +        }else {
> >> +               printk(KERN_ERR"Unable to find arm,gic-v3-its
> >> compatible node \r\n");
> >> +               goto err_out;
> >> +        }
> >
> > It seems to be that of_pci_find_msi_chip_by_node should be called by
> > common code somewhere else. Maybe people at linux-arm would know where
> > to suggest this initialization should go.
> >
> This is a workaround to attach a msi-controller to xen pcifront bus.
> We are avoiding the xen fronted ops for msi.

I think I would need to see a proper patch series to really evaluate this change.


> >
> >> ----
> >>
> >> using this the ITS emulation code in xen is able to trap ITS command
> >> writes by driver.
> >> But we are facing a problem now, where your help is needed
> >>
> >> The StreamID is generated by segment: bus : device: number which is
> >> fed as DevID in ITS commands. In Dom0 the streamID is correctly
> >> generated but in domU the Stream ID for a passthrough device is
> >> 0:0:0:0 now when emulating this in Xen it is a problem as xen does not
> >> know how to get the physical stream id.
> >>
> >> (Eg: xl pci-attach 1 0001:00:05.0
> >> DomU has the device but in DomU the id is 0000:00:00.0.)
> >>
> >> Could you suggest how to go about this.
> >
> > I don't think that the ITS patches have been posted yet, so it is
> > difficult for me to understand the problem and propose a solution.
> 
> In a simpler way, It is more of what the StreamID a driver running in
> domU sees. Which is programmed in the ITS commands.
> And how to map the domU  streamID to actual streamID in Xen when the
> ITS command write traps.

Wouldn't it be possible to pass the correct StreamID to DomU via device
tree? Does it really need to match the PCI BDF?
Otherwise, if the commands trap into Xen, couldn't Xen do the translation?


* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-09-25 10:27         ` Stefano Stabellini
@ 2014-10-01 10:37           ` manish jaggi
  2014-10-02 16:41             ` Stefano Stabellini
  0 siblings, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-10-01 10:37 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, Matt.Evans, Dave.Martin,
	Anup Patel

On 25 September 2014 15:57, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Thu, 25 Sep 2014, manish jaggi wrote:
>> On 24 September 2014 19:40, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > CC'ing Matt and Dave at ARM for an opinions about device tree, SMMUs and
>> > stream ids. See below.
>> >
>> > On Wed, 24 Sep 2014, manish jaggi wrote:
>> >> On 22 September 2014 16:15, Stefano Stabellini
>> >> <stefano.stabellini@eu.citrix.com> wrote:
>> >> > On Thu, 18 Sep 2014, manish jaggi wrote:
>> >> >> Hi,
>> >> >> Below is the flow I am working on, Please provide your comments, I
>> >> >> have a couple of queries as well..
>> >> >>
>> >> >> a) Device tree has smmu nodes and each smmu node has the mmu-master property.
>> >> >> In our Soc DT the mmu-master is a pcie node in device tree.
>> >> >
>> >> > Do you mean that both the smmu nodes and the pcie node have the
>> >> > mmu-master property? The pcie node is the pcie root complex, right?
>> >> >
>> >> pci-node is the pcie root complex. pci node is the mmu master in smmu node.
>> >>
>> >>   smmu1@0x8310,00000000 {
>> >> ...
>> >>
>> >>                  mmu-masters = <&pcie1 0x100>;
>> >>          };
>> >>
>> >> >> b) Xen parses the device tree and prepares a list which stores the pci
>> >> >> device tree node pointers. The order in device tree is mapped to
>> >> >> segment number in subsequent calls. For eg 1st pci node found is
>> >> >> segment 0, 2nd segment 1
>> >> >
>> >> > What's a segment number? Something from the PCI spec?
>> >> > If you have several pci nodes on device tree, does that mean that you
>> >> > have several different pcie root complexes?
>> >> >
>> >> yes.
>> >> segment is the pci rc number.
>> >> >
>> >> >> c) During SMMU init the pcie nodes in DT are saved as smmu masters.
>> >> >
>> >> > At this point you should also be able to find via DT the stream-id range
>> >> > supported by each SMMU and program the SMMU with them, assigning
>> >> > everything to dom0.
>> >> Currently pcie enumeration is not done in xen, it is done in dom0.
>> >
>> > Yes, but we don't really need to walk any PCIe busses in order to
>> > program the SMMU, right? We only need the requestor id and the stream id
>> > ranges. We should be able to get them via device tree.
>> >
>> Yes, but i have a doubt here
>> Before booting dom0 for each smmu the mask in SMR can be set to enable
>> stream ids to dom0.
>> This can be fixed or read from device tree.
>> There are 2 points here
>> a) PCI bus enumeration
>> b) Programming SMMU for dom0
>> For (b) the enumeration is not required provided we set the mask
>> So are you also saying that (a) should be done in Xen and not in dom0 ?
>> If yes how would dom0 get to know about PCIe Eps , from its Device tree ?
>
> No, I think that doing (a) via PHYSDEVOP_pci_device_add is OK.
> I am saying that we should consider doing (b) in Xen before booting
> dom0.
>
>
>> >> >> d) Dom0 Enumerates PCI devices, calls hypercall PHYSDEVOP_pci_device_add.
>> >> >>  - In Xen the SMMU iommu_ops add_device is called. I have implemented
>> >> >> the add_device function.
>> >> >> - In the add_device function
>> >> >>  the segment number is used to locate the device tree node pointer of
>> >> >> the pcie node which helps to find out the corresponding smmu.
>> >> >> - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
>> >> >>
>> >> >> Note: The current SMMU driver maps the complete Domain's Address space
>> >> >> for the device in SMMU hardware.
>> >> >>
>> >> >> The above flow works currently for us.
>> >> >
>> >> > It would be nice to be able to skip d): in a system where all dma capable
>> >> > devices are behind smmus, we should be capable of booting dom0 without
>> >> > the 1:1 mapping hack. If we do that, it would be better to program the
>> >> > smmus before booting dom0. Otherwise there is a risk that dom0 is going
>> >> > to start using these devices and doing dma before we manage to secure
>> >> > the devices via smmus.
>> >> >
>> >> In our current case we are programming smmu in
>> >> PHYSDEVOP_pci_device_add flow so before the domain 0 accesses the
>> >> device it is mapped, otherwise xen gets a SMMU fault.
>> >
>> > Good.
>> >
>> >
>> >> > Of course we can do that if there are no alternatives. But in our case
>> >> > we should be able to extract the stream-ids from device tree and program
>> >> > the smmus right away, right?  Do we really need to wait for dom0 to call
>> >> > PHYSDEVOP_pci_device_add? We could just assign everything to dom0 for a
>> >> > start.
>> >> >
>> >> We cannot get streamid from device tree as enumeration is done for the same.
>> >
>> > I am not sure what the current state of the device tree spec is, but I
>> > am pretty sure that the intention is to express stream id and requestor
>> > id ranges directly in the dts so that the SMMU can be programmed right
>> > away without walking the PCI bus.
>> >
>> >
>> >> > I would like to know from the x86 guys, if this is really how it is
>> >> > supposed to work on PVH too. Do we rely on PHYSDEVOP_pci_device_add to
>> >> > program the IOMMU?
>> >> >
>> >> >
>> >> I was waiting but no one has commented
>> >
>> > Me too. Everybody is very busy at the moment with the 4.5 release.
>> >
>> >
>> >> >> Now when I call pci-assignable-add I see that the iommu_ops
>> >> >> remove_device in smmu driver is not called. If that is not called the
>> >> >> SMMU would still have the dom0 address space mappings for that device
>> >> >>
>> >> >> Can you please suggest the best place (kernel / xl-tools) to put the
>> >> >> code which would call the remove_device in iommu_opps in the control
>> >> >> flow from pci-assignable-add.
>> >> >>
>> >> >> One way I see is to introduce a DOMCTL_iommu_remove_device in
>> >> >> pci-assignable-add / pci-detach and DOMCTL_iommu_add_device in
>> >> >> pci-attach. Is that a valid approach  ?
>> >> >
>> >> > I am not 100% sure, but I think that before assigning a PCI device to
>> >> > another guest, you are supposed to bind the device to xen-pciback (see
>> >> > drivers/xen/xen-pciback, also see
>> >> > http://wiki.xen.org/wiki/Xen_PCI_Passthrough). The pciback driver is
>> >> > going hide the device from dom0 and as a consequence
>> >> > drivers/xen/pci.c:xen_remove_device ends up being called, that issues a
>> >> > PHYSDEVOP_pci_device_remove hypercall.
>> >>
>> >> xen_remove_device is not called at all. in pci-attach
>> >> iommu_ops->assign_device is called.
>> >> In Xen the nomenclature is confusing and no comments are there is iommu.h
>> >> iommu_ops.add_device is when dom0 issues hypercall
>> >> iommu_ops.assign_dt_device is when xen attaches a device tree device to dom0
>> >> iommu_ops.assign_device is when xl pci-attach is called
>> >> iommu_ops.reassign_device is called when xl pci-detach is called
>> >>
>> >> As of now we are able to assign devices to domU and driver in domU is
>> >> running, we did some hacks like
>> >> a) in xen pci front driver bus->msi is assigned to its msi_chip
>> >>
>> >> ---- pcifront_scan_root()
>> >> ...
>> >> b = pci_scan_bus_parented(&pdev->xdev->dev, bus,
>> >>                   &pcifront_bus_ops, sd);
>> >>     if (!b) {
>> >>         dev_err(&pdev->xdev->dev,
>> >>             "Error creating PCI Frontend Bus!\n");
>> >>         err = -ENOMEM;
>> >>         pci_unlock_rescan_remove();
>> >>         goto err_out;
>> >>     }
>> >>
>> >>     bus_entry->bus = b;
>> >> +        msi_node = of_find_compatible_node(NULL,NULL, "arm,gic-v3-its");
>> >> +        if(msi_node) {
>> >> +            b->msi = of_pci_find_msi_chip_by_node(msi_node);
>> >> +            if(!b->msi) {
>> >> +               printk(KERN_ERR"Unable to find bus->msi node \r\n");
>> >> +               goto err_out;
>> >> +            }
>> >> +        }else {
>> >> +               printk(KERN_ERR"Unable to find arm,gic-v3-its
>> >> compatible node \r\n");
>> >> +               goto err_out;
>> >> +        }
>> >
>> > It seems to be that of_pci_find_msi_chip_by_node should be called by
>> > common code somewhere else. Maybe people at linux-arm would know where
>> > to suggest this initialization should go.
>> >
>> This is a workaround to attach a msi-controller to xen pcifront bus.
>> We are avoiding the xen fronted ops for msi.
>
> I think I would need to see a proper patch series to really evaluate this change.
>
>
>> >
>> >> ----
>> >>
>> >> using this the ITS emulation code in xen is able to trap ITS command
>> >> writes by driver.
>> >> But we are facing a problem now, where your help is needed
>> >>
>> >> The StreamID is generated by segment: bus : device: number which is
>> >> fed as DevID in ITS commands. In Dom0 the streamID is correctly
>> >> generated but in domU the Stream ID for a passthrough device is
>> >> 0:0:0:0 now when emulating this in Xen it is a problem as xen does not
>> >> know how to get the physical stream id.
>> >>
>> >> (Eg: xl pci-attach 1 0001:00:05.0
>> >> DomU has the device but in DomU the id is 0000:00:00.0.)
>> >>
>> >> Could you suggest how to go about this.
>> >
>> > I don't think that the ITS patches have been posted yet, so it is
>> > difficult for me to understand the problem and propose a solution.
>>
>> In a simpler way, It is more of what the StreamID a driver running in
>> domU sees. Which is programmed in the ITS commands.
>> And how to map the domU  streamID to actual streamID in Xen when the
>> ITS command write traps.
>
> Wouldn't it be possible to pass the correct StreamID to DomU via device
> tree? Does it really need to match the PCI BDF?
Device Tree provides only a static mapping; runtime attaching of a device
(using xl tools) to a domU is what I am working on.

> Otherwise if the command trap into Xen, couldn't Xen do the translation?
Xen does not know how to map the BDF in domU to actual streamID.

I had thought of adding a hypercall, issued when xl pci-attach is called:
PHYSDEVOP_map_streamid {
    dom_id,
    phys_streamid, //bdf
    guest_streamid,
}

 But I am not able to get the correct BDF of domU.
For instance, the logs at two different places give different BDFs:

#xl pci-attach 1 '0002:01:00.1,permissive=1'

xen-pciback pci-1-0: xen_pcibk_export_device exporting dom 2 bus 1 slot 0 func 1
xen_pciback: vpci: 0002:01:00.1: assign to virtual slot 1
xen_pcibk_publish_pci_dev 0000:00:01.00

Code that generated print:
static int xen_pcibk_publish_pci_dev(struct xen_pcibk_device *pdev,
                                   unsigned int domain, unsigned int bus,
                                   unsigned int devfn, unsigned int devid)
{
    ...
        printk(KERN_ERR"%s %04x:%02x:%02x.%02x",__func__, domain, bus,
                            PCI_SLOT(devfn), PCI_FUNC(devfn));


While the print in xen_pcibk_do_op is:

xen_pcibk_do_op Guest SBDF=0:0:1.1 (this matches the output of lspci in domU)

Code that generated print:

void xen_pcibk_do_op(struct work_struct *data)
{
     ...
        if (dev == NULL)
                op->err = XEN_PCI_ERR_dev_not_found;
        else {
        printk(KERN_ERR"%s Guest SBDF=%d:%d:%d.%d \r\n",__func__,
op->domain, op->bus, op->devfn>>3, op->devfn&0x7);


Stefano, I need your help with this.

-Regards
Manish

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-01 10:37           ` manish jaggi
@ 2014-10-02 16:41             ` Stefano Stabellini
  2014-10-02 16:59               ` Stefano Stabellini
  0 siblings, 1 reply; 38+ messages in thread
From: Stefano Stabellini @ 2014-10-02 16:41 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ian Campbell, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, Matt.Evans, psawargaonkar,
	Dave.Martin, Anup Patel

On Wed, 1 Oct 2014, manish jaggi wrote:
> On 25 September 2014 15:57, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Thu, 25 Sep 2014, manish jaggi wrote:
> >> On 24 September 2014 19:40, Stefano Stabellini
> >> <stefano.stabellini@eu.citrix.com> wrote:
> >> > CC'ing Matt and Dave at ARM for an opinions about device tree, SMMUs and
> >> > stream ids. See below.
> >> >
> >> > On Wed, 24 Sep 2014, manish jaggi wrote:
> >> >> On 22 September 2014 16:15, Stefano Stabellini
> >> >> <stefano.stabellini@eu.citrix.com> wrote:
> >> >> > On Thu, 18 Sep 2014, manish jaggi wrote:
> >> >> >> Hi,
> >> >> >> Below is the flow I am working on, Please provide your comments, I
> >> >> >> have a couple of queries as well..
> >> >> >>
> >> >> >> a) Device tree has smmu nodes and each smmu node has the mmu-master property.
> >> >> >> In our Soc DT the mmu-master is a pcie node in device tree.
> >> >> >
> >> >> > Do you mean that both the smmu nodes and the pcie node have the
> >> >> > mmu-master property? The pcie node is the pcie root complex, right?
> >> >> >
> >> >> pci-node is the pcie root complex. pci node is the mmu master in smmu node.
> >> >>
> >> >>   smmu1@0x8310,00000000 {
> >> >> ...
> >> >>
> >> >>                  mmu-masters = <&pcie1 0x100>;
> >> >>          };
> >> >>
> >> >> >> b) Xen parses the device tree and prepares a list which stores the pci
> >> >> >> device tree node pointers. The order in device tree is mapped to
> >> >> >> segment number in subsequent calls. For eg 1st pci node found is
> >> >> >> segment 0, 2nd segment 1
> >> >> >
> >> >> > What's a segment number? Something from the PCI spec?
> >> >> > If you have several pci nodes on device tree, does that mean that you
> >> >> > have several different pcie root complexes?
> >> >> >
> >> >> yes.
> >> >> segment is the pci rc number.
> >> >> >
> >> >> >> c) During SMMU init the pcie nodes in DT are saved as smmu masters.
> >> >> >
> >> >> > At this point you should also be able to find via DT the stream-id range
> >> >> > supported by each SMMU and program the SMMU with them, assigning
> >> >> > everything to dom0.
> >> >> Currently pcie enumeration is not done in xen, it is done in dom0.
> >> >
> >> > Yes, but we don't really need to walk any PCIe busses in order to
> >> > program the SMMU, right? We only need the requestor id and the stream id
> >> > ranges. We should be able to get them via device tree.
> >> >
> >> Yes, but i have a doubt here
> >> Before booting dom0 for each smmu the mask in SMR can be set to enable
> >> stream ids to dom0.
> >> This can be fixed or read from device tree.
> >> There are 2 points here
> >> a) PCI bus enumeration
> >> b) Programming SMMU for dom0
> >> For (b) the enumeration is not required provided we set the mask
> >> So are you also saying that (a) should be done in Xen and not in dom0 ?
> >> If yes how would dom0 get to know about PCIe Eps , from its Device tree ?
> >
> > No, I think that doing (a) via PHYSDEVOP_pci_device_add is OK.
> > I am saying that we should consider doing (b) in Xen before booting
> > dom0.
> >
> >
> >> >> >> d) Dom0 Enumerates PCI devices, calls hypercall PHYSDEVOP_pci_device_add.
> >> >> >>  - In Xen the SMMU iommu_ops add_device is called. I have implemented
> >> >> >> the add_device function.
> >> >> >> - In the add_device function
> >> >> >>  the segment number is used to locate the device tree node pointer of
> >> >> >> the pcie node which helps to find out the corresponding smmu.
> >> >> >> - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
> >> >> >>
> >> >> >> Note: The current SMMU driver maps the complete Domain's Address space
> >> >> >> for the device in SMMU hardware.
> >> >> >>
> >> >> >> The above flow works currently for us.
> >> >> >
> >> >> > It would be nice to be able to skip d): in a system where all dma capable
> >> >> > devices are behind smmus, we should be capable of booting dom0 without
> >> >> > the 1:1 mapping hack. If we do that, it would be better to program the
> >> >> > smmus before booting dom0. Otherwise there is a risk that dom0 is going
> >> >> > to start using these devices and doing dma before we manage to secure
> >> >> > the devices via smmus.
> >> >> >
> >> >> In our current case we are programming smmu in
> >> >> PHYSDEVOP_pci_device_add flow so before the domain 0 accesses the
> >> >> device it is mapped, otherwise xen gets a SMMU fault.
> >> >
> >> > Good.
> >> >
> >> >
> >> >> > Of course we can do that if there are no alternatives. But in our case
> >> >> > we should be able to extract the stream-ids from device tree and program
> >> >> > the smmus right away, right?  Do we really need to wait for dom0 to call
> >> >> > PHYSDEVOP_pci_device_add? We could just assign everything to dom0 for a
> >> >> > start.
> >> >> >
> >> >> We cannot get streamid from device tree as enumeration is done for the same.
> >> >
> >> > I am not sure what the current state of the device tree spec is, but I
> >> > am pretty sure that the intention is to express stream id and requestor
> >> > id ranges directly in the dts so that the SMMU can be programmed right
> >> > away without walking the PCI bus.
> >> >
> >> >
> >> >> > I would like to know from the x86 guys, if this is really how it is
> >> >> > supposed to work on PVH too. Do we rely on PHYSDEVOP_pci_device_add to
> >> >> > program the IOMMU?
> >> >> >
> >> >> >
> >> >> I was waiting but no one has commented
> >> >
> >> > Me too. Everybody is very busy at the moment with the 4.5 release.
> >> >
> >> >
> >> >> >> Now when I call pci-assignable-add I see that the iommu_ops
> >> >> >> remove_device in smmu driver is not called. If that is not called the
> >> >> >> SMMU would still have the dom0 address space mappings for that device
> >> >> >>
> >> >> >> Can you please suggest the best place (kernel / xl-tools) to put the
> >> >> >> code which would call the remove_device in iommu_opps in the control
> >> >> >> flow from pci-assignable-add.
> >> >> >>
> >> >> >> One way I see is to introduce a DOMCTL_iommu_remove_device in
> >> >> >> pci-assignable-add / pci-detach and DOMCTL_iommu_add_device in
> >> >> >> pci-attach. Is that a valid approach  ?
> >> >> >
> >> >> > I am not 100% sure, but I think that before assigning a PCI device to
> >> >> > another guest, you are supposed to bind the device to xen-pciback (see
> >> >> > drivers/xen/xen-pciback, also see
> >> >> > http://wiki.xen.org/wiki/Xen_PCI_Passthrough). The pciback driver is
> >> >> > going hide the device from dom0 and as a consequence
> >> >> > drivers/xen/pci.c:xen_remove_device ends up being called, that issues a
> >> >> > PHYSDEVOP_pci_device_remove hypercall.
> >> >>
> >> >> xen_remove_device is not called at all. in pci-attach
> >> >> iommu_ops->assign_device is called.
> >> >> In Xen the nomenclature is confusing and no comments are there is iommu.h
> >> >> iommu_ops.add_device is when dom0 issues hypercall
> >> >> iommu_ops.assign_dt_device is when xen attaches a device tree device to dom0
> >> >> iommu_ops.assign_device is when xl pci-attach is called
> >> >> iommu_ops.reassign_device is called when xl pci-detach is called
> >> >>
> >> >> As of now we are able to assign devices to domU and driver in domU is
> >> >> running, we did some hacks like
> >> >> a) in xen pci front driver bus->msi is assigned to its msi_chip
> >> >>
> >> >> ---- pcifront_scan_root()
> >> >> ...
> >> >> b = pci_scan_bus_parented(&pdev->xdev->dev, bus,
> >> >>                   &pcifront_bus_ops, sd);
> >> >>     if (!b) {
> >> >>         dev_err(&pdev->xdev->dev,
> >> >>             "Error creating PCI Frontend Bus!\n");
> >> >>         err = -ENOMEM;
> >> >>         pci_unlock_rescan_remove();
> >> >>         goto err_out;
> >> >>     }
> >> >>
> >> >>     bus_entry->bus = b;
> >> >> +        msi_node = of_find_compatible_node(NULL,NULL, "arm,gic-v3-its");
> >> >> +        if(msi_node) {
> >> >> +            b->msi = of_pci_find_msi_chip_by_node(msi_node);
> >> >> +            if(!b->msi) {
> >> >> +               printk(KERN_ERR"Unable to find bus->msi node \r\n");
> >> >> +               goto err_out;
> >> >> +            }
> >> >> +        }else {
> >> >> +               printk(KERN_ERR"Unable to find arm,gic-v3-its
> >> >> compatible node \r\n");
> >> >> +               goto err_out;
> >> >> +        }
> >> >
> >> > It seems to be that of_pci_find_msi_chip_by_node should be called by
> >> > common code somewhere else. Maybe people at linux-arm would know where
> >> > to suggest this initialization should go.
> >> >
> >> This is a workaround to attach a msi-controller to xen pcifront bus.
> >> We are avoiding the xen fronted ops for msi.
> >
> > I think I would need to see a proper patch series to really evaluate this change.
> >
> >
> >> >
> >> >> ----
> >> >>
> >> >> using this the ITS emulation code in xen is able to trap ITS command
> >> >> writes by driver.
> >> >> But we are facing a problem now, where your help is needed
> >> >>
> >> >> The StreamID is generated by segment: bus : device: number which is
> >> >> fed as DevID in ITS commands. In Dom0 the streamID is correctly
> >> >> generated but in domU the Stream ID for a passthrough device is
> >> >> 0:0:0:0 now when emulating this in Xen it is a problem as xen does not
> >> >> know how to get the physical stream id.
> >> >>
> >> >> (Eg: xl pci-attach 1 0001:00:05.0
> >> >> DomU has the device but in DomU the id is 0000:00:00.0.)
> >> >>
> >> >> Could you suggest how to go about this.
> >> >
> >> > I don't think that the ITS patches have been posted yet, so it is
> >> > difficult for me to understand the problem and propose a solution.
> >>
> >> In a simpler way, It is more of what the StreamID a driver running in
> >> domU sees. Which is programmed in the ITS commands.
> >> And how to map the domU  streamID to actual streamID in Xen when the
> >> ITS command write traps.
> >
> > Wouldn't it be possible to pass the correct StreamID to DomU via device
> > tree? Does it really need to match the PCI BDF?
> Device Tree provide static mapping, runtime attaching a device (using
> xl tools) to a domU is what I am working on.

As I wrote before it is difficult to answer without the patches and/or a
design document.

You should be able to specify StreamID ranges in Device Tree to cover a
bus. So you should be able to say that the virtual PCI bus in the guest
has StreamID [0-8] for slots [0-8]. Then in your example below you need
to make sure to insert the passthrough device in virtual slot 1 instead
of virtual slot 0.

I don't know if you were aware of this but you can already specify the
virtual slot number to pci-attach, see xl pci-attach --help

Otherwise you could let the frontend know the StreamID via xenbus: the
backend should know the correct StreamID for the device, it could just
add it to xenstore as a new parameter for the frontend.

Either way you should be able to tell the frontend what is the right
StreamID for the device.
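
Something along these lines in pciback, for example (just a sketch: the key
name is made up, and where the backend gets the StreamID from in the first
place is still an open question):

static int xen_pcibk_publish_stream_id(struct xen_pcibk_device *pdev,
                                       unsigned int devid, u32 stream_id)
{
    char node[32];

    /* "stream-id-%u" is a made-up key, next to the existing vdev-%u entries */
    snprintf(node, sizeof(node), "stream-id-%u", devid);
    return xenbus_printf(XBT_NIL, pdev->xdev->nodename, node,
                         "%08x", stream_id);
}

The frontend would then read the value back before it issues any ITS command
for that device.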


> > Otherwise if the command trap into Xen, couldn't Xen do the translation?
> Xen does not know how to map the BDF in domU to actual streamID.
> 
> I had thought of adding a hypercall,  when xl pci-attach is called.
> PHYSDEVOP_map_streamid {
>     dom_id,
>     phys_streamid, //bdf
>     guest_streamid,
> }
> 
>  But I am not able to get correct BDF of domU.

I don't think that an hypercall is a good way to solve this.


> For instance the logs at 2 different place give diff BDFs
> 
> #xl pci-attach 1 '0002:01:00.1,permissive=1'
> 
> xen-pciback pci-1-0: xen_pcibk_export_device exporting dom 2 bus 1 slot 0 func 1
> xen_pciback: vpci: 0002:01:00.1: assign to virtual slot 1
> xen_pcibk_publish_pci_dev 0000:00:01.00
> 
> Code that generated print:
> static int xen_pcibk_publish_pci_dev(struct xen_pcibk_device *pdev,
>                                    unsigned int domain, unsigned int bus,
>                                    unsigned int devfn, unsigned int devid)
> {
>     ...
>         printk(KERN_ERR"%s %04x:%02x:%02x.%02x",__func__, domain, bus,
>                             PCI_SLOT(devfn), PCI_FUNC(devfn));
> 
> 
> While in xen_pcibk_do_op Print is:
> 
> xen_pcibk_do_op Guest SBDF=0:0:1.1 (this is output of lspci in domU)
> 
> Code that generated print:
> 
> void xen_pcibk_do_op(struct work_struct *data)
> {
>      ...
>         if (dev == NULL)
>                 op->err = XEN_PCI_ERR_dev_not_found;
>         else {
>         printk(KERN_ERR"%s Guest SBDF=%d:%d:%d.%d \r\n",__func__,
> op->domain, op->bus, op->devfn>>3, op->devfn&0x7);
> 
> 
> Stefano, I need your help in this

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-02 16:41             ` Stefano Stabellini
@ 2014-10-02 16:59               ` Stefano Stabellini
  2014-10-03  9:01                 ` Ian Campbell
  2014-10-03  9:32                 ` manish jaggi
  0 siblings, 2 replies; 38+ messages in thread
From: Stefano Stabellini @ 2014-10-02 16:59 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, Matt.Evans, Dave.Martin,
	Anup Patel, manish jaggi

On Thu, 2 Oct 2014, Stefano Stabellini wrote:
> > >> >> using this the ITS emulation code in xen is able to trap ITS command
> > >> >> writes by driver.
> > >> >> But we are facing a problem now, where your help is needed

Actually  given that you are talking about the virtual ITS, why do you
need the real Device ID or Stream ID of the passthrough device in the
guest at all?  Shouldn't you just be generating a bunch of virtual IDs
instead?

Let's take a step back: why do you need *real* IDs to program a
*virtual* ITS?


> > >> >> The StreamID is generated by segment: bus : device: number which is
> > >> >> fed as DevID in ITS commands. In Dom0 the streamID is correctly
> > >> >> generated but in domU the Stream ID for a passthrough device is
> > >> >> 0:0:0:0 now when emulating this in Xen it is a problem as xen does not
> > >> >> know how to get the physical stream id.
> > >> >>
> > >> >> (Eg: xl pci-attach 1 0001:00:05.0
> > >> >> DomU has the device but in DomU the id is 0000:00:00.0.)
> > >> >>
> > >> >> Could you suggest how to go about this.
> > >> >
> > >> > I don't think that the ITS patches have been posted yet, so it is
> > >> > difficult for me to understand the problem and propose a solution.
> > >>
> > >> In a simpler way, It is more of what the StreamID a driver running in
> > >> domU sees. Which is programmed in the ITS commands.
> > >> And how to map the domU  streamID to actual streamID in Xen when the
> > >> ITS command write traps.
> > >
> > > Wouldn't it be possible to pass the correct StreamID to DomU via device
> > > tree? Does it really need to match the PCI BDF?
> > Device Tree provide static mapping, runtime attaching a device (using
> > xl tools) to a domU is what I am working on.
> 
> As I wrote before it is difficult to answer without the patches and/or a
> design document.
> 
> You should be able to specify StreamID ranges in Device Tree to cover a
> bus. So you should be able to say that the virtual PCI bus in the guest
> has StreamID [0-8] for slots [0-8]. Then in your example below you need
> to make sure to insert the passthrough device in virtual slot 1 instead
> of virtual slot 0.
> 
> I don't know if you were aware of this but you can already specify the
> virtual slot number to pci-attach, see xl pci-attach --help
> 
> Otherwise you could let the frontend know the StreamID via xenbus: the
> backend should know the correct StreamID for the device, it could just
> add it to xenstore as a new parameter for the frontend.
> 
> Either way you should be able to tell the frontend what is the right
> StreamID for the device.
> 
> 
> > > Otherwise if the command trap into Xen, couldn't Xen do the translation?
> > Xen does not know how to map the BDF in domU to actual streamID.
> > 
> > I had thought of adding a hypercall,  when xl pci-attach is called.
> > PHYSDEVOP_map_streamid {
> >     dom_id,
> >     phys_streamid, //bdf
> >     guest_streamid,
> > }
> > 
> >  But I am not able to get correct BDF of domU.
> 
> I don't think that an hypercall is a good way to solve this.
> 
> 
> > For instance the logs at 2 different place give diff BDFs
> > 
> > #xl pci-attach 1 '0002:01:00.1,permissive=1'
> > 
> > xen-pciback pci-1-0: xen_pcibk_export_device exporting dom 2 bus 1 slot 0 func 1
> > xen_pciback: vpci: 0002:01:00.1: assign to virtual slot 1
> > xen_pcibk_publish_pci_dev 0000:00:01.00
> > 
> > Code that generated print:
> > static int xen_pcibk_publish_pci_dev(struct xen_pcibk_device *pdev,
> >                                    unsigned int domain, unsigned int bus,
> >                                    unsigned int devfn, unsigned int devid)
> > {
> >     ...
> >         printk(KERN_ERR"%s %04x:%02x:%02x.%02x",__func__, domain, bus,
> >                             PCI_SLOT(devfn), PCI_FUNC(devfn));
> > 
> > 
> > While in xen_pcibk_do_op Print is:
> > 
> > xen_pcibk_do_op Guest SBDF=0:0:1.1 (this is output of lspci in domU)
> > 
> > Code that generated print:
> > 
> > void xen_pcibk_do_op(struct work_struct *data)
> > {
> >      ...
> >         if (dev == NULL)
> >                 op->err = XEN_PCI_ERR_dev_not_found;
> >         else {
> >         printk(KERN_ERR"%s Guest SBDF=%d:%d:%d.%d \r\n",__func__,
> > op->domain, op->bus, op->devfn>>3, op->devfn&0x7);
> > 
> > 
> > Stefano, I need your help in this
> 

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-02 16:59               ` Stefano Stabellini
@ 2014-10-03  9:01                 ` Ian Campbell
  2014-10-03  9:33                   ` manish jaggi
  2014-10-03  9:32                 ` manish jaggi
  1 sibling, 1 reply; 38+ messages in thread
From: Ian Campbell @ 2014-10-03  9:01 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Anup Patel, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, Matt.Evans, Dave.Martin,
	manish jaggi

On Thu, 2014-10-02 at 17:59 +0100, Stefano Stabellini wrote:
> On Thu, 2 Oct 2014, Stefano Stabellini wrote:
> > > >> >> using this the ITS emulation code in xen is able to trap ITS command
> > > >> >> writes by driver.
> > > >> >> But we are facing a problem now, where your help is needed
> 
> Actually  given that you are talking about the virtual ITS, why do you
> need the real Device ID or Stream ID of the passthrough device in the
> guest at all?  Shouldn't you just be generating a bunch of virtual IDs
> instead?
> 
> Let's take a step back: why do you need *real* IDs to program a
> *virtual* ITS?

Ack, I'm having a great deal of trouble figuring out why the guest would
ever see a physical hardware identifier of any kind.

Ian.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-02 16:59               ` Stefano Stabellini
  2014-10-03  9:01                 ` Ian Campbell
@ 2014-10-03  9:32                 ` manish jaggi
  2014-10-06 11:05                   ` manish jaggi
  1 sibling, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-10-03  9:32 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, Matt.Evans, Dave.Martin,
	Anup Patel

On 2 October 2014 22:29, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Thu, 2 Oct 2014, Stefano Stabellini wrote:
>> > >> >> using this the ITS emulation code in xen is able to trap ITS command
>> > >> >> writes by driver.
>> > >> >> But we are facing a problem now, where your help is needed
>
> Actually  given that you are talking about the virtual ITS, why do you
> need the real Device ID or Stream ID of the passthrough device in the
> guest at all?  Shouldn't you just be generating a bunch of virtual IDs
> instead?
>
> Let's take a step back: why do you need *real* IDs to program a
> *virtual* ITS?
>
The guest passes the passthrough device's stream ID, which is generated from
the BDF; so a device with real BDF (real ID), say 1:0:5.0, may appear at BDF
0:0:0.0 in the guest.
The Xen ITS emulation code would need the mapping between the guest BDF and
the actual BDF of a passthrough device.

The guest ITS never sees the real ID.
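
To make it concrete, what the ITS emulation code would need is a per-domain
lookup along these lines (purely a sketch, none of these structures exist in
Xen today and all names are made up):

struct guest_sbdf_map {
    uint32_t guest_sbdf;  /* sbdf as enumerated inside the domU */
    uint32_t phys_sbdf;   /* real sbdf, i.e. the real stream ID */
};

/* Translate the device ID found in a trapped MAPD/MAPVI command. */
static uint32_t guest_sbdf_to_phys(const struct guest_sbdf_map *map,
                                   unsigned int nr, uint32_t guest_sbdf)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
        if ( map[i].guest_sbdf == guest_sbdf )
            return map[i].phys_sbdf;

    return ~0u; /* unknown: this is exactly the information Xen lacks */
}

Populating that table is the part I do not have a good answer for yet.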

>
>> > >> >> The StreamID is generated by segment: bus : device: number which is
>> > >> >> fed as DevID in ITS commands. In Dom0 the streamID is correctly
>> > >> >> generated but in domU the Stream ID for a passthrough device is
>> > >> >> 0:0:0:0 now when emulating this in Xen it is a problem as xen does not
>> > >> >> know how to get the physical stream id.
>> > >> >>
>> > >> >> (Eg: xl pci-attach 1 0001:00:05.0
>> > >> >> DomU has the device but in DomU the id is 0000:00:00.0.)
>> > >> >>
>> > >> >> Could you suggest how to go about this.
>> > >> >
>> > >> > I don't think that the ITS patches have been posted yet, so it is
>> > >> > difficult for me to understand the problem and propose a solution.
>> > >>
>> > >> In a simpler way, It is more of what the StreamID a driver running in
>> > >> domU sees. Which is programmed in the ITS commands.
>> > >> And how to map the domU  streamID to actual streamID in Xen when the
>> > >> ITS command write traps.
>> > >
>> > > Wouldn't it be possible to pass the correct StreamID to DomU via device
>> > > tree? Does it really need to match the PCI BDF?
>> > Device Tree provide static mapping, runtime attaching a device (using
>> > xl tools) to a domU is what I am working on.
>>
>> As I wrote before it is difficult to answer without the patches and/or a
>> design document.
>>
>> You should be able to specify StreamID ranges in Device Tree to cover a
>> bus. So you should be able to say that the virtual PCI bus in the guest
>> has StreamID [0-8] for slots [0-8]. Then in your example below you need
>> to make sure to insert the passthrough device in virtual slot 1 instead
>> of virtual slot 0.
>>
>> I don't know if you were aware of this but you can already specify the
>> virtual slot number to pci-attach, see xl pci-attach --help
>>
>> Otherwise you could let the frontend know the StreamID via xenbus: the
>> backend should know the correct StreamID for the device, it could just
>> add it to xenstore as a new parameter for the frontend.
>>
>> Either way you should be able to tell the frontend what is the right
>> StreamID for the device.
>>
>>
>> > > Otherwise if the command trap into Xen, couldn't Xen do the translation?
>> > Xen does not know how to map the BDF in domU to actual streamID.
>> >
>> > I had thought of adding a hypercall,  when xl pci-attach is called.
>> > PHYSDEVOP_map_streamid {
>> >     dom_id,
>> >     phys_streamid, //bdf
>> >     guest_streamid,
>> > }
>> >
>> >  But I am not able to get correct BDF of domU.
>>
>> I don't think that an hypercall is a good way to solve this.
>>
>>
>> > For instance the logs at 2 different place give diff BDFs
>> >
>> > #xl pci-attach 1 '0002:01:00.1,permissive=1'
>> >
>> > xen-pciback pci-1-0: xen_pcibk_export_device exporting dom 2 bus 1 slot 0 func 1
>> > xen_pciback: vpci: 0002:01:00.1: assign to virtual slot 1
>> > xen_pcibk_publish_pci_dev 0000:00:01.00
>> >
>> > Code that generated print:
>> > static int xen_pcibk_publish_pci_dev(struct xen_pcibk_device *pdev,
>> >                                    unsigned int domain, unsigned int bus,
>> >                                    unsigned int devfn, unsigned int devid)
>> > {
>> >     ...
>> >         printk(KERN_ERR"%s %04x:%02x:%02x.%02x",__func__, domain, bus,
>> >                             PCI_SLOT(devfn), PCI_FUNC(devfn));
>> >
>> >
>> > While in xen_pcibk_do_op Print is:
>> >
>> > xen_pcibk_do_op Guest SBDF=0:0:1.1 (this is output of lspci in domU)
>> >
>> > Code that generated print:
>> >
>> > void xen_pcibk_do_op(struct work_struct *data)
>> > {
>> >      ...
>> >         if (dev == NULL)
>> >                 op->err = XEN_PCI_ERR_dev_not_found;
>> >         else {
>> >         printk(KERN_ERR"%s Guest SBDF=%d:%d:%d.%d \r\n",__func__,
>> > op->domain, op->bus, op->devfn>>3, op->devfn&0x7);
>> >
>> >
>> > Stefano, I need your help in this
>>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-03  9:01                 ` Ian Campbell
@ 2014-10-03  9:33                   ` manish jaggi
  0 siblings, 0 replies; 38+ messages in thread
From: manish jaggi @ 2014-10-03  9:33 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Anup Patel, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, Matt.Evans, psawargaonkar,
	Dave.Martin

On 3 October 2014 14:31, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-10-02 at 17:59 +0100, Stefano Stabellini wrote:
>> On Thu, 2 Oct 2014, Stefano Stabellini wrote:
>> > > >> >> using this the ITS emulation code in xen is able to trap ITS command
>> > > >> >> writes by driver.
>> > > >> >> But we are facing a problem now, where your help is needed
>>
>> Actually  given that you are talking about the virtual ITS, why do you
>> need the real Device ID or Stream ID of the passthrough device in the
>> guest at all?  Shouldn't you just be generating a bunch of virtual IDs
>> instead?
>>
>> Let's take a step back: why do you need *real* IDs to program a
>> *virtual* ITS?
>
> Ack, I'm having a great deal of trouble figuring out why the guest would
> ever see a physical hardware identifier of any kind.
>
> Ian.
>
The guest never sees the real ID of the passthrough device.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-03  9:32                 ` manish jaggi
@ 2014-10-06 11:05                   ` manish jaggi
  2014-10-06 14:11                     ` Stefano Stabellini
  0 siblings, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-10-06 11:05 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, Matt.Evans, Dave.Martin,
	Anup Patel

On 3 October 2014 15:02, manish jaggi <manishjaggi.oss@gmail.com> wrote:
> On 2 October 2014 22:29, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>> On Thu, 2 Oct 2014, Stefano Stabellini wrote:
>>> > >> >> using this the ITS emulation code in xen is able to trap ITS command
>>> > >> >> writes by driver.
>>> > >> >> But we are facing a problem now, where your help is needed
>>
>> Actually  given that you are talking about the virtual ITS, why do you
>> need the real Device ID or Stream ID of the passthrough device in the
>> guest at all?  Shouldn't you just be generating a bunch of virtual IDs
>> instead?
>>
>> Let's take a step back: why do you need *real* IDs to program a
>> *virtual* ITS?
>>
> Guest Passes passthrough devices stream id - which is generated by BDF
> so a device with BDF (real ID) say 1:0:5.0 may at BDF 0:0:0.0 in
> guest.
> Xen ITS Emulation Code would need the information mapping between
> guestBDF actual BDF of a passthrough device.
>
> Guest ITS never sees real ID.
>
>>
>>> > >> >> The StreamID is generated by segment: bus : device: number which is
>>> > >> >> fed as DevID in ITS commands. In Dom0 the streamID is correctly
>>> > >> >> generated but in domU the Stream ID for a passthrough device is
>>> > >> >> 0:0:0:0 now when emulating this in Xen it is a problem as xen does not
>>> > >> >> know how to get the physical stream id.
>>> > >> >>
>>> > >> >> (Eg: xl pci-attach 1 0001:00:05.0
>>> > >> >> DomU has the device but in DomU the id is 0000:00:00.0.)
>>> > >> >>
>>> > >> >> Could you suggest how to go about this.
>>> > >> >
>>> > >> > I don't think that the ITS patches have been posted yet, so it is
>>> > >> > difficult for me to understand the problem and propose a solution.
>>> > >>
>>> > >> In a simpler way, It is more of what the StreamID a driver running in
>>> > >> domU sees. Which is programmed in the ITS commands.
>>> > >> And how to map the domU  streamID to actual streamID in Xen when the
>>> > >> ITS command write traps.
>>> > >
>>> > > Wouldn't it be possible to pass the correct StreamID to DomU via device
>>> > > tree? Does it really need to match the PCI BDF?
>>> > Device Tree provide static mapping, runtime attaching a device (using
>>> > xl tools) to a domU is what I am working on.
>>>
>>> As I wrote before it is difficult to answer without the patches and/or a
>>> design document.
>>>
Below is the Design Flow:

=====
Xen PCI Passthrough support for ARM

----------------------------------------------


Nomenclature:

SBDF - segment:bus:device.function.
Segment - number of the PCIe Root Complex (this is also referred to as
"domain" in Linux terminology: Domain:BDF).
domU sbdf - the sbdf the device gets when it is enumerated in the domU. This
is different from the actual sbdf.


What is required

1. Any PCI device can be assigned to any domU at runtime.
2. The assignment should not be static; the system admin must be able to
assign a PCI BDF at will.
3. The SMMU should be programmed for the device/domain pair.
4. MSIs should be directly injected into the domU.
5. Accesses to the GIC should be trapped in Xen to program MSIs (LPIs) in the ITS.
6. Frontend-backend communication between domU and dom0 must be limited
to PCI config reads/writes.

What is supported today

1. Non-PCI passthrough using a device tree: a device tree node can be
passed through to a domU.
2. The SMMU is programmed (the device's stream ID is allowed to access domU
guest memory).
3. By default a device is assigned to dom0 unless it is hidden with pci-hide
(the SMMU for that device is programmed for dom0).


Proposed Flow:

1. Xen parses its device tree to find the pci nodes. These nodes
represent the PCIe Root Complexes.

2. Xen parses its device tree to find smmu nodes whose mmu-masters
property points to the PCIe RCs (pci nodes).
Note: the mapping between a pci node and an smmu node is based on
which SMMU translates r/w requests from that PCIe RC.

3. All the pci nodes are assigned to dom0.
4. dom0 boots.

5. dom0 enumerates all the PCI devices and calls the Xen hypercall
PHYSDEVOP_pci_device_add, which in turn programs the SMMU.
Note: the MMIO regions for the device are also assigned to dom0.

6. dom0 boots domU.
Note: in the domU, the pcifront bus has an msi-controller attached. It is
assumed that the domU device tree contains the 'gicv3-its' node.

7. dom0 calls xl pci-assignable-add <sbdf>

8. dom0 calls xl pci-attach <domU_id> '<sbdf>,permissive=1'
Note: XEN_DOMCTL_assign_device is called; assign_device is implemented in smmu.c.
The MMIO regions for the device are assigned to the respective domU.

9. The frontend driver is notified by the xenwatch daemon and starts
enumerating devices.
Note: the initial_domain check in register_xen_pci_notifier() is removed.


10. The device driver requests MSIs using pci_enable_msi, which in
turn calls the ITS driver, which tries to issue commands like MAPD and
MAPVI; these are trapped by the Xen ITS emulation driver.
10a. The MAPD command contains a device id, which is a stream ID
constructed from the sbdf. That sbdf is the domU-specific one for the
device.
10b. The Xen ITS driver has to replace the domU sbdf with the actual sbdf
and program the command in the command queue.

Now there is a _problem_: Xen does not know the mapping between a
domU streamid (sbdf) and the actual sbdf.
For example, if two PCI devices are assigned to a domU, Xen does not know
which domU sbdf maps to which PCI device.
Thus the Xen ITS driver fails to program the MAPD command in the command
queue, which results in LPIs not being programmed for that device.

At the time xl pci-attach is called the domU sbdf of the device is not
known, as the enumeration of the PCI device in the domU has not started by
that time.
In xl pci-attach a virtual slot number can be provided, but it is not
used in the ITS commands issued in the domU.

If it is known somehow (__we need help on this__), then dom0 could
issue a hypercall to Xen to map the domU sbdf to the actual sbdf, roughly in
the following format:

PHYSDEVOP_map_domU_sbdf {
    actual sbdf,
    domU enumerated sbdf,
    domU_ID
}
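
On the Xen side the handler would only need to record this tuple against the
domain, so that the vITS emulation can later translate the device IDs seen in
MAPD/MAPVI. A rough sketch of a case in do_physdev_op (everything here is
hypothetical, none of these names exist in Xen today):

case PHYSDEVOP_map_domU_sbdf: {
    struct physdev_map_domU_sbdf op;  /* hypothetical: domid, phys_sbdf,
                                         guest_sbdf */
    struct domain *d;

    ret = -EFAULT;
    if ( copy_from_guest(&op, arg, 1) )
        break;

    d = rcu_lock_domain_by_id(op.domid);
    ret = -ESRCH;
    if ( d == NULL )
        break;

    /* Record guest_sbdf -> phys_sbdf for the domain's vITS to use. */
    ret = vits_add_sbdf_mapping(d, op.guest_sbdf, op.phys_sbdf);

    rcu_unlock_domain(d);
    break;
}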

=====



>>> You should be able to specify StreamID ranges in Device Tree to cover a
>>> bus. So you should be able to say that the virtual PCI bus in the guest
>>> has StreamID [0-8] for slots [0-8]. Then in your example below you need
>>> to make sure to insert the passthrough device in virtual slot 1 instead
>>> of virtual slot 0.
>>>
>>> I don't know if you were aware of this but you can already specify the
>>> virtual slot number to pci-attach, see xl pci-attach --help
>>>
>>> Otherwise you could let the frontend know the StreamID via xenbus: the
>>> backend should know the correct StreamID for the device, it could just
>>> add it to xenstore as a new parameter for the frontend.
>>>
>>> Either way you should be able to tell the frontend what is the right
>>> StreamID for the device.
>>>
>>>
>>> > > Otherwise if the command trap into Xen, couldn't Xen do the translation?
>>> > Xen does not know how to map the BDF in domU to actual streamID.
>>> >
>>> > I had thought of adding a hypercall,  when xl pci-attach is called.
>>> > PHYSDEVOP_map_streamid {
>>> >     dom_id,
>>> >     phys_streamid, //bdf
>>> >     guest_streamid,
>>> > }
>>> >
>>> >  But I am not able to get correct BDF of domU.
>>>
>>> I don't think that an hypercall is a good way to solve this.
>>>
>>>
>>> > For instance the logs at 2 different place give diff BDFs
>>> >
>>> > #xl pci-attach 1 '0002:01:00.1,permissive=1'
>>> >
>>> > xen-pciback pci-1-0: xen_pcibk_export_device exporting dom 2 bus 1 slot 0 func 1
>>> > xen_pciback: vpci: 0002:01:00.1: assign to virtual slot 1
>>> > xen_pcibk_publish_pci_dev 0000:00:01.00
>>> >
>>> > Code that generated print:
>>> > static int xen_pcibk_publish_pci_dev(struct xen_pcibk_device *pdev,
>>> >                                    unsigned int domain, unsigned int bus,
>>> >                                    unsigned int devfn, unsigned int devid)
>>> > {
>>> >     ...
>>> >         printk(KERN_ERR"%s %04x:%02x:%02x.%02x",__func__, domain, bus,
>>> >                             PCI_SLOT(devfn), PCI_FUNC(devfn));
>>> >
>>> >
>>> > While in xen_pcibk_do_op Print is:
>>> >
>>> > xen_pcibk_do_op Guest SBDF=0:0:1.1 (this is output of lspci in domU)
>>> >
>>> > Code that generated print:
>>> >
>>> > void xen_pcibk_do_op(struct work_struct *data)
>>> > {
>>> >      ...
>>> >         if (dev == NULL)
>>> >                 op->err = XEN_PCI_ERR_dev_not_found;
>>> >         else {
>>> >         printk(KERN_ERR"%s Guest SBDF=%d:%d:%d.%d \r\n",__func__,
>>> > op->domain, op->bus, op->devfn>>3, op->devfn&0x7);
>>> >
>>> >
>>> > Stefano, I need your help in this
>>>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-06 11:05                   ` manish jaggi
@ 2014-10-06 14:11                     ` Stefano Stabellini
  2014-10-06 15:38                       ` Ian Campbell
  2014-10-06 17:39                       ` manish jaggi
  0 siblings, 2 replies; 38+ messages in thread
From: Stefano Stabellini @ 2014-10-06 14:11 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ian Campbell, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, Matt.Evans, psawargaonkar,
	Dave.Martin, Anup Patel

On Mon, 6 Oct 2014, manish jaggi wrote:
> Below is the Design Flow:
> 
> =====
> Xen PCI Passthrough support for ARM
> 
> ----------------------------------------------
> 
> 
> Nomenclature:
> 
> SBDF - segment:bus:device.function.
> Segment - Number of the PCIe RootComplex. (this is also referred to as
> domain in linux terminology Domain:BDF)
> domU sbdf - refers to the sbdf of the device assigned when enumerated
> in domU. This is different from the actual sbdf.
> 
> 
> What is the requirement
> 
> 1. Any PCI device be assigned to any domU at runtime.
> 2. The assignment should not be static, system admin must be able to
> assign a PCI BDF at will.
> 3. SMMU should be programmed for the device/domain pair
> 4. MSI should be directly injected into domU.
> 5. Access to GIC should be trapped in Xen to program MSI(LPI) in ITS
> 6. FrontEnd - Backend communication between DomU-Dom0 must be limited
> to PCI config read writes.
> 
> What is support today
> 
> 1. Non PCI passthrough using a device tree. A device tree node can be
> passthrough to a domU.
> 2. SMMU is programmed (devices' stream ID is allowed to access domU
> guest memory)
> 3. By default a device is assigned to dom0 if not done a pci-hide.
> (SMMU for that device is programmed for dom0)
> 
> 
> Proposed Flow:
> 
> 1. Xen parses its device tree to find the pci nodes. There nodes
> represent the PCIe Root Complexes
> 
> 2. Xen parses its device tree to find smmu nodes which have mmu-master
> property which should point to the PCIe RCs (pci nodes)
> Note: The mapping between the pci node and the smmu node is based on:
> which smmu which translates r/w requests from that pcie
> 
> 3. All the pci nodes are assigned to the dom0.
> 4. dom0 boots
> 
> 5. dom0 enumerates all the PCI devices and calls xen hypercall
> PHYSDEVOP_pci_device_add. This in-turn programs the smmu
> Note: Also The MMIO regions for the device are assigned to the dom0.
> 
> 6. dom0 boots domU.
> Note: in domU, the pcifront bus has a msi-controller attached. It is
> assumed that the domU device tree contains the 'gicv3-its' node.
> 
> 7. dom0 calls xl pci-assignable-add <sbdf>
> 
> 8. dom0 calls xl pci-attach <domU_id> '<sbdf>,permissive=1'
> Note: XEN_DOMCTL_assign_device is called. Implemented assign_device in smmu.c
> The MMIO regions for the device are assigned to the respective domU.
> 
> 9. Front end driver is notified by xenwatch daemon and it starts
> enumerating devices.
> Note: the check of initial_domain in register_xen_pci_notifier() is removed.
> 
> 
> 10. The device driver requests the msi using pcie_enable_msi, which in
> turn calls the its driver which tries to issue commands like MAPD,
> MAPVI. which are trapped by the Xen ITS emulation driver.
> 10a. The MAPD command contains a device id, which is a stream ID
> constructed using the sbdf. The sbdf is specific to domU for that
> device.
> 10b. Xen ITS driver has to replace the domU sbdf to the actual sdbf
> and program the command in the command queue.
> 
> Now there is a _problem_ that xen does not know the mapping between
> domU streamid (sbdf) and actual sbdf.
> For ex if two pci devices are assigned to domU, xen does not know that
> which domU sbdf maps to which pci device
> Thus the Xen ITS driver fails to program the MAPD command in command
> queue, which results LPIs not programmed for that device
> 
> At the time xl pci-attach is called the domU sbdf of the device is not
> known, as the enumeration of the PCI device in domU has not started by
> that time.
> in xl pci-attach a virtual slot number can be provided but it is not
> used in ITS commands in domU.
> 
> If it is known somehow (__we need help on this__) then dom0 could
> issue a hypercall to xen to map domU sbdf to actual in the following
> format
> 
> PHYSDEVOP_map_domU_sbdf{
> 
> actual sBDF,
> domU enumerated sBDF
> and domU_ID.
> 
> }

Now the problem is much much clearer, thank you!

Actually the xen-pcifront driver in the guest knows the real PCI sbdf
for the assigned device, not just the virtual slot. On x86 xen-pcifront
makes an hypercall to enable msi/msix on the device passing the real
sbdf as argument:

drivers/pci/xen-pcifront.c:pci_frontend_enable_msix

Could we use the same hypercall to enable msi/msix on ARM? That would be
ideal.

Otherwise xen-pcifront could call a new hypercall to let Xen know the
virtual sbdf to sbdf mapping. But I would prefer not to introduce a new
hypercall and reuse the existing one.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-06 14:11                     ` Stefano Stabellini
@ 2014-10-06 15:38                       ` Ian Campbell
  2014-10-06 17:39                         ` manish jaggi
  2014-10-06 17:39                       ` manish jaggi
  1 sibling, 1 reply; 38+ messages in thread
From: Ian Campbell @ 2014-10-06 15:38 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Anup Patel, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, Matt.Evans, Dave.Martin,
	manish jaggi

On Mon, 2014-10-06 at 15:11 +0100, Stefano Stabellini wrote:
> Actually the xen-pcifront driver in the guest knows the real PCI sbdf
> for the assigned device, not just the virtual slot. On x86 xen-pcifront
> makes an hypercall to enable msi/msix on the device passing the real
> sbdf as argument:
> 
> drivers/pci/xen-pcifront.c:pci_frontend_enable_msix
> 
> Could we use the same hypercall to enable msi/msix on ARM? That would be
> ideal.

That's not a hypercall, it's a message to pciback.

And I think it takes the virtual BDF, since pciback knows how to
translate such things.

> Otherwise xen-pcifront could call a new hypercall to let Xen know the
> virtual sbdf to sbdf mapping. But I would prefer not to introduce a new
> hypercall and reuse the existing one.

Ian.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-06 14:11                     ` Stefano Stabellini
  2014-10-06 15:38                       ` Ian Campbell
@ 2014-10-06 17:39                       ` manish jaggi
  2014-10-07 18:17                         ` Stefano Stabellini
  1 sibling, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-10-06 17:39 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, Matt.Evans, Dave.Martin,
	Anup Patel

On 6 October 2014 19:41, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 6 Oct 2014, manish jaggi wrote:
>> Below is the Design Flow:
>>
>> =====
>> Xen PCI Passthrough support for ARM
>>
>> ----------------------------------------------
>>
>>
>> Nomenclature:
>>
>> SBDF - segment:bus:device.function.
>> Segment - Number of the PCIe RootComplex. (this is also referred to as
>> domain in linux terminology Domain:BDF)
>> domU sbdf - refers to the sbdf of the device assigned when enumerated
>> in domU. This is different from the actual sbdf.
>>
>>
>> What is the requirement
>>
>> 1. Any PCI device be assigned to any domU at runtime.
>> 2. The assignment should not be static, system admin must be able to
>> assign a PCI BDF at will.
>> 3. SMMU should be programmed for the device/domain pair
>> 4. MSI should be directly injected into domU.
>> 5. Access to GIC should be trapped in Xen to program MSI(LPI) in ITS
>> 6. FrontEnd - Backend communication between DomU-Dom0 must be limited
>> to PCI config read writes.
>>
>> What is support today
>>
>> 1. Non PCI passthrough using a device tree. A device tree node can be
>> passthrough to a domU.
>> 2. SMMU is programmed (devices' stream ID is allowed to access domU
>> guest memory)
>> 3. By default a device is assigned to dom0 if not done a pci-hide.
>> (SMMU for that device is programmed for dom0)
>>
>>
>> Proposed Flow:
>>
>> 1. Xen parses its device tree to find the pci nodes. There nodes
>> represent the PCIe Root Complexes
>>
>> 2. Xen parses its device tree to find smmu nodes which have mmu-master
>> property which should point to the PCIe RCs (pci nodes)
>> Note: The mapping between the pci node and the smmu node is based on:
>> which smmu which translates r/w requests from that pcie
>>
>> 3. All the pci nodes are assigned to the dom0.
>> 4. dom0 boots
>>
>> 5. dom0 enumerates all the PCI devices and calls xen hypercall
>> PHYSDEVOP_pci_device_add. This in-turn programs the smmu
>> Note: Also The MMIO regions for the device are assigned to the dom0.
>>
>> 6. dom0 boots domU.
>> Note: in domU, the pcifront bus has a msi-controller attached. It is
>> assumed that the domU device tree contains the 'gicv3-its' node.
>>
>> 7. dom0 calls xl pci-assignable-add <sbdf>
>>
>> 8. dom0 calls xl pci-attach <domU_id> '<sbdf>,permissive=1'
>> Note: XEN_DOMCTL_assign_device is called. Implemented assign_device in smmu.c
>> The MMIO regions for the device are assigned to the respective domU.
>>
>> 9. Front end driver is notified by xenwatch daemon and it starts
>> enumerating devices.
>> Note: the check of initial_domain in register_xen_pci_notifier() is removed.
>>
>>
>> 10. The device driver requests the msi using pcie_enable_msi, which in
>> turn calls the its driver which tries to issue commands like MAPD,
>> MAPVI. which are trapped by the Xen ITS emulation driver.
>> 10a. The MAPD command contains a device id, which is a stream ID
>> constructed using the sbdf. The sbdf is specific to domU for that
>> device.
>> 10b. Xen ITS driver has to replace the domU sbdf to the actual sdbf
>> and program the command in the command queue.
>>
>> Now there is a _problem_ that xen does not know the mapping between
>> domU streamid (sbdf) and actual sbdf.
>> For ex if two pci devices are assigned to domU, xen does not know that
>> which domU sbdf maps to which pci device
>> Thus the Xen ITS driver fails to program the MAPD command in command
>> queue, which results LPIs not programmed for that device
>>
>> At the time xl pci-attach is called the domU sbdf of the device is not
>> known, as the enumeration of the PCI device in domU has not started by
>> that time.
>> in xl pci-attach a virtual slot number can be provided but it is not
>> used in ITS commands in domU.
>>
>> If it is known somehow (__we need help on this__) then dom0 could
>> issue a hypercall to xen to map domU sbdf to actual in the following
>> format
>>
>> PHYSDEVOP_map_domU_sbdf{
>>
>> actual sBDF,
>> domU enumerated sBDF
>> and domU_ID.
>>
>> }
>
> Now the problem is much much clearer, thank you!
>
> Actually the xen-pcifront driver in the guest knows the real PCI sbdf
> for the assigned device, not just the virtual slot.
Could you please point me to the data structure where it is stored?
PS: earlier in this thread you were averse to the guest knowing the real
sbdf at all.

>On x86 xen-pcifront
> makes an hypercall to enable msi/msix on the device passing the real
> sbdf as argument:
On ARM, since GIC accesses for non-MSI interrupts (SPIs, SGIs) are already
trapped directly in Xen, it makes sense to use traps for LPIs as well; the
same logic is being proposed here. Apart from PCI config space r/w, the
proposal is not to use front-end/back-end communication for MSIs.
In the long run, I plan to trap directly into Xen for PCI config space r/w too.
Also, in our previous mails we agreed on utilising ARM trap-and-emulate for MSIs.


>
> drivers/pci/xen-pcifront.c:pci_frontend_enable_msix
>
> Could we use the same hypercall to enable msi/msix on ARM? That would be
> ideal.
>
> Otherwise xen-pcifront could call a new hypercall to let Xen know the
> virtual sbdf to sbdf mapping. But I would prefer not to introduce a new
> hypercall and reuse the existing one.
We are proposing the removal of pcifront/pciback communication for MSIs, and I
do not want to introduce a new hypercall.
If we know the guest sbdf, it can be done in the pci-attach DOMCTL itself.
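What I mean is roughly the following (a sketch; IIRC the current struct only
carries machine_sbdf, the extra field is hypothetical, and the open problem
remains that guest_sbdf is not known when xl pci-attach runs):

struct xen_domctl_assign_device {
    uint32_t machine_sbdf;  /* existing: machine PCI ID of the device */
    uint32_t guest_sbdf;    /* hypothetical new field: sbdf the device
                               will get when enumerated in the domU */
};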

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-06 15:38                       ` Ian Campbell
@ 2014-10-06 17:39                         ` manish jaggi
  0 siblings, 0 replies; 38+ messages in thread
From: manish jaggi @ 2014-10-06 17:39 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Anup Patel, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, Matt.Evans, psawargaonkar,
	Dave.Martin

On 6 October 2014 21:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-10-06 at 15:11 +0100, Stefano Stabellini wrote:
>> Actually the xen-pcifront driver in the guest knows the real PCI sbdf
>> for the assigned device, not just the virtual slot. On x86 xen-pcifront
>> makes an hypercall to enable msi/msix on the device passing the real
>> sbdf as argument:
>>
>> drivers/pci/xen-pcifront.c:pci_frontend_enable_msix
>>
>> Could we use the same hypercall to enable msi/msix on ARM? That would be
>> ideal.
>
> That's not a hypercall, it's a message to pciback.
>
> And I think it takes the virtual BDF, since pciback knows how to
> translate such things.
>
>> Otherwise xen-pcifront could call a new hypercall to let Xen know the
>> virtual sbdf to sbdf mapping. But I would prefer not to introduce a new
>> hypercall and reuse the existing one.
Correct!
>
> Ian.
>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-06 17:39                       ` manish jaggi
@ 2014-10-07 18:17                         ` Stefano Stabellini
  2014-10-08 11:46                           ` manish jaggi
  0 siblings, 1 reply; 38+ messages in thread
From: Stefano Stabellini @ 2014-10-07 18:17 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ian Campbell, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, psawargaonkar, Anup Patel

As the discussion is becoming Xen specific, reduce the CC list.

On Mon, 6 Oct 2014, manish jaggi wrote:
> On 6 October 2014 19:41, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 6 Oct 2014, manish jaggi wrote:
> >> Below is the Design Flow:
> >>
> >> =====
> >> Xen PCI Passthrough support for ARM
> >>
> >> ----------------------------------------------
> >>
> >>
> >> Nomenclature:
> >>
> >> SBDF - segment:bus:device.function.
> >> Segment - Number of the PCIe RootComplex. (this is also referred to as
> >> domain in linux terminology Domain:BDF)
> >> domU sbdf - refers to the sbdf of the device assigned when enumerated
> >> in domU. This is different from the actual sbdf.
> >>
> >>
> >> What is the requirement
> >>
> >> 1. Any PCI device be assigned to any domU at runtime.
> >> 2. The assignment should not be static, system admin must be able to
> >> assign a PCI BDF at will.
> >> 3. SMMU should be programmed for the device/domain pair
> >> 4. MSI should be directly injected into domU.
> >> 5. Access to GIC should be trapped in Xen to program MSI(LPI) in ITS
> >> 6. FrontEnd - Backend communication between DomU-Dom0 must be limited
> >> to PCI config read writes.
> >>
> >> What is support today
> >>
> >> 1. Non PCI passthrough using a device tree. A device tree node can be
> >> passthrough to a domU.
> >> 2. SMMU is programmed (devices' stream ID is allowed to access domU
> >> guest memory)
> >> 3. By default a device is assigned to dom0 if not done a pci-hide.
> >> (SMMU for that device is programmed for dom0)
> >>
> >>
> >> Proposed Flow:
> >>
> >> 1. Xen parses its device tree to find the pci nodes. There nodes
> >> represent the PCIe Root Complexes
> >>
> >> 2. Xen parses its device tree to find smmu nodes which have mmu-master
> >> property which should point to the PCIe RCs (pci nodes)
> >> Note: The mapping between the pci node and the smmu node is based on:
> >> which smmu which translates r/w requests from that pcie
> >>
> >> 3. All the pci nodes are assigned to the dom0.
> >> 4. dom0 boots
> >>
> >> 5. dom0 enumerates all the PCI devices and calls xen hypercall
> >> PHYSDEVOP_pci_device_add. This in-turn programs the smmu
> >> Note: Also The MMIO regions for the device are assigned to the dom0.
> >>
> >> 6. dom0 boots domU.
> >> Note: in domU, the pcifront bus has a msi-controller attached. It is
> >> assumed that the domU device tree contains the 'gicv3-its' node.
> >>
> >> 7. dom0 calls xl pci-assignable-add <sbdf>
> >>
> >> 8. dom0 calls xl pci-attach <domU_id> '<sbdf>,permissive=1'
> >> Note: XEN_DOMCTL_assign_device is called. Implemented assign_device in smmu.c
> >> The MMIO regions for the device are assigned to the respective domU.
> >>
> >> 9. Front end driver is notified by xenwatch daemon and it starts
> >> enumerating devices.
> >> Note: the check of initial_domain in register_xen_pci_notifier() is removed.
> >>
> >>
> >> 10. The device driver requests the msi using pcie_enable_msi, which in
> >> turn calls the its driver which tries to issue commands like MAPD,
> >> MAPVI. which are trapped by the Xen ITS emulation driver.
> >> 10a. The MAPD command contains a device id, which is a stream ID
> >> constructed using the sbdf. The sbdf is specific to domU for that
> >> device.
> >> 10b. Xen ITS driver has to replace the domU sbdf to the actual sdbf
> >> and program the command in the command queue.
> >>
> >> Now there is a _problem_ that xen does not know the mapping between
> >> domU streamid (sbdf) and actual sbdf.
> >> For ex if two pci devices are assigned to domU, xen does not know that
> >> which domU sbdf maps to which pci device
> >> Thus the Xen ITS driver fails to program the MAPD command in command
> >> queue, which results LPIs not programmed for that device
> >>
> >> At the time xl pci-attach is called the domU sbdf of the device is not
> >> known, as the enumeration of the PCI device in domU has not started by
> >> that time.
> >> in xl pci-attach a virtual slot number can be provided but it is not
> >> used in ITS commands in domU.
> >>
> >> If it is known somehow (__we need help on this__) then dom0 could
> >> issue a hypercall to xen to map domU sbdf to actual in the following
> >> format
> >>
> >> PHYSDEVOP_map_domU_sbdf{
> >>
> >> actual sBDF,
> >> domU enumerated sBDF
> >> and domU_ID.
> >>
> >> }
> >
> > Now the problem is much much clearer, thank you!
> >
> > Actually the xen-pcifront driver in the guest knows the real PCI sbdf
> > for the assigned device, not just the virtual slot.
> Could you please help in the data structure where it is stored.

Actually I was wrong. It is true that the real sbdf is exposed to the
guest via xenstore (see the dev-0 backend node) but it is not currently
read by the guest and probably it shouldn't be readable in the first
place. It's best not to rely on it.

The virtual to physical sbdf mapping is done by pciback, see
drivers/xen/xen-pciback/xenbus.c and drivers/xen/xen-pciback/vpci.c.
Pciback should be the one telling Xen what the mapping is.
Unfortunately I think that you'll probably have to introduce a new
hypercall to do it.
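
For illustration, the natural spot is the slot-assignment loop in
__xen_pcibk_add_pci_dev() in drivers/xen/xen-pciback/vpci.c, since that is
the point where the physical device, the virtual slot it is exported under
and the target domain are all known at once. A simplified sketch (the
notification itself is hypothetical, no such physdev op exists today):

    /* drivers/xen/xen-pciback/vpci.c, __xen_pcibk_add_pci_dev(), simplified */
    for (slot = 0; slot < PCI_SLOT_MAX; slot++) {
            if (list_empty(&vpci_dev->dev_list[slot])) {
                    list_add_tail(&dev_entry->list, &vpci_dev->dev_list[slot]);
                    func = dev->is_virtfn ? 0 : PCI_FUNC(dev->devfn);
                    /*
                     * Real SBDF:    pci_domain_nr(dev->bus), dev->bus->number,
                     *               dev->devfn
                     * Virtual SBDF: segment 0, bus 0, PCI_DEVFN(slot, func)
                     * Domain:       pdev->xdev->otherend_id
                     * This is where pciback could issue the new hypercall to
                     * tell Xen about the virtual-to-physical mapping.
                     */
                    break;
            }
    }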



> PS: in this mail thread you were averse to the fact that why guest
> should know the real sbdf.

I am averse to having to rely on real StreamIDs being exposed and used
in the guest virtual ITS.
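
For illustration, the alternative is for Xen's vITS emulation to do the
rewrite itself when it traps MAPD, using a per-domain table populated from
the mapping that dom0 provides. A rough sketch (the structure and helper
below are hypothetical, not existing Xen code):

    /* hypothetical per-domain table: guest deviceID -> physical deviceID */
    struct vdevid_map {
        uint32_t guest_devid;   /* built by the guest from its virtual sbdf */
        uint32_t phys_devid;    /* built from the real sbdf / StreamID */
    };

    /* on a trapped MAPD, rewrite the deviceID before queueing the real command */
    static int vits_lookup_devid(const struct vdevid_map *map, unsigned int nr,
                                 uint32_t guest_devid, uint32_t *phys_devid)
    {
        unsigned int i;

        for ( i = 0; i < nr; i++ )
        {
            if ( map[i].guest_devid != guest_devid )
                continue;
            *phys_devid = map[i].phys_devid;
            return 0;
        }

        return -ENODEV;  /* no mapping, so the guest's MAPD must be rejected */
    }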


> >On x86 xen-pcifront
> > makes an hypercall to enable msi/msix on the device passing the real
> > sbdf as argument:
> On ARM since for non msi interrupts (SPI, SGI's) GIC access is
> directly trapped in Xen, it makes sense to use traps for LPIs
> The same logic is being proposed. Apart from PCI config space r/w the
> proposal is not to use front-end back-end communication for MSIs.
> In the long run, I plan to trap directly into Xen for PCI config space r/w.
> Also in our previous mails we agreed on utilising arm trap and emulate for MSI.

OK

 
> >
> > drivers/pci/xen-pcifront.c:pci_frontend_enable_msix
> >
> > Could we use the same hypercall to enable msi/msix on ARM? That would be
> > ideal.
> >
> > Otherwise xen-pcifront could call a new hypercall to let Xen know the
> > virtual sbdf to sbdf mapping. But I would prefer not to introduce a new
> > hypercall and reuse the existing one.
> We are proposing removal of pcifront back communication for MSIs. I do
> not want to introduce a new hypercall
> If we know the guest sbdf it can be done in the pci-attach DOMCTL itself.

I agree that it would be nice to avoid a new hypercall but it might be
the only way to do it. Dig through the pciback code and see what you can
do, I am open to alternatives.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-07 18:17                         ` Stefano Stabellini
@ 2014-10-08 11:46                           ` manish jaggi
  2014-10-08 12:46                             ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-10-08 11:46 UTC (permalink / raw)
  To: Stefano Stabellini, Ryan Wilson
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Julien Grall, xen-devel, psawargaonkar, Anup Patel

On 7 October 2014 23:47, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> As the discussion is becoming Xen specific, reduce the CC list.
>
> On Mon, 6 Oct 2014, manish jaggi wrote:
>> On 6 October 2014 19:41, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > On Mon, 6 Oct 2014, manish jaggi wrote:
>> >> Below is the Design Flow:
>> >>
>> >> =====
>> >> Xen PCI Passthrough support for ARM
>> >>
>> >> ----------------------------------------------
>> >>
>> >>
>> >> Nomenclature:
>> >>
>> >> SBDF - segment:bus:device.function.
>> >> Segment - Number of the PCIe RootComplex. (this is also referred to as
>> >> domain in linux terminology Domain:BDF)
>> >> domU sbdf - refers to the sbdf of the device assigned when enumerated
>> >> in domU. This is different from the actual sbdf.
>> >>
>> >>
>> >> What is the requirement
>> >>
>> >> 1. Any PCI device be assigned to any domU at runtime.
>> >> 2. The assignment should not be static, system admin must be able to
>> >> assign a PCI BDF at will.
>> >> 3. SMMU should be programmed for the device/domain pair
>> >> 4. MSI should be directly injected into domU.
>> >> 5. Access to GIC should be trapped in Xen to program MSI(LPI) in ITS
>> >> 6. FrontEnd - Backend communication between DomU-Dom0 must be limited
>> >> to PCI config read writes.
>> >>
>> >> What is support today
>> >>
>> >> 1. Non PCI passthrough using a device tree. A device tree node can be
>> >> passthrough to a domU.
>> >> 2. SMMU is programmed (devices' stream ID is allowed to access domU
>> >> guest memory)
>> >> 3. By default a device is assigned to dom0 if not done a pci-hide.
>> >> (SMMU for that device is programmed for dom0)
>> >>
>> >>
>> >> Proposed Flow:
>> >>
>> >> 1. Xen parses its device tree to find the pci nodes. There nodes
>> >> represent the PCIe Root Complexes
>> >>
>> >> 2. Xen parses its device tree to find smmu nodes which have mmu-master
>> >> property which should point to the PCIe RCs (pci nodes)
>> >> Note: The mapping between the pci node and the smmu node is based on:
>> >> which smmu which translates r/w requests from that pcie
>> >>
>> >> 3. All the pci nodes are assigned to the dom0.
>> >> 4. dom0 boots
>> >>
>> >> 5. dom0 enumerates all the PCI devices and calls xen hypercall
>> >> PHYSDEVOP_pci_device_add. This in-turn programs the smmu
>> >> Note: Also The MMIO regions for the device are assigned to the dom0.
>> >>
>> >> 6. dom0 boots domU.
>> >> Note: in domU, the pcifront bus has a msi-controller attached. It is
>> >> assumed that the domU device tree contains the 'gicv3-its' node.
>> >>
>> >> 7. dom0 calls xl pci-assignable-add <sbdf>
>> >>
>> >> 8. dom0 calls xl pci-attach <domU_id> '<sbdf>,permissive=1'
>> >> Note: XEN_DOMCTL_assign_device is called. Implemented assign_device in smmu.c
>> >> The MMIO regions for the device are assigned to the respective domU.
>> >>
>> >> 9. Front end driver is notified by xenwatch daemon and it starts
>> >> enumerating devices.
>> >> Note: the check of initial_domain in register_xen_pci_notifier() is removed.
>> >>
>> >>
>> >> 10. The device driver requests the msi using pcie_enable_msi, which in
>> >> turn calls the its driver which tries to issue commands like MAPD,
>> >> MAPVI. which are trapped by the Xen ITS emulation driver.
>> >> 10a. The MAPD command contains a device id, which is a stream ID
>> >> constructed using the sbdf. The sbdf is specific to domU for that
>> >> device.
>> >> 10b. Xen ITS driver has to replace the domU sbdf to the actual sdbf
>> >> and program the command in the command queue.
>> >>
>> >> Now there is a _problem_ that xen does not know the mapping between
>> >> domU streamid (sbdf) and actual sbdf.
>> >> For ex if two pci devices are assigned to domU, xen does not know that
>> >> which domU sbdf maps to which pci device
>> >> Thus the Xen ITS driver fails to program the MAPD command in command
>> >> queue, which results LPIs not programmed for that device
>> >>
>> >> At the time xl pci-attach is called the domU sbdf of the device is not
>> >> known, as the enumeration of the PCI device in domU has not started by
>> >> that time.
>> >> in xl pci-attach a virtual slot number can be provided but it is not
>> >> used in ITS commands in domU.
>> >>
>> >> If it is known somehow (__we need help on this__) then dom0 could
>> >> issue a hypercall to xen to map domU sbdf to actual in the following
>> >> format
>> >>
>> >> PHYSDEVOP_map_domU_sbdf{
>> >>
>> >> actual sBDF,
>> >> domU enumerated sBDF
>> >> and domU_ID.
>> >>
>> >> }
>> >
>> > Now the problem is much much clearer, thank you!
>> >
>> > Actually the xen-pcifront driver in the guest knows the real PCI sbdf
>> > for the assigned device, not just the virtual slot.
>> Could you please help in the data structure where it is stored.
>
> Actually I was wrong. It is true that the real sbdf is exposed to the
> guest via xenstore (see the dev-0 backend node) but it is not currently
> read by the guest and probably it shouldn't be readable in the first
> place. It's best not to rely on it.
>
> The virtual to physical sbdf mapping is done by pciback, see
> drivers/xen/xen-pciback/xenbus.c and drivers/xen/xen-pciback/vpci.c.
> Pciback should be one telling Xen what the mapping is.
> Unfortunately I think that you'll probably have to introduce a new
> hypercall to do it.
>
>
>
>> PS: in this mail thread you were averse to the fact that why guest
>> should know the real sbdf.
>
> I am averse to having to rely on real StreamIDs being exposed and used
> in the guest virtual ITS.
>
>
>> >On x86 xen-pcifront
>> > makes an hypercall to enable msi/msix on the device passing the real
>> > sbdf as argument:
>> On ARM since for non msi interrupts (SPI, SGI's) GIC access is
>> directly trapped in Xen, it makes sense to use traps for LPIs
>> The same logic is being proposed. Apart from PCI config space r/w the
>> proposal is not to use front-end back-end communication for MSIs.
>> In the long run, I plan to trap directly into Xen for PCI config space r/w.
>> Also in our previous mails we agreed on utilising arm trap and emulate for MSI.
>
> OK
>
>
>> >
>> > drivers/pci/xen-pcifront.c:pci_frontend_enable_msix
>> >
>> > Could we use the same hypercall to enable msi/msix on ARM? That would be
>> > ideal.
>> >
>> > Otherwise xen-pcifront could call a new hypercall to let Xen know the
>> > virtual sbdf to sbdf mapping. But I would prefer not to introduce a new
>> > hypercall and reuse the existing one.
>> We are proposing removal of pcifront back communication for MSIs. I do
>> not want to introduce a new hypercall
>> If we know the guest sbdf it can be done in the pci-attach DOMCTL itself.
>
> I agree that it would be nice to avoid a new hypercall but it might be
> the only way to do it. Dig through the pciback code and see what you can
> do, I am open to alternatives.
> pciback code and see what you can do.
Stefano, I need your help on this; can you point out someone who can help?
I am adding Ryan Wilson, the author, as well. Is he still active on Xen?

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-08 11:46                           ` manish jaggi
@ 2014-10-08 12:46                             ` Konrad Rzeszutek Wilk
  2014-10-08 13:37                               ` manish jaggi
  0 siblings, 1 reply; 38+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-10-08 12:46 UTC (permalink / raw)
  To: manish jaggi
  Cc: Julien Grall, Ian Campbell, Vijay Kilari, Stefano Stabellini,
	Prasun Kapoor, manish.jaggi, Ryan Wilson, xen-devel,
	psawargaonkar, Anup Patel

On Wed, Oct 08, 2014 at 05:16:53PM +0530, manish jaggi wrote:
> On 7 October 2014 23:47, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > As the discussion is becoming Xen specific, reduce the CC list.
> >
> > On Mon, 6 Oct 2014, manish jaggi wrote:
> >> On 6 October 2014 19:41, Stefano Stabellini
> >> <stefano.stabellini@eu.citrix.com> wrote:
> >> > On Mon, 6 Oct 2014, manish jaggi wrote:
> >> >> Below is the Design Flow:
> >> >>
> >> >> =====
> >> >> Xen PCI Passthrough support for ARM
> >> >>
> >> >> ----------------------------------------------
> >> >>
> >> >>
> >> >> Nomenclature:
> >> >>
> >> >> SBDF - segment:bus:device.function.
> >> >> Segment - Number of the PCIe RootComplex. (this is also referred to as
> >> >> domain in linux terminology Domain:BDF)
> >> >> domU sbdf - refers to the sbdf of the device assigned when enumerated
> >> >> in domU. This is different from the actual sbdf.
> >> >>
> >> >>
> >> >> What is the requirement
> >> >>
> >> >> 1. Any PCI device be assigned to any domU at runtime.
> >> >> 2. The assignment should not be static, system admin must be able to
> >> >> assign a PCI BDF at will.
> >> >> 3. SMMU should be programmed for the device/domain pair
> >> >> 4. MSI should be directly injected into domU.
> >> >> 5. Access to GIC should be trapped in Xen to program MSI(LPI) in ITS
> >> >> 6. FrontEnd - Backend communication between DomU-Dom0 must be limited
> >> >> to PCI config read writes.
> >> >>
> >> >> What is support today
> >> >>
> >> >> 1. Non PCI passthrough using a device tree. A device tree node can be
> >> >> passthrough to a domU.
> >> >> 2. SMMU is programmed (devices' stream ID is allowed to access domU
> >> >> guest memory)
> >> >> 3. By default a device is assigned to dom0 if not done a pci-hide.
> >> >> (SMMU for that device is programmed for dom0)
> >> >>
> >> >>
> >> >> Proposed Flow:
> >> >>
> >> >> 1. Xen parses its device tree to find the pci nodes. There nodes
> >> >> represent the PCIe Root Complexes
> >> >>
> >> >> 2. Xen parses its device tree to find smmu nodes which have mmu-master
> >> >> property which should point to the PCIe RCs (pci nodes)
> >> >> Note: The mapping between the pci node and the smmu node is based on:
> >> >> which smmu which translates r/w requests from that pcie
> >> >>
> >> >> 3. All the pci nodes are assigned to the dom0.
> >> >> 4. dom0 boots
> >> >>
> >> >> 5. dom0 enumerates all the PCI devices and calls xen hypercall
> >> >> PHYSDEVOP_pci_device_add. This in-turn programs the smmu
> >> >> Note: Also The MMIO regions for the device are assigned to the dom0.
> >> >>
> >> >> 6. dom0 boots domU.
> >> >> Note: in domU, the pcifront bus has a msi-controller attached. It is
> >> >> assumed that the domU device tree contains the 'gicv3-its' node.
> >> >>
> >> >> 7. dom0 calls xl pci-assignable-add <sbdf>
> >> >>
> >> >> 8. dom0 calls xl pci-attach <domU_id> '<sbdf>,permissive=1'
> >> >> Note: XEN_DOMCTL_assign_device is called. Implemented assign_device in smmu.c
> >> >> The MMIO regions for the device are assigned to the respective domU.
> >> >>
> >> >> 9. Front end driver is notified by xenwatch daemon and it starts
> >> >> enumerating devices.
> >> >> Note: the check of initial_domain in register_xen_pci_notifier() is removed.
> >> >>
> >> >>
> >> >> 10. The device driver requests the msi using pcie_enable_msi, which in
> >> >> turn calls the its driver which tries to issue commands like MAPD,
> >> >> MAPVI. which are trapped by the Xen ITS emulation driver.
> >> >> 10a. The MAPD command contains a device id, which is a stream ID
> >> >> constructed using the sbdf. The sbdf is specific to domU for that
> >> >> device.
> >> >> 10b. Xen ITS driver has to replace the domU sbdf to the actual sdbf
> >> >> and program the command in the command queue.
> >> >>
> >> >> Now there is a _problem_ that xen does not know the mapping between
> >> >> domU streamid (sbdf) and actual sbdf.
> >> >> For ex if two pci devices are assigned to domU, xen does not know that
> >> >> which domU sbdf maps to which pci device
> >> >> Thus the Xen ITS driver fails to program the MAPD command in command
> >> >> queue, which results LPIs not programmed for that device
> >> >>
> >> >> At the time xl pci-attach is called the domU sbdf of the device is not
> >> >> known, as the enumeration of the PCI device in domU has not started by
> >> >> that time.
> >> >> in xl pci-attach a virtual slot number can be provided but it is not
> >> >> used in ITS commands in domU.
> >> >>
> >> >> If it is known somehow (__we need help on this__) then dom0 could
> >> >> issue a hypercall to xen to map domU sbdf to actual in the following
> >> >> format
> >> >>
> >> >> PHYSDEVOP_map_domU_sbdf{
> >> >>
> >> >> actual sBDF,
> >> >> domU enumerated sBDF
> >> >> and domU_ID.
> >> >>
> >> >> }
> >> >
> >> > Now the problem is much much clearer, thank you!
> >> >
> >> > Actually the xen-pcifront driver in the guest knows the real PCI sbdf
> >> > for the assigned device, not just the virtual slot.
> >> Could you please help in the data structure where it is stored.
> >
> > Actually I was wrong. It is true that the real sbdf is exposed to the
> > guest via xenstore (see the dev-0 backend node) but it is not currently
> > read by the guest and probably it shouldn't be readable in the first
> > place. It's best not to rely on it.
> >
> > The virtual to physical sbdf mapping is done by pciback, see
> > drivers/xen/xen-pciback/xenbus.c and drivers/xen/xen-pciback/vpci.c.
> > Pciback should be one telling Xen what the mapping is.
> > Unfortunately I think that you'll probably have to introduce a new
> > hypercall to do it.
> >
> >
> >
> >> PS: in this mail thread you were averse to the fact that why guest
> >> should know the real sbdf.
> >
> > I am averse to having to rely on real StreamIDs being exposed and used
> > in the guest virtual ITS.
> >
> >
> >> >On x86 xen-pcifront
> >> > makes an hypercall to enable msi/msix on the device passing the real
> >> > sbdf as argument:
> >> On ARM since for non msi interrupts (SPI, SGI's) GIC access is
> >> directly trapped in Xen, it makes sense to use traps for LPIs
> >> The same logic is being proposed. Apart from PCI config space r/w the
> >> proposal is not to use front-end back-end communication for MSIs.
> >> In the long run, I plan to trap directly into Xen for PCI config space r/w.
> >> Also in our previous mails we agreed on utilising arm trap and emulate for MSI.
> >
> > OK
> >
> >
> >> >
> >> > drivers/pci/xen-pcifront.c:pci_frontend_enable_msix
> >> >
> >> > Could we use the same hypercall to enable msi/msix on ARM? That would be
> >> > ideal.
> >> >
> >> > Otherwise xen-pcifront could call a new hypercall to let Xen know the
> >> > virtual sbdf to sbdf mapping. But I would prefer not to introduce a new
> >> > hypercall and reuse the existing one.
> >> We are proposing removal of pcifront back communication for MSIs. I do
> >> not want to introduce a new hypercall
> >> If we know the guest sbdf it can be done in the pci-attach DOMCTL itself.
> >
> > I agree that it would be nice to avoid a new hypercall but it might be
> > the only way to do it. Dig through the pciback code and see what you can
> > do, I am open to alternatives.
> > pciback code and see what you can do.
> Stefano, I need your help on this, can you point out someone who can help.
> I am adding Ryan Wilson the author as well. Is he still active on Xen ?

No, but what is the help you need?

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-08 12:46                             ` Konrad Rzeszutek Wilk
@ 2014-10-08 13:37                               ` manish jaggi
  2014-10-08 13:45                                 ` Ian Campbell
  0 siblings, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-10-08 13:37 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Julien Grall, Ian Campbell, Vijay Kilari, Stefano Stabellini,
	Prasun Kapoor, manish.jaggi, Ryan Wilson, xen-devel,
	psawargaonkar, Anup Patel

On 8 October 2014 18:16, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Wed, Oct 08, 2014 at 05:16:53PM +0530, manish jaggi wrote:
>> On 7 October 2014 23:47, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > As the discussion is becoming Xen specific, reduce the CC list.
>> >
>> > On Mon, 6 Oct 2014, manish jaggi wrote:
>> >> On 6 October 2014 19:41, Stefano Stabellini
>> >> <stefano.stabellini@eu.citrix.com> wrote:
>> >> > On Mon, 6 Oct 2014, manish jaggi wrote:
>> >> >> Below is the Design Flow:
>> >> >>
>> >> >> =====
>> >> >> Xen PCI Passthrough support for ARM
>> >> >>
>> >> >> ----------------------------------------------
>> >> >>
>> >> >>
>> >> >> Nomenclature:
>> >> >>
>> >> >> SBDF - segment:bus:device.function.
>> >> >> Segment - Number of the PCIe RootComplex. (this is also referred to as
>> >> >> domain in linux terminology Domain:BDF)
>> >> >> domU sbdf - refers to the sbdf of the device assigned when enumerated
>> >> >> in domU. This is different from the actual sbdf.
>> >> >>
>> >> >>
>> >> >> What is the requirement
>> >> >>
>> >> >> 1. Any PCI device be assigned to any domU at runtime.
>> >> >> 2. The assignment should not be static, system admin must be able to
>> >> >> assign a PCI BDF at will.
>> >> >> 3. SMMU should be programmed for the device/domain pair
>> >> >> 4. MSI should be directly injected into domU.
>> >> >> 5. Access to GIC should be trapped in Xen to program MSI(LPI) in ITS
>> >> >> 6. FrontEnd - Backend communication between DomU-Dom0 must be limited
>> >> >> to PCI config read writes.
>> >> >>
>> >> >> What is support today
>> >> >>
>> >> >> 1. Non PCI passthrough using a device tree. A device tree node can be
>> >> >> passthrough to a domU.
>> >> >> 2. SMMU is programmed (devices' stream ID is allowed to access domU
>> >> >> guest memory)
>> >> >> 3. By default a device is assigned to dom0 if not done a pci-hide.
>> >> >> (SMMU for that device is programmed for dom0)
>> >> >>
>> >> >>
>> >> >> Proposed Flow:
>> >> >>
>> >> >> 1. Xen parses its device tree to find the pci nodes. There nodes
>> >> >> represent the PCIe Root Complexes
>> >> >>
>> >> >> 2. Xen parses its device tree to find smmu nodes which have mmu-master
>> >> >> property which should point to the PCIe RCs (pci nodes)
>> >> >> Note: The mapping between the pci node and the smmu node is based on:
>> >> >> which smmu which translates r/w requests from that pcie
>> >> >>
>> >> >> 3. All the pci nodes are assigned to the dom0.
>> >> >> 4. dom0 boots
>> >> >>
>> >> >> 5. dom0 enumerates all the PCI devices and calls xen hypercall
>> >> >> PHYSDEVOP_pci_device_add. This in-turn programs the smmu
>> >> >> Note: Also The MMIO regions for the device are assigned to the dom0.
>> >> >>
>> >> >> 6. dom0 boots domU.
>> >> >> Note: in domU, the pcifront bus has a msi-controller attached. It is
>> >> >> assumed that the domU device tree contains the 'gicv3-its' node.
>> >> >>
>> >> >> 7. dom0 calls xl pci-assignable-add <sbdf>
>> >> >>
>> >> >> 8. dom0 calls xl pci-attach <domU_id> '<sbdf>,permissive=1'
>> >> >> Note: XEN_DOMCTL_assign_device is called. Implemented assign_device in smmu.c
>> >> >> The MMIO regions for the device are assigned to the respective domU.
>> >> >>
>> >> >> 9. Front end driver is notified by xenwatch daemon and it starts
>> >> >> enumerating devices.
>> >> >> Note: the check of initial_domain in register_xen_pci_notifier() is removed.
>> >> >>
>> >> >>
>> >> >> 10. The device driver requests the msi using pcie_enable_msi, which in
>> >> >> turn calls the its driver which tries to issue commands like MAPD,
>> >> >> MAPVI. which are trapped by the Xen ITS emulation driver.
>> >> >> 10a. The MAPD command contains a device id, which is a stream ID
>> >> >> constructed using the sbdf. The sbdf is specific to domU for that
>> >> >> device.
>> >> >> 10b. Xen ITS driver has to replace the domU sbdf to the actual sdbf
>> >> >> and program the command in the command queue.
>> >> >>
>> >> >> Now there is a _problem_ that xen does not know the mapping between
>> >> >> domU streamid (sbdf) and actual sbdf.
>> >> >> For ex if two pci devices are assigned to domU, xen does not know that
>> >> >> which domU sbdf maps to which pci device
>> >> >> Thus the Xen ITS driver fails to program the MAPD command in command
>> >> >> queue, which results LPIs not programmed for that device
>> >> >>
>> >> >> At the time xl pci-attach is called the domU sbdf of the device is not
>> >> >> known, as the enumeration of the PCI device in domU has not started by
>> >> >> that time.
>> >> >> in xl pci-attach a virtual slot number can be provided but it is not
>> >> >> used in ITS commands in domU.
>> >> >>
>> >> >> If it is known somehow (__we need help on this__) then dom0 could
>> >> >> issue a hypercall to xen to map domU sbdf to actual in the following
>> >> >> format
>> >> >>
>> >> >> PHYSDEVOP_map_domU_sbdf{
>> >> >>
>> >> >> actual sBDF,
>> >> >> domU enumerated sBDF
>> >> >> and domU_ID.
>> >> >>
>> >> >> }
>> >> >
>> >> > Now the problem is much much clearer, thank you!
>> >> >
>> >> > Actually the xen-pcifront driver in the guest knows the real PCI sbdf
>> >> > for the assigned device, not just the virtual slot.
>> >> Could you please help in the data structure where it is stored.
>> >
>> > Actually I was wrong. It is true that the real sbdf is exposed to the
>> > guest via xenstore (see the dev-0 backend node) but it is not currently
>> > read by the guest and probably it shouldn't be readable in the first
>> > place. It's best not to rely on it.
>> >
>> > The virtual to physical sbdf mapping is done by pciback, see
>> > drivers/xen/xen-pciback/xenbus.c and drivers/xen/xen-pciback/vpci.c.
>> > Pciback should be one telling Xen what the mapping is.
>> > Unfortunately I think that you'll probably have to introduce a new
>> > hypercall to do it.
>> >
>> >
>> >
>> >> PS: in this mail thread you were averse to the fact that why guest
>> >> should know the real sbdf.
>> >
>> > I am averse to having to rely on real StreamIDs being exposed and used
>> > in the guest virtual ITS.
>> >
>> >
>> >> >On x86 xen-pcifront
>> >> > makes an hypercall to enable msi/msix on the device passing the real
>> >> > sbdf as argument:
>> >> On ARM since for non msi interrupts (SPI, SGI's) GIC access is
>> >> directly trapped in Xen, it makes sense to use traps for LPIs
>> >> The same logic is being proposed. Apart from PCI config space r/w the
>> >> proposal is not to use front-end back-end communication for MSIs.
>> >> In the long run, I plan to trap directly into Xen for PCI config space r/w.
>> >> Also in our previous mails we agreed on utilising arm trap and emulate for MSI.
>> >
>> > OK
>> >
>> >
>> >> >
>> >> > drivers/pci/xen-pcifront.c:pci_frontend_enable_msix
>> >> >
>> >> > Could we use the same hypercall to enable msi/msix on ARM? That would be
>> >> > ideal.
>> >> >
>> >> > Otherwise xen-pcifront could call a new hypercall to let Xen know the
>> >> > virtual sbdf to sbdf mapping. But I would prefer not to introduce a new
>> >> > hypercall and reuse the existing one.
>> >> We are proposing removal of pcifront back communication for MSIs. I do
>> >> not want to introduce a new hypercall
>> >> If we know the guest sbdf it can be done in the pci-attach DOMCTL itself.
>> >
>> > I agree that it would be nice to avoid a new hypercall but it might be
>> > the only way to do it. Dig through the pciback code and see what you can
>> > do, I am open to alternatives.
>> > pciback code and see what you can do.
>> Stefano, I need your help on this, can you point out someone who can help.
>> I am adding Ryan Wilson the author as well. Is he still active on Xen ?
>
> No, but what is the help you need?
Thanks for replying. As detailed in this thread, I need to create a
hypercall that would send the following information to Xen at the time
of PCI attach:
{ sbdf, domU sbdf, domainId }.
I am not able to find a way to get the domU sbdf from dom0 at the time
of pci-attach.
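
Roughly, the payload I have in mind would look like this (a sketch only,
the op name and field layout are not final):

    /* proposed physdev op: tell Xen how a guest-visible sbdf maps to the real one */
    struct physdev_map_guest_sbdf {
        uint16_t domain_id;   /* domU that sees the virtual sbdf */
        /* real device */
        uint16_t seg;
        uint8_t  bus;
        uint8_t  devfn;
        /* sbdf as enumerated inside the domU */
        uint16_t g_seg;
        uint8_t  g_bus;
        uint8_t  g_devfn;
    };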

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-08 13:37                               ` manish jaggi
@ 2014-10-08 13:45                                 ` Ian Campbell
  2014-10-08 13:47                                   ` manish jaggi
  0 siblings, 1 reply; 38+ messages in thread
From: Ian Campbell @ 2014-10-08 13:45 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ryan Wilson, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, psawargaonkar, Anup Patel

On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
> Thanks for replying. As detailed in this thread, I need to create a
> hypercall that would send the following information to Xen at the time
> of PCI attach
> { sbdf , domU sbdf, domainId }.
> I am not able to find a way to get the domU sbdf from dom0 at the time
> of pci-attach.

I think it would need to be done by the pciback driver in the dom0
kernel, which AFAIK is the thing which consistently knows both physical
and virtual sbdf for a given assigned device.

Ian.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-08 13:45                                 ` Ian Campbell
@ 2014-10-08 13:47                                   ` manish jaggi
  2014-10-08 13:58                                     ` Ian Campbell
  2014-10-08 14:51                                     ` Konrad Rzeszutek Wilk
  0 siblings, 2 replies; 38+ messages in thread
From: manish jaggi @ 2014-10-08 13:47 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Ryan Wilson, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, psawargaonkar, Anup Patel

On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
>> Thanks for replying. As detailed in this thread, I need to create a
>> hypercall that would send the following information to Xen at the time
>> of PCI attach
>> { sbdf , domU sbdf, domainId }.
>> I am not able to find a way to get the domU sbdf from dom0 at the time
>> of pci-attach.
>
> I think it would need to be done by the pciback driver in the dom0
> kernel, which AFAIK is the thing which consistently knows both physical
> and virtual sbdf for a given assigned device.
>
> Ian.
>
Correct. Can you point out which data structure holds the domU sbdf
corresponding to the actual sbdf in pciback?

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-08 13:47                                   ` manish jaggi
@ 2014-10-08 13:58                                     ` Ian Campbell
  2014-10-08 14:51                                     ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 38+ messages in thread
From: Ian Campbell @ 2014-10-08 13:58 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ryan Wilson, Vijay Kilari, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, psawargaonkar, Anup Patel

On Wed, 2014-10-08 at 19:17 +0530, manish jaggi wrote:
> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
> >> Thanks for replying. As detailed in this thread, I need to create a
> >> hypercall that would send the following information to Xen at the time
> >> of PCI attach
> >> { sbdf , domU sbdf, domainId }.
> >> I am not able to find a way to get the domU sbdf from dom0 at the time
> >> of pci-attach.
> >
> > I think it would need to be done by the pciback driver in the dom0
> > kernel, which AFAIK is the thing which consistently knows both physical
> > and virtual sbdf for a given assigned device.
> >
> > Ian.
> >
> Correct, can you point out which data structure holds the domU sbdf
> corresponding to the actual sbdf in pciback.

Not off the top of my head; I suggest looking at the existing handling
of the pci op messages, since they will be doing a similar lookup.

Ian.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-08 13:47                                   ` manish jaggi
  2014-10-08 13:58                                     ` Ian Campbell
@ 2014-10-08 14:51                                     ` Konrad Rzeszutek Wilk
  2014-10-20 13:30                                       ` manish jaggi
  1 sibling, 1 reply; 38+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-10-08 14:51 UTC (permalink / raw)
  To: manish jaggi
  Cc: Julien Grall, Ian Campbell, Vijay Kilari, Stefano Stabellini,
	Prasun Kapoor, manish.jaggi, Ryan Wilson, xen-devel,
	psawargaonkar, Anup Patel

On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
> >> Thanks for replying. As detailed in this thread, I need to create a
> >> hypercall that would send the following information to Xen at the time
> >> of PCI attach
> >> { sbdf , domU sbdf, domainId }.
> >> I am not able to find a way to get the domU sbdf from dom0 at the time
> >> of pci-attach.
> >
> > I think it would need to be done by the pciback driver in the dom0
> > kernel, which AFAIK is the thing which consistently knows both physical
> > and virtual sbdf for a given assigned device.
> >
> > Ian.
> >
> Correct, can you point out which data structure holds the domU sbdf
> corresponding to the actual sbdf in pciback.

See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
is that what you are referring to?

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-08 14:51                                     ` Konrad Rzeszutek Wilk
@ 2014-10-20 13:30                                       ` manish jaggi
  2014-10-20 14:54                                         ` Stefano Stabellini
  0 siblings, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-10-20 13:30 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Julien Grall, Ian Campbell, Vijay Kilari, Stefano Stabellini,
	Prasun Kapoor, manish.jaggi, Ryan Wilson, xen-devel,
	psawargaonkar, Anup Patel

On 8 October 2014 20:21, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
>> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
>> >> Thanks for replying. As detailed in this thread, I need to create a
>> >> hypercall that would send the following information to Xen at the time
>> >> of PCI attach
>> >> { sbdf , domU sbdf, domainId }.
>> >> I am not able to find a way to get the domU sbdf from dom0 at the time
>> >> of pci-attach.
>> >
>> > I think it would need to be done by the pciback driver in the dom0
>> > kernel, which AFAIK is the thing which consistently knows both physical
>> > and virtual sbdf for a given assigned device.
>> >
>> > Ian.
>> >
>> Correct, can you point out which data structure holds the domU sbdf
>> corresponding to the actual sbdf in pciback.
>
> See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
> is that what you are referring to?

Xen docs also mention xen-pciback.passthrough=1. If I set this
in dom0 I see that the device is enumerated with the same sbdf in domU,
but:
a) it is not shown in lspci
b) no front-back communication is done for reading the device's configuration space.
Is this option useful / fully implemented for ARM?

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-20 13:30                                       ` manish jaggi
@ 2014-10-20 14:54                                         ` Stefano Stabellini
  2014-11-06 15:28                                           ` manish jaggi
  0 siblings, 1 reply; 38+ messages in thread
From: Stefano Stabellini @ 2014-10-20 14:54 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ryan Wilson, Ian Campbell, Stefano Stabellini, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, psawargaonkar,
	Vijay Kilari, Anup Patel

On Mon, 20 Oct 2014, manish jaggi wrote:
> On 8 October 2014 20:21, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
> >> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
> >> >> Thanks for replying. As detailed in this thread, I need to create a
> >> >> hypercall that would send the following information to Xen at the time
> >> >> of PCI attach
> >> >> { sbdf , domU sbdf, domainId }.
> >> >> I am not able to find a way to get the domU sbdf from dom0 at the time
> >> >> of pci-attach.
> >> >
> >> > I think it would need to be done by the pciback driver in the dom0
> >> > kernel, which AFAIK is the thing which consistently knows both physical
> >> > and virtual sbdf for a given assigned device.
> >> >
> >> > Ian.
> >> >
> >> Correct, can you point out which data structure holds the domU sbdf
> >> corresponding to the actual sbdf in pciback.
> >
> > See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
> > is that what you are referring to?
> 
> Xen docs also mention about xen-pciback.passthrough=1. If I set this
> in dom0 i see that the device is enumerated as the same sbdf in domU,
> but
> a) it is not shown in lspci
> b) no front-back communication is done for reading devices configuration space
> .
> Is option useful / fully implemented for ARM ?

I don't think this option is very useful. I wouldn't worry about it for
now.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-10-20 14:54                                         ` Stefano Stabellini
@ 2014-11-06 15:28                                           ` manish jaggi
  2014-11-06 15:48                                             ` Stefano Stabellini
  2014-11-06 19:41                                             ` Konrad Rzeszutek Wilk
  0 siblings, 2 replies; 38+ messages in thread
From: manish jaggi @ 2014-11-06 15:28 UTC (permalink / raw)
  To: Stefano Stabellini, Konrad Rzeszutek Wilk, Ian Campbell,
	JBeulich, Julien Grall, Prasun Kapoor
  Cc: manish.jaggi, Ryan Wilson, Vijay Kilari, xen-devel

[-- Attachment #1: Type: text/plain, Size: 2090 bytes --]

On 20 October 2014 20:24, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 20 Oct 2014, manish jaggi wrote:
>> On 8 October 2014 20:21, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> > On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
>> >> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> >> > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
>> >> >> Thanks for replying. As detailed in this thread, I need to create a
>> >> >> hypercall that would send the following information to Xen at the time
>> >> >> of PCI attach
>> >> >> { sbdf , domU sbdf, domainId }.
>> >> >> I am not able to find a way to get the domU sbdf from dom0 at the time
>> >> >> of pci-attach.
>> >> >
>> >> > I think it would need to be done by the pciback driver in the dom0
>> >> > kernel, which AFAIK is the thing which consistently knows both physical
>> >> > and virtual sbdf for a given assigned device.
>> >> >
>> >> > Ian.
>> >> >
>> >> Correct, can you point out which data structure holds the domU sbdf
>> >> corresponding to the actual sbdf in pciback.
>> >
>> > See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
>> > is that what you are referring to?
>>
>> Xen docs also mention about xen-pciback.passthrough=1. If I set this
>> in dom0 i see that the device is enumerated as the same sbdf in domU,
>> but
>> a) it is not shown in lspci
>> b) no front-back communication is done for reading devices configuration space
>> .
>> Is option useful / fully implemented for ARM ?
>
> I don't think this option is very useful. I wouldn't worry about it for
> now.

Stefano / Ian / Konrad / Julien,

Attached is a first raw-code FYI/RFC set of patches for PCI passthrough support on ARM.
- Linux Patch (3.18)
- Xen Patch  (4.5 staging)
---(SMMU changes not included, that's a separate patch altogether)
These patches show the logic; in places the code organization/quality needs
improvement. I wanted to share them to get initial comments.
This is working with SR-IOV as well.

Please have a look and let me know your comments.

[-- Attachment #2: 0002-Removing-X86-as-a-dependency-on-XEN_PCI_FRONTEND-and.patch --]
[-- Type: application/octet-stream, Size: 1092 bytes --]

From a29d658eea93373ad597d849df530fb1fc9a7066 Mon Sep 17 00:00:00 2001
From: manish <manish.jaggi@caviumnetworks.com>
Date: Wed, 5 Nov 2014 12:17:05 +0530
Subject: [PATCH 2/5] Removing X86 as a dependency on XEN_PCI_FRONTEND and
 XEN_PCI_BACKEND

---
 drivers/pci/Kconfig | 2 +-
 drivers/xen/Kconfig | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 893503f..c534ce0 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -50,7 +50,7 @@ config PCI_STUB
 
 config XEN_PCIDEV_FRONTEND
         tristate "Xen PCI Frontend"
-        depends on PCI && X86 && XEN
+        depends on PCI && XEN
         select PCI_XEN
 	select XEN_XENBUS_FRONTEND
         default y
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index b812462..1332eed 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -151,7 +151,7 @@ config XEN_TMEM
 
 config XEN_PCIDEV_BACKEND
 	tristate "Xen PCI-device backend driver"
-	depends on PCI && X86 && XEN
+	depends on PCI && XEN
 	depends on XEN_BACKEND
 	default m
 	help
-- 
1.9.1


[-- Attachment #3: 0005-PCI-passthrough-support-for-Xen.patch --]
[-- Type: application/octet-stream, Size: 17845 bytes --]

From 7165ddc2dfdd416beb964af3966b2a2cd5562371 Mon Sep 17 00:00:00 2001
From: manish <manish.jaggi@caviumnetworks.com>
Date: Wed, 5 Nov 2014 20:05:25 +0530
Subject: [PATCH 5/5] PCI passthrough support for Xen a) MAP SBDF hypercall
 added b) MAP MMIO BAR regions hypercall added c) Basic PCI passthrough ARM
 support

---
 arch/arm64/include/asm/xen/pci.h         | 82 +++++++++++++++++++++++++++
 arch/arm64/include/asm/xen/swiotlb-xen.h |  7 +++
 arch/arm64/xen/Makefile                  |  2 +-
 arch/arm64/xen/xen_pci.c                 | 95 ++++++++++++++++++++++++++++++++
 drivers/pci/xen-pcifront.c               | 32 +++++++++--
 drivers/xen/pci.c                        | 67 +++++++++++++++++++++-
 drivers/xen/xen-pciback/pci_stub.c       |  4 +-
 drivers/xen/xen-pciback/pciback.h        |  4 ++
 drivers/xen/xen-pciback/pciback_ops.c    |  2 +
 drivers/xen/xen-pciback/vpci.c           | 37 ++++++++++++-
 include/xen/interface/physdev.h          | 21 +++++++
 11 files changed, 340 insertions(+), 13 deletions(-)
 create mode 100644 arch/arm64/include/asm/xen/pci.h
 create mode 100644 arch/arm64/include/asm/xen/swiotlb-xen.h
 create mode 100644 arch/arm64/xen/xen_pci.c

diff --git a/arch/arm64/include/asm/xen/pci.h b/arch/arm64/include/asm/xen/pci.h
new file mode 100644
index 0000000..5790efe
--- /dev/null
+++ b/arch/arm64/include/asm/xen/pci.h
@@ -0,0 +1,82 @@
+#ifndef _ASM_ARM_XEN_PCI_H
+#define _ASM_ARM_XEN_PCI_H
+
+#if defined(CONFIG_PCI_XEN)
+extern int __init pci_xen_init(void);
+extern int __init pci_xen_hvm_init(void);
+#define pci_xen 1
+#else
+#define pci_xen 0
+#define pci_xen_init (0)
+static inline int pci_xen_hvm_init(void)
+{
+	return -1;
+}
+#endif
+#if defined(CONFIG_XEN_DOM0)
+int __init pci_xen_initial_domain(void);
+int xen_find_device_domain_owner(struct pci_dev *dev);
+int xen_register_device_domain_owner(struct pci_dev *dev, uint16_t domain);
+int xen_unregister_device_domain_owner(struct pci_dev *dev);
+#else
+static inline int __init pci_xen_initial_domain(void)
+{
+	return -1;
+}
+static inline int xen_find_device_domain_owner(struct pci_dev *dev)
+{
+	return -1;
+}
+static inline int xen_register_device_domain_owner(struct pci_dev *dev,
+						   uint16_t domain)
+{
+	return -1;
+}
+static inline int xen_unregister_device_domain_owner(struct pci_dev *dev)
+{
+	return -1;
+}
+#endif
+
+#if defined(CONFIG_PCI_MSI)
+#if defined(CONFIG_PCI_XEN)
+/* The drivers/pci/xen-pcifront.c sets this structure to
+ * its own functions.
+ */
+struct xen_pci_frontend_ops {
+	int (*enable_msi)(struct pci_dev *dev, int vectors[]);
+	void (*disable_msi)(struct pci_dev *dev);
+	int (*enable_msix)(struct pci_dev *dev, int vectors[], int nvec);
+	void (*disable_msix)(struct pci_dev *dev);
+};
+
+extern struct xen_pci_frontend_ops *xen_pci_frontend;
+
+static inline int xen_pci_frontend_enable_msi(struct pci_dev *dev,
+					      int vectors[])
+{
+	if (xen_pci_frontend && xen_pci_frontend->enable_msi)
+		return xen_pci_frontend->enable_msi(dev, vectors);
+	return -ENODEV;
+}
+static inline void xen_pci_frontend_disable_msi(struct pci_dev *dev)
+{
+	if (xen_pci_frontend && xen_pci_frontend->disable_msi)
+			xen_pci_frontend->disable_msi(dev);
+}
+static inline int xen_pci_frontend_enable_msix(struct pci_dev *dev,
+					       int vectors[], int nvec)
+{
+	if (xen_pci_frontend && xen_pci_frontend->enable_msix)
+		return xen_pci_frontend->enable_msix(dev, vectors, nvec);
+	return -ENODEV;
+}
+static inline void xen_pci_frontend_disable_msix(struct pci_dev *dev)
+{
+	if (xen_pci_frontend && xen_pci_frontend->disable_msix)
+			xen_pci_frontend->disable_msix(dev);
+}
+#endif /* CONFIG_PCI_XEN */
+#endif /* CONFIG_PCI_MSI */
+
+#endif	/* _ASM_ARM_XEN_PCI_H */
diff --git a/arch/arm64/include/asm/xen/swiotlb-xen.h b/arch/arm64/include/asm/xen/swiotlb-xen.h
new file mode 100644
index 0000000..02ab2a1
--- /dev/null
+++ b/arch/arm64/include/asm/xen/swiotlb-xen.h
@@ -0,0 +1,7 @@
+#ifndef ARM64_SWIOTLB_XEN
+#define ARM64_SWIOTLB_XEN
+/* Place holder file */
+static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
+static inline void __init pci_xen_swiotlb_init(void) { }
+static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
+#endif
diff --git a/arch/arm64/xen/Makefile b/arch/arm64/xen/Makefile
index 74a8d87..818ec45 100644
--- a/arch/arm64/xen/Makefile
+++ b/arch/arm64/xen/Makefile
@@ -1,2 +1,2 @@
 xen-arm-y	+= $(addprefix ../../arm/xen/, enlighten.o grant-table.o p2m.o mm.o)
-obj-y		:= xen-arm.o hypercall.o
+obj-y		:= xen-arm.o hypercall.o xen_pci.o
diff --git a/arch/arm64/xen/xen_pci.c b/arch/arm64/xen/xen_pci.c
new file mode 100644
index 0000000..77d6670
--- /dev/null
+++ b/arch/arm64/xen/xen_pci.c
@@ -0,0 +1,95 @@
+/*
+ * Xen PCI - handle PCI (INTx) and MSI infrastructure calls for PV, HVM and
+ * initial domain support. We also handle the DSDT _PRT callbacks for GSI's
+ * used in HVM and initial domain mode (PV does not parse ACPI, so it has no
+ * concept of GSIs). Under PV we hook under the pnbbios API for IRQs and
+ * 0xcf8 PCI configuration read/write.
+ *
+ *   Author: Ryan Wilson <hap9@epoch.ncsc.mil>
+ *           Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
+ *           Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ */
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <linux/io.h>
+
+#include <asm/xen/hypervisor.h>
+
+#include <xen/features.h>
+#include <xen/events.h>
+#include <asm/xen/pci.h>
+struct xen_device_domain_owner {
+	domid_t domain;
+	struct pci_dev *dev;
+	struct list_head list;
+};
+
+
+static DEFINE_SPINLOCK(dev_domain_list_spinlock);
+static struct list_head dev_domain_list = LIST_HEAD_INIT(dev_domain_list);
+
+static struct xen_device_domain_owner *find_device(struct pci_dev *dev)
+{
+	struct xen_device_domain_owner *owner;
+
+	list_for_each_entry(owner, &dev_domain_list, list) {
+		if (owner->dev == dev)
+			return owner;
+	}
+	return NULL;
+}
+
+int xen_find_device_domain_owner(struct pci_dev *dev)
+{
+	struct xen_device_domain_owner *owner;
+	int domain = -ENODEV;
+
+	spin_lock(&dev_domain_list_spinlock);
+	owner = find_device(dev);
+	if (owner)
+		domain = owner->domain;
+	spin_unlock(&dev_domain_list_spinlock);
+	return domain;
+}
+EXPORT_SYMBOL_GPL(xen_find_device_domain_owner);
+
+int xen_register_device_domain_owner(struct pci_dev *dev, uint16_t domain)
+{
+	struct xen_device_domain_owner *owner;
+
+	owner = kzalloc(sizeof(struct xen_device_domain_owner), GFP_KERNEL);
+	if (!owner)
+		return -ENODEV;
+
+	spin_lock(&dev_domain_list_spinlock);
+	if (find_device(dev)) {
+		spin_unlock(&dev_domain_list_spinlock);
+		kfree(owner);
+		return -EEXIST;
+	}
+	owner->domain = domain;
+	owner->dev = dev;
+	list_add_tail(&owner->list, &dev_domain_list);
+	spin_unlock(&dev_domain_list_spinlock);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xen_register_device_domain_owner);
+
+int xen_unregister_device_domain_owner(struct pci_dev *dev)
+{
+	struct xen_device_domain_owner *owner;
+
+	spin_lock(&dev_domain_list_spinlock);
+	owner = find_device(dev);
+	if (!owner) {
+		spin_unlock(&dev_domain_list_spinlock);
+		return -ENODEV;
+	}
+	list_del(&owner->list);
+	spin_unlock(&dev_domain_list_spinlock);
+	kfree(owner);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xen_unregister_device_domain_owner);
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index 116ca37..e30ca7b 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -10,6 +10,11 @@
 #include <xen/events.h>
 #include <xen/grant_table.h>
 #include <xen/page.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
 #include <linux/spinlock.h>
 #include <linux/pci.h>
 #include <linux/msi.h>
@@ -21,8 +26,8 @@
 #include <linux/bitops.h>
 #include <linux/time.h>
 #include <xen/platform_pci.h>
-
 #include <asm/xen/swiotlb-xen.h>
+#include <linux/swiotlb.h>
 #define INVALID_GRANT_REF (0)
 #define INVALID_EVTCHN    (-1)
 
@@ -243,7 +248,7 @@ static struct pci_ops pcifront_bus_ops = {
 	.write = pcifront_bus_write,
 };
 
-#ifdef CONFIG_PCI_MSI
+#if defined(CONFIG_PCI_MSI) && defined(CONFIG_X86)
 static int pci_frontend_enable_msix(struct pci_dev *dev,
 				    int vector[], int nvec)
 {
@@ -387,7 +392,7 @@ static void pci_frontend_registrar(int enable)
 };
 #else
 static inline void pci_frontend_registrar(int enable) { };
-#endif /* CONFIG_PCI_MSI */
+#endif /* CONFIG_PCI_MSI && X86*/
 
 /* Claim resources for the PCI frontend as-is, backend won't allow changes */
 static int pcifront_claim_resource(struct pci_dev *dev, void *data)
@@ -448,6 +453,7 @@ static int pcifront_scan_root(struct pcifront_device *pdev,
 	struct pci_bus *b;
 	struct pcifront_sd *sd = NULL;
 	struct pci_bus_entry *bus_entry = NULL;
+	struct device_node *msi_node;
 	int err = 0;
 
 #ifndef CONFIG_PCI_DOMAINS
@@ -485,6 +491,17 @@ static int pcifront_scan_root(struct pcifront_device *pdev,
 	}
 
 	bus_entry->bus = b;
+        msi_node = of_find_compatible_node(NULL,NULL, "arm,gic-v3-its");
+        if(msi_node) {
+            b->msi = of_pci_find_msi_chip_by_node(msi_node);
+            if(!b->msi) {
+               printk(KERN_ERR"Unable to find bus->msi node \r\n");
+               goto err_out;
+            }
+        }else {
+               printk(KERN_ERR"Unable to find arm,gic-v3-its compatible node \r\n");
+               goto err_out;
+        }
 
 	list_add(&bus_entry->list, &pdev->root_buses);
 
@@ -1146,12 +1163,17 @@ static struct xenbus_driver xenpci_driver = {
 
 static int __init pcifront_init(void)
 {
-	if (!xen_pv_domain() || xen_initial_domain())
+	if(
+#ifdef X86
+	!xen_pv_domain() ||
+#endif
+	xen_initial_domain())
 		return -ENODEV;
 
+#ifdef X86
 	if (!xen_has_pv_devices())
 		return -ENODEV;
-
+#endif
 	pci_frontend_registrar(1 /* enable */);
 
 	return xenbus_register_frontend(&xenpci_driver);
diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c
index dd9c249..a560f42 100644
--- a/drivers/xen/pci.c
+++ b/drivers/xen/pci.c
@@ -22,6 +22,7 @@
 #include <xen/xen.h>
 #include <xen/interface/physdev.h>
 #include <xen/interface/xen.h>
+#include <xen/hvc-console.h>
 
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
@@ -130,7 +131,7 @@ static int xen_add_device(struct device *dev)
 	return r;
 }
 
-static int xen_remove_device(struct device *dev)
+int xen_remove_device(struct device *dev)
 {
 	int r;
 	struct pci_dev *pci_dev = to_pci_dev(dev);
@@ -158,6 +159,64 @@ static int xen_remove_device(struct device *dev)
 
 	return r;
 }
+static int xen_map_pci_bars(struct device *dev)
+{
+	int i,r=0;
+	struct pci_dev *pci_dev = to_pci_dev(dev);
+	for(i=0; i<6; i++) {
+		struct physdev_map_mmio mapm;
+		mapm.addr = pci_resource_start(pci_dev,i);
+		mapm.size = pci_resource_len(pci_dev,i);
+		xen_raw_printk("%s ADDR=%lx SIZE=%lx\r\n",__func__, mapm.addr, mapm.size);
+		if(mapm.addr && mapm.size) {
+			r = HYPERVISOR_physdev_op(PHYSDEVOP_map_mmio, &mapm);
+			if (r)
+				printk(KERN_ERR "%s Xen Error Unable to map ADDR=%lx SIZE=%lx\r\n",__func__, mapm.addr, mapm.size);
+		}
+	}
+	return r;
+
+}
+static int xen_unmap_pci_bars(struct device *dev)
+{
+	int i,r=0;
+	struct pci_dev *pci_dev = to_pci_dev(dev);
+	for(i=0; i<6; i++) {
+		struct physdev_map_mmio mapm;
+		mapm.addr = pci_resource_start(pci_dev,i);
+		mapm.size = pci_resource_len(pci_dev,i);
+		xen_raw_printk("%s ADDR=%lx SIZE=%lx\r\n",__func__, mapm.addr, mapm.size);
+		if(mapm.addr && mapm.size) {
+			r = HYPERVISOR_physdev_op(PHYSDEVOP_unmap_mmio, &mapm);
+			if (r)
+				printk(KERN_ERR "%s Xen Error Unable to unmap ADDR=%lx SIZE=%lx\r\n",__func__, mapm.addr, mapm.size);
+		}
+	}
+	return r;
+
+}
+int xen_add_pci_device(struct device *dev)
+{
+	int r = 0;
+	if (xen_initial_domain())
+	    r = xen_add_device(dev);
+	if(!r) {
+		r = xen_map_pci_bars(dev);
+	}
+	return r;
+}
+
+int xen_remove_pci_device(struct device *dev)
+{
+	int r = 0;
+	printk(KERN_ERR">>%s \r\n",__func__);
+	r = xen_remove_device(dev);
+	if(!r) {
+		r = xen_unmap_pci_bars(dev);
+	}
+	printk(KERN_ERR"<<%s %s\r\n", __func__, __LINE__);
+	return r;
+}
 
 static int xen_pci_notifier(struct notifier_block *nb,
 			    unsigned long action, void *data)
@@ -167,10 +226,10 @@ static int xen_pci_notifier(struct notifier_block *nb,
 
 	switch (action) {
 	case BUS_NOTIFY_ADD_DEVICE:
-		r = xen_add_device(dev);
+		r = xen_add_pci_device(dev);
 		break;
 	case BUS_NOTIFY_DEL_DEVICE:
-		r = xen_remove_device(dev);
+		r = xen_remove_pci_device(dev);
 		break;
 	default:
 		return NOTIFY_DONE;
@@ -188,8 +247,10 @@ static struct notifier_block device_nb = {
 
 static int __init register_xen_pci_notifier(void)
 {
+#ifdef X86
 	if (!xen_initial_domain())
 		return 0;
+#endif
 
 	return bus_register_notifier(&pci_bus_type, &device_nb);
 }
diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
index 017069a..e843506 100644
--- a/drivers/xen/xen-pciback/pci_stub.c
+++ b/drivers/xen/xen-pciback/pci_stub.c
@@ -382,7 +382,7 @@ static int pcistub_init_device(struct pci_dev *dev)
 	err = pci_enable_device(dev);
 	if (err)
 		goto config_release;
-
+#ifdef CONFIG_X86
 	if (dev->msix_cap) {
 		struct physdev_pci_device ppdev = {
 			.seg = pci_domain_nr(dev->bus),
@@ -395,7 +395,7 @@ static int pcistub_init_device(struct pci_dev *dev)
 			dev_err(&dev->dev, "MSI-X preparation failed (%d)\n",
 				err);
 	}
-
+#endif
 	/* We need the device active to save the state. */
 	dev_dbg(&dev->dev, "save state of device\n");
 	pci_save_state(dev);
diff --git a/drivers/xen/xen-pciback/pciback.h b/drivers/xen/xen-pciback/pciback.h
index f72af87..52ef031 100644
--- a/drivers/xen/xen-pciback/pciback.h
+++ b/drivers/xen/xen-pciback/pciback.h
@@ -37,6 +37,10 @@ struct xen_pcibk_device {
 	struct xen_pci_sharedinfo *sh_info;
 	unsigned long flags;
 	struct work_struct op_work;
+	int pci_domain;
+	int bus;
+	int slot;
+	int func;
 };
 
 struct xen_pcibk_dev_data {
diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
index c4a0666..8a5701d 100644
--- a/drivers/xen/xen-pciback/pciback_ops.c
+++ b/drivers/xen/xen-pciback/pciback_ops.c
@@ -70,6 +70,7 @@ static void xen_pcibk_control_isr(struct pci_dev *dev, int reset)
 		enable ? "enable" : "disable");
 
 	if (enable) {
+#ifdef CONFIG_X86
 		rc = request_irq(dev_data->irq,
 				xen_pcibk_guest_interrupt, IRQF_SHARED,
 				dev_data->irq_name, dev);
@@ -79,6 +80,7 @@ static void xen_pcibk_control_isr(struct pci_dev *dev, int reset)
 				dev_data->irq_name, dev_data->irq, rc);
 			goto out;
 		}
+#endif
 	} else {
 		free_irq(dev_data->irq, dev);
 		dev_data->irq = 0;
diff --git a/drivers/xen/xen-pciback/vpci.c b/drivers/xen/xen-pciback/vpci.c
index 51afff9..b073327 100644
--- a/drivers/xen/xen-pciback/vpci.c
+++ b/drivers/xen/xen-pciback/vpci.c
@@ -11,6 +11,10 @@
 #include <linux/slab.h>
 #include <linux/pci.h>
 #include <linux/mutex.h>
+#include <xen/xen.h>
+#include <xen/interface/physdev.h>
+#include <asm/xen/hypervisor.h>
+#include <asm/xen/hypercall.h>
 #include "pciback.h"
 
 #define PCI_SLOT_MAX 32
@@ -71,6 +75,9 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
 	int err = 0, slot, func = -1;
 	struct pci_dev_entry *t, *dev_entry;
 	struct vpci_dev_data *vpci_dev = pdev->pci_dev_data;
+	struct physdev_map_sbdf map_sbdf;
+
+	printk("%s %d\r\n", __func__, __LINE__);
 
 	if ((dev->class >> 24) == PCI_BASE_CLASS_BRIDGE) {
 		err = -EFAULT;
@@ -118,11 +125,11 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
 	/* Assign to a new slot on the virtual PCI bus */
 	for (slot = 0; slot < PCI_SLOT_MAX; slot++) {
 		if (list_empty(&vpci_dev->dev_list[slot])) {
-			pr_info("vpci: %s: assign to virtual slot %d\n",
-				pci_name(dev), slot);
 			list_add_tail(&dev_entry->list,
 				      &vpci_dev->dev_list[slot]);
 			func = dev->is_virtfn ? 0 : PCI_FUNC(dev->devfn);
+			pr_info("vpci: %s: assign to virtual slot %d function %d\n",
+				pci_name(dev), slot, func);
 			goto unlock;
 		}
 	}
@@ -140,6 +147,31 @@ unlock:
 	else
 		kfree(dev_entry);
 
+	/*Issue Hypercall here */
+
+	map_sbdf.domain_id = pdev->xdev->otherend_id;
+	map_sbdf.sbdf_s = dev->bus->domain_nr;
+	map_sbdf.sbdf_b = dev->bus->number;
+	map_sbdf.sbdf_d = dev->devfn>>3;
+	map_sbdf.sbdf_f = dev->devfn & 0x7;
+	map_sbdf.gsbdf_s = 0;
+	map_sbdf.gsbdf_b = 0;
+	map_sbdf.gsbdf_d = slot;
+	map_sbdf.gsbdf_f = dev->devfn & 0x7;
+	printk(KERN_ERR"## sbdf = %d:%d:%d.%d g_sbdf %d:%d:%d.%d domain_id=%d ##\r\n",
+	map_sbdf.sbdf_s,
+	map_sbdf.sbdf_b,
+	map_sbdf.sbdf_d,
+	map_sbdf.sbdf_f,
+	map_sbdf.gsbdf_s,
+	map_sbdf.gsbdf_b,
+	map_sbdf.gsbdf_d,
+	map_sbdf.gsbdf_f,
+	map_sbdf.domain_id);
+	
+	err = HYPERVISOR_physdev_op(PHYSDEVOP_map_sbdf, &map_sbdf);
+	if (err)
+		printk(KERN_ERR " Xen Error PHYSDEVOP_map_sbdf");
 out:
 	return err;
 }
@@ -243,6 +275,7 @@ static int __xen_pcibk_get_pcifront_dev(struct pci_dev *pcidev,
 				*bus = 0;
 				*devfn = PCI_DEVFN(slot,
 					 PCI_FUNC(pcidev->devfn));
+				printk(KERN_ERR"%s %d:%d:%d.%d \r\n", __func__, *domain, *bus, (*devfn)>>3 , (*devfn)& 0x7);
 			}
 		}
 	}
diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
index 610dba9..f12aa6b 100644
--- a/include/xen/interface/physdev.h
+++ b/include/xen/interface/physdev.h
@@ -296,6 +296,27 @@ struct physdev_dbgp_op {
         struct physdev_pci_device pci;
     } u;
 };
+#define PHYSDEVOP_map_mmio		40
+struct physdev_map_mmio {
+    /* IN */
+    uint64_t addr;
+    uint64_t size;	
+};
+#define PHYSDEVOP_unmap_mmio		41
+
+#define PHYSDEVOP_map_sbdf		43
+struct physdev_map_sbdf {
+	int domain_id;
+	int sbdf_s;
+	int sbdf_b;
+	int sbdf_d;
+	int sbdf_f;
+
+	int gsbdf_s;
+	int gsbdf_b;
+	int gsbdf_d;
+	int gsbdf_f;
+};
 
 /*
  * Notify that some PIRQ-bound event channels have been unmasked.
-- 
1.9.1


[-- Attachment #4: 0002-SMMU-fix-sbdf-patch.patch --]
[-- Type: application/octet-stream, Size: 6618 bytes --]

From 86b5dd5254ff9eed996d9332682c080035910103 Mon Sep 17 00:00:00 2001
From: manish <manish.jaggi@caviumnetworks.com>
Date: Wed, 5 Nov 2014 20:10:40 +0530
Subject: [PATCH 2/2] SMMU fix + sbdf patch

---
 tools/libxl/libxl_pci.c            |  4 +++-
 xen/arch/arm/physdev.c             | 34 +++++++++++++++++++++++++-
 xen/drivers/passthrough/arm/smmu.c |  2 ++
 xen/include/asm-arm/gic-its.h      | 49 ++++++++++++++++++++++++++++++++++----
 xen/include/asm-arm/pci.h          |  4 ++++
 xen/include/public/physdev.h       | 18 ++++++++++++++
 8 files changed, 124 insertions(+), 25 deletions(-)

diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index 9f40100..6571c6a 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -968,10 +968,11 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't open %s", sysfs_path);
         goto out;
     }
+#ifdef CONFIG_X86
     if ((fscanf(f, "%u", &irq) == 1) && irq) {
         rc = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
         if (rc < 0) {
             fclose(f);
             return ERROR_FAIL;
         }
@@ -982,6 +983,7 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
             return ERROR_FAIL;
         }
     }
+#endif
     fclose(f);
 
     /* Don't restrict writes to the PCI config space from this VM */
 
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index 7762806..8c68487 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -38,11 +38,43 @@
 int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
   //  int irq;
-    int ret=0;
+    int ret=-1;
     struct vcpu *v = current;
 	printk("%s cmd=%d\r\n", __func__ , cmd);
     switch ( cmd )
     {
+    case PHYSDEVOP_map_sbdf:{
+        struct physdev_map_sbdf map_sbdf;
+        struct domain *d;
+        struct pci_dev *pdev;
+
+        if ( copy_from_guest(&map_sbdf, arg, 1) != 0 )
+            break;
+
+        for_each_domain(d){
+        	printk("@#@ %d %d\r\n",d->domain_id , map_sbdf.domain_id);
+        	if(d->domain_id == map_sbdf.domain_id){
+        		for_each_pdev(d,pdev){
+        			printk("@@ %d:%d:%d:%d, %d:%d:%d:%d \r\n",
+        					pdev->seg,pdev->bus,pdev->devfn>>3,pdev->devfn & 0x7,
+        					map_sbdf.sbdf_s,map_sbdf.sbdf_b,map_sbdf.sbdf_d,map_sbdf.sbdf_f);
+
+        			if(pdev->seg == map_sbdf.sbdf_s &&
+        			pdev->bus == map_sbdf.sbdf_b &&
+        			(pdev->devfn >> 3) == map_sbdf.sbdf_d &&
+        			(pdev->devfn & 0x7) == map_sbdf.sbdf_f){
+        					pdev->arch.gsbdf_s = map_sbdf.gsbdf_s;
+        					pdev->arch.gsbdf_b = map_sbdf.gsbdf_b;
+        					pdev->arch.gsbdf_d = map_sbdf.gsbdf_d;
+        					pdev->arch.gsbdf_f = map_sbdf.gsbdf_f;
+						return 0;
+        			}
+        		}
+        	}
+        }
+    break;
+    }
+
     case PHYSDEVOP_map_mmio: {
 	struct physdev_map_mmio mapm;
         u64 addr;
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 564ff96..c1a29b5 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2086,6 +2086,8 @@ out_err:
 static const char * const smmu_dt_compat[] __initconst =
 {
     "arm,mmu-400",
+    "arm,mmu-500",
+    "thunder,smmu-v2",
     NULL
 };
 
diff --git a/xen/include/asm-arm/gic-its.h b/xen/include/asm-arm/gic-its.h
index 998cbd9..bd74059 100644
--- a/xen/include/asm-arm/gic-its.h
+++ b/xen/include/asm-arm/gic-its.h
@@ -94,10 +94,40 @@ static inline uint8_t its_decode_cmd(struct its_cmd_block *cmd)
     return cmd->raw_cmd[0] & 0xff;
 }
 
-static inline uint32_t its_decode_devid(struct its_cmd_block *cmd)
+static inline int map_gsbdf(struct domain *d, uint32_t gsbdf, uint32_t *sbdf)
 {
+	struct pci_dev *pdev;
+	int ret = -1;
+	uint32_t tmp_sbdf,t;
+
+	if(d->domain_id == 0) {
+		*sbdf = gsbdf;
+		return 0;
+	}
+
+	for_each_pdev(d,pdev){
+		tmp_sbdf = (pdev->arch.gsbdf_s << 16) |
+				(pdev->arch.gsbdf_b << 8) |
+				(pdev->arch.gsbdf_d <<3) |
+				pdev->arch.gsbdf_f;
+		t = (pdev->seg <<16) | (pdev->bus <<8) | pdev->devfn;
+		printk(KERN_ERR"%s Domain=%d tmp_sbdf=%d  gsbdf=%d sbdf=%d\r\n",
+				__func__,d->domain_id, tmp_sbdf, gsbdf, t);
+		if(tmp_sbdf == gsbdf) {
+			*sbdf = t;
+			ret = 0;
+			break;
+		}
+	}
+	return ret;
+}
+static inline uint32_t its_decode_devid(struct domain *d, struct its_cmd_block *cmd)
+{
+	uint32_t nval;
     uint32_t val = cmd->raw_cmd[0] >> 32;
-    if(val==0) return 0x10028;
+    if(!map_gsbdf(d,val,&nval))
+    	return nval;
+
     return (cmd->raw_cmd[0] >> 32);  
 }
 
@@ -142,9 +172,20 @@ static inline void its_encode_cmd(struct its_cmd_block *cmd, u8 cmd_nr)
     cmd->raw_cmd[0] |= cmd_nr;
 }
 
-static inline void its_encode_devid(struct its_cmd_block *cmd, uint32_t devid)
+static inline void its_encode_devid(struct domain *d, struct its_cmd_block *cmd, uint32_t devid)
+{
+	uint32_t ndevid;
+    if(!map_gsbdf(d,devid,&ndevid))
+    	devid = ndevid;
+
+    cmd->raw_cmd[0] &= ~(0xffffUL << 32);
+    cmd->raw_cmd[0] |= ((uint64_t)devid & 0xffffffffUL) << 32;
+}
+
+static inline void _its_encode_devid(struct its_cmd_block *cmd, uint32_t devid)
 {
-    if(devid==0) devid = 0x10028;
+
     cmd->raw_cmd[0] &= ~(0xffffUL << 32);
     cmd->raw_cmd[0] |= ((uint64_t)devid & 0xffffffffUL) << 32;
 }
diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
index d20a299..bfe86f5 100644
--- a/xen/include/asm-arm/pci.h
+++ b/xen/include/asm-arm/pci.h
@@ -6,6 +6,10 @@
 struct arch_pci_dev {
 	u64 addr[MAX_PCI_BARS];
 	u64 size[MAX_PCI_BARS];
+    int gsbdf_s;
+    int gsbdf_b;
+    int gsbdf_d;
+    int gsbdf_f;
 	int valid;
 };
 struct pci_conf {
diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
index 9d15286..d4ad38c 100644
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -344,9 +344,27 @@ struct physdev_map_mmio {
     uint64_t addr;
     uint64_t size;
 };
+
 #define PHYSDEVOP_pci_domain_detach              42
 typedef struct physdev_map_mmio physdev_map_mmio_t;
 DEFINE_XEN_GUEST_HANDLE(physdev_map_mmio_t);
+
+#define PHYSDEVOP_map_sbdf		43
+struct physdev_map_sbdf {
+	int domain_id;
+	int sbdf_s;
+	int sbdf_b;
+	int sbdf_d;
+	int sbdf_f;
+
+	int gsbdf_s;
+	int gsbdf_b;
+	int gsbdf_d;
+	int gsbdf_f;
+};
+typedef struct physdev_map_sbdf physdev_map_sbdf_t;
+DEFINE_XEN_GUEST_HANDLE(physdev_map_sbdf_t);
+
 /*
  * Notify that some PIRQ-bound event channels have been unmasked.
  * ** This command is obsolete since interface version 0x00030202 and is **
-- 
1.9.1


[-- Attachment #5: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-11-06 15:28                                           ` manish jaggi
@ 2014-11-06 15:48                                             ` Stefano Stabellini
  2014-11-06 15:55                                               ` manish jaggi
  2014-11-06 19:41                                             ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 38+ messages in thread
From: Stefano Stabellini @ 2014-11-06 15:48 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ryan Wilson, Ian Campbell, Vijay Kilari, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, JBeulich,
	Stefano Stabellini

On Thu, 6 Nov 2014, manish jaggi wrote:
> On 20 October 2014 20:24, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 20 Oct 2014, manish jaggi wrote:
> >> On 8 October 2014 20:21, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> >> > On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
> >> >> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> >> > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
> >> >> >> Thanks for replying. As detailed in this thread, I need to create a
> >> >> >> hypercall that would send the following information to Xen at the time
> >> >> >> of PCI attach
> >> >> >> { sbdf , domU sbdf, domainId }.
> >> >> >> I am not able to find a way to get the domU sbdf from dom0 at the time
> >> >> >> of pci-attach.
> >> >> >
> >> >> > I think it would need to be done by the pciback driver in the dom0
> >> >> > kernel, which AFAIK is the thing which consistently knows both physical
> >> >> > and virtual sbdf for a given assigned device.
> >> >> >
> >> >> > Ian.
> >> >> >
> >> >> Correct, can you point out which data structure holds the domU sbdf
> >> >> corresponding to the actual sbdf in pciback.
> >> >
> >> > See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
> >> > is that what you are referring to?
> >>
> >> Xen docs also mention about xen-pciback.passthrough=1. If I set this
> >> in dom0 i see that the device is enumerated as the same sbdf in domU,
> >> but
> >> a) it is not shown in lspci
> >> b) no front-back communication is done for reading devices configuration space
> >> .
> >> Is option useful / fully implemented for ARM ?
> >
> > I don't think this option is very useful. I wouldn't worry about it for
> > now.
> 
> Stefano / Ian / Konard / Julien,
> 
> Attached is a first raw code FYI RFC Patches of PCI passthrough support on ARM.
> - Linux Patch (3.18)
> - Xen Patch  (4.5 staging)
> ---(Smmu changes not included, thats a separate patch altogether)
> This patches show the logic, at places need of improvements in code
> organization/quality. I wanted to share to get initial comments.
> This is working with SRIOV as well.
> 
> Please have a look and let me know your positive comments

Please send as individual inline patches, not attachments.
Please also add a proper description to each patch and a 0/N cover
letter with a high-level explanation of your work.

See http://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-11-06 15:48                                             ` Stefano Stabellini
@ 2014-11-06 15:55                                               ` manish jaggi
  2014-11-06 16:02                                                 ` Julien Grall
  0 siblings, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-11-06 15:55 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ryan Wilson, Ian Campbell, Vijay Kilari, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, JBeulich

On 6 November 2014 21:18, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Thu, 6 Nov 2014, manish jaggi wrote:
>> On 20 October 2014 20:24, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > On Mon, 20 Oct 2014, manish jaggi wrote:
>> >> On 8 October 2014 20:21, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> >> > On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
>> >> >> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> >> >> > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
>> >> >> >> Thanks for replying. As detailed in this thread, I need to create a
>> >> >> >> hypercall that would send the following information to Xen at the time
>> >> >> >> of PCI attach
>> >> >> >> { sbdf , domU sbdf, domainId }.
>> >> >> >> I am not able to find a way to get the domU sbdf from dom0 at the time
>> >> >> >> of pci-attach.
>> >> >> >
>> >> >> > I think it would need to be done by the pciback driver in the dom0
>> >> >> > kernel, which AFAIK is the thing which consistently knows both physical
>> >> >> > and virtual sbdf for a given assigned device.
>> >> >> >
>> >> >> > Ian.
>> >> >> >
>> >> >> Correct, can you point out which data structure holds the domU sbdf
>> >> >> corresponding to the actual sbdf in pciback.
>> >> >
>> >> > See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
>> >> > is that what you are referring to?
>> >>
>> >> Xen docs also mention about xen-pciback.passthrough=1. If I set this
>> >> in dom0 i see that the device is enumerated as the same sbdf in domU,
>> >> but
>> >> a) it is not shown in lspci
>> >> b) no front-back communication is done for reading devices configuration space
>> >> .
>> >> Is option useful / fully implemented for ARM ?
>> >
>> > I don't think this option is very useful. I wouldn't worry about it for
>> > now.
>>
>> Stefano / Ian / Konard / Julien,
>>
>> Attached is a first raw code FYI RFC Patches of PCI passthrough support on ARM.
>> - Linux Patch (3.18)
>> - Xen Patch  (4.5 staging)
>> ---(Smmu changes not included, thats a separate patch altogether)
>> This patches show the logic, at places need of improvements in code
>> organization/quality. I wanted to share to get initial comments.
>> This is working with SRIOV as well.
>>
>> Please have a look and let me know your positive comments
>
> Please send as individual inline patches. not attachments.
> Please also add a proper description to each patch and an entry 0/N email
> with the high level explanation of your work.
>
> See http://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches
Stefano, I just wanted to share the patches as a reference for our
discussion on the approach. Please recall that I had shared a design
flow earlier in this thread; these patches are just an extension of
it. I wanted to move this discussion to a conclusion.

These are not patches which I am submitting to the xen git.
If you are ok with the approach I will formally send the patches after
the 4.5 release.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-11-06 15:55                                               ` manish jaggi
@ 2014-11-06 16:02                                                 ` Julien Grall
  2014-11-06 16:07                                                   ` Stefano Stabellini
  0 siblings, 1 reply; 38+ messages in thread
From: Julien Grall @ 2014-11-06 16:02 UTC (permalink / raw)
  To: manish jaggi, Stefano Stabellini
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Ryan Wilson, xen-devel, JBeulich

Hi Manish,

On 06/11/2014 15:55, manish jaggi wrote:
> On 6 November 2014 21:18, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>> On Thu, 6 Nov 2014, manish jaggi wrote:
>>> On 20 October 2014 20:24, Stefano Stabellini
>>> <stefano.stabellini@eu.citrix.com> wrote:
>>>> On Mon, 20 Oct 2014, manish jaggi wrote:
>>>>> On 8 October 2014 20:21, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>>>>>> On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
>>>>>>> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>>>>>> On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
>>>>>>>>> Thanks for replying. As detailed in this thread, I need to create a
>>>>>>>>> hypercall that would send the following information to Xen at the time
>>>>>>>>> of PCI attach
>>>>>>>>> { sbdf , domU sbdf, domainId }.
>>>>>>>>> I am not able to find a way to get the domU sbdf from dom0 at the time
>>>>>>>>> of pci-attach.
>>>>>>>>
>>>>>>>> I think it would need to be done by the pciback driver in the dom0
>>>>>>>> kernel, which AFAIK is the thing which consistently knows both physical
>>>>>>>> and virtual sbdf for a given assigned device.
>>>>>>>>
>>>>>>>> Ian.
>>>>>>>>
>>>>>>> Correct, can you point out which data structure holds the domU sbdf
>>>>>>> corresponding to the actual sbdf in pciback.
>>>>>>
>>>>>> See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
>>>>>> is that what you are referring to?
>>>>>
>>>>> Xen docs also mention about xen-pciback.passthrough=1. If I set this
>>>>> in dom0 i see that the device is enumerated as the same sbdf in domU,
>>>>> but
>>>>> a) it is not shown in lspci
>>>>> b) no front-back communication is done for reading devices configuration space
>>>>> .
>>>>> Is option useful / fully implemented for ARM ?
>>>>
>>>> I don't think this option is very useful. I wouldn't worry about it for
>>>> now.
>>>
>>> Stefano / Ian / Konard / Julien,
>>>
>>> Attached is a first raw code FYI RFC Patches of PCI passthrough support on ARM.
>>> - Linux Patch (3.18)
>>> - Xen Patch  (4.5 staging)
>>> ---(Smmu changes not included, thats a separate patch altogether)
>>> This patches show the logic, at places need of improvements in code
>>> organization/quality. I wanted to share to get initial comments.
>>> This is working with SRIOV as well.
>>>
>>> Please have a look and let me know your positive comments
>>
>> Please send as individual inline patches. not attachments.
>> Please also add a proper description to each patch and an entry 0/N email
>> with the high level explanation of your work.
>>
>> See http://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches
> Stefano I just wanted to share the patches as reference to our
> discussion on the approach. Please recall I had shared in this mail a
> design flow. These are just an extension to it. I wanted to move this
> discussion to a conclusion
> There are not patches which I am submitting to xen git.
> If you are ok with the approach I will formally send the patches post
> 4.5 release.

In this case you can send the patch series tagged "[RFC]" in the
subject. It would be better to start sending your patch series now,
rather than after the 4.5 release, so we can start to review it and
maybe merge it as soon as the 4.6 window opens.

I gave a quick look at the patches you provided. They look like they
depend on GICv3 ITS and maybe some SMMU patches? (I doubt the current
driver works out of the box.)

Please provide everything so we can try the patch series and have a 
better overview of the changes.

Regards,


-- 
Julien Grall

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-11-06 16:02                                                 ` Julien Grall
@ 2014-11-06 16:07                                                   ` Stefano Stabellini
  2014-11-06 16:20                                                     ` manish jaggi
  0 siblings, 1 reply; 38+ messages in thread
From: Stefano Stabellini @ 2014-11-06 16:07 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ryan Wilson, Ian Campbell, Vijay Kilari, Stefano Stabellini,
	Prasun Kapoor, manish.jaggi, xen-devel, JBeulich, manish jaggi

On Thu, 6 Nov 2014, Julien Grall wrote:
> Hi Manish,
> 
> On 06/11/2014 15:55, manish jaggi wrote:
> > On 6 November 2014 21:18, Stefano Stabellini
> > <stefano.stabellini@eu.citrix.com> wrote:
> > > On Thu, 6 Nov 2014, manish jaggi wrote:
> > > > On 20 October 2014 20:24, Stefano Stabellini
> > > > <stefano.stabellini@eu.citrix.com> wrote:
> > > > > On Mon, 20 Oct 2014, manish jaggi wrote:
> > > > > > On 8 October 2014 20:21, Konrad Rzeszutek Wilk
> > > > > > <konrad.wilk@oracle.com> wrote:
> > > > > > > On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
> > > > > > > > On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com>
> > > > > > > > wrote:
> > > > > > > > > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
> > > > > > > > > > Thanks for replying. As detailed in this thread, I need to
> > > > > > > > > > create a
> > > > > > > > > > hypercall that would send the following information to Xen
> > > > > > > > > > at the time
> > > > > > > > > > of PCI attach
> > > > > > > > > > { sbdf , domU sbdf, domainId }.
> > > > > > > > > > I am not able to find a way to get the domU sbdf from dom0
> > > > > > > > > > at the time
> > > > > > > > > > of pci-attach.
> > > > > > > > > 
> > > > > > > > > I think it would need to be done by the pciback driver in the
> > > > > > > > > dom0
> > > > > > > > > kernel, which AFAIK is the thing which consistently knows both
> > > > > > > > > physical
> > > > > > > > > and virtual sbdf for a given assigned device.
> > > > > > > > > 
> > > > > > > > > Ian.
> > > > > > > > > 
> > > > > > > > Correct, can you point out which data structure holds the domU
> > > > > > > > sbdf
> > > > > > > > corresponding to the actual sbdf in pciback.
> > > > > > > 
> > > > > > > See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
> > > > > > > is that what you are referring to?
> > > > > > 
> > > > > > Xen docs also mention about xen-pciback.passthrough=1. If I set this
> > > > > > in dom0 i see that the device is enumerated as the same sbdf in
> > > > > > domU,
> > > > > > but
> > > > > > a) it is not shown in lspci
> > > > > > b) no front-back communication is done for reading devices
> > > > > > configuration space
> > > > > > .
> > > > > > Is option useful / fully implemented for ARM ?
> > > > > 
> > > > > I don't think this option is very useful. I wouldn't worry about it
> > > > > for
> > > > > now.
> > > > 
> > > > Stefano / Ian / Konard / Julien,
> > > > 
> > > > Attached is a first raw code FYI RFC Patches of PCI passthrough support
> > > > on ARM.
> > > > - Linux Patch (3.18)
> > > > - Xen Patch  (4.5 staging)
> > > > ---(Smmu changes not included, thats a separate patch altogether)
> > > > This patches show the logic, at places need of improvements in code
> > > > organization/quality. I wanted to share to get initial comments.
> > > > This is working with SRIOV as well.
> > > > 
> > > > Please have a look and let me know your positive comments
> > > 
> > > Please send as individual inline patches. not attachments.
> > > Please also add a proper description to each patch and an entry 0/N email
> > > with the high level explanation of your work.
> > > 
> > > See http://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches
> > Stefano I just wanted to share the patches as reference to our
> > discussion on the approach. Please recall I had shared in this mail a
> > design flow. These are just an extension to it. I wanted to move this
> > discussion to a conclusion
> > There are not patches which I am submitting to xen git.
> > If you are ok with the approach I will formally send the patches post
> > 4.5 release.
> 
> In this case you can send the patch series tagged "[RFC]" in the subject.

That's right. It is difficult to give even just early feedback
without the patch descriptions.


> It
> would better to start sending your patch series now, rather than post 4.5
> release. So we can start to review it and maybe merge it as soon as 4.6
> windows is opened.
> 
> I gave a quick look to the patch you provided. It looks like it depends on
> GICv3 ITS and maybe some SMMU patches? (I doubt the current driver is working
> out-of-box).
> 
> Please provide everything so we can try the patch series and have a better
> overview of the changes.
> 
> Regards,
> 
> 
> -- 
> Julien Grall
> 

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-11-06 16:07                                                   ` Stefano Stabellini
@ 2014-11-06 16:20                                                     ` manish jaggi
  2014-11-07 10:29                                                       ` Julien Grall
  0 siblings, 1 reply; 38+ messages in thread
From: manish jaggi @ 2014-11-06 16:20 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ryan Wilson, Ian Campbell, Vijay Kilari, Prasun Kapoor,
	manish.jaggi, Julien Grall, xen-devel, Jan Beulich

On 6 November 2014 21:37, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Thu, 6 Nov 2014, Julien Grall wrote:
>> Hi Manish,
>>
>> On 06/11/2014 15:55, manish jaggi wrote:
>> > On 6 November 2014 21:18, Stefano Stabellini
>> > <stefano.stabellini@eu.citrix.com> wrote:
>> > > On Thu, 6 Nov 2014, manish jaggi wrote:
>> > > > On 20 October 2014 20:24, Stefano Stabellini
>> > > > <stefano.stabellini@eu.citrix.com> wrote:
>> > > > > On Mon, 20 Oct 2014, manish jaggi wrote:
>> > > > > > On 8 October 2014 20:21, Konrad Rzeszutek Wilk
>> > > > > > <konrad.wilk@oracle.com> wrote:
>> > > > > > > On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
>> > > > > > > > On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com>
>> > > > > > > > wrote:
>> > > > > > > > > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
>> > > > > > > > > > Thanks for replying. As detailed in this thread, I need to
>> > > > > > > > > > create a
>> > > > > > > > > > hypercall that would send the following information to Xen
>> > > > > > > > > > at the time
>> > > > > > > > > > of PCI attach
>> > > > > > > > > > { sbdf , domU sbdf, domainId }.
>> > > > > > > > > > I am not able to find a way to get the domU sbdf from dom0
>> > > > > > > > > > at the time
>> > > > > > > > > > of pci-attach.
>> > > > > > > > >
>> > > > > > > > > I think it would need to be done by the pciback driver in the
>> > > > > > > > > dom0
>> > > > > > > > > kernel, which AFAIK is the thing which consistently knows both
>> > > > > > > > > physical
>> > > > > > > > > and virtual sbdf for a given assigned device.
>> > > > > > > > >
>> > > > > > > > > Ian.
>> > > > > > > > >
>> > > > > > > > Correct, can you point out which data structure holds the domU
>> > > > > > > > sbdf
>> > > > > > > > corresponding to the actual sbdf in pciback.
>> > > > > > >
>> > > > > > > See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
>> > > > > > > is that what you are referring to?
>> > > > > >
>> > > > > > Xen docs also mention about xen-pciback.passthrough=1. If I set this
>> > > > > > in dom0 i see that the device is enumerated as the same sbdf in
>> > > > > > domU,
>> > > > > > but
>> > > > > > a) it is not shown in lspci
>> > > > > > b) no front-back communication is done for reading devices
>> > > > > > configuration space
>> > > > > > .
>> > > > > > Is option useful / fully implemented for ARM ?
>> > > > >
>> > > > > I don't think this option is very useful. I wouldn't worry about it
>> > > > > for
>> > > > > now.
>> > > >
>> > > > Stefano / Ian / Konard / Julien,
>> > > >
>> > > > Attached is a first raw code FYI RFC Patches of PCI passthrough support
>> > > > on ARM.
>> > > > - Linux Patch (3.18)
>> > > > - Xen Patch  (4.5 staging)
>> > > > ---(Smmu changes not included, thats a separate patch altogether)
>> > > > This patches show the logic, at places need of improvements in code
>> > > > organization/quality. I wanted to share to get initial comments.
>> > > > This is working with SRIOV as well.
>> > > >
>> > > > Please have a look and let me know your positive comments
>> > >
>> > > Please send as individual inline patches. not attachments.
>> > > Please also add a proper description to each patch and an entry 0/N email
>> > > with the high level explanation of your work.
>> > >
>> > > See http://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches
>> > Stefano I just wanted to share the patches as reference to our
>> > discussion on the approach. Please recall I had shared in this mail a
>> > design flow. These are just an extension to it. I wanted to move this
>> > discussion to a conclusion
>> > There are not patches which I am submitting to xen git.
>> > If you are ok with the approach I will formally send the patches post
>> > 4.5 release.
>>
>> In this case you can send the patch series tagged "[RFC]" in the subject.
>
> That's right. It is difficult to give even just an early feedback
> without the patch descriptions.
>
I assumed that the context is preserved in this mail thread. I shared
the flow in the first few mails and am sharing the code after a lot of
discussion in this thread.

Anyhow, I will share the code as an RFC shortly.
Thanks for the comments.

>
>> It
>> would better to start sending your patch series now, rather than post 4.5
>> release. So we can start to review it and maybe merge it as soon as 4.6
>> windows is opened.
>>
>> I gave a quick look to the patch you provided. It looks like it depends on
>> GICv3 ITS and maybe some SMMU patches? (I doubt the current driver is working
>> out-of-box).
>>
>> Please provide everything so we can try the patch series and have a better
>> overview of the changes.
>>
>> Regards,
>>
>>
>> --
>> Julien Grall
>>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-11-06 15:28                                           ` manish jaggi
  2014-11-06 15:48                                             ` Stefano Stabellini
@ 2014-11-06 19:41                                             ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 38+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-11-06 19:41 UTC (permalink / raw)
  To: manish jaggi
  Cc: Ryan Wilson, Ian Campbell, Vijay Kilari, Stefano Stabellini,
	Prasun Kapoor, manish.jaggi, Julien Grall, xen-devel, JBeulich

On Thu, Nov 06, 2014 at 08:58:18PM +0530, manish jaggi wrote:
> On 20 October 2014 20:24, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 20 Oct 2014, manish jaggi wrote:
> >> On 8 October 2014 20:21, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> >> > On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
> >> >> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> >> > On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
> >> >> >> Thanks for replying. As detailed in this thread, I need to create a
> >> >> >> hypercall that would send the following information to Xen at the time
> >> >> >> of PCI attach
> >> >> >> { sbdf , domU sbdf, domainId }.
> >> >> >> I am not able to find a way to get the domU sbdf from dom0 at the time
> >> >> >> of pci-attach.
> >> >> >
> >> >> > I think it would need to be done by the pciback driver in the dom0
> >> >> > kernel, which AFAIK is the thing which consistently knows both physical
> >> >> > and virtual sbdf for a given assigned device.
> >> >> >
> >> >> > Ian.
> >> >> >
> >> >> Correct, can you point out which data structure holds the domU sbdf
> >> >> corresponding to the actual sbdf in pciback.
> >> >
> >> > See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
> >> > is that what you are referring to?
> >>
> >> Xen docs also mention about xen-pciback.passthrough=1. If I set this
> >> in dom0 i see that the device is enumerated as the same sbdf in domU,
> >> but
> >> a) it is not shown in lspci
> >> b) no front-back communication is done for reading devices configuration space
> >> .
> >> Is option useful / fully implemented for ARM ?
> >
> > I don't think this option is very useful. I wouldn't worry about it for
> > now.
> 
> Stefano / Ian / Konard / Julien,
> 
> Attached is a first raw code FYI RFC Patches of PCI passthrough support on ARM.
> - Linux Patch (3.18)

I would move the code that arch/arm64/xen/xen_pci.c introduces
(which is also in arch/x86/pci/xen.c) into its own generic file - say
to drivers/xen/pci.c.

That way you share the code between those two platforms instead
of copying it.
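
A minimal sketch of what such a shared file could look like, purely
illustrative and assuming the existing PHYSDEVOP_pci_device_add
interface (the notifier below is a placeholder, not the actual
implementation):

/* drivers/xen/pci.c - illustrative sketch only */
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/notifier.h>
#include <xen/xen.h>
#include <xen/interface/physdev.h>
#include <asm/xen/hypercall.h>

static int xen_pci_notifier(struct notifier_block *nb,
			    unsigned long action, void *data)
{
	struct device *dev = data;
	struct pci_dev *pci_dev = to_pci_dev(dev);
	struct physdev_pci_device_add add = {
		.seg   = pci_domain_nr(pci_dev->bus),
		.bus   = pci_dev->bus->number,
		.devfn = pci_dev->devfn,
	};

	if (action != BUS_NOTIFY_ADD_DEVICE)
		return NOTIFY_DONE;

	/* Tell Xen about the new device so the SMMU/IOMMU can be set up. */
	if (HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add))
		dev_err(dev, "failed to notify Xen of new PCI device\n");

	return NOTIFY_OK;
}

static struct notifier_block device_nb = {
	.notifier_call = xen_pci_notifier,
};

static int __init register_xen_pci_notifier(void)
{
	if (!xen_initial_domain())
		return 0;
	return bus_register_notifier(&pci_bus_type, &device_nb);
}
arch_initcall(register_xen_pci_notifier);

Both arm64 and x86 could then build this single file instead of
carrying two copies of the same hypercall plumbing.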

> - Xen Patch  (4.5 staging)
> ---(Smmu changes not included, thats a separate patch altogether)
> This patches show the logic, at places need of improvements in code
> organization/quality. I wanted to share to get initial comments.
> This is working with SRIOV as well.

Fantastic!
> 
> Please have a look and let me know your positive comments

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC + Queries] Flow of PCI passthrough in ARM
  2014-11-06 16:20                                                     ` manish jaggi
@ 2014-11-07 10:29                                                       ` Julien Grall
  0 siblings, 0 replies; 38+ messages in thread
From: Julien Grall @ 2014-11-07 10:29 UTC (permalink / raw)
  To: manish jaggi, Stefano Stabellini
  Cc: Ian Campbell, Vijay Kilari, Prasun Kapoor, manish.jaggi,
	Ryan Wilson, xen-devel, Jan Beulich

Hi Manish,

On 06/11/2014 16:20, manish jaggi wrote:
> On 6 November 2014 21:37, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>> On Thu, 6 Nov 2014, Julien Grall wrote:
>>> Hi Manish,
>>>
>>> On 06/11/2014 15:55, manish jaggi wrote:
>>>> On 6 November 2014 21:18, Stefano Stabellini
>>>> <stefano.stabellini@eu.citrix.com> wrote:
>>>>> On Thu, 6 Nov 2014, manish jaggi wrote:
>>>>>> On 20 October 2014 20:24, Stefano Stabellini
>>>>>> <stefano.stabellini@eu.citrix.com> wrote:
>>>>>>> On Mon, 20 Oct 2014, manish jaggi wrote:
>>>>>>>> On 8 October 2014 20:21, Konrad Rzeszutek Wilk
>>>>>>>> <konrad.wilk@oracle.com> wrote:
>>>>>>>>> On Wed, Oct 08, 2014 at 07:17:48PM +0530, manish jaggi wrote:
>>>>>>>>>> On 8 October 2014 19:15, Ian Campbell <Ian.Campbell@citrix.com>
>>>>>>>>>> wrote:
>>>>>>>>>>> On Wed, 2014-10-08 at 19:07 +0530, manish jaggi wrote:
>>>>>>>>>>>> Thanks for replying. As detailed in this thread, I need to
>>>>>>>>>>>> create a
>>>>>>>>>>>> hypercall that would send the following information to Xen
>>>>>>>>>>>> at the time
>>>>>>>>>>>> of PCI attach
>>>>>>>>>>>> { sbdf , domU sbdf, domainId }.
>>>>>>>>>>>> I am not able to find a way to get the domU sbdf from dom0
>>>>>>>>>>>> at the time
>>>>>>>>>>>> of pci-attach.
>>>>>>>>>>>
>>>>>>>>>>> I think it would need to be done by the pciback driver in the
>>>>>>>>>>> dom0
>>>>>>>>>>> kernel, which AFAIK is the thing which consistently knows both
>>>>>>>>>>> physical
>>>>>>>>>>> and virtual sbdf for a given assigned device.
>>>>>>>>>>>
>>>>>>>>>>> Ian.
>>>>>>>>>>>
>>>>>>>>>> Correct, can you point out which data structure holds the domU
>>>>>>>>>> sbdf
>>>>>>>>>> corresponding to the actual sbdf in pciback.
>>>>>>>>>
>>>>>>>>> See 'xen_pcibk_export_device' or 'xen_pcibk_publish_pci_root'
>>>>>>>>> is that what you are referring to?
>>>>>>>>
>>>>>>>> Xen docs also mention about xen-pciback.passthrough=1. If I set this
>>>>>>>> in dom0 i see that the device is enumerated as the same sbdf in
>>>>>>>> domU,
>>>>>>>> but
>>>>>>>> a) it is not shown in lspci
>>>>>>>> b) no front-back communication is done for reading devices
>>>>>>>> configuration space
>>>>>>>> .
>>>>>>>> Is option useful / fully implemented for ARM ?
>>>>>>>
>>>>>>> I don't think this option is very useful. I wouldn't worry about it
>>>>>>> for
>>>>>>> now.
>>>>>>
>>>>>> Stefano / Ian / Konard / Julien,
>>>>>>
>>>>>> Attached is a first raw code FYI RFC Patches of PCI passthrough support
>>>>>> on ARM.
>>>>>> - Linux Patch (3.18)
>>>>>> - Xen Patch  (4.5 staging)
>>>>>> ---(Smmu changes not included, thats a separate patch altogether)
>>>>>> This patches show the logic, at places need of improvements in code
>>>>>> organization/quality. I wanted to share to get initial comments.
>>>>>> This is working with SRIOV as well.
>>>>>>
>>>>>> Please have a look and let me know your positive comments
>>>>>
>>>>> Please send as individual inline patches. not attachments.
>>>>> Please also add a proper description to each patch and an entry 0/N email
>>>>> with the high level explanation of your work.
>>>>>
>>>>> See http://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches
>>>> Stefano I just wanted to share the patches as reference to our
>>>> discussion on the approach. Please recall I had shared in this mail a
>>>> design flow. These are just an extension to it. I wanted to move this
>>>> discussion to a conclusion
>>>> There are not patches which I am submitting to xen git.
>>>> If you are ok with the approach I will formally send the patches post
>>>> 4.5 release.
>>>
>>> In this case you can send the patch series tagged "[RFC]" in the subject.
>>
>> That's right. It is difficult to give even just an early feedback
>> without the patch descriptions.
>>
> I assumed that the context is preserved in this mail thread. I shared
> the flow in the first few mails and am sharing the code after a lot of
> discussion in this thread.

There are about 30 mails in this discussion. It would be better if you
gave a summary; that would save us from reading all the mails again to
find the conclusion.

> Anyhow I will share the code as RFC in some time.

Thanks,

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 38+ messages in thread

end of thread, other threads:[~2014-11-07 10:29 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-18 11:34 [RFC + Queries] Flow of PCI passthrough in ARM manish jaggi
2014-09-22 10:45 ` Stefano Stabellini
2014-09-22 11:09   ` Ian Campbell
2014-09-24 10:56     ` manish jaggi
2014-09-24 10:53   ` manish jaggi
2014-09-24 12:13     ` Jan Beulich
2014-09-24 14:10     ` Stefano Stabellini
2014-09-24 18:32       ` manish jaggi
2014-09-25 10:27         ` Stefano Stabellini
2014-10-01 10:37           ` manish jaggi
2014-10-02 16:41             ` Stefano Stabellini
2014-10-02 16:59               ` Stefano Stabellini
2014-10-03  9:01                 ` Ian Campbell
2014-10-03  9:33                   ` manish jaggi
2014-10-03  9:32                 ` manish jaggi
2014-10-06 11:05                   ` manish jaggi
2014-10-06 14:11                     ` Stefano Stabellini
2014-10-06 15:38                       ` Ian Campbell
2014-10-06 17:39                         ` manish jaggi
2014-10-06 17:39                       ` manish jaggi
2014-10-07 18:17                         ` Stefano Stabellini
2014-10-08 11:46                           ` manish jaggi
2014-10-08 12:46                             ` Konrad Rzeszutek Wilk
2014-10-08 13:37                               ` manish jaggi
2014-10-08 13:45                                 ` Ian Campbell
2014-10-08 13:47                                   ` manish jaggi
2014-10-08 13:58                                     ` Ian Campbell
2014-10-08 14:51                                     ` Konrad Rzeszutek Wilk
2014-10-20 13:30                                       ` manish jaggi
2014-10-20 14:54                                         ` Stefano Stabellini
2014-11-06 15:28                                           ` manish jaggi
2014-11-06 15:48                                             ` Stefano Stabellini
2014-11-06 15:55                                               ` manish jaggi
2014-11-06 16:02                                                 ` Julien Grall
2014-11-06 16:07                                                   ` Stefano Stabellini
2014-11-06 16:20                                                     ` manish jaggi
2014-11-07 10:29                                                       ` Julien Grall
2014-11-06 19:41                                             ` Konrad Rzeszutek Wilk
