xen-devel.lists.xenproject.org archive mirror
* PCI Pass-through in Xen ARM - Draft 2.
@ 2015-06-28 18:38 Manish Jaggi
  2015-06-29 10:31 ` Julien Grall
  2015-06-29 15:34 ` Ian Campbell
  0 siblings, 2 replies; 51+ messages in thread
From: Manish Jaggi @ 2015-06-28 18:38 UTC (permalink / raw)
  To: xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor,
	Julien Grall, Kumar, Vijaya

PCI Pass-through in Xen ARM
--------------------------

Draft 2

Index

1. Background

2. Basic PCI Support in Xen ARM
2.1 pci_hostbridge and pci_hostbridge_ops
2.2 PHYSDEVOP_pci_host_bridge_add hypercall

3. Dom0 Access PCI devices

4. DomU assignment of PCI device
4.1 Holes in guest memory space
4.2 New entries in xenstore for device BARs
4.3 Hypercall for bdf mapping notification to xen
4.4 Change in Linux PCI FrontEnd - backend driver
  for MSI/X programming

5. NUMA and PCI passthrough

6. DomU pci device attach flow


Revision History
----------------
Changes from Draft 1
a) map_mmio hypercall removed from earlier draft
b) device bar mapping into guest not 1:1
c) holes in guest address space 32bit / 64bit for MMIO virtual BARs
d) xenstore device's BAR info addition.


1. Background of PCI passthrough
--------------------------------
Passthrough refers to assigning a PCI device to a guest domain (domU) such that
the guest has full control over the device. The MMIO space and interrupts are
managed by the guest itself, close to how a bare-metal kernel manages a device.

A device's access to the guest address space needs to be isolated and
protected. The SMMU (System MMU, ARM's IOMMU) is programmed by the Xen
hypervisor to allow the device to access guest memory for data transfer and to
send MSI/X interrupts. For MSI/X, the device writes to the ITS Interrupt
Translation Register (GITS_TRANSLATER) in the GITS address space.

2. Basic PCI Support for ARM
----------------------------
The APIs to read/write the PCI configuration space are based on segment:bdf.
How the sbdf is mapped to a physical address is within the realm of the PCI
host controller.

ARM PCI support in Xen introduces PCI host controller drivers similar to what
exists in Linux. Each driver registers callbacks, which are invoked on matching
the compatible property in the PCI device tree node.

2.1 pci_hostbridge and pci_hostbridge_ops
The init function in the pci host driver calls the following to register the
hostbridge callbacks:
int pci_hostbridge_register(pci_hostbridge_t *pcihb);

struct pci_hostbridge_ops {
     u32 (*pci_conf_read)(struct pci_hostbridge*, u32 bus, u32 devfn,
                                 u32 reg, u32 bytes);
     void (*pci_conf_write)(struct pci_hostbridge*, u32 bus, u32 devfn,
                                 u32 reg, u32 bytes, u32 val);
};

struct pci_hostbridge{
     u32 segno;
     paddr_t cfg_base;
     paddr_t cfg_size;
     struct dt_device_node *dt_node;
     struct pci_hostbridge_ops ops;
     struct list_head list;
};

A PCI conf read function would internally look as follows:
u32 pcihb_conf_read(u32 seg, u32 bus, u32 devfn, u32 reg, u32 bytes)
{
     pci_hostbridge_t *pcihb;
     list_for_each_entry(pcihb, &pci_hostbridge_list, list)
     {
         if ( pcihb->segno == seg )
             return pcihb->ops.pci_conf_read(pcihb, bus, devfn, reg, bytes);
     }
     return (u32)-1; /* all-ones: no host bridge registered for this segment */
}
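The registration and per-segment dispatch described above can be exercised in
a self-contained sketch. The structures below are simplified stand-ins for the
Xen ones (a plain singly linked list instead of list_head), and demo_conf_read
is an invented op used only to make the dispatch visible:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

typedef uint32_t u32;
typedef uint64_t paddr_t;

struct pci_hostbridge;

struct pci_hostbridge_ops {
    u32 (*pci_conf_read)(struct pci_hostbridge *, u32 bus, u32 devfn,
                         u32 reg, u32 bytes);
    void (*pci_conf_write)(struct pci_hostbridge *, u32 bus, u32 devfn,
                           u32 reg, u32 bytes, u32 val);
};

typedef struct pci_hostbridge {
    u32 segno;
    paddr_t cfg_base;
    paddr_t cfg_size;
    struct pci_hostbridge_ops ops;
    struct pci_hostbridge *next; /* simplified list instead of list_head */
} pci_hostbridge_t;

static pci_hostbridge_t *pci_hostbridge_list;

/* Called by a host bridge driver's init function. */
int pci_hostbridge_register(pci_hostbridge_t *pcihb)
{
    pcihb->next = pci_hostbridge_list;
    pci_hostbridge_list = pcihb;
    return 0;
}

/* Invented demo op: returns the bridge's segment so dispatch is observable. */
static u32 demo_conf_read(struct pci_hostbridge *hb, u32 bus, u32 devfn,
                          u32 reg, u32 bytes)
{
    (void)bus; (void)devfn; (void)reg; (void)bytes;
    return hb->segno;
}

/* Dispatch a config read to the bridge owning the segment. */
u32 pcihb_conf_read(u32 seg, u32 bus, u32 devfn, u32 reg, u32 bytes)
{
    pci_hostbridge_t *pcihb;

    for ( pcihb = pci_hostbridge_list; pcihb; pcihb = pcihb->next )
        if ( pcihb->segno == seg )
            return pcihb->ops.pci_conf_read(pcihb, bus, devfn, reg, bytes);

    return ~0u; /* all-ones, as hardware returns for absent devices */
}

static pci_hostbridge_t hb0 = { .segno = 0,
                                .ops = { .pci_conf_read = demo_conf_read } };
static pci_hostbridge_t hb1 = { .segno = 1,
                                .ops = { .pci_conf_read = demo_conf_read } };
```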

2.2 PHYSDEVOP_pci_host_bridge_add hypercall

Xen code accesses PCI configuration space based on the sbdf received from the
guest. The order in which the PCI device tree nodes appear may not be the same
as the order of device enumeration in dom0. Thus there needs to be a mechanism
to bind the segment number assigned by dom0 to the PCI host controller. The
following hypercall is introduced:

#define PHYSDEVOP_pci_host_bridge_add    44
struct physdev_pci_host_bridge_add {
     /* IN */
     uint16_t seg;
     uint64_t cfg_base;
     uint64_t cfg_size;
};

This hypercall is invoked before dom0 invokes the PHYSDEVOP_pci_device_add
hypercall. The handler invokes the following to record the segment number in
the pci_hostbridge:

int pci_hostbridge_setup(uint32_t segno, uint64_t cfg_base, uint64_t cfg_size);

Subsequent calls to pci_conf_read/write are completed by the pci_hostbridge_ops
of the respective pci_hostbridge.
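A sketch of how the handler could bind the dom0-assigned segment to the bridge
whose config window matches. The fixed array and example addresses here are
invented stand-ins for walking the real pci_hostbridge list:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

#define SEG_UNASSIGNED 0xffffffffu

/* Simplified bridge records; in Xen these live on pci_hostbridge_list. */
struct hb { uint32_t segno; paddr_t cfg_base; paddr_t cfg_size; };

static struct hb bridges[] = {
    { SEG_UNASSIGNED, 0x30000000, 0x10000000 },
    { SEG_UNASSIGNED, 0x40000000, 0x10000000 },
};

/* Bind the segment number dom0 chose to the matching config window. */
int pci_hostbridge_setup(uint32_t segno, paddr_t cfg_base, paddr_t cfg_size)
{
    unsigned int i;

    for ( i = 0; i < sizeof(bridges)/sizeof(bridges[0]); i++ )
        if ( bridges[i].cfg_base == cfg_base &&
             bridges[i].cfg_size == cfg_size )
        {
            bridges[i].segno = segno;
            return 0;
        }

    return -1; /* no bridge with this config window; -ENODEV in real code */
}
```

The config window (cfg_base, cfg_size) is the natural key here because it is
the one piece of information both the device tree and dom0's hypercall share.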

3. Dom0 access PCI device
---------------------------------
As per the design of the Xen hypervisor, dom0 enumerates the PCI devices. For
each device the MMIO space has to be mapped in the Stage 2 translation for
dom0. For dom0, Xen maps the ranges found in the PCI device tree nodes in the
Stage 2 translation.

The GITS_TRANSLATER space (4KB) must be programmed in the Stage 2 translation
so that MSI/X works. This is done during vITS initialization for dom0/domU.

4. DomU access / assignment PCI device
--------------------------------------
In the flow of pci-attach device, the toolkit reads the PCI configuration
space BAR registers. The toolkit has the guest memory map and the information
about the MMIO holes.

When the first PCI device is assigned to a domU, the toolkit allocates a
virtual BAR region from the MMIO hole area. The toolkit then sends the domctl
xc_domain_memory_mapping to map it in the Stage 2 translation.
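The allocation step can be sketched as a simple cursor over the hole. PCI BARs
are power-of-two sized and must be size-aligned, so the cursor is aligned up
before carving. The hole base/size constants below are invented placeholders
for the GUEST_MMIO_HOLE* values:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative hole; real values come from GUEST_MMIO_HOLE* in arch-arm.h. */
#define HOLE_BASE 0x10000000ULL
#define HOLE_SIZE 0x10000000ULL

static uint64_t next_free = HOLE_BASE;

/*
 * Allocate a virtual BAR from the hole: align the cursor up to the BAR size
 * (BARs must be size-aligned), then advance it. Returns 0 on exhaustion.
 */
uint64_t alloc_virtual_bar(uint64_t bar_size)
{
    uint64_t gaddr = (next_free + bar_size - 1) & ~(bar_size - 1);

    if ( gaddr + bar_size > HOLE_BASE + HOLE_SIZE )
        return 0; /* hole exhausted */

    next_free = gaddr + bar_size;
    return gaddr;
}
```

The toolkit would then turn each (gaddr, physical BAR, size) triple into a
Stage 2 mapping via the xc_domain_memory_mapping libxc call mentioned above,
passing frame numbers rather than byte addresses.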

4.1 Holes in guest memory space
-------------------------------
Holes are added in the guest memory space for mapping the PCI devices' BAR
regions. These are defined in arch-arm.h:

/* For 32bit */
GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
  
/* For 64bit */
GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE

4.2 New entries in xenstore for device BARs
--------------------------------------------
The toolkit also updates the xenstore information for the device
(virtual BAR:physical BAR). This information is read by xen-pciback and
returned to the pcifront driver on configuration space accesses.
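One possible layout for these entries under the existing pciback backend
directory is shown below. All key names and values here are invented for
illustration; the draft does not fix the schema:

```
/local/domain/0/backend/pci/<domid>/0/dev-0    = "0001:00:03.0"
/local/domain/0/backend/pci/<domid>/0/vdev-0   = "0000:00:00.0"
/local/domain/0/backend/pci/<domid>/0/vbar-0-0 = "0x10000000:0x4f200000:0x1000"
                                           (virtual BAR:physical BAR:size)
```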

4.3 Hypercall for bdf mapping notification to xen
-----------------------------------------------
#define PHYSDEVOP_map_sbdf              43
typedef struct {
     u32 s;
     u8 b;
     u8 df;
     u16 res;
} sbdf_t;
struct physdev_map_sbdf {
     int domain_id;
     sbdf_t    sbdf;
     sbdf_t    gsbdf;
};

Each domain has a pdev list, which contains all its PCI devices. The pdev
structure already has the sbdf information. The arch_pci_dev is updated to
contain the gsbdf information (gs: guest segment id).

Whenever there is a trap from the guest or an interrupt has to be injected,
the pdev list is iterated to find the gsbdf.
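The lookup above amounts to a linear scan keyed on the physical sbdf. A
self-contained sketch (the pdev entries, sample sbdf values and the helper
name find_gsbdf are all invented for illustration):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Unpacked segment:bus:dev.fn, mirroring sbdf_t in the hypercall. */
typedef struct { uint32_t s; uint8_t b; uint8_t d; uint8_t f; } sbdf_t;

/* Simplified per-domain pdev list entry carrying both mappings. */
struct pdev { sbdf_t sbdf; sbdf_t gsbdf; };

static const struct pdev pdevs[] = {
    { { 1, 0, 3, 0 }, { 0, 0, 0, 0 } },
    { { 1, 0, 4, 0 }, { 0, 0, 1, 0 } },
};

/* Translate a physical sbdf to the guest-visible one; -1 if not assigned. */
int find_gsbdf(sbdf_t key, sbdf_t *out)
{
    unsigned int i;

    for ( i = 0; i < sizeof(pdevs)/sizeof(pdevs[0]); i++ )
        if ( pdevs[i].sbdf.s == key.s && pdevs[i].sbdf.b == key.b &&
             pdevs[i].sbdf.d == key.d && pdevs[i].sbdf.f == key.f )
        {
            *out = pdevs[i].gsbdf;
            return 0;
        }

    return -1;
}
```

Note that the guest-visible dev number differs from the physical one, which is
exactly why the mapping cannot be computed and has to be recorded.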

4.4 Change in Linux PCI frontend-backend driver for MSI/X programming
---------------------------------------------------------------------
On the PCI frontend bus, an msi-parent pointing to the gicv3-its is added.
There is a single virtual ITS for a domU, as there is only a single virtual
PCI bus in the domU. This ensures that the config_msi calls are handled by the
gicv3-its driver in the domU kernel rather than via frontend-backend
communication between dom0 and domU.

5. NUMA domU and vITS
-----------------------------
a) On NUMA systems, a domU still has a single vITS node.
b) How can Xen identify the ITS to which a device is connected?
- Using the segment number to query the API which returns the PCI host
controller's device tree node:

struct dt_device_node* pci_hostbridge_dt_node(uint32_t segno)

c) Query the interrupt parent of the PCI device node to find the ITS.
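Steps b) and c) chain into a segment -> device node -> interrupt parent walk.
A toy sketch, with the node names, msi_parent field and its_for_segment helper
all invented (the real code would use Xen's dt_* accessors):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy device-tree nodes: each PCI node records its MSI/interrupt parent. */
struct dt_device_node {
    const char *name;
    const char *msi_parent;
};

/* Illustrative per-segment host bridge nodes. */
static struct dt_device_node pcie0 = { "pcie@30000000", "its@20100000" };
static struct dt_device_node pcie1 = { "pcie@40000000", "its@20200000" };

struct dt_device_node *pci_hostbridge_dt_node(uint32_t segno)
{
    switch ( segno )
    {
    case 0: return &pcie0;
    case 1: return &pcie1;
    default: return NULL;
    }
}

/* Resolve the ITS serving a device: segment -> PCI node -> interrupt parent. */
const char *its_for_segment(uint32_t segno)
{
    struct dt_device_node *node = pci_hostbridge_dt_node(segno);

    return node ? node->msi_parent : NULL;
}
```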

6. DomU Bootup flow
---------------------
a. A domU boots without any PCI devices assigned. A daemon listens for events
from xenstore. When a device is attached to the domU, the frontend PCI bus
driver starts enumerating the devices. The frontend driver communicates with
the backend driver in dom0 to read the PCI config space.

b. The backend driver returns the virtual BAR ranges, which are already mapped
in the domU Stage 2 translation.

c. The device driver of the specific PCI device invokes methods to configure
the MSI/X interrupts, which are handled by the ITS driver in the domU kernel.
The reads/writes by the ITS driver are trapped in Xen, which finds the actual
sbdf based on the map_sbdf hypercall information.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-06-28 18:38 PCI Pass-through in Xen ARM - Draft 2 Manish Jaggi
@ 2015-06-29 10:31 ` Julien Grall
  2015-06-29 10:50   ` Ian Campbell
  2015-07-05  5:55   ` Manish Jaggi
  2015-06-29 15:34 ` Ian Campbell
  1 sibling, 2 replies; 51+ messages in thread
From: Julien Grall @ 2015-06-29 10:31 UTC (permalink / raw)
  To: Manish Jaggi, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya

Hi Manish,

On 28/06/15 19:38, Manish Jaggi wrote:
> 4.1 Holes in guest memory space
> ----------------------------
> Holes are added in the guest memory space for mapping pci device's BAR
> regions.
> These are defined in arch-arm.h
> 
> /* For 32bit */
> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>  
> /* For 64bit */
> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE

The memory layout for 32bit and 64bit are exactly the same. Why do you
need to differ here?

> 4.2 New entries in xenstore for device BARs
> --------------------------------------------
> toolkit also updates the xenstore information for the device
> (virtualbar:physical bar).
> This information is read by xenpciback and returned to the pcifront
> driver configuration
> space accesses.

Can you detail what you plan to put in xenstore and how?

What about the expansion ROM?

> 4.3 Hypercall for bdf mapping notification to xen
> -----------------------------------------------
> #define PHYSDEVOP_map_sbdf              43
> typedef struct {
>     u32 s;
>     u8 b;
>     u8 df;
>     u16 res;
> } sbdf_t;
> struct physdev_map_sbdf {
>     int domain_id;
>     sbdf_t    sbdf;
>     sbdf_t    gsbdf;
> };
> 
> Each domain has a pdev list, which contains the list of all pci devices.
> The
> pdev structure already has a sbdf information. The arch_pci_dev is
> updated to
> contain the gsbdf information. (gs- guest segment id)
> 
> Whenever there is trap from guest or an interrupt has to be injected,
> the pdev
> list is iterated to find the gsbdf.

Can you give more background for this section? i.e:
	- Why do you need this?
	- How xen will translate the gbdf to a vDeviceID?
	- Who will call this hypercall?
	- Why not setting the gsbdf when the device is assigned?

Regards,

-- 
Julien Grall


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-06-29 10:31 ` Julien Grall
@ 2015-06-29 10:50   ` Ian Campbell
  2015-06-29 11:00     ` Julien Grall
  2015-07-05  5:55   ` Manish Jaggi
  1 sibling, 1 reply; 51+ messages in thread
From: Ian Campbell @ 2015-06-29 10:50 UTC (permalink / raw)
  To: Julien Grall
  Cc: Prasun Kapoor, Kumar, Vijaya, Manish Jaggi, xen-devel,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Mon, 2015-06-29 at 11:31 +0100, Julien Grall wrote:
> Hi Manish,
> 
> On 28/06/15 19:38, Manish Jaggi wrote:
> > 4.1 Holes in guest memory space
> > ----------------------------
> > Holes are added in the guest memory space for mapping pci device's BAR
> > regions.
> > These are defined in arch-arm.h
> > 
> > /* For 32bit */
> > GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
> >  
> > /* For 64bit */
> > GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
> 
> The memory layout for 32bit and 64bit are exactly the same. Why do you
> need to differ here?

I took this to be a "hole under 4GB" and a "hole over 4GB" IOW rather
than specific to 32-bit/64-bit guests it is specific to 32/64 bit
devices, the latter of which could be placed higher in RAM, which is
useful since the space under 4GB is naturally more limited.

Ian.


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-06-29 10:50   ` Ian Campbell
@ 2015-06-29 11:00     ` Julien Grall
  0 siblings, 0 replies; 51+ messages in thread
From: Julien Grall @ 2015-06-29 11:00 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Manish Jaggi, xen-devel,
	Stefano Stabellini, Kulkarni, Ganapatrao

On 29/06/15 11:50, Ian Campbell wrote:
> On Mon, 2015-06-29 at 11:31 +0100, Julien Grall wrote:
>> Hi Manish,
>>
>> On 28/06/15 19:38, Manish Jaggi wrote:
>>> 4.1 Holes in guest memory space
>>> ----------------------------
>>> Holes are added in the guest memory space for mapping pci device's BAR
>>> regions.
>>> These are defined in arch-arm.h
>>>
>>> /* For 32bit */
>>> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>>>  
>>> /* For 64bit */
>>> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
>>
>> The memory layout for 32bit and 64bit are exactly the same. Why do you
>> need to differ here?
> 
> I took this to be a "hole under 4GB" and a "hole over 4GB" IOW rather
> than specific to 32-bit/64-bit guests it is specific to 32/64 bit
> devices, the latter of which could be placed higher in RAM, which is
> useful since the space under 4GB is naturally more limited.

I wasn't able to deduce that with this section. It needs a minimum of
explanation based on what you just said.

That also bring the question about 64 bit PCI and short page table. Do
we support it (i.e possibility to allocate 64 bit BAR below 4GB)?

I don't mind the answer, but it needs to be clarify for the implementation.

Regards,

-- 
Julien Grall


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-06-28 18:38 PCI Pass-through in Xen ARM - Draft 2 Manish Jaggi
  2015-06-29 10:31 ` Julien Grall
@ 2015-06-29 15:34 ` Ian Campbell
  1 sibling, 0 replies; 51+ messages in thread
From: Ian Campbell @ 2015-06-29 15:34 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Mon, 2015-06-29 at 00:08 +0530, Manish Jaggi wrote:
> PCI Pass-through in Xen ARM
> --------------------------
> 
> Draft 2
> 
> Index
> 
> 1. Background
> 
> 2. Basic PCI Support in Xen ARM
> 2.1 pci_hostbridge and pci_hostbridge_ops
> 2.2 PHYSDEVOP_HOSTBRIDGE_ADD hypercall
> 
> 3. Dom0 Access PCI devices
> 
> 4. DomU assignment of PCI device
> 4.1 Holes in guest memory space
> 4.2 New entries in xenstore for device BARs
> 4.3 Hypercall for bdf mapping noification to xen
> 4.4 Change in Linux PCI FrontEnd - backend driver
>   for MSI/X programming
> 
> 5. NUMA and PCI passthrough
> 
> 6. DomU pci device attach flow
> 
> 
> Revision History
> ----------------
> Changes from Draft 1
> a) map_mmio hypercall removed from earlier draft
> b) device bar mapping into guest not 1:1
> c) holes in guest address space 32bit / 64bit for MMIO virtual BARs
> d) xenstore device's BAR info addition.
> 
> 
> 1. Background of PCI passthrough
> --------------------------------
> Passthrough refers to assigning a pci device to a guest domain (domU) such that
> the guest has full control over the device.The MMIO space and interrupts are
> managed by the guest itself, close to how a bare kernel manages a device.
> 
> Device's access to guest address space needs to be isolated and protected. SMMU
> (System MMU - IOMMU in ARM) is programmed by xen hypervisor to allow device
> access guest memory for data transfer and sending MSI/X interrupts. In case of
> MSI/X  the device writes to GITS (ITS address space) Interrupt Translation
> Register.
> 
> 2. Basic PCI Support for ARM
> ----------------------------
> The apis to read write from pci configuration space are based on segment:bdf.
> How the sbdf is mapped to a physical address is under the realm of the pci
> host controller.
> 
> ARM PCI support in Xen, introduces pci host controller similar to what exists
> in Linux. Each drivers registers callbacks, which are invoked on matching the
> compatible property in pci device tree node.
> 
> 2.1:
> The init function in the pci host driver calls to register hostbridge callbacks:
> int pci_hostbridge_register(pci_hostbridge_t *pcihb);
> 
> struct pci_hostbridge_ops {
>      u32 (*pci_conf_read)(struct pci_hostbridge*, u32 bus, u32 devfn,
>                                  u32 reg, u32 bytes);
>      void (*pci_conf_write)(struct pci_hostbridge*, u32 bus, u32 devfn,
>                                  u32 reg, u32 bytes, u32 val);
> };
> 
> struct pci_hostbridge{
>      u32 segno;
>      paddr_t cfg_base;
>      paddr_t cfg_size;
>      struct dt_device_node *dt_node;
>      struct pci_hostbridge_ops ops;
>      struct list_head list;
> };
> 
> A pci conf read function would internally be as follows:
> u32 pcihb_conf_read(u32 seg, u32 bus, u32 devfn,u32 reg, u32 bytes)
> {
>      pci_hostbridge_t *pcihb;
>      list_for_each_entry(pcihb, &pci_hostbridge_list, list)
>      {
>          if(pcihb->segno == seg)
>              return pcihb->ops.pci_conf_read(pcihb, bus, devfn, reg, bytes);
>      }
>      return -1;
> }
> 
> 2.2 PHYSDEVOP_pci_host_bridge_add hypercall
> 
> Xen code accesses PCI configuration space based on the sbdf received from the
> guest. The order in which the pci device tree node appear may not be the same
> order of device enumeration in dom0. Thus there needs to be a mechanism to bind
> the segment number assigned by dom0 to the pci host controller. The hypercall
> is introduced:
> 
> #define PHYSDEVOP_pci_host_bridge_add    44
> struct physdev_pci_host_bridge_add {
>      /* IN */
>      uint16_t seg;
>      uint64_t cfg_base;
>      uint64_t cfg_size;
> };
> 
> This hypercall is invoked before dom0 invokes the PHYSDEVOP_pci_device_add
> hypercall. The handler code invokes to update segment number in pci_hostbridge:
> 
> int pci_hostbridge_setup(uint32_t segno, uint64_t cfg_base, uint64_t cfg_size);
> 
> Subsequent calls to pci_conf_read/write are completed by the pci_hostbridge_ops
> of the respective pci_hostbridge.
> 
> 3. Dom0 access PCI device
> ---------------------------------
> As per the design of xen hypervisor, dom0 enumerates the PCI devices. For each
> device the MMIO space has to be mapped in the Stage2 translation for dom0.

Here "device" is really host bridge, isn't it? i.e. this is done by
mapping the entire MMIO window of each host bridge, not the individual
BAR registers of each device one at a time.

IOW this is functionality of the pci host driver's initial setup, not
something which is driven from the dom0 enumeration of the bus.

>  For
> dom0 xen maps the ranges in pci nodes in stage 2 translation.
> 
> GITS_ITRANSLATER space (4k( must be programmed in Stage2 translation so that MSI/X
> must work. This is done in vits initialization in dom0/domU.

This also happens at start of day, but what isn't mentioned is that
(AIUI) the SMMU will need to be programmed to map each SBDF to the dom0
p2m as the devices are discovered and reported. Right?

> 
> 4. DomU access / assignment PCI device
> --------------------------------------
> In the flow of pci-attach device, the toolkit

I assume you mean "toolstack" throughout? If so then please run
s/toolkit/toolstack/g so as to use the usual terminology.

>  will read the pci configuration
> space BAR registers. The toolkit has the guest memory map and the information
> of the MMIO holes.
> 
> When the first pci device is assigned to domU, toolkit allocates a virtual
> BAR region from the MMIO hole area. toolkit then sends domctl xc_domain_memory_mapping
> to map in stage2 translation.
> 
> 4.1 Holes in guest memory space
> ----------------------------
> Holes are added in the guest memory space for mapping pci device's BAR regions.
> These are defined in arch-arm.h
> 
> /* For 32bit */
> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>   
> /* For 64bit */
> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
> 
> 4.2 New entries in xenstore for device BARs
> --------------------------------------------
> toolkit also updates the xenstore information for the device (virtualbar:physical bar).
> This information is read by xenpciback and returned to the pcifront driver configuration
> space accesses.
> 
> 4.3 Hypercall for bdf mapping notification to xen
                   ^v (I think) or maybe vs?

> -----------------------------------------------
> #define PHYSDEVOP_map_sbdf              43
> typedef struct {
>      u32 s;
>      u8 b;
>      u8 df;
>      u16 res;
> } sbdf_t;
> struct physdev_map_sbdf {
>      int domain_id;
>      sbdf_t    sbdf;
>      sbdf_t    gsbdf;
> };
> 
> Each domain has a pdev list, which contains the list of all pci devices. The
> pdev structure already has a sbdf information. The arch_pci_dev is updated to
> contain the gsbdf information. (gs- guest segment id)
> 
> Whenever there is trap from guest or an interrupt has to be injected, the pdev
> list is iterated to find the gsbdf.
> 
> 4.4 Change in Linux PCI ForntEnd - backend driver for MSI/X programming

Typo "Forntend"

> -------------------------------------------------------------
> On the Pci frontend bus a msi-parent as gicv3-its is added.

Are you talking about a device tree property or something else?

Note that pcifront is not described in the DT, only in the xenstore
structure. So a dt property is unlikely to be the right way to describe
this.

We need to think of some way of specifying this such that we don't tie
ourselves into a single vits ABI.

>  As there is a single
> virtual its for a domU, as there is only a single virtual pci bus in domU. This
> ensures that the config_msi calls are handled by the gicv3 its driver in domU
> kernel and not utilizing frontend-backend communication between dom0-domU.
> 
> 5. NUMA domU and vITS
> -----------------------------
> a) On NUMA systems domU still have a single its node.
> b) How can xen identify the ITS on which a device is connected.
> - Using segment number query using api which gives pci host controllers
> device node
> 
> struct dt_device_node* pci_hostbridge_dt_node(uint32_t segno)
> 
> c) Query the interrupt parent of the pci device node to find out the its.
> 
> 6. DomU Bootup flow
> ---------------------
> a. DomU boots up without any pci devices assigned.

I don't think we can/should rule out cold plug at this stage. IOW it
must be possible to boot a domU with PCI devices already assigned.

>  A daemon listens to events
> from the xenstore. When a device is attached to domU, the frontend pci bus driver
> starts enumerating the devices.Front end driver communicates with backend driver
> in dom0 to read the pci config space.

I'm afraid I don't follow any of this. What "daemon"? Is it in the front
or backend? What does it do with the events it is listening for?

> 
> b. backend driver returns the virtual BAR ranges which are already mapped in domU
> stage 2 translation.
> 
> c. Device driver of the specific pci device invokes methods to configure the
> msi/x interrupt which are handled by the its driver in domU kernel. The read/writes
> by the its driver are trapped in xen. ITS driver finds out the actual sbdf based
> on the map_sbdf hypercall information.


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-06-29 10:31 ` Julien Grall
  2015-06-29 10:50   ` Ian Campbell
@ 2015-07-05  5:55   ` Manish Jaggi
  2015-07-06  6:13     ` Manish Jaggi
                       ` (3 more replies)
  1 sibling, 4 replies; 51+ messages in thread
From: Manish Jaggi @ 2015-07-05  5:55 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya



On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
> Hi Manish,
>
> On 28/06/15 19:38, Manish Jaggi wrote:
>> 4.1 Holes in guest memory space
>> ----------------------------
>> Holes are added in the guest memory space for mapping pci device's BAR
>> regions.
>> These are defined in arch-arm.h
>>
>> /* For 32bit */
>> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>>   
>> /* For 64bit */
>> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
> The memory layout for 32bit and 64bit are exactly the same. Why do you
> need to differ here?
I think Ian has already replied. I will change the name of macro
>> 4.2 New entries in xenstore for device BARs
>> --------------------------------------------
>> toolkit also updates the xenstore information for the device
>> (virtualbar:physical bar).
>> This information is read by xenpciback and returned to the pcifront
>> driver configuration
>> space accesses.
> Can you details what do you plan to put in xenstore and how?
It is an implementation detail, but I plan to put it under the
domU / device / hierarchy.
> What about the expansion ROM?
Do you want to put some restriction on not using the expansion ROM of a
passthrough device?
>
>> 4.3 Hypercall for bdf mapping notification to xen
>> -----------------------------------------------
>> #define PHYSDEVOP_map_sbdf              43
>> typedef struct {
>>      u32 s;
>>      u8 b;
>>      u8 df;
>>      u16 res;
>> } sbdf_t;
>> struct physdev_map_sbdf {
>>      int domain_id;
>>      sbdf_t    sbdf;
>>      sbdf_t    gsbdf;
>> };
>>
>> Each domain has a pdev list, which contains the list of all pci devices.
>> The
>> pdev structure already has a sbdf information. The arch_pci_dev is
>> updated to
>> contain the gsbdf information. (gs- guest segment id)
>>
>> Whenever there is trap from guest or an interrupt has to be injected,
>> the pdev
>> list is iterated to find the gsbdf.
> Can you give more background for this section? i.e:
> 	- Why do you need this?
> 	- How xen will translate the gbdf to a vDeviceID?
In the context of the hypercall processing.
> 	- Who will call this hypercall?
> 	- Why not setting the gsbdf when the device is assigned?
Can the maintainer of the pciback suggest an alternate.
The answer to your question is that the only place I have found where all the
information needed to issue the hypercall is available is the function
__xen_pcibk_add_pci_dev.


drivers/xen/xen-pciback/vpci.c
----
unlock:
...
                 kfree(dev_entry);

+       /*Issue Hypercall here */
+#ifdef CONFIG_ARM64
+       map_sbdf.domain_id = pdev->xdev->otherend_id;
+       map_sbdf.sbdf_s = dev->bus->domain_nr;
+       map_sbdf.sbdf_b = dev->bus->number;
+       map_sbdf.sbdf_d = dev->devfn>>3;
+       map_sbdf.sbdf_f = dev->devfn & 0x7;
+       map_sbdf.gsbdf_s = 0;
+       map_sbdf.gsbdf_b = 0;
+       map_sbdf.gsbdf_d = slot;
+       map_sbdf.gsbdf_f = dev->devfn & 0x7;
+       pr_info("## sbdf = %d:%d:%d.%d g_sbdf %d:%d:%d.%d \
+                       domain_id=%d ##\r\n",
+                       map_sbdf.sbdf_s,
+                       map_sbdf.sbdf_b,
+                       map_sbdf.sbdf_d,
+                       map_sbdf.sbdf_f,
+                       map_sbdf.gsbdf_s,
+                       map_sbdf.gsbdf_b,
+                       map_sbdf.gsbdf_d,
+                       map_sbdf.gsbdf_f,
+                       map_sbdf.domain_id);
+
+       err = HYPERVISOR_physdev_op(PHYSDEVOP_map_sbdf, &map_sbdf);
+       if (err)
+               printk(KERN_ERR " Xen Error PHYSDEVOP_map_sbdf");
+#endif
---

> Regards,
>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-05  5:55   ` Manish Jaggi
@ 2015-07-06  6:13     ` Manish Jaggi
  2015-07-06  9:11     ` Ian Campbell
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 51+ messages in thread
From: Manish Jaggi @ 2015-07-06  6:13 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya



On Sunday 05 July 2015 11:25 AM, Manish Jaggi wrote:
>
>
> On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
>> Hi Manish,
>>
>> On 28/06/15 19:38, Manish Jaggi wrote:
>>> 4.1 Holes in guest memory space
>>> ----------------------------
>>> Holes are added in the guest memory space for mapping pci device's BAR
>>> regions.
>>> These are defined in arch-arm.h
>>>
>>> /* For 32bit */
>>> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>>>   /* For 64bit */
>>> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
>> The memory layout for 32bit and 64bit are exactly the same. Why do you
>> need to differ here?
> I think Ian has already replied. I will change the name of macro
>>> 4.2 New entries in xenstore for device BARs
>>> --------------------------------------------
>>> toolkit also updates the xenstore information for the device
>>> (virtualbar:physical bar).
>>> This information is read by xenpciback and returned to the pcifront
>>> driver configuration
>>> space accesses.
>> Can you details what do you plan to put in xenstore and how?
> It is implementation . But I plan to put under domU / device / heirarchy
>> What about the expansion ROM?
> Do you want to put some restriction on not using expansion ROM as a 
> passthrough device.
>>
>>> 4.3 Hypercall for bdf mapping notification to xen
>>> -----------------------------------------------
>>> #define PHYSDEVOP_map_sbdf              43
>>> typedef struct {
>>>      u32 s;
>>>      u8 b;
>>>      u8 df;
>>>      u16 res;
>>> } sbdf_t;
>>> struct physdev_map_sbdf {
>>>      int domain_id;
>>>      sbdf_t    sbdf;
>>>      sbdf_t    gsbdf;
>>> };
>>>
>>> Each domain has a pdev list, which contains the list of all pci 
>>> devices.
>>> The
>>> pdev structure already has a sbdf information. The arch_pci_dev is
>>> updated to
>>> contain the gsbdf information. (gs- guest segment id)
>>>
>>> Whenever there is trap from guest or an interrupt has to be injected,
>>> the pdev
>>> list is iterated to find the gsbdf.
>> Can you give more background for this section? i.e:
>>     - Why do you need this?
>>     - How xen will translate the gbdf to a vDeviceID?
> In the context of the hypercall processing.
The hypercall handler in Xen would call its_assign_device(sbdf, gsbdf,
domid);
>>     - Who will call this hypercall?
>>     - Why not setting the gsbdf when the device is assigned?
> Can the maintainer of the pciback suggest an alternate.
> The answer to your question is that I have only found a place to issue 
> the hypercall where
> all the information can be located is the function
> __xen_pcibk_add_pci_dev
>
>
> drivers/xen/xen-pciback/vpci.c
> ----
> unlock:
> ...
>                 kfree(dev_entry);
>
> +       /*Issue Hypercall here */
> +#ifdef CONFIG_ARM64
> +       map_sbdf.domain_id = pdev->xdev->otherend_id;
> +       map_sbdf.sbdf_s = dev->bus->domain_nr;
> +       map_sbdf.sbdf_b = dev->bus->number;
> +       map_sbdf.sbdf_d = dev->devfn>>3;
> +       map_sbdf.sbdf_f = dev->devfn & 0x7;
> +       map_sbdf.gsbdf_s = 0;
> +       map_sbdf.gsbdf_b = 0;
> +       map_sbdf.gsbdf_d = slot;
> +       map_sbdf.gsbdf_f = dev->devfn & 0x7;
> +       pr_info("## sbdf = %d:%d:%d.%d g_sbdf %d:%d:%d.%d \
> +                       domain_id=%d ##\r\n",
> +                       map_sbdf.sbdf_s,
> +                       map_sbdf.sbdf_b,
> +                       map_sbdf.sbdf_d,
> +                       map_sbdf.sbdf_f,
> +                       map_sbdf.gsbdf_s,
> +                       map_sbdf.gsbdf_b,
> +                       map_sbdf.gsbdf_d,
> +                       map_sbdf.gsbdf_f,
> +                       map_sbdf.domain_id);
> +
> +       err = HYPERVISOR_physdev_op(PHYSDEVOP_map_sbdf, &map_sbdf);
> +       if (err)
> +               printk(KERN_ERR " Xen Error PHYSDEVOP_map_sbdf");
> +#endif
> ---
>
>> Regards,
>>
>
>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-05  5:55   ` Manish Jaggi
  2015-07-06  6:13     ` Manish Jaggi
@ 2015-07-06  9:11     ` Ian Campbell
  2015-07-06 10:06       ` Manish Jaggi
  2015-07-06 10:43     ` Julien Grall
  2015-07-07 15:27     ` Konrad Rzeszutek Wilk
  3 siblings, 1 reply; 51+ messages in thread
From: Ian Campbell @ 2015-07-06  9:11 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
> 
> On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
> > Hi Manish,
> >
> > On 28/06/15 19:38, Manish Jaggi wrote:
> >> 4.1 Holes in guest memory space
> >> ----------------------------
> >> Holes are added in the guest memory space for mapping pci device's BAR
> >> regions.
> >> These are defined in arch-arm.h
> >>
> >> /* For 32bit */
> >> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
> >>   
> >> /* For 64bit */
> >> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
> > The memory layout for 32bit and 64bit is exactly the same. Why do you
> > need to differ here?
> I think Ian has already replied. I will change the name of the macro.
> >> 4.2 New entries in xenstore for device BARs
> >> --------------------------------------------
> >> toolkit also updates the xenstore information for the device
> >> (virtualbar:physical bar).
> >> This information is read by xenpciback and returned to the pcifront
> >> driver configuration
> >> space accesses.
> > Can you detail what you plan to put in xenstore, and how?
> It is an implementation detail, but I plan to put it under the domU / device / hierarchy.

Actually, xenstore is an API of sorts which needs to be maintained going
forward (since front and backend can evolve separately), so it does need
some level of design and documentation.

> > What about the expansion ROM?
> Do you want to put some restriction on not using expansion ROM as a 
> passthrough device.

"expansion ROM as a passthrough device" doesn't make sense to me,
passthrough devices may _have_ an expansion ROM.

The expansion ROM is just another BAR. I don't know how pcifront/back
deal with those today on PV x86, but I see no reason for ARM to deviate.


> >
> >> 4.3 Hypercall for bdf mapping notification to xen
> >> -----------------------------------------------
> >> #define PHYSDEVOP_map_sbdf              43
> >> typedef struct {
> >>      u32 s;
> >>      u8 b;
> >>      u8 df;
> >>      u16 res;
> >> } sbdf_t;
> >> struct physdev_map_sbdf {
> >>      int domain_id;
> >>      sbdf_t    sbdf;
> >>      sbdf_t    gsbdf;
> >> };
> >>
> >> Each domain has a pdev list, which contains the list of all pci devices.
> >> The
> >> pdev structure already has a sbdf information. The arch_pci_dev is
> >> updated to
> >> contain the gsbdf information. (gs- guest segment id)
> >>
> >> Whenever there is trap from guest or an interrupt has to be injected,
> >> the pdev
> >> list is iterated to find the gsbdf.
> > Can you give more background for this section? i.e:
> > 	- Why do you need this?
> > 	- How xen will translate the gbdf to a vDeviceID?
> In the context of the hypercall processing.
> > 	- Who will call this hypercall?
> > 	- Why not setting the gsbdf when the device is assigned?
> Can the maintainer of pciback suggest an alternative?

That's not me, but I don't think this belongs here, I think it can be
done from the toolstack. If you think not then please explain what
information the toolstack doesn't have in its possession which prevents
this mapping from being done there.

> The answer to your question is that the only place I have found where
> all the information needed to issue the hypercall is available is the
> function __xen_pcibk_add_pci_dev
> 
> 
> drivers/xen/xen-pciback/vpci.c
> ----
> unlock:
> ...
>                  kfree(dev_entry);
> 
> +       /*Issue Hypercall here */
> +#ifdef CONFIG_ARM64
> +       map_sbdf.domain_id = pdev->xdev->otherend_id;
> +       map_sbdf.sbdf_s = dev->bus->domain_nr;
> +       map_sbdf.sbdf_b = dev->bus->number;
> +       map_sbdf.sbdf_d = dev->devfn>>3;
> +       map_sbdf.sbdf_f = dev->devfn & 0x7;
> +       map_sbdf.gsbdf_s = 0;
> +       map_sbdf.gsbdf_b = 0;
> +       map_sbdf.gsbdf_d = slot;
> +       map_sbdf.gsbdf_f = dev->devfn & 0x7;
> +       pr_info("## sbdf = %d:%d:%d.%d g_sbdf %d:%d:%d.%d \
> +                       domain_id=%d ##\r\n",
> +                       map_sbdf.sbdf_s,
> +                       map_sbdf.sbdf_b,
> +                       map_sbdf.sbdf_d,
> +                       map_sbdf.sbdf_f,
> +                       map_sbdf.gsbdf_s,
> +                       map_sbdf.gsbdf_b,
> +                       map_sbdf.gsbdf_d,
> +                       map_sbdf.gsbdf_f,
> +                       map_sbdf.domain_id);
> +
> +       err = HYPERVISOR_physdev_op(PHYSDEVOP_map_sbdf, &map_sbdf);
> +       if (err)
> +               printk(KERN_ERR " Xen Error PHYSDEVOP_map_sbdf");
> +#endif
> ---
> 
> > Regards,
> >
> 


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-06  9:11     ` Ian Campbell
@ 2015-07-06 10:06       ` Manish Jaggi
  2015-07-06 10:20         ` Ian Campbell
  0 siblings, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-06 10:06 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao



On Monday 06 July 2015 02:41 PM, Ian Campbell wrote:
> On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
>> On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
>>> Hi Manish,
>>>
>>> On 28/06/15 19:38, Manish Jaggi wrote:
>>>> 4.1 Holes in guest memory space
>>>> ----------------------------
>>>> Holes are added in the guest memory space for mapping pci device's BAR
>>>> regions.
>>>> These are defined in arch-arm.h
>>>>
>>>> /* For 32bit */
>>>> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>>>>    
>>>> /* For 64bit */
>>>> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
>>> The memory layout for 32bit and 64bit are exactly the same. Why do you
>>> need to differ here?
>> I think Ian has already replied. I will change the name of macro
>>>> 4.2 New entries in xenstore for device BARs
>>>> --------------------------------------------
>>>> toolkit also updates the xenstore information for the device
>>>> (virtualbar:physical bar).
>>>> This information is read by xenpciback and returned to the pcifront
>>>> driver configuration
>>>> space accesses.
>>> Can you detail what you plan to put in xenstore, and how?
>> It is an implementation detail, but I plan to put it under the domU / device / hierarchy.
> Actually, xenstore is an API of sorts which needs to be maintained going
> forward (since front and backend can evolve separately), so it does need
> some level of design and documentation.
>
>>> What about the expansion ROM?
>> Do you want to put some restriction on not using expansion ROM as a
>> passthrough device.
> "expansion ROM as a passthrough device" doesn't make sense to me,
> passthrough devices may _have_ an expansion ROM.
>
> The expansion ROM is just another BAR. I don't know how pcifront/back
> deal with those today on PV x86, but I see no reason for ARM to deviate.
>
>
>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>> -----------------------------------------------
>>>> #define PHYSDEVOP_map_sbdf              43
>>>> typedef struct {
>>>>       u32 s;
>>>>       u8 b;
>>>>       u8 df;
>>>>       u16 res;
>>>> } sbdf_t;
>>>> struct physdev_map_sbdf {
>>>>       int domain_id;
>>>>       sbdf_t    sbdf;
>>>>       sbdf_t    gsbdf;
>>>> };
>>>>
>>>> Each domain has a pdev list, which contains the list of all pci devices.
>>>> The
>>>> pdev structure already has a sbdf information. The arch_pci_dev is
>>>> updated to
>>>> contain the gsbdf information. (gs- guest segment id)
>>>>
>>>> Whenever there is trap from guest or an interrupt has to be injected,
>>>> the pdev
>>>> list is iterated to find the gsbdf.
>>> Can you give more background for this section? i.e:
>>> 	- Why do you need this?
>>> 	- How xen will translate the gbdf to a vDeviceID?
>> In the context of the hypercall processing.
>>> 	- Who will call this hypercall?
>>> 	- Why not setting the gsbdf when the device is assigned?
>> Can the maintainer of the pciback suggest an alternate.
> That's not me, but I don't think this belongs here, I think it can be
> done from the toolstack. If you think not then please explain what
> information the toolstack doesn't have in its possession which prevents
> this mapping from being done there.
The toolstack does not have the guest sbdf information. I could only 
find it in xenpciback.

>> The answer to your question is that the only place I have found where
>> all the information needed to issue the hypercall is available is the
>> function __xen_pcibk_add_pci_dev
>>
>>
>> drivers/xen/xen-pciback/vpci.c
>> ----
>> unlock:
>> ...
>>                   kfree(dev_entry);
>>
>> +       /*Issue Hypercall here */
>> +#ifdef CONFIG_ARM64
>> +       map_sbdf.domain_id = pdev->xdev->otherend_id;
>> +       map_sbdf.sbdf_s = dev->bus->domain_nr;
>> +       map_sbdf.sbdf_b = dev->bus->number;
>> +       map_sbdf.sbdf_d = dev->devfn>>3;
>> +       map_sbdf.sbdf_f = dev->devfn & 0x7;
>> +       map_sbdf.gsbdf_s = 0;
>> +       map_sbdf.gsbdf_b = 0;
>> +       map_sbdf.gsbdf_d = slot;
>> +       map_sbdf.gsbdf_f = dev->devfn & 0x7;
>> +       pr_info("## sbdf = %d:%d:%d.%d g_sbdf %d:%d:%d.%d \
>> +                       domain_id=%d ##\r\n",
>> +                       map_sbdf.sbdf_s,
>> +                       map_sbdf.sbdf_b,
>> +                       map_sbdf.sbdf_d,
>> +                       map_sbdf.sbdf_f,
>> +                       map_sbdf.gsbdf_s,
>> +                       map_sbdf.gsbdf_b,
>> +                       map_sbdf.gsbdf_d,
>> +                       map_sbdf.gsbdf_f,
>> +                       map_sbdf.domain_id);
>> +
>> +       err = HYPERVISOR_physdev_op(PHYSDEVOP_map_sbdf, &map_sbdf);
>> +       if (err)
>> +               printk(KERN_ERR " Xen Error PHYSDEVOP_map_sbdf");
>> +#endif
>> ---
>>
>>> Regards,
>>>
>
>

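The patch quoted in the message above splits dev->devfn with >> 3 and & 0x7. That follows the standard PCI encoding of devfn, with the device (slot) number in bits 7:3 and the function number in bits 2:0; a minimal round-trip sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Standard PCI devfn packing: device in bits 7:3, function in bits 2:0,
 * matching the quoted pciback patch (devfn >> 3 and devfn & 0x7). */
static inline uint8_t pci_devfn(uint8_t slot, uint8_t func)
{
    return (uint8_t)((slot << 3) | (func & 0x7));
}

static inline uint8_t pci_slot(uint8_t devfn) { return devfn >> 3; }
static inline uint8_t pci_func(uint8_t devfn) { return devfn & 0x7; }
```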

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-06 10:06       ` Manish Jaggi
@ 2015-07-06 10:20         ` Ian Campbell
  2015-07-29  9:37           ` Manish Jaggi
  0 siblings, 1 reply; 51+ messages in thread
From: Ian Campbell @ 2015-07-06 10:20 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Mon, 2015-07-06 at 15:36 +0530, Manish Jaggi wrote:
> 
> On Monday 06 July 2015 02:41 PM, Ian Campbell wrote:
> > On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
> >> On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
> >>> Hi Manish,
> >>>
> >>> On 28/06/15 19:38, Manish Jaggi wrote:
> >>>> 4.1 Holes in guest memory space
> >>>> ----------------------------
> >>>> Holes are added in the guest memory space for mapping pci device's BAR
> >>>> regions.
> >>>> These are defined in arch-arm.h
> >>>>
> >>>> /* For 32bit */
> >>>> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
> >>>>    
> >>>> /* For 64bit */
> >>>> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
> >>> The memory layout for 32bit and 64bit are exactly the same. Why do you
> >>> need to differ here?
> >> I think Ian has already replied. I will change the name of macro
> >>>> 4.2 New entries in xenstore for device BARs
> >>>> --------------------------------------------
> >>>> toolkit also updates the xenstore information for the device
> >>>> (virtualbar:physical bar).
> >>>> This information is read by xenpciback and returned to the pcifront
> >>>> driver configuration
> >>>> space accesses.
> >>> Can you detail what you plan to put in xenstore, and how?
> >> It is an implementation detail, but I plan to put it under the domU / device / hierarchy.
> > Actually, xenstore is an API of sorts which needs to be maintained going
> > forward (since front and backend can evolve separately), so it does need
> > some level of design and documentation.
> >
> >>> What about the expansion ROM?
> >> Do you want to put some restriction on not using expansion ROM as a
> >> passthrough device.
> > "expansion ROM as a passthrough device" doesn't make sense to me,
> > passthrough devices may _have_ an expansion ROM.
> >
> > The expansion ROM is just another BAR. I don't know how pcifront/back
> > deal with those today on PV x86, but I see no reason for ARM to deviate.
> >
> >
> >>>> 4.3 Hypercall for bdf mapping notification to xen
> >>>> -----------------------------------------------
> >>>> #define PHYSDEVOP_map_sbdf              43
> >>>> typedef struct {
> >>>>       u32 s;
> >>>>       u8 b;
> >>>>       u8 df;
> >>>>       u16 res;
> >>>> } sbdf_t;
> >>>> struct physdev_map_sbdf {
> >>>>       int domain_id;
> >>>>       sbdf_t    sbdf;
> >>>>       sbdf_t    gsbdf;
> >>>> };
> >>>>
> >>>> Each domain has a pdev list, which contains the list of all pci devices.
> >>>> The
> >>>> pdev structure already has a sbdf information. The arch_pci_dev is
> >>>> updated to
> >>>> contain the gsbdf information. (gs- guest segment id)
> >>>>
> >>>> Whenever there is trap from guest or an interrupt has to be injected,
> >>>> the pdev
> >>>> list is iterated to find the gsbdf.
> >>> Can you give more background for this section? i.e:
> >>> 	- Why do you need this?
> >>> 	- How xen will translate the gbdf to a vDeviceID?
> >> In the context of the hypercall processing.
> >>> 	- Who will call this hypercall?
> >>> 	- Why not setting the gsbdf when the device is assigned?
> >> Can the maintainer of the pciback suggest an alternate.
> > That's not me, but I don't think this belongs here, I think it can be
> > done from the toolstack. If you think not then please explain what
> > information the toolstack doesn't have in its possession which prevents
> > this mapping from being done there.
> The toolstack does not have the guest sbdf information. I could only 
> find it in xenpciback.

Are you sure? The sbdf relates to the physical device, correct? If so
then surely the toolstack knows it -- it's written in the config file
and is the primary parameter to all of the related libxl passthrough
APIs. The toolstack wouldn't be able to do anything about passing
through a given device without knowing which device it should be passing
through.

Perhaps this info needs plumbing through to some new bit of the
toolstack, but it is surely available somewhere.

If you meant the virtual SBDF then that is in libxl_device_pci.vdevfn.

Ian.

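Ian's point above is that the toolstack already holds both sides of the mapping: the physical SBDF from the guest config and the virtual one in libxl_device_pci.vdevfn. As a sketch, a toolstack-side helper could assemble the map_sbdf arguments from that data alone; the struct names here are illustrative stand-ins, not real libxl types, and the guest segment/bus values of 0 mirror the quoted patch rather than any spec.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative mirror of the fields Ian refers to: the physical device
 * address from the guest config plus the virtual devfn. Not a real
 * libxl type. */
struct pci_assignment {
    uint16_t domain;   /* physical PCI segment */
    uint8_t  bus, dev, func;
    uint32_t vdevfn;   /* virtual device/function on the guest bus */
};

struct map_sbdf_args {
    uint32_t s;  uint8_t b, df;   /* physical sbdf */
    uint32_t gs; uint8_t gb, gdf; /* guest sbdf */
};

/* Build the hypercall arguments entirely from toolstack-visible data,
 * showing that no pciback-only information is strictly required. */
static struct map_sbdf_args build_map_sbdf(const struct pci_assignment *pa)
{
    struct map_sbdf_args a = {
        .s   = pa->domain,
        .b   = pa->bus,
        .df  = (uint8_t)((pa->dev << 3) | (pa->func & 0x7)),
        .gs  = 0,                       /* guest segment 0, as in the patch */
        .gb  = 0,                       /* guest bus 0 (flat vpci bus) */
        .gdf = (uint8_t)(pa->vdevfn & 0xff),
    };
    return a;
}
```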

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-05  5:55   ` Manish Jaggi
  2015-07-06  6:13     ` Manish Jaggi
  2015-07-06  9:11     ` Ian Campbell
@ 2015-07-06 10:43     ` Julien Grall
  2015-07-06 11:09       ` Manish Jaggi
  2015-07-07 15:27     ` Konrad Rzeszutek Wilk
  3 siblings, 1 reply; 51+ messages in thread
From: Julien Grall @ 2015-07-06 10:43 UTC (permalink / raw)
  To: Manish Jaggi, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya

On 05/07/15 06:55, Manish Jaggi wrote:
>>> 4.3 Hypercall for bdf mapping notification to xen
>>> -----------------------------------------------
>>> #define PHYSDEVOP_map_sbdf              43
>>> typedef struct {
>>>      u32 s;
>>>      u8 b;
>>>      u8 df;
>>>      u16 res;
>>> } sbdf_t;
>>> struct physdev_map_sbdf {
>>>      int domain_id;
>>>      sbdf_t    sbdf;
>>>      sbdf_t    gsbdf;
>>> };
>>>
>>> Each domain has a pdev list, which contains the list of all pci devices.
>>> The
>>> pdev structure already has a sbdf information. The arch_pci_dev is
>>> updated to
>>> contain the gsbdf information. (gs- guest segment id)
>>>
>>> Whenever there is trap from guest or an interrupt has to be injected,
>>> the pdev
>>> list is iterated to find the gsbdf.
>> Can you give more background for this section? i.e:
>>     - Why do you need this?
>>     - How xen will translate the gbdf to a vDeviceID?
> In the context of the hypercall processing.

That wasn't my question. I asked how Xen will find the mapping between
the gsbdf and the vDeviceID. It doesn't have access to the firmware
table and is therefore not able to find the right one.

Regards,

-- 
Julien Grall


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-06 10:43     ` Julien Grall
@ 2015-07-06 11:09       ` Manish Jaggi
  2015-07-06 11:45         ` Julien Grall
  0 siblings, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-06 11:09 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya



On Monday 06 July 2015 04:13 PM, Julien Grall wrote:
> On 05/07/15 06:55, Manish Jaggi wrote:
>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>> -----------------------------------------------
>>>> #define PHYSDEVOP_map_sbdf              43
>>>> typedef struct {
>>>>       u32 s;
>>>>       u8 b;
>>>>       u8 df;
>>>>       u16 res;
>>>> } sbdf_t;
>>>> struct physdev_map_sbdf {
>>>>       int domain_id;
>>>>       sbdf_t    sbdf;
>>>>       sbdf_t    gsbdf;
>>>> };
>>>>
>>>> Each domain has a pdev list, which contains the list of all pci devices.
>>>> The
>>>> pdev structure already has a sbdf information. The arch_pci_dev is
>>>> updated to
>>>> contain the gsbdf information. (gs- guest segment id)
>>>>
>>>> Whenever there is trap from guest or an interrupt has to be injected,
>>>> the pdev
>>>> list is iterated to find the gsbdf.
>>> Can you give more background for this section? i.e:
>>>      - Why do you need this?
>>>      - How xen will translate the gbdf to a vDeviceID?
>> In the context of the hypercall processing.
> That wasn't my question. I asked how Xen will find the mapping between
> the gsbdf and the vDeviceID. It doesn't have access to the firmware
> table and is therefore not able to find the right one.
I believe gsbdf and vDeviceID would be the same. In the hypercall
processing, its_assign_device would be called with params
its_assign_device(sbdf, gsbdf, domid)

>
> Regards,
>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-06 11:09       ` Manish Jaggi
@ 2015-07-06 11:45         ` Julien Grall
  2015-07-07  7:10           ` Manish Jaggi
  0 siblings, 1 reply; 51+ messages in thread
From: Julien Grall @ 2015-07-06 11:45 UTC (permalink / raw)
  To: Manish Jaggi, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya

On 06/07/15 12:09, Manish Jaggi wrote:
> 
> 
> On Monday 06 July 2015 04:13 PM, Julien Grall wrote:
>> On 05/07/15 06:55, Manish Jaggi wrote:
>>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>>> -----------------------------------------------
>>>>> #define PHYSDEVOP_map_sbdf              43
>>>>> typedef struct {
>>>>>       u32 s;
>>>>>       u8 b;
>>>>>       u8 df;
>>>>>       u16 res;
>>>>> } sbdf_t;
>>>>> struct physdev_map_sbdf {
>>>>>       int domain_id;
>>>>>       sbdf_t    sbdf;
>>>>>       sbdf_t    gsbdf;
>>>>> };
>>>>>
>>>>> Each domain has a pdev list, which contains the list of all pci
>>>>> devices.
>>>>> The
>>>>> pdev structure already has a sbdf information. The arch_pci_dev is
>>>>> updated to
>>>>> contain the gsbdf information. (gs- guest segment id)
>>>>>
>>>>> Whenever there is trap from guest or an interrupt has to be injected,
>>>>> the pdev
>>>>> list is iterated to find the gsbdf.
>>>> Can you give more background for this section? i.e:
>>>>      - Why do you need this?
>>>>      - How xen will translate the gbdf to a vDeviceID?
>>> In the context of the hypercall processing.
>> That wasn't my question. I asked how Xen will find the mapping between
>> the gsbdf and the vDeviceID. It doesn't have access to the firmware
>> table and is therefore not able to find the right one.
> I believe gsbdf and vDeviceID would be the same.

Xen and the guest need to translate the gsbdf the same way. If this is
clearly defined by a spec, then you should give a link to it.

If not, you have to explain in this design doc how you plan to have xen
and the guest using the same vdevID for a given gsbdf.

Regards,

-- 
Julien Grall


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-06 11:45         ` Julien Grall
@ 2015-07-07  7:10           ` Manish Jaggi
  2015-07-07  8:18             ` Julien Grall
  0 siblings, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-07  7:10 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya



On Monday 06 July 2015 05:15 PM, Julien Grall wrote:
> On 06/07/15 12:09, Manish Jaggi wrote:
>>
>> On Monday 06 July 2015 04:13 PM, Julien Grall wrote:
>>> On 05/07/15 06:55, Manish Jaggi wrote:
>>>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>>>> -----------------------------------------------
>>>>>> #define PHYSDEVOP_map_sbdf              43
>>>>>> typedef struct {
>>>>>>        u32 s;
>>>>>>        u8 b;
>>>>>>        u8 df;
>>>>>>        u16 res;
>>>>>> } sbdf_t;
>>>>>> struct physdev_map_sbdf {
>>>>>>        int domain_id;
>>>>>>        sbdf_t    sbdf;
>>>>>>        sbdf_t    gsbdf;
>>>>>> };
>>>>>>
>>>>>> Each domain has a pdev list, which contains the list of all pci
>>>>>> devices.
>>>>>> The
>>>>>> pdev structure already has a sbdf information. The arch_pci_dev is
>>>>>> updated to
>>>>>> contain the gsbdf information. (gs- guest segment id)
>>>>>>
>>>>>> Whenever there is trap from guest or an interrupt has to be injected,
>>>>>> the pdev
>>>>>> list is iterated to find the gsbdf.
>>>>> Can you give more background for this section? i.e:
>>>>>       - Why do you need this?
>>>>>       - How xen will translate the gbdf to a vDeviceID?
>>>> In the context of the hypercall processing.
>>> That wasn't my question. I asked how Xen will find the mapping between
>>> the gsbdf and the vDeviceID. It doesn't have access to the firmware
>>> table and is therefore not able to find the right one.
>> I believe gsbdf and vDeviceID would be the same.
> Xen and the guest need to translate the gsbdf the same way. If this is
> clearly defined by a spec, then you should give a link to it.
They are the same; I will change sbdf -> DeviceID and gsbdf -> vDeviceID.
> If not, you have to explain in this design doc how you plan to have xen
> and the guest using the same vdevID for a given gsbdf.
>
> Regards,
>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-07  7:10           ` Manish Jaggi
@ 2015-07-07  8:18             ` Julien Grall
  2015-07-07  8:46               ` Manish Jaggi
  0 siblings, 1 reply; 51+ messages in thread
From: Julien Grall @ 2015-07-07  8:18 UTC (permalink / raw)
  To: Manish Jaggi, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya

Hi Manish,

On 07/07/2015 08:10, Manish Jaggi wrote:
> On Monday 06 July 2015 05:15 PM, Julien Grall wrote:
>> On 06/07/15 12:09, Manish Jaggi wrote:
>>>
>>> On Monday 06 July 2015 04:13 PM, Julien Grall wrote:
>>>> On 05/07/15 06:55, Manish Jaggi wrote:
>>>>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>>>>> -----------------------------------------------
>>>>>>> #define PHYSDEVOP_map_sbdf              43
>>>>>>> typedef struct {
>>>>>>>        u32 s;
>>>>>>>        u8 b;
>>>>>>>        u8 df;
>>>>>>>        u16 res;
>>>>>>> } sbdf_t;
>>>>>>> struct physdev_map_sbdf {
>>>>>>>        int domain_id;
>>>>>>>        sbdf_t    sbdf;
>>>>>>>        sbdf_t    gsbdf;
>>>>>>> };
>>>>>>>
>>>>>>> Each domain has a pdev list, which contains the list of all pci
>>>>>>> devices.
>>>>>>> The
>>>>>>> pdev structure already has a sbdf information. The arch_pci_dev is
>>>>>>> updated to
>>>>>>> contain the gsbdf information. (gs- guest segment id)
>>>>>>>
>>>>>>> Whenever there is trap from guest or an interrupt has to be
>>>>>>> injected,
>>>>>>> the pdev
>>>>>>> list is iterated to find the gsbdf.
>>>>>> Can you give more background for this section? i.e:
>>>>>>       - Why do you need this?
>>>>>>       - How xen will translate the gbdf to a vDeviceID?
>>>>> In the context of the hypercall processing.
>>>> That wasn't my question. I asked how Xen will find the mapping between
>>>> the gsbdf and the vDeviceID. It doesn't have access to the firmware
>>>> table and is therefore not able to find the right one.
>>> I believe gsbdf and vDeviceID would be the same.
>> Xen and the guest need to translate the gsbdf the same way. If this is
>> clearly defined by a spec, then you should give a link to it.
> They are the same; I will change sbdf -> DeviceID and gsbdf -> vDeviceID.

As I asked you in the previous mail, can you please prove it? The function
used to get the requester ID (pci_for_each_dma_alias) is more complex
than a simple "return sbdf".

Furthermore, AFAICT, the IORT Table (from ACPI) [1] is used to specify 
the relationships between the requester ID and the DeviceID. So it's not 
obvious that sbdf == DeviceID.

>> If not, you have to explain in this design doc how you plan to have xen
>> and the guest using the same vdevID for a given gsbdf.
>>
>> Regards,
>>
>

[1] 
http://infocenter.arm.com/help/topic/com.arm.doc.den0049a/DEN0049A_IO_Remapping_Table.pdf


-- 
Julien Grall

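Julien's objection above is that the requester-ID to DeviceID relationship is table-driven, not an identity. A simplified, self-contained model of an IORT-style ID mapping range (field names paraphrased from the IO Remapping Table document he links; this is not a parser for real IORT data) shows why sbdf == DeviceID cannot be assumed in general:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Simplified model of an IORT-style ID mapping entry: a contiguous range
 * of input IDs (e.g. PCI requester IDs) translated by a fixed offset to
 * output IDs (e.g. ITS DeviceIDs). */
struct id_mapping {
    uint32_t input_base;   /* first requester ID covered by this range */
    uint32_t num_ids;      /* size of the range */
    uint32_t output_base;  /* DeviceID corresponding to input_base */
};

/* Translate a requester ID to a DeviceID; returns 0 on success. */
static int iort_translate(const struct id_mapping *maps, size_t n,
                          uint32_t rid, uint32_t *devid)
{
    for (size_t i = 0; i < n; i++) {
        if (rid >= maps[i].input_base &&
            rid < maps[i].input_base + maps[i].num_ids) {
            *devid = maps[i].output_base + (rid - maps[i].input_base);
            return 0;
        }
    }
    return -1; /* no mapping found: DeviceID == rid cannot be assumed */
}
```

Only when every range has output_base == input_base does this collapse to the identity that the sbdf == DeviceID assumption relies on.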

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-07  8:18             ` Julien Grall
@ 2015-07-07  8:46               ` Manish Jaggi
  2015-07-07 10:54                 ` Manish Jaggi
  2015-07-07 11:24                 ` Ian Campbell
  0 siblings, 2 replies; 51+ messages in thread
From: Manish Jaggi @ 2015-07-07  8:46 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya



On Tuesday 07 July 2015 01:48 PM, Julien Grall wrote:
> Hi Manish,
>
> On 07/07/2015 08:10, Manish Jaggi wrote:
>> On Monday 06 July 2015 05:15 PM, Julien Grall wrote:
>>> On 06/07/15 12:09, Manish Jaggi wrote:
>>>>
>>>> On Monday 06 July 2015 04:13 PM, Julien Grall wrote:
>>>>> On 05/07/15 06:55, Manish Jaggi wrote:
>>>>>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>>>>>> -----------------------------------------------
>>>>>>>> #define PHYSDEVOP_map_sbdf              43
>>>>>>>> typedef struct {
>>>>>>>>        u32 s;
>>>>>>>>        u8 b;
>>>>>>>>        u8 df;
>>>>>>>>        u16 res;
>>>>>>>> } sbdf_t;
>>>>>>>> struct physdev_map_sbdf {
>>>>>>>>        int domain_id;
>>>>>>>>        sbdf_t    sbdf;
>>>>>>>>        sbdf_t    gsbdf;
>>>>>>>> };
>>>>>>>>
>>>>>>>> Each domain has a pdev list, which contains the list of all pci
>>>>>>>> devices.
>>>>>>>> The
>>>>>>>> pdev structure already has a sbdf information. The arch_pci_dev is
>>>>>>>> updated to
>>>>>>>> contain the gsbdf information. (gs- guest segment id)
>>>>>>>>
>>>>>>>> Whenever there is trap from guest or an interrupt has to be
>>>>>>>> injected,
>>>>>>>> the pdev
>>>>>>>> list is iterated to find the gsbdf.
>>>>>>> Can you give more background for this section? i.e:
>>>>>>>       - Why do you need this?
>>>>>>>       - How xen will translate the gbdf to a vDeviceID?
>>>>>> In the context of the hypercall processing.
>>>>> That wasn't my question. I asked how Xen will find the mapping
>>>>> between the gsbdf and the vDeviceID. It doesn't have access to the
>>>>> firmware table and is therefore not able to find the right one.
>>>> I believe gsbdf and vDeviceID would be the same.
>>> Xen and the guest need to translate the gsbdf the same way. If this is
>>> clearly defined by a spec, then you should give a link to it.
>> They are same, will change sbdf ->DeviceID and gsbdf->vDeviceID.
>
> As asked you in the previous mail, can you please prove it? The 
> function used to get the requester ID (pci_for_each_dma_alias) is more 
> complex than a simple return sbdf.
I am not sure what you would like me to prove.
In the ThunderX Xen code we have assumed sbdf == DeviceID. We are not
using ACPI for now. This is our implementation; it cannot be outright
wrong. Can you please suggest an alternative approach?

>
> Furthermore, AFAICT, the IORT Table (from ACPI) [1] is used to specify 
> the relationships between the requester ID and the DeviceID. So it's 
> not obvious that sbdf == DeviceID.
>
>>> If not, you have to explain in this design doc how you plan to have xen
>>> and the guest using the same vdevID for a given gsbdf.
>>>
>>> Regards,
>>>
>>
>
> [1] 
> http://infocenter.arm.com/help/topic/com.arm.doc.den0049a/DEN0049A_IO_Remapping_Table.pdf
>
>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-07  8:46               ` Manish Jaggi
@ 2015-07-07 10:54                 ` Manish Jaggi
  2015-07-07 11:24                 ` Ian Campbell
  1 sibling, 0 replies; 51+ messages in thread
From: Manish Jaggi @ 2015-07-07 10:54 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Ian Campbell, Konrad Rzeszutek Wilk,
	Stefano Stabellini, Kulkarni, Ganapatrao, Prasun Kapoor, Kumar,
	Vijaya



On Tuesday 07 July 2015 02:16 PM, Manish Jaggi wrote:
>
>
> On Tuesday 07 July 2015 01:48 PM, Julien Grall wrote:
>> Hi Manish,
>>
>> On 07/07/2015 08:10, Manish Jaggi wrote:
>>> On Monday 06 July 2015 05:15 PM, Julien Grall wrote:
>>>> On 06/07/15 12:09, Manish Jaggi wrote:
>>>>>
>>>>> On Monday 06 July 2015 04:13 PM, Julien Grall wrote:
>>>>>> On 05/07/15 06:55, Manish Jaggi wrote:
>>>>>>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>>>>>>> -----------------------------------------------
>>>>>>>>> #define PHYSDEVOP_map_sbdf              43
>>>>>>>>> typedef struct {
>>>>>>>>>        u32 s;
>>>>>>>>>        u8 b;
>>>>>>>>>        u8 df;
>>>>>>>>>        u16 res;
>>>>>>>>> } sbdf_t;
>>>>>>>>> struct physdev_map_sbdf {
>>>>>>>>>        int domain_id;
>>>>>>>>>        sbdf_t    sbdf;
>>>>>>>>>        sbdf_t    gsbdf;
>>>>>>>>> };
>>>>>>>>>
>>>>>>>>> Each domain has a pdev list, which contains the list of all pci
>>>>>>>>> devices.
>>>>>>>>> The
>>>>>>>>> pdev structure already has a sbdf information. The 
>>>>>>>>> arch_pci_dev is
>>>>>>>>> updated to
>>>>>>>>> contain the gsbdf information. (gs- guest segment id)
>>>>>>>>>
>>>>>>>>> Whenever there is trap from guest or an interrupt has to be
>>>>>>>>> injected,
>>>>>>>>> the pdev
>>>>>>>>> list is iterated to find the gsbdf.
>>>>>>>> Can you give more background for this section? i.e:
>>>>>>>>       - Why do you need this?
>>>>>>>>       - How xen will translate the gbdf to a vDeviceID?
>>>>>>> In the context of the hypercall processing.
>>>>>> That wasn't my question. I asked how Xen will find the mapping
>>>>>> between the gsbdf and the vDeviceID. It doesn't have access to the
>>>>>> firmware table and is therefore not able to find the right one.
>>>>> I believe gsbdf and vDeviceID would be the same.
>>>> Xen and the guest need to translate the gsbdf the same way. If this is
>>>> clearly defined by a spec, then you should give a link to it.
>>> They are same, will change sbdf ->DeviceID and gsbdf->vDeviceID.
>>
>> As asked you in the previous mail, can you please prove it? The 
>> function used to get the requester ID (pci_for_each_dma_alias) is 
>> more complex than a simple return sbdf.
> I am not sure what you would like me to prove.
> In the ThunderX Xen code we have assumed sbdf == DeviceID. We are not
> using ACPI for now. This is our implementation; it cannot be outright
> wrong. Can you please suggest an alternative approach?
>
>>
>> Furthermore, AFAICT, the IORT Table (from ACPI) [1] is used to 
>> specify the relationships between the requester ID and the DeviceID. 
>> So it's not obvious that sbdf == DeviceID.
>>
>>>> If not, you have to explain in this design doc how you plan to have 
>>>> xen
>>>> and the guest using the same vdevID for a given gsbdf.
>>>>
>>>> Regards,
>>>>
>>>
>>
>> [1] 
>> http://infocenter.arm.com/help/topic/com.arm.doc.den0049a/DEN0049A_IO_Remapping_Table.pdf
>>
If ACPI is not used, the IORT mapping (sbdf - StreamID - DeviceID) has to
be done in the device tree.
Can we add this as a TODO, so that the first series of patches can be
accepted with StreamID == DeviceID == sbdf?
>>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-07  8:46               ` Manish Jaggi
  2015-07-07 10:54                 ` Manish Jaggi
@ 2015-07-07 11:24                 ` Ian Campbell
  2015-07-09  7:13                   ` Manish Jaggi
  2015-07-14 16:47                   ` Stefano Stabellini
  1 sibling, 2 replies; 51+ messages in thread
From: Ian Campbell @ 2015-07-07 11:24 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Tue, 2015-07-07 at 14:16 +0530, Manish Jaggi wrote:
> > As asked you in the previous mail, can you please prove it? The 
> > function used to get the requester ID (pci_for_each_dma_alias) is more 
> > complex than a simple return sbdf.
> I am not sure what you would like me to prove.
> As of ThunderX Xen code we have assumed sbdf == deviceID.

Please remember that you are not writing "ThunderX Xen code" here, you
are writing generic Xen code which you happen to be testing on Thunder
X. The design and implementation does need to consider the more generic
case I'm afraid.

In particular if this is going to be a PHYSDEVOP then it needs to be
designed to be future proof, since PHYSDEVOP is a stable API i.e. it is
hard to change in the future.

I think I did ask elsewhere _why_ this was a physdev op, since I can't
see why it can't be done by the toolstack, and therefore why it can't be
a domctl.

If this was a domctl there might be scope for accepting an
implementation which made assumptions such as sbdf == deviceid. However
I'd still like to see this topic given proper treatment in the design
and not just glossed over with "this is how ThunderX does things".

Or maybe the solution is simple and we should just do it now -- i.e. can
we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
which contains the base deviceid for that bridge (since I believe both
DT and ACPI IORT assume a simple linear mapping[citation needed])?
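Sketched concretely, Ian's suggestion could look like the following. The
struct layout, field names, and the linear-mapping helper are all
illustrative, not the actual Xen ABI:

```c
#include <stdint.h>

/* Illustrative PHYSDEVOP_pci_host_bridge_add argument with the proposed
 * base-deviceID field added. Field names are hypothetical. */
struct physdev_pci_host_bridge_add {
    uint16_t seg;           /* PCI segment of this host bridge */
    uint8_t  bus_start;     /* first bus number behind the bridge */
    uint32_t deviceid_base; /* proposed: deviceID of (bus_start, devfn 0) */
};

/* With a simple linear mapping, a device's deviceID is the bridge's base
 * plus the device's BDF offset within the segment. */
static uint32_t bdf_to_deviceid(const struct physdev_pci_host_bridge_add *hb,
                                uint8_t bus, uint8_t devfn)
{
    return hb->deviceid_base +
           (((uint32_t)(bus - hb->bus_start) << 8) | devfn);
}
```

If the linear-mapping assumption holds for both DT and ACPI IORT, this one
extra field would let Xen derive every deviceID behind the bridge without a
per-device hypercall.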

Ian.


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-05  5:55   ` Manish Jaggi
                       ` (2 preceding siblings ...)
  2015-07-06 10:43     ` Julien Grall
@ 2015-07-07 15:27     ` Konrad Rzeszutek Wilk
  3 siblings, 0 replies; 51+ messages in thread
From: Konrad Rzeszutek Wilk @ 2015-07-07 15:27 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Ian Campbell, Prasun Kapoor, Kumar, Vijaya, xen-devel,
	Julien Grall, Stefano Stabellini, Kulkarni, Ganapatrao

On Sun, Jul 05, 2015 at 11:25:25AM +0530, Manish Jaggi wrote:
> 
> 
> On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
> >Hi Manish,
> >
> >On 28/06/15 19:38, Manish Jaggi wrote:
> >>4.1 Holes in guest memory space
> >>----------------------------
> >>Holes are added in the guest memory space for mapping pci device's BAR
> >>regions.
> >>These are defined in arch-arm.h
> >>
> >>/* For 32bit */
> >>GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
> >>/* For 64bit */
> >>GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
> >The memory layout for 32bit and 64bit are exactly the same. Why do you
> >need to differ here?
> I think Ian has already replied. I will change the name of macro
> >>4.2 New entries in xenstore for device BARs
> >>--------------------------------------------
> >>toolkit also updates the xenstore information for the device
> >>(virtualbar:physical bar).
> >>This information is read by xenpciback and returned to the pcifront
> >>driver configuration
> >>space accesses.
> >Can you details what do you plan to put in xenstore and how?
> It is an implementation detail, but I plan to put it under the domU / device / hierarchy.
> >What about the expansion ROM?
> Do you want to put some restriction on not using the expansion ROM of a
> passthrough device?
> >
> >>4.3 Hypercall for bdf mapping notification to xen
> >>-----------------------------------------------
> >>#define PHYSDEVOP_map_sbdf              43
> >>typedef struct {
> >>     u32 s;
> >>     u8 b;
> >>     u8 df;
> >>     u16 res;
> >>} sbdf_t;
> >>struct physdev_map_sbdf {
> >>     int domain_id;
> >>     sbdf_t    sbdf;
> >>     sbdf_t    gsbdf;
> >>};
> >>
> >>Each domain has a pdev list, which contains the list of all pci devices.
> >>The
> >>pdev structure already has a sbdf information. The arch_pci_dev is
> >>updated to
> >>contain the gsbdf information. (gs- guest segment id)
> >>
> >>Whenever there is trap from guest or an interrupt has to be injected,
> >>the pdev
> >>list is iterated to find the gsbdf.
> >Can you give more background for this section? i.e:
> >	- Why do you need this?
> >	- How xen will translate the gbdf to a vDeviceID?
> In the context of the hypercall processing.
> >	- Who will call this hypercall?
> >	- Why not setting the gsbdf when the device is assigned?
> Can the maintainer of the pciback suggest an alternate.
> The answer to your question is that the only place I have found to issue
> the hypercall, where all the required information is available, is the
> function __xen_pcibk_add_pci_dev.
> 
> 
> drivers/xen/xen-pciback/vpci.c
> ----
> unlock:
> ...
>                 kfree(dev_entry);
> 
> +       /*Issue Hypercall here */
> +#ifdef CONFIG_ARM64
> +       map_sbdf.domain_id = pdev->xdev->otherend_id;
> +       map_sbdf.sbdf_s = dev->bus->domain_nr;
> +       map_sbdf.sbdf_b = dev->bus->number;
> +       map_sbdf.sbdf_d = dev->devfn>>3;
> +       map_sbdf.sbdf_f = dev->devfn & 0x7;
> +       map_sbdf.gsbdf_s = 0;
> +       map_sbdf.gsbdf_b = 0;
> +       map_sbdf.gsbdf_d = slot;
> +       map_sbdf.gsbdf_f = dev->devfn & 0x7;
> +       pr_info("## sbdf = %d:%d:%d.%d g_sbdf %d:%d:%d.%d \
> +                       domain_id=%d ##\r\n",
> +                       map_sbdf.sbdf_s,
> +                       map_sbdf.sbdf_b,
> +                       map_sbdf.sbdf_d,
> +                       map_sbdf.sbdf_f,
> +                       map_sbdf.gsbdf_s,
> +                       map_sbdf.gsbdf_b,
> +                       map_sbdf.gsbdf_d,
> +                       map_sbdf.gsbdf_f,
> +                       map_sbdf.domain_id);
> +
> +       err = HYPERVISOR_physdev_op(PHYSDEVOP_map_sbdf, &map_sbdf);
> +       if (err)
> +               printk(KERN_ERR " Xen Error PHYSDEVOP_map_sbdf");
> +#endif

You probably also need the inverse - that is when xen_pcibk_release_pci_dev
is called.

I would suggest you instead add, in drivers/xen/xen-pciback/pciback.h,
for xen_pcibk_add_pci_dev and xen_pcibk_release_pci_dev, a call
to the arch-specific code for this hypercall. On x86 it would be a nop
while for ARM it would be the above.
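One standalone way to sketch that pattern is shown below. The hook names
and the stub type are invented for illustration; the real driver would use
struct pci_dev from <linux/pci.h> and wire the hooks into the actual
pciback add/release paths:

```c
/* Stand-in type: the real code would use struct pci_dev. The hook names
 * below are hypothetical, not existing pciback symbols. */
struct pci_dev_stub { unsigned bus, devfn; };

static int sbdf_mapped; /* records whether the "hypercall" has been issued */

#ifdef CONFIG_ARM64
/* ARM: issue PHYSDEVOP_map_sbdf on add, and its inverse on release. */
static int xen_pcibk_arch_map_device(struct pci_dev_stub *dev, int domid)
{
    (void)dev; (void)domid;
    sbdf_mapped = 1; /* here: HYPERVISOR_physdev_op(PHYSDEVOP_map_sbdf, ...) */
    return 0;
}
static int xen_pcibk_arch_unmap_device(struct pci_dev_stub *dev, int domid)
{
    (void)dev; (void)domid;
    sbdf_mapped = 0; /* here: the inverse unmap hypercall */
    return 0;
}
#else
/* x86: both hooks are no-ops. */
static int xen_pcibk_arch_map_device(struct pci_dev_stub *dev, int domid)
{ (void)dev; (void)domid; return 0; }
static int xen_pcibk_arch_unmap_device(struct pci_dev_stub *dev, int domid)
{ (void)dev; (void)domid; return 0; }
#endif
```

This keeps the ARM-specific hypercall out of the common vpci.c code while
guaranteeing the unmap side is called symmetrically on release.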

> ---
> 
> >Regards,
> >
> 


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-07 11:24                 ` Ian Campbell
@ 2015-07-09  7:13                   ` Manish Jaggi
  2015-07-09  8:08                     ` Julien Grall
  2015-07-14 16:47                   ` Stefano Stabellini
  1 sibling, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-09  7:13 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao



On Tuesday 07 July 2015 04:54 PM, Ian Campbell wrote:
> On Tue, 2015-07-07 at 14:16 +0530, Manish Jaggi wrote:
>>> As asked you in the previous mail, can you please prove it? The
>>> function used to get the requester ID (pci_for_each_dma_alias) is more
>>> complex than a simple return sbdf.
>> I am not sure what you would like me to prove.
>> As of ThunderX Xen code we have assumed sbdf == deviceID.
> Please remember that you are not writing "ThunderX Xen code" here, you
> are writing generic Xen code which you happen to be testing on Thunder
> X. The design and implementation does need to consider the more generic
> case I'm afraid.
>
> In particular if this is going to be a PHYSDEVOP then it needs to be
> designed to be future proof, since PHYSDEVOP is a stable API i.e. it is
> hard to change in the future.
>
> I think I did ask elsewhere _why_ this was a physdev op, since I can't
> see why it can't be done by the toolstack, and therefore why it can't be
> a domctl.
If it can be done in a domctl I would prefer that. I will get back on this.
>
> If this was a domctl there might be scope for accepting an
> implementation which made assumptions such as sbdf == deviceid. However
> I'd still like to see this topic given proper treatment in the design
> and not just glossed over with "this is how ThunderX does things".
I got your point.
> Or maybe the solution is simple and we should just do it now -- i.e. can
> we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
> which contains the base deviceid for that bridge
deviceID would be the same as sbdf, as we don't have a way to translate 
sbdf to deviceID.
What about the SMMU streamID: can we also have sbdf = deviceID = smmuid_bdf?
FYI: in ThunderX each RC is on a separate SMMU.

Can we take that as step 1?
>   (since I believe both
> DT and ACPI IORT assume a simple linear mapping[citation needed])?
I am OK with the approach, but then we have to put something similar to 
the IORT in the device tree.
Currently it is not there.
If we take the route of creating an IORT for the host / guest, it would be 
an altogether different effort.
> Ian.


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-09  7:13                   ` Manish Jaggi
@ 2015-07-09  8:08                     ` Julien Grall
  2015-07-09 10:30                       ` Manish Jaggi
  2015-07-14 16:37                       ` Stefano Stabellini
  0 siblings, 2 replies; 51+ messages in thread
From: Julien Grall @ 2015-07-09  8:08 UTC (permalink / raw)
  To: Manish Jaggi, Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Stefano Stabellini, Kulkarni,
	Ganapatrao, xen-devel

Hi Manish,

On 09/07/2015 08:13, Manish Jaggi wrote:
>>
>> If this was a domctl there might be scope for accepting an
>> implementation which made assumptions such as sbdf == deviceid. However
>> I'd still like to see this topic given proper treatment in the design
>> and not just glossed over with "this is how ThunderX does things".
> I got your point.
>> Or maybe the solution is simple and we should just do it now -- i.e. can
>> we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
>> which contains the base deviceid for that bridge
> deviceId would be same as sbdf. As we dont have a way to translate sbdf
> to deviceID.

I think we have to be clear in this design document about the different 
meaning.

When the Device Tree is used, it's assumed that the deviceID will be 
equal to the requester ID and not the sbdf.

Linux provides a function (pci_for_each_dma_alias) which will return a 
requester ID for a given PCI device. It appears that the BDF (the 's' of 
sBDF is only internal to Linux and not part of the hardware) is equal to 
the requester ID on your platform but we can't assume it for anyone else.

When we have a PCI in hand, we have to find the requester ID for this 
device. On we have it we can deduce the streamID and the deviceID. The 
way to do it will depend on whether we use device tree or ACPI:
	- For device tree, the streamID, and deviceID will be equal to the 
requester ID
	- For ACPI, we would have to look up in the ACPI IORT.

For the latter, I think they are static tables and therefore can be 
parsed in Xen. So we wouldn't need PHYSDEVOP_pci_host_bridge_add to 
pass an offset. This will also avoid any assumption that the deviceIDs for a 
given root complex are always contiguous, and make it extensible for any new 
hardware requiring a different *ID.
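The lookup described here could be sketched as follows. The table layout
mimics an IORT-style ID mapping (input base, ID count, output base), but
the names and example data are made up:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* IORT-style ID mapping: a contiguous range of requester IDs translated
 * linearly onto deviceIDs. The entries below are example data only. */
struct id_mapping {
    uint32_t input_base;   /* first requester ID covered */
    uint32_t id_count;     /* number of IDs in the range */
    uint32_t output_base;  /* deviceID corresponding to input_base */
};

static const struct id_mapping iort_maps[] = {
    { 0x0000, 0x100, 0x8000 }, /* e.g. RC0: deviceIDs offset by 0x8000 */
};

static bool acpi_enabled; /* false: device tree case */

static uint32_t requester_to_deviceid(uint32_t rid)
{
    if (acpi_enabled) {
        /* ACPI: walk the (static) IORT-style mappings. */
        for (size_t i = 0; i < sizeof(iort_maps) / sizeof(iort_maps[0]); i++)
            if (rid >= iort_maps[i].input_base &&
                rid < iort_maps[i].input_base + iort_maps[i].id_count)
                return iort_maps[i].output_base +
                       (rid - iort_maps[i].input_base);
    }
    return rid; /* DT assumption: deviceID == requester ID */
}
```

The point of the sketch is that the requester ID is the only input: once
Xen has it, the DT case is the identity and the ACPI case is a static
table walk, with no contiguity assumption per root complex.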

So what we really care about is the requester ID. Although, I'm not sure if 
you can find it in Xen. If not, we may need to customize (i.e. adding a 
new PHYSDEVOP) PCI add device to take a requesterID as a parameter.

Now, in the case of the guest, as we are only supporting device tree, we 
could make the assumption that requesterID == deviceID as long as this 
is exposed in a DOMCTL to allow us flexibility.

It would make sense to extend DOMCTL_assign_device to take the vBDF (or 
requesterID?) as a parameter.
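Such an extension might look like the sketch below. The struct name, the
vdevid field, and the SBDF packing helper are hypothetical; only the idea
of carrying a guest-side ID alongside the machine SBDF comes from the
discussion:

```c
#include <stdint.h>

/* Hypothetical extension of the XEN_DOMCTL_assign_device argument:
 * machine_sbdf mirrors the existing host-side field, vdevid is the
 * proposed guest-side identifier (vBDF or requester ID). */
struct xen_domctl_assign_device_ext {
    uint32_t machine_sbdf; /* host segment:bus:dev.fn of the device */
    uint32_t vdevid;       /* proposed: guest vBDF / requester ID */
};

/* Pack seg:bus:dev.fn into a 32-bit SBDF, segment in the top 16 bits. */
static uint32_t make_sbdf(uint16_t seg, uint8_t bus, uint8_t dev, uint8_t fn)
{
    return ((uint32_t)seg << 16) | ((uint32_t)bus << 8) |
           ((uint32_t)(dev & 0x1f) << 3) | (fn & 0x7);
}
```

Because a domctl is not a stable ABI, a field like vdevid could start out
carrying the requesterID == deviceID assumption and be reinterpreted later
without breaking guests.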

Regards,

-- 
Julien Grall


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-09  8:08                     ` Julien Grall
@ 2015-07-09 10:30                       ` Manish Jaggi
  2015-07-09 13:57                         ` Julien Grall
  2015-07-14 16:37                       ` Stefano Stabellini
  1 sibling, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-09 10:30 UTC (permalink / raw)
  To: Julien Grall, Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Stefano Stabellini, Kulkarni,
	Ganapatrao, xen-devel



On Thursday 09 July 2015 01:38 PM, Julien Grall wrote:
> Hi Manish,
>
> On 09/07/2015 08:13, Manish Jaggi wrote:
>>>
>>> If this was a domctl there might be scope for accepting an
>>> implementation which made assumptions such as sbdf == deviceid. However
>>> I'd still like to see this topic given proper treatment in the design
>>> and not just glossed over with "this is how ThunderX does things".
>> I got your point.
>>> Or maybe the solution is simple and we should just do it now -- i.e. 
>>> can
>>> we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
>>> which contains the base deviceid for that bridge
>> deviceId would be same as sbdf. As we dont have a way to translate sbdf
>> to deviceID.
>
> I think we have to be clear in this design document about the 
> different meaning.
>
> When the Device Tree is used, it's assumed that the deviceID will be 
> equal to the requester ID and not the sbdf.
Does SMMUv2 have a concept of a requesterID?
I see the requesterID term in SMMUv3.
>
> Linux provides a function (pci_for_each_dma_alias) which will return a 
> requester ID for a given PCI device. It appears that the BDF (the 's' 
> of sBDF is only internal to Linux and not part of the hardware) is 
> equal to the requester ID on your platform but we can't assume it for 
> anyone else.
So you mean requesterID = pci_for_each_dma_alias(sbdf)?
>
> When we have a PCI in hand, we have to find the requester ID for this 
> device.
That is the question: how to map the requesterID to an sbdf.
> On 
Once ?
> we have it we can deduce the streamID and the deviceID. The way to do 
> it will depend on whether we use device tree or ACPI:
>     - For device tree, the streamID, and deviceID will be equal to the 
> requester ID
What do you think the streamID should be when a device is a PCI EP and is 
enumerated? Also, per the ARM SMMU 2.0 spec, StreamID is implementation specific.
As per SMMUv3 specs
"For PCI, it is intended that StreamID is generated from the PCI 
RequesterID. The generation function may be 1:1
where one Root Complex is hosted by one SMMU"

>     - For ACPI, we would have to look up in the ACPI IORT.
>
> For the latter, I think they are static tables and therefore can be 
> parse in Xen. So we wouldn't need to PHYSDEVOP_pci_host_bridge_add to 
> pass an offset. This will also avoid any assumption that deviceID for 
> a given root complex are always contiguous and make extendable for any 
> new hardware require a different *ID.
>
> So what we really care is the requester ID. Although, I'm not sure if 
> you can find it in Xen. If not, we may need to customize (i.e adding a 
> new PHYSDEVOP) PCI add device to take a requesterID in parameter.
>
> Now, in the case of the guest, as we are only supporting device tree, 
> we could make the assumption that requesterID == deviceID as long as 
> this is exposed in a DOMCTL to allow us flexibility.
>
> It would make sense to extend DOMCTL_assign_device to take the vBDF 
> (or requesterID?) in parameter.
>
> Regards,
>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-09 10:30                       ` Manish Jaggi
@ 2015-07-09 13:57                         ` Julien Grall
  2015-07-10  6:07                           ` Pranavkumar Sawargaonkar
  0 siblings, 1 reply; 51+ messages in thread
From: Julien Grall @ 2015-07-09 13:57 UTC (permalink / raw)
  To: Manish Jaggi, Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Stefano Stabellini, Kulkarni,
	Ganapatrao, xen-devel



On 09/07/2015 12:30, Manish Jaggi wrote:
> On Thursday 09 July 2015 01:38 PM, Julien Grall wrote:
>> On 09/07/2015 08:13, Manish Jaggi wrote:
>>>>
>>>> If this was a domctl there might be scope for accepting an
>>>> implementation which made assumptions such as sbdf == deviceid. However
>>>> I'd still like to see this topic given proper treatment in the design
>>>> and not just glossed over with "this is how ThunderX does things".
>>> I got your point.
>>>> Or maybe the solution is simple and we should just do it now -- i.e.
>>>> can
>>>> we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
>>>> which contains the base deviceid for that bridge
>>> deviceId would be same as sbdf. As we dont have a way to translate sbdf
>>> to deviceID.
>>
>> I think we have to be clear in this design document about the
>> different meaning.
>>
>> When the Device Tree is used, it's assumed that the deviceID will be
>> equal to the requester ID and not the sbdf.
> Does SMMU v2 has a concept of requesterID.
> I see requesterID term in SMMUv3

The requester ID is part of the PCI spec and not the SMMU.

The version of the SMMUv2 spec doesn't mention anything about PCI. I 
suspect this is because the spec was written before PCI was introduced, 
and therefore this is implementation defined.

>>
>> Linux provides a function (pci_for_each_dma_alias) which will return a
>> requester ID for a given PCI device. It appears that the BDF (the 's'
>> of sBDF is only internal to Linux and not part of the hardware) is
>> equal to the requester ID on your platform but we can't assume it for
>> anyone else.
> so you mean requesterID = pci_for_each_dma_alias(sbdf)

Yes.

>>
>> When we have a PCI in hand, we have to find the requester ID for this
>> device.
> That is the question. How to map requesterID to sbdf

See above.

>> On
> Once ?

Yes.

>> we have it we can deduce the streamID and the deviceID. The way to do
>> it will depend on whether we use device tree or ACPI:
>>     - For device tree, the streamID, and deviceID will be equal to the
>> requester ID
> what do you think should be streamID when a device is PCI EP and is
> enumerated. Also per ARM SMMU 2.0 spec  StreamID is implementation
> specific.
> As per SMMUv3 specs
> "For PCI, it is intended that StreamID is generated from the PCI
> RequesterID. The generation function may be 1:1
> where one Root Complex is hosted by one SMMU"

I think my sentence "For device tree, the streamID, and deviceID will be 
equal to the requester ID" is pretty clear. FWIW, this is the solution 
chosen for Linux:

"Assume Stream ID == Requester ID for now. We need a way to describe the 
ID mappings in FDT" (see arm_smmu_add_pci_device in 
drivers/iommu/arm-smmu.c).

You can refer to my point below about ACPI tables. The solution would be 
exactly the same. If we have a requester ID in hand we can do pretty 
much everything.

The whole point of my previous email is to give insight about what we 
need and what we can deduce based on firmware tables. I didn't cover 
anything implementation details.

Regards,

-- 
Julien Grall


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-09 13:57                         ` Julien Grall
@ 2015-07-10  6:07                           ` Pranavkumar Sawargaonkar
  0 siblings, 0 replies; 51+ messages in thread
From: Pranavkumar Sawargaonkar @ 2015-07-10  6:07 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Manish Jaggi, Prasun Kapoor, Kumar, Vijaya,
	xen-devel, Stefano Stabellini, Kulkarni, Ganapatrao

On Thu, Jul 9, 2015 at 7:27 PM, Julien Grall <julien.grall@citrix.com> wrote:
>
>
> On 09/07/2015 12:30, Manish Jaggi wrote:
>>
>> On Thursday 09 July 2015 01:38 PM, Julien Grall wrote:
>>>
>>> On 09/07/2015 08:13, Manish Jaggi wrote:
>>>>>
>>>>>
>>>>> If this was a domctl there might be scope for accepting an
>>>>> implementation which made assumptions such as sbdf == deviceid. However
>>>>> I'd still like to see this topic given proper treatment in the design
>>>>> and not just glossed over with "this is how ThunderX does things".
>>>>
>>>> I got your point.
>>>>>
>>>>> Or maybe the solution is simple and we should just do it now -- i.e.
>>>>> can
>>>>> we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
>>>>> which contains the base deviceid for that bridge
>>>>
>>>> deviceId would be same as sbdf. As we dont have a way to translate sbdf
>>>> to deviceID.
>>>
>>>
>>> I think we have to be clear in this design document about the
>>> different meaning.
>>>
>>> When the Device Tree is used, it's assumed that the deviceID will be
>>> equal to the requester ID and not the sbdf.
>>
>> Does SMMU v2 has a concept of requesterID.
>> I see requesterID term in SMMUv3
>
>
> The requester ID is part of the PCI spec and not the SMMU.
>
> The version of the SMMUv2 spec doesn't mention anything about PCI. I suspect
> this is because the spec has been written before the introduced PCI. And
> therefore this is implementation defined.
>
>>>
>>> Linux provides a function (pci_for_each_dma_alias) which will return a
>>> requester ID for a given PCI device. It appears that the BDF (the 's'
>>> of sBDF is only internal to Linux and not part of the hardware) is
>>> equal to the requester ID on your platform but we can't assume it for
>>> anyone else.
>>
>> so you mean requesterID = pci_for_each_dma_alias(sbdf)
>
>
> Yes.
>
>>>
>>> When we have a PCI in hand, we have to find the requester ID for this
>>> device.
>>
>> That is the question. How to map requesterID to sbdf
>
>
> See above.
>
>>> On
>>
>> Once ?
>
>
> Yes.
>
>>> we have it we can deduce the streamID and the deviceID. The way to do
>>> it will depend on whether we use device tree or ACPI:
>>>     - For device tree, the streamID, and deviceID will be equal to the
>>> requester ID
>>
>> what do you think should be streamID when a device is PCI EP and is
>> enumerated. Also per ARM SMMU 2.0 spec  StreamID is implementation
>> specific.
>> As per SMMUv3 specs
>> "For PCI, it is intended that StreamID is generated from the PCI
>> RequesterID. The generation function may be 1:1
>> where one Root Complex is hosted by one SMMU"
>
>
> I think my sentence "For device tree, the streamID, and deviceID will be
> equal to the requester ID" is pretty clear. FWIW, this is the solution
> chosen for Linux:
>
> "Assume Stream ID == Requester ID for now. We need a way to describe the ID
> mappings in FDT" (see arm_smmu_add_pci_device in drivers/iommu/arm-smmu.c).
>
> You can refer to my point below about ACPI tables. The solution would be
> exactly the same. If we have a requester ID in hand we can do pretty much
> everything.

There is already one proposal by Mark Rutland on this topic about
describing StreamID to Requester ID mapping in DT.
http://lists.infradead.org/pipermail/linux-arm-kernel/2015-March/333199.html
Probably until it gets finalized, assuming RequesterID == StreamID is the
only way, since deriving the StreamID from the PCIe RequesterID will vary
from one vendor to another.

Thanks,
Pranav
>
> The whole point of my previous email is to give insight about what we need
> and what we can deduce based on firmware tables. I didn't cover anything
> implementation details.
>
> Regards,
>
> --
> Julien Grall
>
>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-09  8:08                     ` Julien Grall
  2015-07-09 10:30                       ` Manish Jaggi
@ 2015-07-14 16:37                       ` Stefano Stabellini
  2015-07-14 16:46                         ` Stefano Stabellini
  1 sibling, 1 reply; 51+ messages in thread
From: Stefano Stabellini @ 2015-07-14 16:37 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Manish Jaggi, Prasun Kapoor, Kumar, Vijaya,
	xen-devel, Stefano Stabellini, Kulkarni, Ganapatrao

On Thu, 9 Jul 2015, Julien Grall wrote:
> Hi Manish,
> 
> On 09/07/2015 08:13, Manish Jaggi wrote:
> > > 
> > > If this was a domctl there might be scope for accepting an
> > > implementation which made assumptions such as sbdf == deviceid. However
> > > I'd still like to see this topic given proper treatment in the design
> > > and not just glossed over with "this is how ThunderX does things".
> > I got your point.
> > > Or maybe the solution is simple and we should just do it now -- i.e. can
> > > we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
> > > which contains the base deviceid for that bridge
> > deviceId would be same as sbdf. As we dont have a way to translate sbdf
> > to deviceID.
> 
> I think we have to be clear in this design document about the different
> meaning.
> 
> When the Device Tree is used, it's assumed that the deviceID will be equal to
> the requester ID and not the sbdf.
> 
> Linux provides a function (pci_for_each_dma_alias) which will return a
> requester ID for a given PCI device. It appears that the BDF (the 's' of sBDF
> is only internal to Linux and not part of the hardware) is equal to the
> requester ID on your platform but we can't assume it for anyone else.

The PCI Express Base Specification states that the requester ID is "The
combination of a Requester's Bus Number, Device Number, and Function
Number that uniquely identifies the Requester."

I think it is safe to assume BDF = requester ID on all platforms.
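The packing the spec quote describes is, concretely, the standard PCIe
layout (bus in bits 15:8, device in 7:3, function in 2:0). A minimal
sketch, with a hypothetical helper name:

```c
#include <stdint.h>

/* Requester ID per the PCIe wording above: bus[15:8], device[7:3],
 * function[2:0]. */
static inline uint16_t pci_requester_id(uint8_t bus, uint8_t dev, uint8_t fn)
{
    return ((uint16_t)bus << 8) | ((uint16_t)(dev & 0x1f) << 3) | (fn & 0x7);
}
```

Under the BDF == requester ID assumption, this 16-bit value is exactly
what the SMMU sees as the incoming stream identifier.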


> When we have a PCI in hand, we have to find the requester ID for this device.
> On we have it we can deduce the streamID and the deviceID. The way to do it
> will depend on whether we use device tree or ACPI:
> 	- For device tree, the streamID, and deviceID will be equal to the
> requester ID
> 	- For ACPI, we would have to look up in the ACPI IORT.

This is true.

For now, given that the implementation is device tree only, we can go
forward with the assumption that streamID == deviceID == requesterID, as
long as it is clearly written down.


> So what we really care is the requester ID.
  
Indeed, but that's the BDF.

   
> Although, I'm not sure if you can
> find it in Xen. If not, we may need to customize (i.e adding a new PHYSDEVOP)
> PCI add device to take a requesterID in parameter.

I think there is no need. Let's just write down in a comment that we
assume BDF == requester id.


> Now, in the case of the guest, as we are only supporting device tree, we could
> make the assumption that requesterID == deviceID as long as this is exposed in
> a DOMCTL to allow us flexibility.

Yep


> It would make sense to extend DOMCTL_assign_device to take the vBDF (or
> requesterID?) in parameter.

Could be a good idea


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-14 16:37                       ` Stefano Stabellini
@ 2015-07-14 16:46                         ` Stefano Stabellini
  2015-07-14 16:58                           ` Julien Grall
  0 siblings, 1 reply; 51+ messages in thread
From: Stefano Stabellini @ 2015-07-14 16:46 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Campbell, Manish Jaggi, Prasun Kapoor, Kumar, Vijaya,
	xen-devel, Julien Grall, Stefano Stabellini, Kulkarni,
	Ganapatrao

On Tue, 14 Jul 2015, Stefano Stabellini wrote:
> On Thu, 9 Jul 2015, Julien Grall wrote:
> > Hi Manish,
> > 
> > On 09/07/2015 08:13, Manish Jaggi wrote:
> > > > 
> > > > If this was a domctl there might be scope for accepting an
> > > > implementation which made assumptions such as sbdf == deviceid. However
> > > > I'd still like to see this topic given proper treatment in the design
> > > > and not just glossed over with "this is how ThunderX does things".
> > > I got your point.
> > > > Or maybe the solution is simple and we should just do it now -- i.e. can
> > > > we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
> > > > which contains the base deviceid for that bridge
> > > deviceId would be same as sbdf. As we dont have a way to translate sbdf
> > > to deviceID.
> > 
> > I think we have to be clear in this design document about the different
> > meaning.
> > 
> > When the Device Tree is used, it's assumed that the deviceID will be equal to
> > the requester ID and not the sbdf.
> > 
> > Linux provides a function (pci_for_each_dma_alias) which will return a
> > requester ID for a given PCI device. It appears that the BDF (the 's' of sBDF
> > is only internal to Linux and not part of the hardware) is equal to the
> > requester ID on your platform but we can't assume it for anyone else.
> 
> The PCI Express Base Specification states that the requester ID is "The
> combination of a Requester's Bus Number, Device Number, and Function
> Number that uniquely identifies the Requester."
> 
> I think it is safe to assume BDF = requester ID on all platforms.

With the catch that in case of ARI devices
(http://pcisig.com/sites/default/files/specification_documents/ECN-alt-rid-interpretation-070604.pdf),
BDF is actually BF because the device number is always 0 and the
function number is 8 bits.
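In the ARI case the same 16 bits are interpreted differently: the 5-bit
device field disappears and the function number takes the full low byte.
A sketch, with a hypothetical helper name:

```c
#include <stdint.h>

/* ARI requester ID: bus[15:8], 8-bit function[7:0]. The 5-bit device
 * field of the classic encoding is gone (the device number is always 0). */
static inline uint16_t ari_requester_id(uint8_t bus, uint8_t fn)
{
    return ((uint16_t)bus << 8) | fn;
}
```

So any code that derives a deviceID from B:D.F has to know whether the
endpoint is ARI-capable before decoding the low byte.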


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-07 11:24                 ` Ian Campbell
  2015-07-09  7:13                   ` Manish Jaggi
@ 2015-07-14 16:47                   ` Stefano Stabellini
  1 sibling, 0 replies; 51+ messages in thread
From: Stefano Stabellini @ 2015-07-14 16:47 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Manish Jaggi, xen-devel,
	Julien Grall, Stefano Stabellini, Kulkarni, Ganapatrao

On Tue, 7 Jul 2015, Ian Campbell wrote:
> On Tue, 2015-07-07 at 14:16 +0530, Manish Jaggi wrote:
> > > As asked you in the previous mail, can you please prove it? The 
> > > function used to get the requester ID (pci_for_each_dma_alias) is more 
> > > complex than a simple return sbdf.
> > I am not sure what you would like me to prove.
> > As of ThunderX Xen code we have assumed sbdf == deviceID.
> 
> Please remember that you are not writing "ThunderX Xen code" here, you
> are writing generic Xen code which you happen to be testing on Thunder
> X. The design and implementation does need to consider the more generic
> case I'm afraid.
> 
> In particular if this is going to be a PHYSDEVOP then it needs to be
> designed to be future proof, since PHYSDEVOP is a stable API i.e. it is
> hard to change in the future.
> 
> I think I did ask elsewhere _why_ this was a physdev op, since I can't
> see why it can't be done by the toolstack, and therefore why it can't be
> a domctl.
> 
> If this was a domctl there might be scope for accepting an
> implementation which made assumptions such as sbdf == deviceid.

It is always easier to deal with domctls compared to physdebops, so I
agree with Ian. However I think we can assume BDF = requester ID, as I
wrote earlier.


> However
> I'd still like to see this topic given proper treatment in the design
> and not just glossed over with "this is how ThunderX does things".

right


> Or maybe the solution is simple and we should just do it now -- i.e. can
> we add a new field to the PHYSDEVOP_pci_host_bridge_add argument struct
> which contains the base deviceid for that bridge (since I believe both
> DT and ACPI IORT assume a simple linear mapping[citation needed])?

Expanding XEN_DOMCTL_assign_device is also a good option
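A hypothetical shape for such an extension (illustrative only — these field
and struct names are made up, not the actual Xen ABI) might carry the base
deviceID alongside the physical and guest-visible sbdf:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of an extended assign-device argument,
 * carrying the base deviceID for the bridge alongside the
 * physical and guest-visible sbdf. Not the real Xen interface. */
struct assign_device_arg {
    uint32_t machine_sbdf;  /* physical segment:bus:dev.fn */
    uint32_t virt_sbdf;     /* sbdf as seen by the guest */
    uint32_t deviceid;      /* ITS deviceID / SMMU streamID base */
};

/* Pack segment:bus:dev.fn the way such interfaces usually do. */
static inline uint32_t make_sbdf(uint16_t seg, uint8_t bus,
                                 uint8_t dev, uint8_t fn)
{
    return ((uint32_t)seg << 16) | ((uint32_t)bus << 8) |
           ((uint32_t)(dev & 0x1f) << 3) | (fn & 0x7);
}
```

A fixed-width layout like this is the kind of thing that is easy to extend in
a domctl (unstable interface) but would need future-proofing up front if it
became a PHYSDEVOP, which is Ian's point above.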

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-14 16:46                         ` Stefano Stabellini
@ 2015-07-14 16:58                           ` Julien Grall
  2015-07-14 18:01                             ` Stefano Stabellini
  0 siblings, 1 reply; 51+ messages in thread
From: Julien Grall @ 2015-07-14 16:58 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Campbell, Manish Jaggi, Prasun Kapoor, Kumar, Vijaya,
	xen-devel, Stefano Stabellini, Kulkarni, Ganapatrao

Hi Stefano,

On 14/07/2015 18:46, Stefano Stabellini wrote:
>>> Linux provides a function (pci_for_each_dma_alias) which will return a
>>> requester ID for a given PCI device. It appears that the BDF (the 's' of sBDF
>>> is only internal to Linux and not part of the hardware) is equal to the
>>> requester ID on your platform but we can't assume it for anyone else.
>>
>> The PCI Express Base Specification states that the requester ID is "The
>> combination of a Requester's Bus Number, Device Number, and Function
>> Number that uniquely identifies the Requester."
>>
>> I think it is safe to assume BDF = requester ID on all platforms.
>
> With the catch that in case of ARI devices
> (http://pcisig.com/sites/default/files/specification_documents/ECN-alt-rid-interpretation-070604.pdf),
> BDF is actually BF because the device number is always 0 and the
> function number is 8 bits.

And there are other problems, such as broken PCI devices...
Both Xen x86 (domain_context_mapping in drivers/passthrough/vtd/iommu.c) 
and Linux (pci_for_each_dma_alias) use code more complex than 
requesterID = BDF.

So I don't think we can use requesterID = BDF in a physdev op unless we 
are *strictly* sure this is valid.

Although, based on the x86 code, Xen should be able to translate the BDF 
into the requester ID...
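The kind of translation being referred to can be sketched as follows. The
device model and helper names here are made up for illustration; the real
code (pci_for_each_dma_alias in Linux, domain_context_mapping in Xen) also
copes with quirks and broken devices:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified device model, invented for illustration: just enough
 * to show why requesterID != BDF in general. */
struct pci_dev {
    uint8_t bus;
    uint8_t devfn;              /* devfn = dev << 3 | fn */
    int is_pcie;                /* device sits on a PCI Express link */
    struct pci_dev *bridge;     /* upstream bridge, NULL at the root */
};

/* A requester ID is bus:dev.fn packed into 16 bits. */
static uint16_t rid(uint8_t bus, uint8_t devfn)
{
    return ((uint16_t)bus << 8) | devfn;
}

/* Walk towards the root: a PCIe-to-PCI(-X) bridge takes ownership
 * of transactions from conventional PCI devices below it, so its
 * ID is what the SMMU/IOMMU actually sees as the requester. */
static uint16_t requester_id(const struct pci_dev *dev)
{
    const struct pci_dev *p = dev;

    while (!p->is_pcie && p->bridge)
        p = p->bridge;          /* alias through the bridge */

    return rid(p->bus, p->devfn);
}
```

For a native PCIe endpoint this degenerates to requesterID = BDF, which is
why the assumption holds on some platforms; a legacy device behind a bridge
(plus ARI, quirks, etc.) is where it breaks.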

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-14 16:58                           ` Julien Grall
@ 2015-07-14 18:01                             ` Stefano Stabellini
  2015-07-22  5:41                               ` Manish Jaggi
  0 siblings, 1 reply; 51+ messages in thread
From: Stefano Stabellini @ 2015-07-14 18:01 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Campbell, Manish Jaggi, Prasun Kapoor, Kumar, Vijaya,
	Stefano Stabellini, xen-devel, Stefano Stabellini, Kulkarni,
	Ganapatrao

On Tue, 14 Jul 2015, Julien Grall wrote:
> Hi Stefano,
> 
> On 14/07/2015 18:46, Stefano Stabellini wrote:
> > > > Linux provides a function (pci_for_each_dma_alias) which will return a
> > > > requester ID for a given PCI device. It appears that the BDF (the 's' of
> > > > sBDF
> > > > is only internal to Linux and not part of the hardware) is equal to the
> > > > requester ID on your platform but we can't assume it for anyone else.
> > > 
> > > The PCI Express Base Specification states that the requester ID is "The
> > > combination of a Requester's Bus Number, Device Number, and Function
> > > Number that uniquely identifies the Requester."
> > > 
> > > I think it is safe to assume BDF = requester ID on all platforms.
> > 
> > With the catch that in case of ARI devices
> > (http://pcisig.com/sites/default/files/specification_documents/ECN-alt-rid-interpretation-070604.pdf),
> > BDF is actually BF because the device number is always 0 and the
> > function number is 8 bits.
> 
> And there are other problems, such as broken PCI devices...
> Both Xen x86 (domain_context_mapping in drivers/passthrough/vtd/iommu.c) and
> Linux (pci_for_each_dma_alias) use code more complex than requesterID = BDF.
> 
> So I don't think we can use requesterID = BDF in a physdev op unless we are
> *strictly* sure this is valid.

The spec is quite clear about it, but I guess there might be hardware quirks.


> Although, based on the x86 code, Xen should be able to translate the BDF into
> the requester ID...

Yes, that is a good point.
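The ARI catch mentioned above changes how the same 16 requester-ID bits are
decomposed (a sketch of the ECN's alternative routing-ID interpretation;
helper names are made up):

```c
#include <assert.h>
#include <stdint.h>

/* The same 16 requester-ID bits under the two interpretations:
 *   classic: bus[15:8] dev[7:3] fn[2:0]
 *   ARI:     bus[15:8] fn[7:0]   (device number implicitly 0,
 *                                 function number is 8 bits)    */
static uint8_t classic_dev(uint16_t rid) { return (rid >> 3) & 0x1f; }
static uint8_t classic_fn(uint16_t rid)  { return rid & 0x7; }
static uint8_t ari_fn(uint16_t rid)      { return rid & 0xff; }
```

So while the 16-bit requester ID itself stays unambiguous, any code keyed on
(bus, dev, fn) must know whether the endpoint is ARI-capable before it can
decompose the ID.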

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-14 18:01                             ` Stefano Stabellini
@ 2015-07-22  5:41                               ` Manish Jaggi
  2015-07-22  8:34                                 ` Julien Grall
  0 siblings, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-22  5:41 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall
  Cc: Ian Campbell, Prasun Kapoor, Kumar, Vijaya, xen-devel,
	Stefano Stabellini, Kulkarni, Ganapatrao



On Tuesday 14 July 2015 11:31 PM, Stefano Stabellini wrote:
> On Tue, 14 Jul 2015, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 14/07/2015 18:46, Stefano Stabellini wrote:
>>>>> Linux provides a function (pci_for_each_dma_alias) which will return a
>>>>> requester ID for a given PCI device. It appears that the BDF (the 's' of
>>>>> sBDF
>>>>> is only internal to Linux and not part of the hardware) is equal to the
>>>>> requester ID on your platform but we can't assume it for anyone else.
>>>> The PCI Express Base Specification states that the requester ID is "The
>>>> combination of a Requester's Bus Number, Device Number, and Function
>>>> Number that uniquely identifies the Requester."
>>>>
>>>> I think it is safe to assume BDF = requester ID on all platforms.
>>> With the catch that in case of ARI devices
>>> (http://pcisig.com/sites/default/files/specification_documents/ECN-alt-rid-interpretation-070604.pdf),
>>> BDF is actually BF because the device number is always 0 and the
>>> function number is 8 bits.
>> And there are other problems, such as broken PCI devices...
>> Both Xen x86 (domain_context_mapping in drivers/passthrough/vtd/iommu.c) and
>> Linux (pci_for_each_dma_alias) use code more complex than requesterID = BDF.
>>
>> So I don't think we can use requesterID = BDF in a physdev op unless we are
>> *strictly* sure this is valid.
> The spec is quite clear about it, but I guess there might be hardware quirks.
Can we keep this open for now and, until there is agreement, make 
requesterid = bdf?
If you are OK, I will update and send Draft 3.
>
>
>> Although, based on the x86 code, Xen should be able to translate the BDF into
>> the requester ID...
> Yes, that is a good point.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-22  5:41                               ` Manish Jaggi
@ 2015-07-22  8:34                                 ` Julien Grall
  0 siblings, 0 replies; 51+ messages in thread
From: Julien Grall @ 2015-07-22  8:34 UTC (permalink / raw)
  To: Manish Jaggi, Stefano Stabellini
  Cc: Ian Campbell, Prasun Kapoor, Kumar, Vijaya, xen-devel,
	Stefano Stabellini, Kulkarni, Ganapatrao

Hi Manish,

On 22/07/2015 06:41, Manish Jaggi wrote:
> Can we keep this open and for now till there is agreement make
> requesterid = bdf.
> If you are ok, I will update and send Draft 3.

I think we can agree that we are able to find a requesterID 
based on the BDF, but not that requesterID = BDF. There are different 
reasons for that: ARI, hardware quirks...

It would be nice if you could summarize how to find the requesterID, 
StreamIDs (SMMU) and deviceIDs (IOMMU), and use the names accordingly in the spec.
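As a starting point for that summary: as Ian noted earlier in the thread,
both DT ("msi-map"-style properties) and ACPI IORT describe the relation
from requester ID to deviceID/streamID as windows with a linear offset,
which can be sketched as (illustrative types, not Xen code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One entry of an msi-map / IORT-style table mapping a window of
 * requester IDs linearly onto deviceIDs (ITS) or streamIDs (SMMU).
 * Types and names are illustrative, not taken from Xen. */
struct rid_map {
    uint16_t rid_base;   /* first requester ID covered */
    uint32_t out_base;   /* deviceID/streamID for rid_base */
    uint32_t length;     /* number of IDs in the window */
};

/* Returns the translated ID, or -1 if the rid is not covered. */
static int64_t map_rid(const struct rid_map *m, size_t n, uint16_t rid)
{
    for (size_t i = 0; i < n; i++)
        if (rid >= m[i].rid_base &&
            rid < (uint32_t)m[i].rid_base + m[i].length)
            return (int64_t)m[i].out_base + (rid - m[i].rid_base);
    return -1;
}
```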

Regards,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-06 10:20         ` Ian Campbell
@ 2015-07-29  9:37           ` Manish Jaggi
  2015-07-30  9:54             ` Ian Campbell
  0 siblings, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-29  9:37 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao



On Monday 06 July 2015 03:50 PM, Ian Campbell wrote:
> On Mon, 2015-07-06 at 15:36 +0530, Manish Jaggi wrote:
>> On Monday 06 July 2015 02:41 PM, Ian Campbell wrote:
>>> On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
>>>> On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
>>>>> Hi Manish,
>>>>>
>>>>> On 28/06/15 19:38, Manish Jaggi wrote:
>>>>>> 4.1 Holes in guest memory space
>>>>>> ----------------------------
>>>>>> Holes are added in the guest memory space for mapping pci device's BAR
>>>>>> regions.
>>>>>> These are defined in arch-arm.h
>>>>>>
>>>>>> /* For 32bit */
>>>>>> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>>>>>>     
>>>>>> /* For 64bit */
>>>>>> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
>>>>> The memory layout for 32bit and 64bit are exactly the same. Why do you
>>>>> need to differ here?
>>>> I think Ian has already replied. I will change the name of macro
>>>>>> 4.2 New entries in xenstore for device BARs
>>>>>> --------------------------------------------
>>>>>> toolkit also updates the xenstore information for the device
>>>>>> (virtualbar:physical bar).
>>>>>> This information is read by xenpciback and returned to the pcifront
>>>>>> driver configuration
>>>>>> space accesses.
>>>>> Can you details what do you plan to put in xenstore and how?
>>>> It is implementation . But I plan to put under domU / device / heirarchy
>>> Actually, xenstore is an API of sorts which needs to be maintained going
>>> forward (since front and backend can evolve separately, so it does need
>>> some level of design and documentation.
>>>
>>>>> What about the expansion ROM?
>>>> Do you want to put some restriction on not using expansion ROM as a
>>>> passthrough device.
>>> "expansion ROM as a passthrough device" doesn't make sense to me,
>>> passthrough devices may _have_ an expansion ROM.
>>>
>>> The expansion ROM is just another BAR. I don't know how pcifront/back
>>> deal with those today on PV x86, but I see no reason for ARM to deviate.
>>>
>>>
>>>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>>>> -----------------------------------------------
>>>>>> #define PHYSDEVOP_map_sbdf              43
>>>>>> typedef struct {
>>>>>>        u32 s;
>>>>>>        u8 b;
>>>>>>        u8 df;
>>>>>>        u16 res;
>>>>>> } sbdf_t;
>>>>>> struct physdev_map_sbdf {
>>>>>>        int domain_id;
>>>>>>        sbdf_t    sbdf;
>>>>>>        sbdf_t    gsbdf;
>>>>>> };
>>>>>>
>>>>>> Each domain has a pdev list, which contains the list of all pci devices.
>>>>>> The
>>>>>> pdev structure already has a sbdf information. The arch_pci_dev is
>>>>>> updated to
>>>>>> contain the gsbdf information. (gs- guest segment id)
>>>>>>
>>>>>> Whenever there is trap from guest or an interrupt has to be injected,
>>>>>> the pdev
>>>>>> list is iterated to find the gsbdf.
>>>>> Can you give more background for this section? i.e:
>>>>> 	- Why do you need this?
>>>>> 	- How xen will translate the gbdf to a vDeviceID?
>>>> In the context of the hypercall processing.
>>>>> 	- Who will call this hypercall?
>>>>> 	- Why not setting the gsbdf when the device is assigned?
>>>> Can the maintainer of the pciback suggest an alternate.
>>> That's not me, but I don't think this belongs here, I think it can be
>>> done from the toolstack. If you think not then please explain what
>>> information the toolstack doesn't have in its possession which prevents
>>> this mapping from being done there.
>> The toolstack does not have the guest sbdf information. I could only
>> find it in xenpciback.
> Are you sure? The sbdf relates to the physical device, correct? If so
> then surely the toolstack knows it -- it's written in the config file
> and is the primary parameter to all of the related libxl passthrough
> APIs. The toolstack wouldn't be able to do anything about passing
> through a given device without knowing which device it should be passing
> through.
>
> Perhaps this info needs plumbing through to some new bit of the
> toolstack, but it is surely available somewhere.
>
> If you meant the virtual SBDF then that is in libxl_device_pci.vdevfn.
I added prints in libxl__device_pci_add. vdevfn is always 0, so this may 
not be the right variable to use.
Can you please recheck?

Also, the vdev-X entry in xenstore appears to be created from the pciback 
code and not from xl.
See the function xen_pcibk_publish_pci_dev.

So I have to send the hypercall from pciback only.
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-29  9:37           ` Manish Jaggi
@ 2015-07-30  9:54             ` Ian Campbell
  2015-07-30 12:51               ` Manish Jaggi
  0 siblings, 1 reply; 51+ messages in thread
From: Ian Campbell @ 2015-07-30  9:54 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Wed, 2015-07-29 at 15:07 +0530, Manish Jaggi wrote:
> 
> On Monday 06 July 2015 03:50 PM, Ian Campbell wrote:
> > On Mon, 2015-07-06 at 15:36 +0530, Manish Jaggi wrote:
> > > On Monday 06 July 2015 02:41 PM, Ian Campbell wrote:
> > > > On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
> > > > > On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
> > > > > > Hi Manish,
> > > > > > 
> > > > > > On 28/06/15 19:38, Manish Jaggi wrote:
> > > > > > > 4.1 Holes in guest memory space
> > > > > > > ----------------------------
> > > > > > > Holes are added in the guest memory space for mapping pci 
> > > > > > > device's BAR
> > > > > > > regions.
> > > > > > > These are defined in arch-arm.h
> > > > > > > 
> > > > > > > /* For 32bit */
> > > > > > > GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
> > > > > > >     
> > > > > > > /* For 64bit */
> > > > > > > GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
> > > > > > The memory layout for 32bit and 64bit are exactly the same. Why 
> > > > > > do you
> > > > > > need to differ here?
> > > > > I think Ian has already replied. I will change the name of macro
> > > > > > > 4.2 New entries in xenstore for device BARs
> > > > > > > --------------------------------------------
> > > > > > > toolkit also updates the xenstore information for the device
> > > > > > > (virtualbar:physical bar).
> > > > > > > This information is read by xenpciback and returned to the 
> > > > > > > pcifront
> > > > > > > driver configuration
> > > > > > > space accesses.
> > > > > > Can you details what do you plan to put in xenstore and how?
> > > > > It is implementation . But I plan to put under domU / device / 
> > > > > heirarchy
> > > > Actually, xenstore is an API of sorts which needs to be maintained 
> > > > going
> > > > forward (since front and backend can evolve separately, so it does 
> > > > need
> > > > some level of design and documentation.
> > > > 
> > > > > > What about the expansion ROM?
> > > > > Do you want to put some restriction on not using expansion ROM as 
> > > > > a
> > > > > passthrough device.
> > > > "expansion ROM as a passthrough device" doesn't make sense to me,
> > > > passthrough devices may _have_ an expansion ROM.
> > > > 
> > > > The expansion ROM is just another BAR. I don't know how 
> > > > pcifront/back
> > > > deal with those today on PV x86, but I see no reason for ARM to 
> > > > deviate.
> > > > 
> > > > 
> > > > > > > 4.3 Hypercall for bdf mapping notification to xen
> > > > > > > -----------------------------------------------
> > > > > > > #define PHYSDEVOP_map_sbdf              43
> > > > > > > typedef struct {
> > > > > > >        u32 s;
> > > > > > >        u8 b;
> > > > > > >        u8 df;
> > > > > > >        u16 res;
> > > > > > > } sbdf_t;
> > > > > > > struct physdev_map_sbdf {
> > > > > > >        int domain_id;
> > > > > > >        sbdf_t    sbdf;
> > > > > > >        sbdf_t    gsbdf;
> > > > > > > };
> > > > > > > 
> > > > > > > Each domain has a pdev list, which contains the list of all 
> > > > > > > pci devices.
> > > > > > > The
> > > > > > > pdev structure already has a sbdf information. The 
> > > > > > > arch_pci_dev is
> > > > > > > updated to
> > > > > > > contain the gsbdf information. (gs- guest segment id)
> > > > > > > 
> > > > > > > Whenever there is trap from guest or an interrupt has to be 
> > > > > > > injected,
> > > > > > > the pdev
> > > > > > > list is iterated to find the gsbdf.
> > > > > > Can you give more background for this section? i.e:
> > > > > > 	- Why do you need this?
> > > > > > 	- How xen will translate the gbdf to a vDeviceID?
> > > > > In the context of the hypercall processing.
> > > > > > 	- Who will call this hypercall?
> > > > > > 	- Why not setting the gsbdf when the device is 
> > > > > > assigned?
> > > > > Can the maintainer of the pciback suggest an alternate.
> > > > That's not me, but I don't think this belongs here, I think it can 
> > > > be
> > > > done from the toolstack. If you think not then please explain what
> > > > information the toolstack doesn't have in its possession which 
> > > > prevents
> > > > this mapping from being done there.
> > > The toolstack does not have the guest sbdf information. I could only
> > > find it in xenpciback.
> > Are you sure? The sbdf relates to the physical device, correct? If so
> > then surely the toolstack knows it -- it's written in the config file
> > and is the primary parameter to all of the related libxl passthrough
> > APIs. The toolstack wouldn't be able to do anything about passing
> > through a given device without knowing which device it should be 
> > passing
> > through.
> > 
> > Perhaps this info needs plumbing through to some new bit of the
> > toolstack, but it is surely available somewhere.
> > 
> > If you meant the virtual SBDF then that is in libxl_device_pci.vdevfn.
> I added prints in libxl__device_pci_add. vdevfn is always 0 so this may 
> not be the right variable to use.
> Can you please recheck.
> 
> Also the vdev-X entry in xenstore appears to be created from pciback 
> code and not from xl.
> Check function xen_pcibk_publish_pci_dev.
> 
> So I have to send a hypercall from pciback only.

I don't think that necessarily follows.

You could have the tools read the vdev-X node back on plug.

Or you could change things such that vdevfn is always chosen by the
toolstack for ARM, not optionally like it is on x86.

Please evaluate the options.

Ian.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-30  9:54             ` Ian Campbell
@ 2015-07-30 12:51               ` Manish Jaggi
  2015-07-30 14:39                 ` Ian Campbell
  0 siblings, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-30 12:51 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao



On Thursday 30 July 2015 03:24 PM, Ian Campbell wrote:
> On Wed, 2015-07-29 at 15:07 +0530, Manish Jaggi wrote:
>> On Monday 06 July 2015 03:50 PM, Ian Campbell wrote:
>>> On Mon, 2015-07-06 at 15:36 +0530, Manish Jaggi wrote:
>>>> On Monday 06 July 2015 02:41 PM, Ian Campbell wrote:
>>>>> On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
>>>>>> On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
>>>>>>> Hi Manish,
>>>>>>>
>>>>>>> On 28/06/15 19:38, Manish Jaggi wrote:
>>>>>>>> 4.1 Holes in guest memory space
>>>>>>>> ----------------------------
>>>>>>>> Holes are added in the guest memory space for mapping pci
>>>>>>>> device's BAR
>>>>>>>> regions.
>>>>>>>> These are defined in arch-arm.h
>>>>>>>>
>>>>>>>> /* For 32bit */
>>>>>>>> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>>>>>>>>      
>>>>>>>> /* For 64bit */
>>>>>>>> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
>>>>>>> The memory layout for 32bit and 64bit are exactly the same. Why
>>>>>>> do you
>>>>>>> need to differ here?
>>>>>> I think Ian has already replied. I will change the name of macro
>>>>>>>> 4.2 New entries in xenstore for device BARs
>>>>>>>> --------------------------------------------
>>>>>>>> toolkit also updates the xenstore information for the device
>>>>>>>> (virtualbar:physical bar).
>>>>>>>> This information is read by xenpciback and returned to the
>>>>>>>> pcifront
>>>>>>>> driver configuration
>>>>>>>> space accesses.
>>>>>>> Can you details what do you plan to put in xenstore and how?
>>>>>> It is implementation . But I plan to put under domU / device /
>>>>>> heirarchy
>>>>> Actually, xenstore is an API of sorts which needs to be maintained
>>>>> going
>>>>> forward (since front and backend can evolve separately, so it does
>>>>> need
>>>>> some level of design and documentation.
>>>>>
>>>>>>> What about the expansion ROM?
>>>>>> Do you want to put some restriction on not using expansion ROM as
>>>>>> a
>>>>>> passthrough device.
>>>>> "expansion ROM as a passthrough device" doesn't make sense to me,
>>>>> passthrough devices may _have_ an expansion ROM.
>>>>>
>>>>> The expansion ROM is just another BAR. I don't know how
>>>>> pcifront/back
>>>>> deal with those today on PV x86, but I see no reason for ARM to
>>>>> deviate.
>>>>>
>>>>>
>>>>>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>>>>>> -----------------------------------------------
>>>>>>>> #define PHYSDEVOP_map_sbdf              43
>>>>>>>> typedef struct {
>>>>>>>>         u32 s;
>>>>>>>>         u8 b;
>>>>>>>>         u8 df;
>>>>>>>>         u16 res;
>>>>>>>> } sbdf_t;
>>>>>>>> struct physdev_map_sbdf {
>>>>>>>>         int domain_id;
>>>>>>>>         sbdf_t    sbdf;
>>>>>>>>         sbdf_t    gsbdf;
>>>>>>>> };
>>>>>>>>
>>>>>>>> Each domain has a pdev list, which contains the list of all
>>>>>>>> pci devices.
>>>>>>>> The
>>>>>>>> pdev structure already has a sbdf information. The
>>>>>>>> arch_pci_dev is
>>>>>>>> updated to
>>>>>>>> contain the gsbdf information. (gs- guest segment id)
>>>>>>>>
>>>>>>>> Whenever there is trap from guest or an interrupt has to be
>>>>>>>> injected,
>>>>>>>> the pdev
>>>>>>>> list is iterated to find the gsbdf.
>>>>>>> Can you give more background for this section? i.e:
>>>>>>> 	- Why do you need this?
>>>>>>> 	- How xen will translate the gbdf to a vDeviceID?
>>>>>> In the context of the hypercall processing.
>>>>>>> 	- Who will call this hypercall?
>>>>>>> 	- Why not setting the gsbdf when the device is
>>>>>>> assigned?
>>>>>> Can the maintainer of the pciback suggest an alternate.
>>>>> That's not me, but I don't think this belongs here, I think it can
>>>>> be
>>>>> done from the toolstack. If you think not then please explain what
>>>>> information the toolstack doesn't have in its possession which
>>>>> prevents
>>>>> this mapping from being done there.
>>>> The toolstack does not have the guest sbdf information. I could only
>>>> find it in xenpciback.
>>> Are you sure? The sbdf relates to the physical device, correct? If so
>>> then surely the toolstack knows it -- it's written in the config file
>>> and is the primary parameter to all of the related libxl passthrough
>>> APIs. The toolstack wouldn't be able to do anything about passing
>>> through a given device without knowing which device it should be
>>> passing
>>> through.
>>>
>>> Perhaps this info needs plumbing through to some new bit of the
>>> toolstack, but it is surely available somewhere.
>>>
>>> If you meant the virtual SBDF then that is in libxl_device_pci.vdevfn.
>> I added prints in libxl__device_pci_add. vdevfn is always 0 so this may
>> not be the right variable to use.
>> Can you please recheck.
>>
>> Also the vdev-X entry in xenstore appears to be created from pciback
>> code and not from xl.
>> Check function xen_pcibk_publish_pci_dev.
>>
>> So I have to send a hypercall from pciback only.
> I don't think the necessarily follows.
>
> You could have the tools read the vdev-X node back on plug.
I have been trying to trace the caller of libxl__device_pci_add 
during pci device assignment from the cfg file (cold boot).
It should be called from the xl create flow. Is it called from C code or 
Python code?

libxl__device_pci_add calls xc_assign_device


Secondly, the vdev-X entry is created asynchronously by dom0, watching on 
an event. So how could the tools read it back and call assign device again?

static void xen_pcibk_be_watch(struct xenbus_watch *watch,
                               const char **vec, unsigned int len)
{
        ...
        switch (xenbus_read_driver_state(pdev->xdev->nodename)) {
        case XenbusStateInitWait:
                xen_pcibk_setup_backend(pdev);
                break;
        }
}
>
> Or you could change things such that vdevfn is always chosen by the
> toolstack for ARM, not optionally like it is on x86.
>
> Please evaluate the options.
>
> Ian.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-30 12:51               ` Manish Jaggi
@ 2015-07-30 14:39                 ` Ian Campbell
  2015-07-31  7:46                   ` Manish Jaggi
  0 siblings, 1 reply; 51+ messages in thread
From: Ian Campbell @ 2015-07-30 14:39 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Thu, 2015-07-30 at 18:21 +0530, Manish Jaggi wrote:
> 
> On Thursday 30 July 2015 03:24 PM, Ian Campbell wrote:
> > On Wed, 2015-07-29 at 15:07 +0530, Manish Jaggi wrote:
> > > On Monday 06 July 2015 03:50 PM, Ian Campbell wrote:
> > > > On Mon, 2015-07-06 at 15:36 +0530, Manish Jaggi wrote:
> > > > > On Monday 06 July 2015 02:41 PM, Ian Campbell wrote:
> > > > > > On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
> > > > > > > On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
> > > > > > > > Hi Manish,
> > > > > > > > 
> > > > > > > > On 28/06/15 19:38, Manish Jaggi wrote:
> > > > > > > > > 4.1 Holes in guest memory space
> > > > > > > > > ----------------------------
> > > > > > > > > Holes are added in the guest memory space for mapping pci
> > > > > > > > > device's BAR
> > > > > > > > > regions.
> > > > > > > > > These are defined in arch-arm.h
> > > > > > > > > 
> > > > > > > > > /* For 32bit */
> > > > > > > > > GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
> > > > > > > > >      
> > > > > > > > > /* For 64bit */
> > > > > > > > > GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
> > > > > > > > The memory layout for 32bit and 64bit are exactly the same. 
> > > > > > > > Why
> > > > > > > > do you
> > > > > > > > need to differ here?
> > > > > > > I think Ian has already replied. I will change the name of 
> > > > > > > macro
> > > > > > > > > 4.2 New entries in xenstore for device BARs
> > > > > > > > > --------------------------------------------
> > > > > > > > > toolkit also updates the xenstore information for the 
> > > > > > > > > device
> > > > > > > > > (virtualbar:physical bar).
> > > > > > > > > This information is read by xenpciback and returned to 
> > > > > > > > > the
> > > > > > > > > pcifront
> > > > > > > > > driver configuration
> > > > > > > > > space accesses.
> > > > > > > > Can you details what do you plan to put in xenstore and 
> > > > > > > > how?
> > > > > > > It is implementation . But I plan to put under domU / device 
> > > > > > > /
> > > > > > > heirarchy
> > > > > > Actually, xenstore is an API of sorts which needs to be 
> > > > > > maintained
> > > > > > going
> > > > > > forward (since front and backend can evolve separately, so it 
> > > > > > does
> > > > > > need
> > > > > > some level of design and documentation.
> > > > > > 
> > > > > > > > What about the expansion ROM?
> > > > > > > Do you want to put some restriction on not using expansion 
> > > > > > > ROM as
> > > > > > > a
> > > > > > > passthrough device.
> > > > > > "expansion ROM as a passthrough device" doesn't make sense to 
> > > > > > me,
> > > > > > passthrough devices may _have_ an expansion ROM.
> > > > > > 
> > > > > > The expansion ROM is just another BAR. I don't know how
> > > > > > pcifront/back
> > > > > > deal with those today on PV x86, but I see no reason for ARM to
> > > > > > deviate.
> > > > > > 
> > > > > > 
> > > > > > > > > 4.3 Hypercall for bdf mapping notification to xen
> > > > > > > > > -----------------------------------------------
> > > > > > > > > #define PHYSDEVOP_map_sbdf              43
> > > > > > > > > typedef struct {
> > > > > > > > >         u32 s;
> > > > > > > > >         u8 b;
> > > > > > > > >         u8 df;
> > > > > > > > >         u16 res;
> > > > > > > > > } sbdf_t;
> > > > > > > > > struct physdev_map_sbdf {
> > > > > > > > >         int domain_id;
> > > > > > > > >         sbdf_t    sbdf;
> > > > > > > > >         sbdf_t    gsbdf;
> > > > > > > > > };
> > > > > > > > > 
> > > > > > > > > Each domain has a pdev list, which contains the list of 
> > > > > > > > > all
> > > > > > > > > pci devices.
> > > > > > > > > The
> > > > > > > > > pdev structure already has a sbdf information. The
> > > > > > > > > arch_pci_dev is
> > > > > > > > > updated to
> > > > > > > > > contain the gsbdf information. (gs- guest segment id)
> > > > > > > > > 
> > > > > > > > > Whenever there is trap from guest or an interrupt has to 
> > > > > > > > > be
> > > > > > > > > injected,
> > > > > > > > > the pdev
> > > > > > > > > list is iterated to find the gsbdf.
> > > > > > > > Can you give more background for this section? i.e:
> > > > > > > > 	- Why do you need this?
> > > > > > > > 	- How xen will translate the gbdf to a vDeviceID?
> > > > > > > In the context of the hypercall processing.
> > > > > > > > 	- Who will call this hypercall?
> > > > > > > > 	- Why not setting the gsbdf when the device is
> > > > > > > > assigned?
> > > > > > > Can the maintainer of the pciback suggest an alternate.
> > > > > > That's not me, but I don't think this belongs here, I think it 
> > > > > > can
> > > > > > be
> > > > > > done from the toolstack. If you think not then please explain 
> > > > > > what
> > > > > > information the toolstack doesn't have in its possession which
> > > > > > prevents
> > > > > > this mapping from being done there.
> > > > > The toolstack does not have the guest sbdf information. I could 
> > > > > only
> > > > > find it in xenpciback.
> > > > Are you sure? The sbdf relates to the physical device, correct? If 
> > > > so
> > > > then surely the toolstack knows it -- it's written in the config 
> > > > file
> > > > and is the primary parameter to all of the related libxl 
> > > > passthrough
> > > > APIs. The toolstack wouldn't be able to do anything about passing
> > > > through a given device without knowing which device it should be
> > > > passing
> > > > through.
> > > > 
> > > > Perhaps this info needs plumbing through to some new bit of the
> > > > toolstack, but it is surely available somewhere.
> > > > 
> > > > If you meant the virtual SBDF then that is in 
> > > > libxl_device_pci.vdevfn.
> > > I added prints in libxl__device_pci_add. vdevfn is always 0 so this 
> > > may
> > > not be the right variable to use.
> > > Can you please recheck.
> > > 
> > > Also the vdev-X entry in xenstore appears to be created from pciback
> > > code and not from xl.
> > > Check function xen_pcibk_publish_pci_dev.
> > > 
> > > So I have to send a hypercall from pciback only.
> > I don't think the necessarily follows.
> > 
> > You could have the tools read the vdev-X node back on plug.
> I have been trying to trace the caller of libxl__device_pci_add 
> during pci device assignment from the cfg file (cold boot).
> It should be called from the xl create flow. Is it called from C code or 
> Python code?

There is no Python code involved here that you need to worry about. You
can completely ignore tools/python.

In the first instance you need only worry about tools/libxl/libxl* (the
toolstack library). The xl commands are in tools/libxl/xl*; they call
libxl_domain_create_new with a libxl_domain_config struct which contains
the array of pci devices to cold plug.

Hotplug starts at libxl_device_pci_add.

Most of the code for the PCI specific bits are in tools/libxl/libxl_pci.c.

> Secondly, the vdev-X entry is created asynchronously, by dom0 watching
> on an event.
> So how could the tools read it back and call assign device again?

Perhaps by using a xenstore watch on that node to wait for the assignment
from pciback to occur.

> > Or you could change things such that vdevfn is always chosen by the
> > toolstack for ARM, not optionally like it is on x86.

For this one, the struct libxl_device_pci has a field "vdevfn", which is
supposed to allow the user to specify a specific vdevfn. I'm not sure how
that happens or fits together but libxl could undertake to set that on ARM
in the case where the user hasn't done so, effectively taking control of
the PCI bus assignment.

Ian.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-30 14:39                 ` Ian Campbell
@ 2015-07-31  7:46                   ` Manish Jaggi
  2015-07-31  8:05                     ` Ian Campbell
  0 siblings, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-31  7:46 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao



On Thursday 30 July 2015 08:09 PM, Ian Campbell wrote:
> On Thu, 2015-07-30 at 18:21 +0530, Manish Jaggi wrote:
>> On Thursday 30 July 2015 03:24 PM, Ian Campbell wrote:
>>> On Wed, 2015-07-29 at 15:07 +0530, Manish Jaggi wrote:
>>>> On Monday 06 July 2015 03:50 PM, Ian Campbell wrote:
>>>>> On Mon, 2015-07-06 at 15:36 +0530, Manish Jaggi wrote:
>>>>>> On Monday 06 July 2015 02:41 PM, Ian Campbell wrote:
>>>>>>> On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
>>>>>>>> On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
>>>>>>>>> Hi Manish,
>>>>>>>>>
>>>>>>>>> On 28/06/15 19:38, Manish Jaggi wrote:
>>>>>>>>>> 4.1 Holes in guest memory space
>>>>>>>>>> ----------------------------
>>>>>>>>>> Holes are added in the guest memory space for mapping pci
>>>>>>>>>> device's BAR
>>>>>>>>>> regions.
>>>>>>>>>> These are defined in arch-arm.h
>>>>>>>>>>
>>>>>>>>>> /* For 32bit */
>>>>>>>>>> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>>>>>>>>>>       
>>>>>>>>>> /* For 64bit */
>>>>>>>>>> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
>>>>>>>>> The memory layout for 32bit and 64bit are exactly the same.
>>>>>>>>> Why
>>>>>>>>> do you
>>>>>>>>> need to differ here?
>>>>>>>> I think Ian has already replied. I will change the name of
>>>>>>>> macro
>>>>>>>>>> 4.2 New entries in xenstore for device BARs
>>>>>>>>>> --------------------------------------------
>>>>>>>>>> toolkit also updates the xenstore information for the
>>>>>>>>>> device
>>>>>>>>>> (virtualbar:physical bar).
>>>>>>>>>> This information is read by xenpciback and returned to
>>>>>>>>>> the
>>>>>>>>>> pcifront
>>>>>>>>>> driver configuration
>>>>>>>>>> space accesses.
>>>>>>>>> Can you details what do you plan to put in xenstore and
>>>>>>>>> how?
>>>>>>>> It is implementation-specific. But I plan to put it under the
>>>>>>>> domU / device hierarchy
>>>>>>> Actually, xenstore is an API of sorts which needs to be
>>>>>>> maintained
>>>>>>> going
>>>>>>> forward (since front and backend can evolve separately, so it
>>>>>>> does
>>>>>>> need
>>>>>>> some level of design and documentation).
>>>>>>>
>>>>>>>>> What about the expansion ROM?
>>>>>>>> Do you want to put some restriction on not using expansion
>>>>>>>> ROM as
>>>>>>>> a
>>>>>>>> passthrough device.
>>>>>>> "expansion ROM as a passthrough device" doesn't make sense to
>>>>>>> me,
>>>>>>> passthrough devices may _have_ an expansion ROM.
>>>>>>>
>>>>>>> The expansion ROM is just another BAR. I don't know how
>>>>>>> pcifront/back
>>>>>>> deal with those today on PV x86, but I see no reason for ARM to
>>>>>>> deviate.
>>>>>>>
>>>>>>>
>>>>>>>>>> 4.3 Hypercall for bdf mapping notification to xen
>>>>>>>>>> -----------------------------------------------
>>>>>>>>>> #define PHYSDEVOP_map_sbdf              43
>>>>>>>>>> typedef struct {
>>>>>>>>>>          u32 s;
>>>>>>>>>>          u8 b;
>>>>>>>>>>          u8 df;
>>>>>>>>>>          u16 res;
>>>>>>>>>> } sbdf_t;
>>>>>>>>>> struct physdev_map_sbdf {
>>>>>>>>>>          int domain_id;
>>>>>>>>>>          sbdf_t    sbdf;
>>>>>>>>>>          sbdf_t    gsbdf;
>>>>>>>>>> };
>>>>>>>>>>
>>>>>>>>>> Each domain has a pdev list, which contains the list of
>>>>>>>>>> all
>>>>>>>>>> pci devices.
>>>>>>>>>> The
>>>>>>>>>> pdev structure already has a sbdf information. The
>>>>>>>>>> arch_pci_dev is
>>>>>>>>>> updated to
>>>>>>>>>> contain the gsbdf information. (gs- guest segment id)
>>>>>>>>>>
>>>>>>>>>> Whenever there is trap from guest or an interrupt has to
>>>>>>>>>> be
>>>>>>>>>> injected,
>>>>>>>>>> the pdev
>>>>>>>>>> list is iterated to find the gsbdf.
>>>>>>>>> Can you give more background for this section? i.e:
>>>>>>>>> 	- Why do you need this?
>>>>>>>>> 	- How xen will translate the gbdf to a vDeviceID?
>>>>>>>> In the context of the hypercall processing.
>>>>>>>>> 	- Who will call this hypercall?
>>>>>>>>> 	- Why not setting the gsbdf when the device is
>>>>>>>>> assigned?
>>>>>>>> Can the maintainer of the pciback suggest an alternate.
>>>>>>> That's not me, but I don't think this belongs here, I think it
>>>>>>> can
>>>>>>> be
>>>>>>> done from the toolstack. If you think not then please explain
>>>>>>> what
>>>>>>> information the toolstack doesn't have in its possession which
>>>>>>> prevents
>>>>>>> this mapping from being done there.
>>>>>> The toolstack does not have the guest sbdf information. I could
>>>>>> only
>>>>>> find it in xenpciback.
>>>>> Are you sure? The sbdf relates to the physical device, correct? If
>>>>> so
>>>>> then surely the toolstack knows it -- it's written in the config
>>>>> file
>>>>> and is the primary parameter to all of the related libxl
>>>>> passthrough
>>>>> APIs. The toolstack wouldn't be able to do anything about passing
>>>>> through a given device without knowing which device it should be
>>>>> passing
>>>>> through.
>>>>>
>>>>> Perhaps this info needs plumbing through to some new bit of the
>>>>> toolstack, but it is surely available somewhere.
>>>>>
>>>>> If you meant the virtual SBDF then that is in
>>>>> libxl_device_pci.vdevfn.
>>>> I added prints in libxl__device_pci_add. vdevfn is always 0 so this
>>>> may
>>>> not be the right variable to use.
>>>> Can you please recheck.
>>>>
>>>> Also the vdev-X entry in xenstore appears to be created from pciback
>>>> code and not from xl.
>>>> Check function xen_pcibk_publish_pci_dev.
>>>>
>>>> So I have to send a hypercall from pciback only.
>>> I don't think the necessarily follows.
>>>
>>> You could have the tools read the vdev-X node back on plug.
>> I have been trying to get the flow of caller of libxl__device_pci_add
>> during pci device assignemnt from cfg file(cold boot).
>> It should be called form xl create flow. Is it called from C code or
>> Python code.
> There is no Python code involved here which you need to worry about. You
> can completely ignore tools/python.
>
> In the first instance you need only to worry about tools/libxl/libxl* (the
> toolstack library). The xl commands are in tools/libxl/xl* and call
> libxl_domain_create_new with a libxl_domain_config struct which contains
> the array of pci devices to cold plug.
>
> Hotplug starts at libxl_device_pci_add.
>
> Most of the code for the PCI specific bits are in tools/libxl/libxl_pci.c.
>
>> Secondly, the vdev-X entry is created async by dom0 watching on event.
>> So how the tools could read back and call assign device again.
> Perhaps by using a xenstore watch on that node to wait for the assignment
> from pciback to occur.
As per the flow in the do_pci_add function, assign_device is called
first, and the xenstore entry is created only on success.
Are you suggesting we change that sequence?
We can discuss this more on the #xenarm IRC channel.
>>> Or you could change things such that vdevfn is always chosen by the
>>> toolstack for ARM, not optionally like it is on x86.
> For this one, the struct libxl_device_pci has a field "vdevfn", which is
> supposed to allow the user to specify a specific vdevfn. I'm not sure how
> that happens or fits together but libxl could undertake to set that on ARM
> in the case where the user hasn't done so, effectively taking control of
> the PCI bus assignment.
>
> Ian.
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31  7:46                   ` Manish Jaggi
@ 2015-07-31  8:05                     ` Ian Campbell
  2015-07-31 10:32                       ` Ian Campbell
  2015-07-31 11:07                       ` Manish Jaggi
  0 siblings, 2 replies; 51+ messages in thread
From: Ian Campbell @ 2015-07-31  8:05 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Fri, 2015-07-31 at 13:16 +0530, Manish Jaggi wrote:
> > > Secondly, the vdev-X entry is created async by dom0 watching on 
> > > event.
> > > So how the tools could read back and call assign device again.
> > Perhaps by using a xenstore watch on that node to wait for the 
> > assignment
> > from pciback to occur.
> As per the flow in the do_pci_add function, assign_device is called
> first, and the xenstore entry is created only on success.
> Are you suggesting we change that sequence?

Perhaps that is what it would take, yes, or maybe some other refactoring
(e.g. splitting assign_device into two stages) might be the answer.

My current preference is for the suggestion below which is to let the
toolstack pick the vdevfn and have pciback honour it.

> We can discuss this more on the #xenarm IRC channel.

Sorry I missed your ping yesterday, I had already gone home.

> > > > Or you could change things such that vdevfn is always chosen by the
> > > > toolstack for ARM, not optionally like it is on x86.
> > For this one, the struct libxl_device_pci has a field "vdevfn", which 
> > is
> > supposed to allow the user to specify a specific vdevfn. I'm not sure 
> > how
> > that happens or fits together but libxl could undertake to set that on 
> > ARM
> > in the case where the user hasn't done so, effectively taking control 
> > of
> > the PCI bus assignment.
> > 
> > Ian.
> > 
> 


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31  8:05                     ` Ian Campbell
@ 2015-07-31 10:32                       ` Ian Campbell
  2015-07-31 14:24                         ` Konrad Rzeszutek Wilk
  2015-07-31 11:07                       ` Manish Jaggi
  1 sibling, 1 reply; 51+ messages in thread
From: Ian Campbell @ 2015-07-31 10:32 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Fri, 2015-07-31 at 09:05 +0100, Ian Campbell wrote:
> On Fri, 2015-07-31 at 13:16 +0530, Manish Jaggi wrote:
> > > > Secondly, the vdev-X entry is created async by dom0 watching on 
> > > > event.

Stefano points out that there are, confusingly, two nodes in xenstore
relating to the virtual-SBDF.

vdev-X is written by pciback and is read by pcifront, it is effectively
there to communicate the vSBDF to the guest.

vdevfn-X is written by the toolstack (libxl_create_pci_backend_device) to
tell the backend (pciback, or qemu in x86/HVM configurations using old
qemu) the vSBDF to be associated with the device.

It looks like vdevfn-X is not actually currently supported by pciback in
Linux (seemingly only the x86/HVM qemu backend consumes it). I think we
should add that support to pciback for consistency with the qemu based
backend used by x86/HVM guests.
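For reference, the devfn part of these keys uses the standard PCI packing
(slot in bits 7:3, function in bits 2:0), i.e. what Linux's
PCI_DEVFN/PCI_SLOT/PCI_FUNC macros compute. A small sketch of that encoding
(helper names here are hypothetical):

```c
#include <stdint.h>

/* Pack slot/function into a devfn, as Linux's PCI_DEVFN does:
 * slot in bits 7:3, function in bits 2:0. */
uint8_t vpci_devfn(uint8_t slot, uint8_t fn)
{
    return (uint8_t)(((slot & 0x1f) << 3) | (fn & 0x07));
}

/* Extract the slot (PCI_SLOT equivalent). */
uint8_t vpci_slot(uint8_t devfn)
{
    return (uint8_t)(devfn >> 3);
}

/* Extract the function (PCI_FUNC equivalent). */
uint8_t vpci_func(uint8_t devfn)
{
    return (uint8_t)(devfn & 0x07);
}
```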

The names are certainly a bit confusing. We could add a new key with a
better name to communicate the vSBDF from toolstack->backend, but it seems
to me that would just add even more confusion, so I recommend we don't do
that.

Once pciback supports vdevfn then libxl will be able to choose the PCI bus
layout for ARM guests in the case where the user has not requested an
explicit vdevfn for the device.

Does that make sense?

Ian.


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31  8:05                     ` Ian Campbell
  2015-07-31 10:32                       ` Ian Campbell
@ 2015-07-31 11:07                       ` Manish Jaggi
  2015-07-31 11:19                         ` Ian Campbell
  1 sibling, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-31 11:07 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao



On Friday 31 July 2015 01:35 PM, Ian Campbell wrote:
> On Fri, 2015-07-31 at 13:16 +0530, Manish Jaggi wrote:
>>>> Secondly, the vdev-X entry is created async by dom0 watching on
>>>> event.
>>>> So how the tools could read back and call assign device again.
>>> Perhaps by using a xenstore watch on that node to wait for the
>>> assignment
>>> from pciback to occur.
>> As per the flow in the do_pci_add function, assign_device is called
>> first and based on the success xenstore entry is created.
>> Are you suggesting to change the sequence.
> Perhaps that is what it would take, yes, or maybe some other refactoring
> (e.g. splitting assign_device into two stages) might be the answer.
The hypercall from xenpciback (what I implemented) effectively does
the device assignment in two stages.
I think the point of contention is that the second stage should be done
from the toolstack.

I think calling xc_assign_device after the xenstore write, from the
watch callback, is the only option.
One question is how to split the code for ARM and x86, as this is
common code.
Would an #ifdef CONFIG_ARM64 be OK with the maintainers?
>
> My current preference is for the suggestion below which is to let the
> toolstack pick the vdevfn and have pciback honour it.
That would duplicate the devfn-generation code from
__xen_pcibk_add_pci_dev into the toolstack.

>> We can discuss this more on #xenarm irc
> Sorry I missed your ping yesterday, I had already gone home.
>
>>>>> Or you could change things such that vdevfn is always chosen by the
>>>>> toolstack for ARM, not optionally like it is on x86.
>>> For this one, the struct libxl_device_pci has a field "vdevfn", which
>>> is
>>> supposed to allow the user to specify a specific vdevfn. I'm not sure
>>> how
>>> that happens or fits together but libxl could undertake to set that on
>>> ARM
>>> in the case where the user hasn't done so, effectively taking control
>>> of
>>> the PCI bus assignment.
>>>
>>> Ian.
>>>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 11:07                       ` Manish Jaggi
@ 2015-07-31 11:19                         ` Ian Campbell
  2015-07-31 12:50                           ` Manish Jaggi
  0 siblings, 1 reply; 51+ messages in thread
From: Ian Campbell @ 2015-07-31 11:19 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Fri, 2015-07-31 at 16:37 +0530, Manish Jaggi wrote:
> 
> On Friday 31 July 2015 01:35 PM, Ian Campbell wrote:
> > On Fri, 2015-07-31 at 13:16 +0530, Manish Jaggi wrote:
> > > > > Secondly, the vdev-X entry is created async by dom0 watching on
> > > > > event.
> > > > > So how the tools could read back and call assign device again.
> > > > Perhaps by using a xenstore watch on that node to wait for the
> > > > assignment
> > > > from pciback to occur.
> > > As per the flow in the do_pci_add function, assign_device is called
> > > first and based on the success xenstore entry is created.
> > > Are you suggesting to change the sequence.
> > Perhaps that is what it would take, yes, or maybe some other 
> > refactoring
> > (e.g. splitting assign_device into two stages) might be the answer.
> The hypercall from xenpciback (what I implemented) effectively does
> the device assignment in two stages.
> I think the point of contention is that the second stage should be done
> from the toolstack.
> 
> I think calling xc_assign_device after the xenstore write, from the
> watch callback, is the only option.

Only if you ignore the other option I proposed.

> One question is how to split the code for ARM and x86, as this is
> common code.
> Would an #ifdef CONFIG_ARM64 be OK with the maintainers?

No. arch hooks in libxl_$ARCH.c (with nop implementations where necessary)
would be the way to approach this. However I still am not convinced this is
the approach we should be taking.

> > My current preference is for the suggestion below which is to let the
> > toolstack pick the vdevfn and have pciback honour it.
> That would duplicate the devfn-generation code from
> __xen_pcibk_add_pci_dev into the toolstack.

IMHO the toolstack is the correct place for this code, at least for ARM
guests. The toolstack is, in general, responsible for all aspects of the
guest layout. I don't think delegating the PCI bus parts of that to the
dom0 kernel makes sense.

I'd not be surprised if the same turns out to be true for x86/PVH guests
too.

Ian.


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 11:19                         ` Ian Campbell
@ 2015-07-31 12:50                           ` Manish Jaggi
  2015-07-31 12:57                             ` Ian Campbell
  2015-07-31 12:59                             ` Julien Grall
  0 siblings, 2 replies; 51+ messages in thread
From: Manish Jaggi @ 2015-07-31 12:50 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao



On 31/07/15 4:49 pm, Ian Campbell wrote:
> On Fri, 2015-07-31 at 16:37 +0530, Manish Jaggi wrote:
>> On Friday 31 July 2015 01:35 PM, Ian Campbell wrote:
>>> On Fri, 2015-07-31 at 13:16 +0530, Manish Jaggi wrote:
>>>>>> Secondly, the vdev-X entry is created async by dom0 watching on
>>>>>> event.
>>>>>> So how the tools could read back and call assign device again.
>>>>> Perhaps by using a xenstore watch on that node to wait for the
>>>>> assignment
>>>>> from pciback to occur.
>>>> As per the flow in the do_pci_add function, assign_device is called
>>>> first and based on the success xenstore entry is created.
>>>> Are you suggesting to change the sequence.
>>> Perhaps that is what it would take, yes, or maybe some other
>>> refactoring
>>> (e.g. splitting assign_device into two stages) might be the answer.
>> The hypercall from xenpciback (what I implemented) is actually making
>> the assign device in 2 stages.
>> I think the point of contention is the second stage should be from
>> toolstack.
>>
>> I think calling xc_assign_device after xenstore from the watch callback
>> is the only option.
> Only if you ignore the other option I proposed.
>
>> One question is how to split the code for ARM and x86 as this is the
>> common code.
>> Would #ifdef CONFIG_ARM64 ok with maintainers.
> No. arch hooks in libxl_$ARCH.c (with nop implementations where necessary)
> would be the way to approach this. However I still am not convinced this is
> the approach we should be taking.
>
>>> My current preference is for the suggestion below which is to let the
>>> toolstack pick the vdevfn and have pciback honour it.
>> That would duplicate code for dev-fn generation into toolstack from
>> __xen_pcibk_add_pci_dev.
> IMHO the toolstack is the correct place for this code, at least for ARM
> guests. The toolstack is, in general, responsible for all aspects of the
> guest layout. I don't think delegating the PCI bus parts of that to the
> dom0 kernel makes sense.
OK, I will move this from pciback to the toolstack. I am not sure
about the complexity but will give it a try.
With this, xen-pciback will not create the vdev-X entry at all.
>
> I'd not be surprised if the same turns out to be true for x86/PVH guests
> too.
> Ian.
>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 12:50                           ` Manish Jaggi
@ 2015-07-31 12:57                             ` Ian Campbell
  2015-07-31 12:59                             ` Julien Grall
  1 sibling, 0 replies; 51+ messages in thread
From: Ian Campbell @ 2015-07-31 12:57 UTC (permalink / raw)
  To: Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, xen-devel, Julien Grall,
	Stefano Stabellini, Kulkarni, Ganapatrao

On Fri, 2015-07-31 at 18:20 +0530, Manish Jaggi wrote:
> 
> On 31/07/15 4:49 pm, Ian Campbell wrote:
> > On Fri, 2015-07-31 at 16:37 +0530, Manish Jaggi wrote:
> > > On Friday 31 July 2015 01:35 PM, Ian Campbell wrote:
> > > > On Fri, 2015-07-31 at 13:16 +0530, Manish Jaggi wrote:
> > > > > > > Secondly, the vdev-X entry is created async by dom0 watching 
> > > > > > > on
> > > > > > > event.
> > > > > > > So how the tools could read back and call assign device 
> > > > > > > again.
> > > > > > Perhaps by using a xenstore watch on that node to wait for the
> > > > > > assignment
> > > > > > from pciback to occur.
> > > > > As per the flow in the do_pci_add function, assign_device is 
> > > > > called
> > > > > first and based on the success xenstore entry is created.
> > > > > Are you suggesting to change the sequence.
> > > > Perhaps that is what it would take, yes, or maybe some other
> > > > refactoring
> > > > (e.g. splitting assign_device into two stages) might be the answer.
> > > The hypercall from xenpciback (what I implemented) is actually making
> > > the assign device in 2 stages.
> > > I think the point of contention is the second stage should be from
> > > toolstack.
> > > 
> > > I think calling xc_assign_device after xenstore from the watch 
> > > callback
> > > is the only option.
> > Only if you ignore the other option I proposed.
> > 
> > > One question is how to split the code for ARM and x86 as this is the
> > > common code.
> > > Would #ifdef CONFIG_ARM64 ok with maintainers.
> > No. arch hooks in libxl_$ARCH.c (with nop implementations where 
> > necessary)
> > would be the way to approach this. However I still am not convinced 
> > this is
> > the approach we should be taking.
> > 
> > > > My current preference is for the suggestion below which is to let 
> > > > the
> > > > toolstack pick the vdevfn and have pciback honour it.
> > > That would duplicate code for dev-fn generation into toolstack from
> > > __xen_pcibk_add_pci_dev.
> > IMHO the toolstack is the correct place for this code, at least for ARM
> > guests. The toolstack is, in general, responsible for all aspects of 
> > the
> > guest layout. I don't think delegating the PCI bus parts of that to the
> > dom0 kernel makes sense.
> OK, I will move this from pciback to the toolstack. I am not sure
> about the complexity but will give it a try.

Thank you.

> With this, xen-pciback will not create the vdev-X entry at all.


Uh, where did you get that idea? That node communicates from backend to
frontend; surely it is still needed for that purpose?

The flow seems like it should be:

toolstack -> xenstore:vdevfn-X -> backend -> xenstore:vdev-X -> frontend

where "backend" is code in pciback which does something like (maybe not in
one function, but logically):

  foo = read(vdevfn-X)
  if (!foo)
     allocate SBDF as code today, setting foo to the result

  write(vdev-X, foo)


IOW if vdevfn-X is unset the backend does the same as today, if it is set
(by the toolstack) then pciback honours it.

Ian.


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 12:50                           ` Manish Jaggi
  2015-07-31 12:57                             ` Ian Campbell
@ 2015-07-31 12:59                             ` Julien Grall
  2015-07-31 13:27                               ` Ian Campbell
  2015-07-31 14:33                               ` Manish Jaggi
  1 sibling, 2 replies; 51+ messages in thread
From: Julien Grall @ 2015-07-31 12:59 UTC (permalink / raw)
  To: Manish Jaggi, Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Stefano Stabellini, Kulkarni,
	Ganapatrao, xen-devel

Hi Manish,

On 31/07/15 13:50, Manish Jaggi wrote:
> OK, I will move this from pciback to the toolstack. I am not sure
> about the complexity but will give it a try.
> With this, xen-pciback will not create the vdev-X entry at all.

Can you send a new draft before continuing to implement PCI support in Xen?

As long as we do not agree about it, you are wasting your time trying to
implement something that could drastically change in the next revision.

Regards,

-- 
Julien Grall


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 12:59                             ` Julien Grall
@ 2015-07-31 13:27                               ` Ian Campbell
  2015-07-31 14:33                               ` Manish Jaggi
  1 sibling, 0 replies; 51+ messages in thread
From: Ian Campbell @ 2015-07-31 13:27 UTC (permalink / raw)
  To: Julien Grall, Manish Jaggi
  Cc: Prasun Kapoor, Kumar, Vijaya, Stefano Stabellini, Kulkarni,
	Ganapatrao, xen-devel

On Fri, 2015-07-31 at 13:59 +0100, Julien Grall wrote:
> Hi Manish,
> 
> On 31/07/15 13:50, Manish Jaggi wrote:
> > Ok, i will implement the same from pciback to toolstack. I am not sure
> > about the complexity but will give it a try.
> > With this xen-pciback will not create the vdev-X entry at all.
> 
> Can you send a new draft before continuing to implement PCI support in 
> Xen?
> 
> As long as we do not agree about it, you are wasting your time trying to
> implement something that could drastically change in the next revision.

Very good idea, yes.

Ian.


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 10:32                       ` Ian Campbell
@ 2015-07-31 14:24                         ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 51+ messages in thread
From: Konrad Rzeszutek Wilk @ 2015-07-31 14:24 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Manish Jaggi, Prasun Kapoor, Kumar, Vijaya, xen-devel,
	Julien Grall, Stefano Stabellini, Kulkarni, Ganapatrao

On Fri, Jul 31, 2015 at 11:32:19AM +0100, Ian Campbell wrote:
> On Fri, 2015-07-31 at 09:05 +0100, Ian Campbell wrote:
> > On Fri, 2015-07-31 at 13:16 +0530, Manish Jaggi wrote:
> > > > > Secondly, the vdev-X entry is created async by dom0 watching on 
> > > > > event.
> 
> Stefano points out that there are, confusingly, two nodes in xenstore
> relating to the virtual-SBDF.
> 
> vdev-X is written by pciback and is read by pcifront, it is effectively
> there to communicate the vSBDF to the guest.
> 
> vdevfn-X is written by the toolstack (libxl_create_pci_backend_device) to
> tell the backend (pciback, or qemu in x86/HVM configurations using old
> qemu) the vSBDF to be associated with the device.
> 
> It looks like vdevfn-X is not actually currently supported by pciback in
> Linux (seemingly only the x86/HVM qemu backend consumes it). I think we
> should add that support to pciback for consistency with the qemu based
> backend used by x86/HVM guests.
> 
> The names are certainly a bit confusing. We could add a new key with a
> better name to communicate the vSBDF from toolstack->backend, but it
> seems to me that would just add even more confusion, so I recommend we
> don't do that.
> 
> Once pciback supports vdevfn then libxl will be able to choose the PCI bus
> layout for ARM guests in the case where the user has not requested an
> explicit vdevfn for the device.
> 
> Does that make sense?

Yes.
> 
> Ian.
> 
> 


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 12:59                             ` Julien Grall
  2015-07-31 13:27                               ` Ian Campbell
@ 2015-07-31 14:33                               ` Manish Jaggi
  2015-07-31 14:56                                 ` Julien Grall
  1 sibling, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-31 14:33 UTC (permalink / raw)
  To: Julien Grall, Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Stefano Stabellini, Kulkarni,
	Ganapatrao, xen-devel

Hi Julien,

On 31/07/15 6:29 pm, Julien Grall wrote:
> Hi Manish,
>
> On 31/07/15 13:50, Manish Jaggi wrote:
>> Ok, i will implement the same from pciback to toolstack. I am not sure
>> about the complexity but will give it a try.
>> With this xen-pciback will not create the vdev-X entry at all.
> Can you send a new draft before continuing to implement PCI support in Xen?
I am working on Draft 3 and addressing the comments on Draft 2. I am
checking the feasibility of the material I am putting in Draft 3.
> As long as we do not agree about it,
That is what I was trying to discuss. If you have any specific point,
please raise it.
>   you are wasting your time trying to
> implement something that could drastically change in the next revision.
I am only putting things in Draft 3 which *can* be implemented later.
Regards,


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 14:33                               ` Manish Jaggi
@ 2015-07-31 14:56                                 ` Julien Grall
  2015-07-31 15:12                                   ` Manish Jaggi
  0 siblings, 1 reply; 51+ messages in thread
From: Julien Grall @ 2015-07-31 14:56 UTC (permalink / raw)
  To: Manish Jaggi, Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Stefano Stabellini, Kulkarni,
	Ganapatrao, xen-devel

On 31/07/15 15:33, Manish Jaggi wrote:
> Hi Julien,
> 
> On 31/07/15 6:29 pm, Julien Grall wrote:
>> Hi Manish,
>>
>> On 31/07/15 13:50, Manish Jaggi wrote:
>>> Ok, i will implement the same from pciback to toolstack. I am not sure
>>> about the complexity but will give it a try.
>>> With this xen-pciback will not create the vdev-X entry at all.
>> Can you send a new draft before continuing to implement PCI support in
>> Xen?
> I am working on Draft 3 and addressing the comments on Draft 2. I am
> checking the feasibility of the material I am putting in Draft 3.

Well, I don't think anything we said within this thread was
impossible to do.

>> As long as we are not agree about it,
> That is what I was trying to discuss. If you have any specific point,
> please raise it.

What I meant is, this is a 40-message thread with a lot of discussion
in it.

A new draft containing a summary of what was said would benefit
everyone and help us converge on a design that we think is good.

>>   you loose your time trying to
>> implement something that can drastically change in the next revision.
> I am only putting things in Draft 3 which *can* be implemented later.

But nothing prevents someone in the discussion on Draft 3 from saying this
is wrong and that it has to be done in a different way.

Usually the time between two drafts should be pretty short in order to
keep a sane base for discussion. For now, we are talking about small
portions of the design and speculating/trying to remember what was agreed
in other sub-threads.

Regards,

-- 
Julien Grall


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 14:56                                 ` Julien Grall
@ 2015-07-31 15:12                                   ` Manish Jaggi
  2015-07-31 15:13                                     ` Julien Grall
  0 siblings, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-31 15:12 UTC (permalink / raw)
  To: Julien Grall, Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Stefano Stabellini, Kulkarni,
	Ganapatrao, xen-devel



On 31/07/15 8:26 pm, Julien Grall wrote:
> On 31/07/15 15:33, Manish Jaggi wrote:
>> Hi Julien,
>>
>> On 31/07/15 6:29 pm, Julien Grall wrote:
>>> Hi Manish,
>>>
>>> On 31/07/15 13:50, Manish Jaggi wrote:
>>>> OK, I will implement the same from pciback to the toolstack. I am not sure
>>>> about the complexity but will give it a try.
>>>> With this, xen-pciback will not create the vdev-X entry at all.
>>> Can you send a new draft before continuing to implement PCI support in
>>> Xen?
>> I am working on Draft 3 and addressing comments on Draft 2. I am
>> checking the feasibility of the material I put in Draft 3.
> Well, I don't think that anything we said within this thread was
> impossible to do.
>
>>> As long as we do not agree about it,
>> I thought I was trying to discuss the same. If you have any points, please
>> raise them.
> What I meant is, this is a 40-message thread with a lot of discussion
> on it.
>
> A new draft containing a summary of what was said would benefit
> everyone and help us converge on a design that we think is good.
>
>>>    you lose your time trying to
>>> implement something that can drastically change in the next revision.
>> I am only putting material in Draft 3 which *can* be implemented later.
> But nothing prevents someone in the discussion on Draft 3 from saying this is
> wrong and that it has to be done in a different way.
>
> Usually the time between two drafts should be pretty short in order to
> keep a sane base for discussion. For now, we are talking about small
> portions of the design and speculating/trying to remember what was agreed
> in other sub-threads.
OK, I will send Draft 3 with the points on this topic marked as under
discussion. Is that fine?
>
> Regards,
>


* Re: PCI Pass-through in Xen ARM - Draft 2.
  2015-07-31 15:12                                   ` Manish Jaggi
@ 2015-07-31 15:13                                     ` Julien Grall
  0 siblings, 0 replies; 51+ messages in thread
From: Julien Grall @ 2015-07-31 15:13 UTC (permalink / raw)
  To: Manish Jaggi, Ian Campbell
  Cc: Prasun Kapoor, Kumar, Vijaya, Stefano Stabellini, Kulkarni,
	Ganapatrao, xen-devel

On 31/07/15 16:12, Manish Jaggi wrote:
>> Usually the time between two drafts should be pretty short in order to
>> keep a sane base for discussion. For now, we are talking about small
>> portions of the design and speculating/trying to remember what was agreed
>> in other sub-threads.
> OK, I will send Draft 3 with the points on this topic marked as under
> discussion. Is that fine?

Yes please.

Thank you,

-- 
Julien Grall


* Re: PCI Pass-through in Xen ARM - Draft 2
  2015-07-05  6:07 Manish Jaggi
@ 2015-07-06  9:07 ` Ian Campbell
  0 siblings, 0 replies; 51+ messages in thread
From: Ian Campbell @ 2015-07-06  9:07 UTC (permalink / raw)
  To: Manish Jaggi; +Cc: Prasun Kapoor, Julien Grall, Kumar, Vijaya, xen-devel

On Sun, 2015-07-05 at 11:37 +0530, Manish Jaggi wrote:
> >Ian Campbell Wrote:
> >>On Mon, 2015-06-29 at 00:08 +0530, Manish Jaggi wrote:
> >> PCI Pass-through in Xen ARM
> >> --------------------------
> >>
> >> Draft 2
> >>
> >> Index
> >>
> >> 1. Background
> >>
> >> 2. Basic PCI Support in Xen ARM
> >> 2.1 pci_hostbridge and pci_hostbridge_ops
> >> 2.2 PHYSDEVOP_HOSTBRIDGE_ADD hypercall
> >>
> >> 3. Dom0 Access PCI devices
> >>
> >> 4. DomU assignment of PCI device
> >> 4.1 Holes in guest memory space
> >> 4.2 New entries in xenstore for device BARs
> >> 4.3 Hypercall for bdf mapping notification to xen
> >> 4.4 Change in Linux PCI FrontEnd - backend driver
> >>   for MSI/X programming
> >>
> >> 5. NUMA and PCI passthrough
> >>
> >> 6. DomU pci device attach flow
> >>
> >>
> >> Revision History
> >> ----------------
> >> Changes from Draft 1
> >> a) map_mmio hypercall removed from earlier draft
> >> b) device bar mapping into guest not 1:1
> >> c) holes in guest address space 32bit / 64bit for MMIO virtual BARs
> >> d) xenstore device's BAR info addition.
> >>
> >>
> >> 1. Background of PCI passthrough
> >> --------------------------------
> >> Passthrough refers to assigning a pci device to a guest domain (domU) such
> >> that
> >> the guest has full control over the device. The MMIO space and interrupts are
> >> managed by the guest itself, close to how a bare kernel manages a device.
> >>
> >> Device's access to guest address space needs to be isolated and protected.
> >> SMMU
> >> (System MMU - the IOMMU on ARM) is programmed by the xen hypervisor to allow the
> >> device to access guest memory for data transfer and to send MSI/X interrupts. In
> >> the case of MSI/X, the device writes to the GITS (ITS address space) Interrupt Translation
> >> Register.
> >>
> >> 2. Basic PCI Support for ARM
> >> ----------------------------
> >> The APIs to read/write the pci configuration space are based on segment:bdf.
> >> How the sbdf is mapped to a physical address is under the realm of the pci
> >> host controller.
> >>
> >> ARM PCI support in Xen introduces pci host controller drivers similar to what exists
> >> in Linux. Each driver registers callbacks, which are invoked on matching the
> >> compatible property in the pci device tree node.
> >>
> >> 2.1:
> >> The init function in the pci host driver calls to register hostbridge
> >> callbacks:
> >> int pci_hostbridge_register(pci_hostbridge_t *pcihb);
> >>
> >> struct pci_hostbridge_ops {
> >>      u32 (*pci_conf_read)(struct pci_hostbridge*, u32 bus, u32 devfn,
> >>                                  u32 reg, u32 bytes);
> >>      void (*pci_conf_write)(struct pci_hostbridge*, u32 bus, u32 devfn,
> >>                                  u32 reg, u32 bytes, u32 val);
> >> };
> >>
> >> struct pci_hostbridge{
> >>      u32 segno;
> >>      paddr_t cfg_base;
> >>      paddr_t cfg_size;
> >>      struct dt_device_node *dt_node;
> >>      struct pci_hostbridge_ops ops;
> >>      struct list_head list;
> >> };
> >>
> >> A pci conf read function would internally be as follows:
> >> u32 pcihb_conf_read(u32 seg, u32 bus, u32 devfn,u32 reg, u32 bytes)
> >> {
> >>      pci_hostbridge_t *pcihb;
> >>      list_for_each_entry(pcihb, &pci_hostbridge_list, list)
> >>      {
> >>          if(pcihb->segno == seg)
> >>              return pcihb->ops.pci_conf_read(pcihb, bus, devfn, reg, bytes);
> >>      }
> >>      return -1;
> >> }
> >>
> >> 2.2 PHYSDEVOP_pci_host_bridge_add hypercall
> >>
> >> Xen code accesses PCI configuration space based on the sbdf received from the
> >> guest. The order in which the pci device tree nodes appear may not be the same as
> >> the order of device enumeration in dom0. Thus there needs to be a mechanism to
> >> bind
> >> the segment number assigned by dom0 to the pci host controller. The hypercall
> >> is introduced:
> >>
> >> #define PHYSDEVOP_pci_host_bridge_add    44
> >> struct physdev_pci_host_bridge_add {
> >>      /* IN */
> >>      uint16_t seg;
> >>      uint64_t cfg_base;
> >>      uint64_t cfg_size;
> >> };
> >>
> >> This hypercall is invoked before dom0 invokes the PHYSDEVOP_pci_device_add
> >> hypercall. The handler code updates the segment number in
> >> pci_hostbridge:
> >>
> >> int pci_hostbridge_setup(uint32_t segno, uint64_t cfg_base, uint64_t
> >> cfg_size);
> >>
> >> Subsequent calls to pci_conf_read/write are completed by the
> >> pci_hostbridge_ops
> >> of the respective pci_hostbridge.
> >>
> >> 3. Dom0 access PCI device
> >> ---------------------------------
> >> As per the design of xen hypervisor, dom0 enumerates the PCI devices. For each
> >> device the MMIO space has to be mapped in the Stage2 translation for dom0.
> >
> >Here "device" is really host bridge, isn't it? i.e. this is done by
> >mapping the entire MMIO window of each host bridge, not the individual
> >BAR registers of each device one at a time.
> 
> No the device means the PCIe EP device not RC.

OK, so this paragraph represents a change from the current state of
play, which is that dom0 gets a mapping of the entire window for IO,
MMIO and CFG space constructed during domain build.

Please can you add a rationale for this change, i.e. explain why it is
either a hard requirement or a desirable change (depending on which it
is).

> >>  will read the pci configuration
> >> space BAR registers. The toolkit has the guest memory map and the information
> >> of the MMIO holes.
> >>
> >> When the first pci device is assigned to domU, toolkit allocates a virtual
> >> BAR region from the MMIO hole area. toolkit then sends domctl
> >> xc_domain_memory_mapping
> >> to map in stage2 translation.
> >>
> >> 4.1 Holes in guest memory space
> >> ----------------------------
> >> Holes are added in the guest memory space for mapping pci device's BAR
> >> regions.
> >> These are defined in arch-arm.h
> >>
> >> /* For 32bit */
> >> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
> >>
> >> /* For 64bit */
> >> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
> >>
> >> 4.2 New entries in xenstore for device BARs
> >> --------------------------------------------
> >> toolkit also updates the xenstore information for the device
> >> (virtualbar:physical bar).
> >> This information is read by xenpciback and returned to the pcifront driver
> >> for configuration
> >> space accesses.
> >>
> >> 4.3 Hypercall for bdf mapping notification to xen
> >                   ^v (I think) or maybe vs?
> >
> >> -----------------------------------------------
> >> #define PHYSDEVOP_map_sbdf              43
> >> typedef struct {
> >>      u32 s;
> >>      u8 b;
> >>      u8 df;
> >>      u16 res;
> >> } sbdf_t;
> >> struct physdev_map_sbdf {
> >>      int domain_id;
> >>      sbdf_t    sbdf;
> >>      sbdf_t    gsbdf;
> >> };
> >>
> >> Each domain has a pdev list, which contains the list of all pci devices. The
> >> pdev structure already holds the sbdf information. The arch_pci_dev is updated to
> >> -------------------------------------------------------------
> >> On the Pci frontend bus a msi-parent as gicv3-its is added.
> >
> >Are you talking about a device tree property or something else?
> >
> Device tree property. xl creates a device tree for domU.
> It is assumed that the its node will be there in the domU device tree.
> 
> >Note that pcifront is not described in the DT, only in the xenstore
> >structure. So a dt property is unlikely to be the right way to describe
> >this.
> >
> >We need to think of some way of specifying this such that we don't tie
> >ourselves into a single vits ABI.
> >
> Please suggest

This is tricky, since in essence we need a way to create a reference
from xenstore to a firmware table (DT or ACPI node), or we need to
partially shadow xenstore in the firmware tables such that the reference
can be internal to the firmware table.

I don't much like the second option.

How about a new xenstore property msi-parent which will contain the full
firmware table path to the its node. That would be the full DT node path
in the DT case and whatever the ACPI equivalent is in that case. The
toolstack side implementation can be trivial today since we have a
single vITS at a path known (to the toolstack) to be constant in the DT.

> 
> >>  There is a single
> >> virtual its for a domU, as there is only a single virtual pci bus in domU. This
> >> ensures that the config_msi calls are handled by the gicv3 its driver in the domU
> >> kernel rather than through frontend-backend communication between dom0 and domU.
> >>
> >> 5. NUMA domU and vITS
> >> -----------------------------
> >> a) On NUMA systems a domU still has a single its node.
> >> b) How can xen identify the ITS to which a device is connected?
> >> - Using a segment number query via an api which gives the pci host controller's
> >> device node
> >>
> >> struct dt_device_node* pci_hostbridge_dt_node(uint32_t segno)
> >>
> >> c) Query the interrupt parent of the pci device node to find out the its.
> >>
> >> 6. DomU Bootup flow
> >> ---------------------
> >> a. DomU boots up without any pci devices assigned.
> >
> >I don't think we can/should rule out cold plug at this stage. IOW it
> >must be possible to boot a domU with PCI devices already assigned.
> >
> As per my understanding the pci front driver receives a notification from xenwatch,
> upon which it starts enumeration.
> 
> see: pcifront_backend_changed()

This happens at start of day, i.e. just after the watch is registered it
will fire. So it is possible (and desirable) to start guests with PCI
devices attached and things should just work.

> >>  A daemon listens to events
> >> from the xenstore. When a device is attached to domU, the frontend pci bus
> >> driver
> >> starts enumerating the devices. The front end driver communicates with the backend
> >> driver
> >> in dom0 to read the pci config space.
> >
> >I'm afraid I don't follow any of this. What "daemon"? Is it in the front
> >or backend? What does it do with the events it is listening for?
> >
> xenwatch

I have no idea what this daemon is or why you think it is needed, and
your one word answer to my three separate and IMHO precise questions has
done nothing to enlighten me.

I am pretty sure that no daemon should be required for this use case. If
you have some reason to think there is then you need to do a much better
job of explaining your reasoning.

Ian.


* Re: PCI Pass-through in Xen ARM - Draft 2
@ 2015-07-05  6:07 Manish Jaggi
  2015-07-06  9:07 ` Ian Campbell
  0 siblings, 1 reply; 51+ messages in thread
From: Manish Jaggi @ 2015-07-05  6:07 UTC (permalink / raw)
  To: Ian Campbell, xen-devel, Prasun Kapoor, Julien Grall, Kumar, Vijaya

>Ian Campbell Wrote:
>>On Mon, 2015-06-29 at 00:08 +0530, Manish Jaggi wrote:
>> PCI Pass-through in Xen ARM
>> --------------------------
>>
>> Draft 2
>>
>> Index
>>
>> 1. Background
>>
>> 2. Basic PCI Support in Xen ARM
>> 2.1 pci_hostbridge and pci_hostbridge_ops
>> 2.2 PHYSDEVOP_HOSTBRIDGE_ADD hypercall
>>
>> 3. Dom0 Access PCI devices
>>
>> 4. DomU assignment of PCI device
>> 4.1 Holes in guest memory space
>> 4.2 New entries in xenstore for device BARs
>> 4.3 Hypercall for bdf mapping notification to xen
>> 4.4 Change in Linux PCI FrontEnd - backend driver
>>   for MSI/X programming
>>
>> 5. NUMA and PCI passthrough
>>
>> 6. DomU pci device attach flow
>>
>>
>> Revision History
>> ----------------
>> Changes from Draft 1
>> a) map_mmio hypercall removed from earlier draft
>> b) device bar mapping into guest not 1:1
>> c) holes in guest address space 32bit / 64bit for MMIO virtual BARs
>> d) xenstore device's BAR info addition.
>>
>>
>> 1. Background of PCI passthrough
>> --------------------------------
>> Passthrough refers to assigning a pci device to a guest domain (domU) such
>> that
>> the guest has full control over the device. The MMIO space and interrupts are
>> managed by the guest itself, close to how a bare kernel manages a device.
>>
>> Device's access to guest address space needs to be isolated and protected.
>> SMMU
>> (System MMU - the IOMMU on ARM) is programmed by the xen hypervisor to allow the
>> device to access guest memory for data transfer and to send MSI/X interrupts. In
>> the case of MSI/X, the device writes to the GITS (ITS address space) Interrupt Translation
>> Register.
>>
>> 2. Basic PCI Support for ARM
>> ----------------------------
>> The APIs to read/write the pci configuration space are based on segment:bdf.
>> How the sbdf is mapped to a physical address is under the realm of the pci
>> host controller.
>>
>> ARM PCI support in Xen introduces pci host controller drivers similar to what exists
>> in Linux. Each driver registers callbacks, which are invoked on matching the
>> compatible property in the pci device tree node.
>>
>> 2.1:
>> The init function in the pci host driver calls to register hostbridge
>> callbacks:
>> int pci_hostbridge_register(pci_hostbridge_t *pcihb);
>>
>> struct pci_hostbridge_ops {
>>      u32 (*pci_conf_read)(struct pci_hostbridge*, u32 bus, u32 devfn,
>>                                  u32 reg, u32 bytes);
>>      void (*pci_conf_write)(struct pci_hostbridge*, u32 bus, u32 devfn,
>>                                  u32 reg, u32 bytes, u32 val);
>> };
>>
>> struct pci_hostbridge{
>>      u32 segno;
>>      paddr_t cfg_base;
>>      paddr_t cfg_size;
>>      struct dt_device_node *dt_node;
>>      struct pci_hostbridge_ops ops;
>>      struct list_head list;
>> };
>>
>> A pci conf read function would internally be as follows:
>> u32 pcihb_conf_read(u32 seg, u32 bus, u32 devfn,u32 reg, u32 bytes)
>> {
>>      pci_hostbridge_t *pcihb;
>>      list_for_each_entry(pcihb, &pci_hostbridge_list, list)
>>      {
>>          if(pcihb->segno == seg)
>>              return pcihb->ops.pci_conf_read(pcihb, bus, devfn, reg, bytes);
>>      }
>>      return -1;
>> }
>>
>> 2.2 PHYSDEVOP_pci_host_bridge_add hypercall
>>
>> Xen code accesses PCI configuration space based on the sbdf received from the
>> guest. The order in which the pci device tree nodes appear may not be the same as
>> the order of device enumeration in dom0. Thus there needs to be a mechanism to
>> bind
>> the segment number assigned by dom0 to the pci host controller. The hypercall
>> is introduced:
>>
>> #define PHYSDEVOP_pci_host_bridge_add    44
>> struct physdev_pci_host_bridge_add {
>>      /* IN */
>>      uint16_t seg;
>>      uint64_t cfg_base;
>>      uint64_t cfg_size;
>> };
>>
>> This hypercall is invoked before dom0 invokes the PHYSDEVOP_pci_device_add
>> hypercall. The handler code updates the segment number in
>> pci_hostbridge:
>>
>> int pci_hostbridge_setup(uint32_t segno, uint64_t cfg_base, uint64_t
>> cfg_size);
>>
>> Subsequent calls to pci_conf_read/write are completed by the
>> pci_hostbridge_ops
>> of the respective pci_hostbridge.
>>
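[Editor's note] The registration, segment binding, and per-segment dispatch described in sections 2.1/2.2 above can be sketched as self-contained C. This is an illustrative stand-in only, not actual Xen code: a plain singly-linked list replaces Xen's list_head, and the lookup-by-cfg_base in pci_hostbridge_setup is an assumption about how the handler would find the bridge.

```c
/* Illustrative sketch of the hostbridge registry described in the draft.
 * A plain singly-linked list stands in for Xen's list_head. */
#include <stdint.h>

typedef uint32_t u32;

struct pci_hostbridge;

struct pci_hostbridge_ops {
    u32 (*pci_conf_read)(struct pci_hostbridge *, u32 bus, u32 devfn,
                         u32 reg, u32 bytes);
};

typedef struct pci_hostbridge {
    u32 segno;
    uint64_t cfg_base;
    uint64_t cfg_size;
    struct pci_hostbridge_ops ops;
    struct pci_hostbridge *next;    /* stand-in for struct list_head */
} pci_hostbridge_t;

static pci_hostbridge_t *pci_hostbridge_list;

/* Called by a host controller driver's init function (section 2.1). */
int pci_hostbridge_register(pci_hostbridge_t *pcihb)
{
    pcihb->next = pci_hostbridge_list;
    pci_hostbridge_list = pcihb;
    return 0;
}

/* Bind the dom0-assigned segment number to a bridge, identified here by
 * its CFG window, as the PHYSDEVOP_pci_host_bridge_add handler would
 * (section 2.2). */
int pci_hostbridge_setup(u32 segno, uint64_t cfg_base, uint64_t cfg_size)
{
    pci_hostbridge_t *pcihb;
    for (pcihb = pci_hostbridge_list; pcihb; pcihb = pcihb->next) {
        if (pcihb->cfg_base == cfg_base && pcihb->cfg_size == cfg_size) {
            pcihb->segno = segno;
            return 0;
        }
    }
    return -1;
}

/* Dispatch a config read to the bridge owning the segment (section 2.1). */
u32 pcihb_conf_read(u32 seg, u32 bus, u32 devfn, u32 reg, u32 bytes)
{
    pci_hostbridge_t *pcihb;
    for (pcihb = pci_hostbridge_list; pcihb; pcihb = pcihb->next)
        if (pcihb->segno == seg)
            return pcihb->ops.pci_conf_read(pcihb, bus, devfn, reg, bytes);
    return ~0u;    /* all-ones, the conventional "no such device" value */
}
```
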
>> 3. Dom0 access PCI device
>> ---------------------------------
>> As per the design of xen hypervisor, dom0 enumerates the PCI devices. For each
>> device the MMIO space has to be mapped in the Stage2 translation for dom0.
>
>Here "device" is really host bridge, isn't it? i.e. this is done by
>mapping the entire MMIO window of each host bridge, not the individual
>BAR registers of each device one at a time.

No the device means the PCIe EP device not RC.

>
>IOW this is functionality of the pci host driver's initial setup, not
>something which is driven from the dom0 enumeration of the bus.
>
>>  For
>> dom0 xen maps the ranges in pci nodes in stage 2 translation.
>>
>> GITS_ITRANSLATER space (4k) must be programmed in the Stage2 translation so that
>> MSI/X
>> will work. This is done in vits initialization in dom0/domU.
>
>This also happens at start of day, but what isn't mentioned is that
>(AIUI) the SMMU will need to be programmed to map each SBDF to the dom0
>p2m as the devices are discovered and reported. Right?
>
Yes, I will add SMMU section in the Draft3.
>>
>> 4. DomU access / assignment PCI device
>> --------------------------------------
>> In the flow of pci-attach device, the toolkit
>
>I assume you mean "toolstack" throughout? If so then please run
>s/toolkit/toolstack/g so as to use the usual terminology.
>
yes

>>  will read the pci configuration
>> space BAR registers. The toolkit has the guest memory map and the information
>> of the MMIO holes.
>>
>> When the first pci device is assigned to domU, toolkit allocates a virtual
>> BAR region from the MMIO hole area. toolkit then sends domctl
>> xc_domain_memory_mapping
>> to map in stage2 translation.
>>
>> 4.1 Holes in guest memory space
>> ----------------------------
>> Holes are added in the guest memory space for mapping pci device's BAR
>> regions.
>> These are defined in arch-arm.h
>>
>> /* For 32bit */
>> GUEST_MMIO_HOLE0_BASE, GUEST_MMIO_HOLE0_SIZE
>>
>> /* For 64bit */
>> GUEST_MMIO_HOLE1_BASE , GUEST_MMIO_HOLE1_SIZE
>>
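[Editor's note] As an illustration of how the toolstack might carve virtual BAR regions out of such a hole, here is a minimal bump allocator in plain C. The hole base/size values are made-up stand-ins for the arch-arm.h constants, and the function name is hypothetical:

```c
/* Illustrative only: allocating virtual BAR space from a guest MMIO
 * hole, as section 4.1 describes. Constants are hypothetical stand-ins
 * for the arch-arm.h definitions. */
#include <stdint.h>

#define GUEST_MMIO_HOLE0_BASE  0x10000000ULL   /* hypothetical 32-bit hole */
#define GUEST_MMIO_HOLE0_SIZE  0x10000000ULL

static uint64_t hole0_next = GUEST_MMIO_HOLE0_BASE;

/* Carve a naturally-aligned region out of the hole; PCI BARs are
 * power-of-two sized and aligned. Returns 0 on exhaustion. */
uint64_t alloc_virtual_bar(uint64_t size)
{
    uint64_t base = (hole0_next + size - 1) & ~(size - 1);  /* align up */
    if (base + size > GUEST_MMIO_HOLE0_BASE + GUEST_MMIO_HOLE0_SIZE)
        return 0;
    hole0_next = base + size;
    return base;
}
```

The toolstack would then hand each allocated region to xc_domain_memory_mapping for the stage 2 mapping, as described above.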
>> 4.2 New entries in xenstore for device BARs
>> --------------------------------------------
>> toolkit also updates the xenstore information for the device
>> (virtualbar:physical bar).
>> This information is read by xenpciback and returned to the pcifront driver
>> for configuration
>> space accesses.
>>
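[Editor's note] A rough sketch of the virtual-BAR substitution this implies on the backend side, with a hypothetical in-memory table standing in for the virtualbar:physicalbar records in xenstore (not actual xen-pciback code):

```c
/* Sketch: substituting the virtual BAR recorded by the toolstack for
 * the physical one on a config-space read. The table contents are
 * made-up examples. */
#include <stdint.h>

struct bar_map { uint64_t pbar, vbar, size; };

/* One entry per assigned BAR, mirroring the xenstore records. */
static const struct bar_map maps[] = {
    { 0x40000000ULL, 0x10000000ULL, 0x100000ULL },
};

/* Translate a physical BAR value read from the device into the virtual
 * BAR the guest should see; unmapped values pass through unchanged. */
uint64_t virtualize_bar(uint64_t pbar)
{
    for (unsigned i = 0; i < sizeof(maps) / sizeof(maps[0]); i++)
        if (pbar == maps[i].pbar)
            return maps[i].vbar;
    return pbar;
}
```
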
>> 4.3 Hypercall for bdf mapping notification to xen
>                   ^v (I think) or maybe vs?
>
>> -----------------------------------------------
>> #define PHYSDEVOP_map_sbdf              43
>> typedef struct {
>>      u32 s;
>>      u8 b;
>>      u8 df;
>>      u16 res;
>> } sbdf_t;
>> struct physdev_map_sbdf {
>>      int domain_id;
>>      sbdf_t    sbdf;
>>      sbdf_t    gsbdf;
>> };
>>
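[Editor's note] Assuming the df field uses the usual PCI devfn encoding (device in bits 7:3, function in bits 2:0 — the draft does not spell this out), helper functions around the structure might look like the following; the helper names are hypothetical:

```c
/* Hypothetical helpers around the sbdf_t layout from section 4.3. */
#include <stdint.h>

typedef struct {
    uint32_t s;     /* segment */
    uint8_t  b;     /* bus */
    uint8_t  df;    /* devfn: device in bits 7:3, function in bits 2:0 */
    uint16_t res;   /* reserved */
} sbdf_t;

static sbdf_t make_sbdf(uint32_t seg, uint8_t bus, uint8_t dev, uint8_t fn)
{
    sbdf_t sbdf = { .s = seg, .b = bus,
                    .df = (uint8_t)((dev << 3) | (fn & 7)) };
    return sbdf;
}

static uint8_t sbdf_dev(sbdf_t x) { return x.df >> 3; }
static uint8_t sbdf_fn(sbdf_t x)  { return x.df & 7; }
```
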
>> Each domain has a pdev list, which contains the list of all pci devices. The
>> pdev structure already holds the sbdf information. The arch_pci_dev is updated to
>> -------------------------------------------------------------
>> On the Pci frontend bus a msi-parent as gicv3-its is added.
>
>Are you talking about a device tree property or something else?
>
Device tree property. xl creates a device tree for domU.
It is assumed that the its node will be there in the domU device tree.

>Note that pcifront is not described in the DT, only in the xenstore
>structure. So a dt property is unlikely to be the right way to describe
>this.
>
>We need to think of some way of specifying this such that we don't tie
>ourselves into a single vits ABI.
>
Please suggest

>>  There is a single
>> virtual its for a domU, as there is only a single virtual pci bus in domU. This
>> ensures that the config_msi calls are handled by the gicv3 its driver in the domU
>> kernel rather than through frontend-backend communication between dom0 and domU.
>>
>> 5. NUMA domU and vITS
>> -----------------------------
>> a) On NUMA systems a domU still has a single its node.
>> b) How can xen identify the ITS to which a device is connected?
>> - Using a segment number query via an api which gives the pci host controller's
>> device node
>>
>> struct dt_device_node* pci_hostbridge_dt_node(uint32_t segno)
>>
>> c) Query the interrupt parent of the pci device node to find out the its.
>>
>> 6. DomU Bootup flow
>> ---------------------
>> a. DomU boots up without any pci devices assigned.
>
>I don't think we can/should rule out cold plug at this stage. IOW it
>must be possible to boot a domU with PCI devices already assigned.
>
As per my understanding the pci front driver receives a notification from xenwatch,
upon which it starts enumeration.

see: pcifront_backend_changed()

>>  A daemon listens to events
>> from the xenstore. When a device is attached to domU, the frontend pci bus
>> driver
>> starts enumerating the devices. The front end driver communicates with the backend
>> driver
>> in dom0 to read the pci config space.
>
>I'm afraid I don't follow any of this. What "daemon"? Is it in the front
>or backend? What does it do with the events it is listening for?
>
xenwatch
>>
>> b. backend driver returns the virtual BAR ranges which are already mapped in
>> domU
>> stage 2 translation.
>>
>> c. Device driver of the specific pci device invokes methods to configure the
>> msi/x interrupt which are handled by the its driver in domU kernel. The
>> read/writes
>> by the its driver are trapped in xen. ITS driver finds out the actual sbdf
>> based
>> on the map_sbdf hypercall information.
>
>


end of thread, other threads:[~2015-07-31 15:13 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-28 18:38 PCI Pass-through in Xen ARM - Draft 2 Manish Jaggi
2015-06-29 10:31 ` Julien Grall
2015-06-29 10:50   ` Ian Campbell
2015-06-29 11:00     ` Julien Grall
2015-07-05  5:55   ` Manish Jaggi
2015-07-06  6:13     ` Manish Jaggi
2015-07-06  9:11     ` Ian Campbell
2015-07-06 10:06       ` Manish Jaggi
2015-07-06 10:20         ` Ian Campbell
2015-07-29  9:37           ` Manish Jaggi
2015-07-30  9:54             ` Ian Campbell
2015-07-30 12:51               ` Manish Jaggi
2015-07-30 14:39                 ` Ian Campbell
2015-07-31  7:46                   ` Manish Jaggi
2015-07-31  8:05                     ` Ian Campbell
2015-07-31 10:32                       ` Ian Campbell
2015-07-31 14:24                         ` Konrad Rzeszutek Wilk
2015-07-31 11:07                       ` Manish Jaggi
2015-07-31 11:19                         ` Ian Campbell
2015-07-31 12:50                           ` Manish Jaggi
2015-07-31 12:57                             ` Ian Campbell
2015-07-31 12:59                             ` Julien Grall
2015-07-31 13:27                               ` Ian Campbell
2015-07-31 14:33                               ` Manish Jaggi
2015-07-31 14:56                                 ` Julien Grall
2015-07-31 15:12                                   ` Manish Jaggi
2015-07-31 15:13                                     ` Julien Grall
2015-07-06 10:43     ` Julien Grall
2015-07-06 11:09       ` Manish Jaggi
2015-07-06 11:45         ` Julien Grall
2015-07-07  7:10           ` Manish Jaggi
2015-07-07  8:18             ` Julien Grall
2015-07-07  8:46               ` Manish Jaggi
2015-07-07 10:54                 ` Manish Jaggi
2015-07-07 11:24                 ` Ian Campbell
2015-07-09  7:13                   ` Manish Jaggi
2015-07-09  8:08                     ` Julien Grall
2015-07-09 10:30                       ` Manish Jaggi
2015-07-09 13:57                         ` Julien Grall
2015-07-10  6:07                           ` Pranavkumar Sawargaonkar
2015-07-14 16:37                       ` Stefano Stabellini
2015-07-14 16:46                         ` Stefano Stabellini
2015-07-14 16:58                           ` Julien Grall
2015-07-14 18:01                             ` Stefano Stabellini
2015-07-22  5:41                               ` Manish Jaggi
2015-07-22  8:34                                 ` Julien Grall
2015-07-14 16:47                   ` Stefano Stabellini
2015-07-07 15:27     ` Konrad Rzeszutek Wilk
2015-06-29 15:34 ` Ian Campbell
2015-07-05  6:07 Manish Jaggi
2015-07-06  9:07 ` Ian Campbell

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).