linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
@ 2013-11-19  5:17 Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 1/9 v2] pci:msi: add weak function for returning msi region info Bharat Bhushan
                   ` (9 more replies)
  0 siblings, 10 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

From: Bharat Bhushan <bharat.bhushan@freescale.com>

PAMU (the Freescale IOMMU) has a concept of a primary window and subwindows.
The primary window corresponds to the complete guest iova address space
(including MSI space); with respect to the IOMMU API this is termed the
geometry. The iova base of a subwindow is determined from the number of
subwindows (configurable using the iommu API).
The MSI I/O page must be within the geometry and below the maximum supported
subwindows, so the MSI I/O page is set up just after the guest memory iova space.

So patches 1/9-4/9 (inclusive) define the interface to get:
  - the number of MSI regions (which is the number of MSI banks for powerpc)
  - the MSI region address range: the physical page which has the
    address/addresses used for generating MSI interrupts,
    and the size of that page.

Patches 5/9-7/9 (inclusive) define the interface for setting up an
MSI iova base for an msi region (bank) for a device, so that when the
msi message is composed this configured iova is used.
Earlier we were using the iommu interface for getting the configured iova,
which was not correct, and Alex Williamson suggested this type of interface.

Patch 8/9 moves some common functions into a separate file so that they
can be used by the FSL_PAMU implementation (the next patch uses them).
They will also be used later for an iommu-none implementation. I believe we
can move more of these, but will take it step by step.

Finally, the last patch actually adds the support for FSL-PAMU :)

v1->v2
 - Added an interface for setting the msi iova for an msi region for a
   device. Earlier I added an iommu interface for the same, but per the
   review comments that was removed and a direct interface between vfio
   and msi is now used.
 - Incorporated review comments (details are in the individual patches)

Bharat Bhushan (9):
  pci:msi: add weak function for returning msi region info
  pci: msi: expose msi region information functions
  powerpc: pci: Add arch specific msi region interface
  powerpc: msi: Extend the msi region interface to get info from
    fsl_msi
  pci/msi: interface to set an iova for a msi region
  powerpc: pci: Extend msi iova page setup to arch specific
  pci: msi: Extend msi iova setting interface to powerpc arch
  vfio: moving some functions in common file
  vfio pci: Add vfio iommu implementation for FSL_PAMU

 arch/powerpc/include/asm/machdep.h |   10 +
 arch/powerpc/kernel/msi.c          |   28 +
 arch/powerpc/sysdev/fsl_msi.c      |  132 +++++-
 arch/powerpc/sysdev/fsl_msi.h      |   25 +-
 drivers/pci/msi.c                  |   35 ++
 drivers/vfio/Kconfig               |    6 +
 drivers/vfio/Makefile              |    5 +-
 drivers/vfio/vfio_iommu_common.c   |  227 ++++++++
 drivers/vfio/vfio_iommu_common.h   |   27 +
 drivers/vfio/vfio_iommu_fsl_pamu.c | 1003 ++++++++++++++++++++++++++++++++++++
 drivers/vfio/vfio_iommu_type1.c    |  206 +--------
 include/linux/msi.h                |   14 +
 include/linux/pci.h                |   21 +
 include/uapi/linux/vfio.h          |  100 ++++
 14 files changed, 1623 insertions(+), 216 deletions(-)
 create mode 100644 drivers/vfio/vfio_iommu_common.c
 create mode 100644 drivers/vfio/vfio_iommu_common.h
 create mode 100644 drivers/vfio/vfio_iommu_fsl_pamu.c

^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH 1/9 v2] pci:msi: add weak function for returning msi region info
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
@ 2013-11-19  5:17 ` Bharat Bhushan
  2013-11-25 23:36   ` Bjorn Helgaas
  2013-11-19  5:17 ` [PATCH 2/9 v2] pci: msi: expose msi region information functions Bharat Bhushan
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

With an aperture type of IOMMU (like FSL PAMU), the VFIO iommu backend needs
to know the MSI regions in order to map them in its hardware window. This
patch only defines the required weak functions; they will be used by
follow-up patches.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v1->v2
 - Added description of "struct msi_region"

 drivers/pci/msi.c   |   22 ++++++++++++++++++++++
 include/linux/msi.h |   14 ++++++++++++++
 2 files changed, 36 insertions(+), 0 deletions(-)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index d5f90d6..2643a29 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -67,6 +67,28 @@ int __weak arch_msi_check_device(struct pci_dev *dev, int nvec, int type)
 	return chip->check_device(chip, dev, nvec, type);
 }
 
+int __weak arch_msi_get_region_count(void)
+{
+	return 0;
+}
+
+int __weak arch_msi_get_region(int region_num, struct msi_region *region)
+{
+	return 0;
+}
+
+int msi_get_region_count(void)
+{
+	return arch_msi_get_region_count();
+}
+EXPORT_SYMBOL(msi_get_region_count);
+
+int msi_get_region(int region_num, struct msi_region *region)
+{
+	return arch_msi_get_region(region_num, region);
+}
+EXPORT_SYMBOL(msi_get_region);
+
 int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 {
 	struct msi_desc *entry;
diff --git a/include/linux/msi.h b/include/linux/msi.h
index b17ead8..ade1480 100644
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -51,6 +51,18 @@ struct msi_desc {
 };
 
 /*
+ * This structure is used to get
+ * - physical address
+ * - size
+ * of a msi region
+ */
+struct msi_region {
+	int region_num; /* MSI region number */
+	dma_addr_t addr; /* Address of MSI region */
+	size_t size; /* Size of MSI region */
+};
+
+/*
  * The arch hooks to setup up msi irqs. Those functions are
  * implemented as weak symbols so that they /can/ be overriden by
  * architecture specific code if needed.
@@ -64,6 +76,8 @@ void arch_restore_msi_irqs(struct pci_dev *dev, int irq);
 
 void default_teardown_msi_irqs(struct pci_dev *dev);
 void default_restore_msi_irqs(struct pci_dev *dev, int irq);
+int arch_msi_get_region_count(void);
+int arch_msi_get_region(int region_num, struct msi_region *region);
 
 struct msi_chip {
 	struct module *owner;
-- 
1.7.0.4


* [PATCH 2/9 v2] pci: msi: expose msi region information functions
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 1/9 v2] pci:msi: add weak function for returning msi region info Bharat Bhushan
@ 2013-11-19  5:17 ` Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 3/9 v2] powerpc: pci: Add arch specific msi region interface Bharat Bhushan
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

By now we have defined all the interfaces for getting the msi region
information; this patch exposes those interfaces to the rest of the kernel.
They will be used by the vfio subsystem for setting up the iommu for MSI
interrupts of directly assigned devices.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v1->v2
 - None

 include/linux/pci.h |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/include/linux/pci.h b/include/linux/pci.h
index da172f9..c587034 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1142,6 +1142,7 @@ struct msix_entry {
 	u16	entry;	/* driver uses to specify entry, OS writes */
 };
 
+struct msi_region;
 
 #ifndef CONFIG_PCI_MSI
 static inline int pci_enable_msi_block(struct pci_dev *dev, unsigned int nvec)
@@ -1184,6 +1185,16 @@ static inline int pci_msi_enabled(void)
 {
 	return 0;
 }
+
+static inline int msi_get_region_count(void)
+{
+	return 0;
+}
+
+static inline int msi_get_region(int region_num, struct msi_region *region)
+{
+	return 0;
+}
 #else
 int pci_enable_msi_block(struct pci_dev *dev, unsigned int nvec);
 int pci_enable_msi_block_auto(struct pci_dev *dev, unsigned int *maxvec);
@@ -1196,6 +1207,8 @@ void pci_disable_msix(struct pci_dev *dev);
 void msi_remove_pci_irq_vectors(struct pci_dev *dev);
 void pci_restore_msi_state(struct pci_dev *dev);
 int pci_msi_enabled(void);
+int msi_get_region_count(void);
+int msi_get_region(int region_num, struct msi_region *region);
 #endif
 
 #ifdef CONFIG_PCIEPORTBUS
-- 
1.7.0.4


* [PATCH 3/9 v2] powerpc: pci: Add arch specific msi region interface
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 1/9 v2] pci:msi: add weak function for returning msi region info Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 2/9 v2] pci: msi: expose msi region information functions Bharat Bhushan
@ 2013-11-19  5:17 ` Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 4/9 v2] powerpc: msi: Extend the msi region interface to get info from fsl_msi Bharat Bhushan
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

This patch adds the interface for getting the msi region information from
arch-specific code. The machine-specific code is not yet defined.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v1->v2
 - None

 arch/powerpc/include/asm/machdep.h |    8 ++++++++
 arch/powerpc/kernel/msi.c          |   18 ++++++++++++++++++
 2 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
index 8b48090..8d1b787 100644
--- a/arch/powerpc/include/asm/machdep.h
+++ b/arch/powerpc/include/asm/machdep.h
@@ -30,6 +30,7 @@ struct file;
 struct pci_controller;
 struct kimage;
 struct pci_host_bridge;
+struct msi_region;
 
 struct machdep_calls {
 	char		*name;
@@ -124,6 +125,13 @@ struct machdep_calls {
 	int		(*setup_msi_irqs)(struct pci_dev *dev,
 					  int nvec, int type);
 	void		(*teardown_msi_irqs)(struct pci_dev *dev);
+
+	/* Returns the number of MSI regions (banks) */
+	int		(*msi_get_region_count)(void);
+
+	/* Returns the requested region's address and size */
+	int		(*msi_get_region)(int region_num,
+					  struct msi_region *region);
 #endif
 
 	void		(*restart)(char *cmd);
diff --git a/arch/powerpc/kernel/msi.c b/arch/powerpc/kernel/msi.c
index 8bbc12d..1a67787 100644
--- a/arch/powerpc/kernel/msi.c
+++ b/arch/powerpc/kernel/msi.c
@@ -13,6 +13,24 @@
 
 #include <asm/machdep.h>
 
+int arch_msi_get_region_count(void)
+{
+	if (ppc_md.msi_get_region_count) {
+		pr_debug("msi: Using platform get_region_count routine.\n");
+		return ppc_md.msi_get_region_count();
+	}
+	return 0;
+}
+
+int arch_msi_get_region(int region_num, struct msi_region *region)
+{
+	if (ppc_md.msi_get_region) {
+		pr_debug("msi: Using platform get_region routine.\n");
+		return ppc_md.msi_get_region(region_num, region);
+	}
+	return 0;
+}
+
 int arch_msi_check_device(struct pci_dev* dev, int nvec, int type)
 {
 	if (!ppc_md.setup_msi_irqs || !ppc_md.teardown_msi_irqs) {
-- 
1.7.0.4


* [PATCH 4/9 v2] powerpc: msi: Extend the msi region interface to get info from fsl_msi
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
                   ` (2 preceding siblings ...)
  2013-11-19  5:17 ` [PATCH 3/9 v2] powerpc: pci: Add arch specific msi region interface Bharat Bhushan
@ 2013-11-19  5:17 ` Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 5/9 v2] pci/msi: interface to set an iova for a msi region Bharat Bhushan
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

The FSL MSI driver will provide the interface to get:
  - the number of MSI regions (which is the number of MSI banks for powerpc)
  - the region address range: the physical page which has the
    address/addresses used for generating MSI interrupts,
    and the size of that page.

These are required to create IOMMU (Freescale PAMU) mapping for
devices which are directly assigned using VFIO.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v1->v2
 - Atomic increment of bank index for parallel probe of msi node 

 arch/powerpc/sysdev/fsl_msi.c |   42 +++++++++++++++++++++++++++++++++++-----
 arch/powerpc/sysdev/fsl_msi.h |   11 ++++++++-
 2 files changed, 45 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c
index 77efbae..eeebbf0 100644
--- a/arch/powerpc/sysdev/fsl_msi.c
+++ b/arch/powerpc/sysdev/fsl_msi.c
@@ -109,6 +109,34 @@ static int fsl_msi_init_allocator(struct fsl_msi *msi_data)
 	return 0;
 }
 
+static int fsl_msi_get_region_count(void)
+{
+	int count = 0;
+	struct fsl_msi *msi_data;
+
+	list_for_each_entry(msi_data, &msi_head, list)
+		count++;
+
+	return count;
+}
+
+static int fsl_msi_get_region(int region_num, struct msi_region *region)
+{
+	struct fsl_msi *msi_data;
+
+	list_for_each_entry(msi_data, &msi_head, list) {
+		if (msi_data->bank_index == region_num) {
+			region->region_num = msi_data->bank_index;
+			/* Setting PAGE_SIZE as MSIIR is a 4 byte register */
+			region->size = PAGE_SIZE;
+			region->addr = msi_data->msiir & ~(region->size - 1);
+			return 0;
+		}
+	}
+
+	return -ENODEV;
+}
+
 static int fsl_msi_check_device(struct pci_dev *pdev, int nvec, int type)
 {
 	if (type == PCI_CAP_ID_MSIX)
@@ -150,7 +178,8 @@ static void fsl_compose_msi_msg(struct pci_dev *pdev, int hwirq,
 	if (reg && (len == sizeof(u64)))
 		address = be64_to_cpup(reg);
 	else
-		address = fsl_pci_immrbar_base(hose) + msi_data->msiir_offset;
+		address = fsl_pci_immrbar_base(hose) +
+			   (msi_data->msiir & 0xfffff);
 
 	msg->address_lo = lower_32_bits(address);
 	msg->address_hi = upper_32_bits(address);
@@ -393,6 +422,7 @@ static int fsl_of_msi_probe(struct platform_device *dev)
 	const struct fsl_msi_feature *features;
 	int len;
 	u32 offset;
+	static atomic_t bank_index = ATOMIC_INIT(-1);
 
 	match = of_match_device(fsl_of_msi_ids, &dev->dev);
 	if (!match)
@@ -436,18 +466,15 @@ static int fsl_of_msi_probe(struct platform_device *dev)
 				dev->dev.of_node->full_name);
 			goto error_out;
 		}
-		msi->msiir_offset =
-			features->msiir_offset + (res.start & 0xfffff);
 
 		/*
 		 * First read the MSIIR/MSIIR1 offset from dts
 		 * On failure use the hardcode MSIIR offset
 		 */
 		if (of_address_to_resource(dev->dev.of_node, 1, &msiir))
-			msi->msiir_offset = features->msiir_offset +
-					    (res.start & MSIIR_OFFSET_MASK);
+			msi->msiir = res.start + features->msiir_offset;
 		else
-			msi->msiir_offset = msiir.start & MSIIR_OFFSET_MASK;
+			msi->msiir = msiir.start;
 	}
 
 	msi->feature = features->fsl_pic_ip;
@@ -521,6 +548,7 @@ static int fsl_of_msi_probe(struct platform_device *dev)
 		}
 	}
 
+	msi->bank_index = atomic_inc_return(&bank_index);
 	list_add_tail(&msi->list, &msi_head);
 
 	/* The multiple setting ppc_md.setup_msi_irqs will not harm things */
@@ -528,6 +556,8 @@ static int fsl_of_msi_probe(struct platform_device *dev)
 		ppc_md.setup_msi_irqs = fsl_setup_msi_irqs;
 		ppc_md.teardown_msi_irqs = fsl_teardown_msi_irqs;
 		ppc_md.msi_check_device = fsl_msi_check_device;
+		ppc_md.msi_get_region_count = fsl_msi_get_region_count;
+		ppc_md.msi_get_region = fsl_msi_get_region;
 	} else if (ppc_md.setup_msi_irqs != fsl_setup_msi_irqs) {
 		dev_err(&dev->dev, "Different MSI driver already installed!\n");
 		err = -ENODEV;
diff --git a/arch/powerpc/sysdev/fsl_msi.h b/arch/powerpc/sysdev/fsl_msi.h
index df9aa9f..a2cc5a2 100644
--- a/arch/powerpc/sysdev/fsl_msi.h
+++ b/arch/powerpc/sysdev/fsl_msi.h
@@ -31,14 +31,21 @@ struct fsl_msi {
 	struct irq_domain *irqhost;
 
 	unsigned long cascade_irq;
-
-	u32 msiir_offset; /* Offset of MSIIR, relative to start of CCSR */
+	phys_addr_t msiir; /* MSIIR Address in CCSR */
 	u32 ibs_shift; /* Shift of interrupt bit select */
 	u32 srs_shift; /* Shift of the shared interrupt register select */
 	void __iomem *msi_regs;
 	u32 feature;
 	int msi_virqs[NR_MSI_REG_MAX];
 
+	/*
+	 * During probe each bank is assigned a index number.
+	 * index number start from 0.
+	 * Example  MSI bank 1 = 0
+	 * MSI bank 2 = 1, and so on.
+	 */
+	int bank_index;
+
 	struct msi_bitmap bitmap;
 
 	struct list_head list;          /* support multiple MSI banks */
-- 
1.7.0.4


* [PATCH 5/9 v2] pci/msi: interface to set an iova for a msi region
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
                   ` (3 preceding siblings ...)
  2013-11-19  5:17 ` [PATCH 4/9 v2] powerpc: msi: Extend the msi region interface to get info from fsl_msi Bharat Bhushan
@ 2013-11-19  5:17 ` Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 6/9 v2] powerpc: pci: Extend msi iova page setup to arch specific Bharat Bhushan
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

This patch defines an interface by which an msi page
can be mapped to a specific iova page.

This is a requirement for aperture type IOMMUs (like the Freescale PAMU),
where we map the msi iova page just after the guest memory iova space.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v2
 - new patch

 drivers/pci/msi.c   |   13 +++++++++++++
 include/linux/pci.h |    8 ++++++++
 2 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 2643a29..040609f 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -77,6 +77,19 @@ int __weak arch_msi_get_region(int region_num, struct msi_region *region)
 	return 0;
 }
 
+int __weak arch_msi_set_iova(struct pci_dev *pdev, int region_num,
+			     dma_addr_t iova, bool set)
+{
+	return 0;
+}
+
+int msi_set_iova(struct pci_dev *pdev, int region_num,
+		 dma_addr_t iova, bool set)
+{
+	return arch_msi_set_iova(pdev, region_num, iova, set);
+}
+EXPORT_SYMBOL(msi_set_iova);
+
 int msi_get_region_count(void)
 {
 	return arch_msi_get_region_count();
diff --git a/include/linux/pci.h b/include/linux/pci.h
index c587034..c6d3e58 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1195,6 +1195,12 @@ static inline int msi_get_region(int region_num, struct msi_region *region)
 {
 	return 0;
 }
+
+static inline int msi_set_iova(struct pci_dev *pdev, int region_num,
+			       dma_addr_t iova, bool set)
+{
+	return 0;
+}
 #else
 int pci_enable_msi_block(struct pci_dev *dev, unsigned int nvec);
 int pci_enable_msi_block_auto(struct pci_dev *dev, unsigned int *maxvec);
@@ -1209,6 +1215,8 @@ void pci_restore_msi_state(struct pci_dev *dev);
 int pci_msi_enabled(void);
 int msi_get_region_count(void);
 int msi_get_region(int region_num, struct msi_region *region);
+int msi_set_iova(struct pci_dev *pdev, int region_num,
+		 dma_addr_t iova, bool set);
 #endif
 
 #ifdef CONFIG_PCIEPORTBUS
-- 
1.7.0.4


* [PATCH 6/9 v2] powerpc: pci: Extend msi iova page setup to arch specific
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
                   ` (4 preceding siblings ...)
  2013-11-19  5:17 ` [PATCH 5/9 v2] pci/msi: interface to set an iova for a msi region Bharat Bhushan
@ 2013-11-19  5:17 ` Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 7/9 v2] pci: msi: Extend msi iova setting interface to powerpc arch Bharat Bhushan
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

This patch extends the interface to arch-specific code for setting the
msi iova address for an msi page. The machine-specific code is not yet
implemented.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v2
 - new patch

 arch/powerpc/include/asm/machdep.h |    2 ++
 arch/powerpc/kernel/msi.c          |   10 ++++++++++
 2 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
index 8d1b787..e87b806 100644
--- a/arch/powerpc/include/asm/machdep.h
+++ b/arch/powerpc/include/asm/machdep.h
@@ -132,6 +132,8 @@ struct machdep_calls {
 	/* Returns the requested region's address and size */
 	int		(*msi_get_region)(int region_num,
 					  struct msi_region *region);
+	int		(*msi_set_iova)(struct pci_dev *pdev, int region_num,
+					dma_addr_t iova, bool set);
 #endif
 
 	void		(*restart)(char *cmd);
diff --git a/arch/powerpc/kernel/msi.c b/arch/powerpc/kernel/msi.c
index 1a67787..e2bd555 100644
--- a/arch/powerpc/kernel/msi.c
+++ b/arch/powerpc/kernel/msi.c
@@ -13,6 +13,16 @@
 
 #include <asm/machdep.h>
 
+int arch_msi_set_iova(struct pci_dev *pdev, int region_num,
+		      dma_addr_t iova, bool set)
+{
+	if (ppc_md.msi_set_iova) {
+		pr_debug("msi: Using platform msi_set_iova routine.\n");
+		return ppc_md.msi_set_iova(pdev, region_num, iova, set);
+	}
+	return 0;
+}
+
 int arch_msi_get_region_count(void)
 {
 	if (ppc_md.msi_get_region_count) {
-- 
1.7.0.4


* [PATCH 7/9 v2] pci: msi: Extend msi iova setting interface to powerpc arch
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
                   ` (5 preceding siblings ...)
  2013-11-19  5:17 ` [PATCH 6/9 v2] powerpc: pci: Extend msi iova page setup to arch specific Bharat Bhushan
@ 2013-11-19  5:17 ` Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 8/9 v2] vfio: moving some functions in common file Bharat Bhushan
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

We now keep track of the devices whose msi page is mapped to a specific
iova page, for every msi bank. When the MSI address and data are composed,
this list is traversed: if the device is found in the list, the configured
iova page is used; otherwise the iova page is taken as before.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v2
 - new patch

 arch/powerpc/sysdev/fsl_msi.c |   90 +++++++++++++++++++++++++++++++++++++++++
 arch/powerpc/sysdev/fsl_msi.h |   16 ++++++-
 2 files changed, 104 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c
index eeebbf0..52d2beb 100644
--- a/arch/powerpc/sysdev/fsl_msi.c
+++ b/arch/powerpc/sysdev/fsl_msi.c
@@ -137,6 +137,75 @@ static int fsl_msi_get_region(int region_num, struct msi_region *region)
 	return -ENODEV;
 }
 
+/* Add a device to the list of devices that have an iova page mapping */
+static int fsl_msi_add_iova_device(struct fsl_msi *msi_data,
+				   struct pci_dev *pdev, dma_addr_t iova)
+{
+	struct fsl_msi_device *device;
+
+	mutex_lock(&msi_data->lock);
+	list_for_each_entry(device, &msi_data->device_list, list) {
+		/* If a mapping already exists then update it with the new one */
+		if (device->dev == pdev) {
+			device->iova = iova;
+			mutex_unlock(&msi_data->lock);
+			return 0;
+		}
+	}
+
+	device = kzalloc(sizeof(struct fsl_msi_device), GFP_KERNEL);
+	if (!device) {
+		pr_err("%s: Memory allocation failed\n", __func__);
+		mutex_unlock(&msi_data->lock);
+		return -ENOMEM;
+	}
+
+	device->dev = pdev;
+	device->iova = iova;
+	list_add_tail(&device->list, &msi_data->device_list);
+	mutex_unlock(&msi_data->lock);
+	return 0;
+}
+
+/* Remove a device from the list of devices that have an iova page mapping */
+static int fsl_msi_del_iova_device(struct fsl_msi *msi_data,
+				   struct pci_dev *pdev)
+{
+	struct fsl_msi_device *device;
+
+	mutex_lock(&msi_data->lock);
+	list_for_each_entry(device, &msi_data->device_list, list) {
+		if (device->dev == pdev) {
+			list_del(&device->list);
+			kfree(device);
+			break;
+		}
+	}
+	mutex_unlock(&msi_data->lock);
+	return 0;
+}
+
+/* set/clear device iova mapping for the requested msi region */
+static int fsl_msi_set_iova(struct pci_dev *pdev, int region_num,
+			    dma_addr_t iova, bool set)
+{
+	struct fsl_msi *msi_data;
+	int ret = -EINVAL;
+
+	list_for_each_entry(msi_data, &msi_head, list) {
+		if (msi_data->bank_index != region_num)
+			continue;
+
+		if (set)
+			ret = fsl_msi_add_iova_device(msi_data, pdev, iova);
+		else
+			ret = fsl_msi_del_iova_device(msi_data, pdev);
+
+		break;
+	}
+	return ret;
+}
+
 static int fsl_msi_check_device(struct pci_dev *pdev, int nvec, int type)
 {
 	if (type == PCI_CAP_ID_MSIX)
@@ -167,6 +236,7 @@ static void fsl_compose_msi_msg(struct pci_dev *pdev, int hwirq,
 				struct msi_msg *msg,
 				struct fsl_msi *fsl_msi_data)
 {
+	struct fsl_msi_device *device;
 	struct fsl_msi *msi_data = fsl_msi_data;
 	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
 	u64 address; /* Physical address of the MSIIR */
@@ -181,6 +251,15 @@ static void fsl_compose_msi_msg(struct pci_dev *pdev, int hwirq,
 		address = fsl_pci_immrbar_base(hose) +
 			   (msi_data->msiir & 0xfffff);
 
+	mutex_lock(&msi_data->lock);
+	list_for_each_entry(device, &msi_data->device_list, list) {
+		if (device->dev == pdev) {
+			address = device->iova | (msi_data->msiir & 0xfff);
+			break;
+		}
+	}
+	mutex_unlock(&msi_data->lock);
+
 	msg->address_lo = lower_32_bits(address);
 	msg->address_hi = upper_32_bits(address);
 
@@ -356,6 +435,7 @@ static int fsl_of_msi_remove(struct platform_device *ofdev)
 	struct fsl_msi *msi = platform_get_drvdata(ofdev);
 	int virq, i;
 	struct fsl_msi_cascade_data *cascade_data;
+	struct fsl_msi_device *device;
 
 	if (msi->list.prev != NULL)
 		list_del(&msi->list);
@@ -371,6 +451,13 @@ static int fsl_of_msi_remove(struct platform_device *ofdev)
 		msi_bitmap_free(&msi->bitmap);
 	if ((msi->feature & FSL_PIC_IP_MASK) != FSL_PIC_IP_VMPIC)
 		iounmap(msi->msi_regs);
+
+	mutex_lock(&msi->lock);
+	list_for_each_entry(device, &msi->device_list, list) {
+		list_del(&device->list);
+		kfree(device);
+	}
+	mutex_unlock(&msi->lock);
 	kfree(msi);
 
 	return 0;
@@ -436,6 +523,8 @@ static int fsl_of_msi_probe(struct platform_device *dev)
 		dev_err(&dev->dev, "No memory for MSI structure\n");
 		return -ENOMEM;
 	}
+	INIT_LIST_HEAD(&msi->device_list);
+	mutex_init(&msi->lock);
 	platform_set_drvdata(dev, msi);
 
 	msi->irqhost = irq_domain_add_linear(dev->dev.of_node,
@@ -558,6 +647,7 @@ static int fsl_of_msi_probe(struct platform_device *dev)
 		ppc_md.msi_check_device = fsl_msi_check_device;
 		ppc_md.msi_get_region_count = fsl_msi_get_region_count;
 		ppc_md.msi_get_region = fsl_msi_get_region;
+		ppc_md.msi_set_iova = fsl_msi_set_iova;
 	} else if (ppc_md.setup_msi_irqs != fsl_setup_msi_irqs) {
 		dev_err(&dev->dev, "Different MSI driver already installed!\n");
 		err = -ENODEV;
diff --git a/arch/powerpc/sysdev/fsl_msi.h b/arch/powerpc/sysdev/fsl_msi.h
index a2cc5a2..4da2af9 100644
--- a/arch/powerpc/sysdev/fsl_msi.h
+++ b/arch/powerpc/sysdev/fsl_msi.h
@@ -27,9 +27,16 @@
 #define FSL_PIC_IP_IPIC   0x00000002
 #define FSL_PIC_IP_VMPIC  0x00000003
 
+/* List of devices having specific iova page mapping */
+struct fsl_msi_device {
+	struct list_head list;
+	struct pci_dev *dev;
+	dma_addr_t iova;
+};
+
 struct fsl_msi {
 	struct irq_domain *irqhost;
-
+	struct mutex lock;
 	unsigned long cascade_irq;
 	phys_addr_t msiir; /* MSIIR Address in CCSR */
 	u32 ibs_shift; /* Shift of interrupt bit select */
@@ -37,7 +44,12 @@ struct fsl_msi {
 	void __iomem *msi_regs;
 	u32 feature;
 	int msi_virqs[NR_MSI_REG_MAX];
-
+	/*
+	 * Keep track of devices whose msi page is mapped to a specific
+	 * iova page. By default this list is empty, which means the
+	 * legacy way of setting the iova is used.
+	 */
+	struct list_head device_list;
 	/*
 	 * During probe each bank is assigned a index number.
 	 * index number start from 0.
-- 
1.7.0.4


* [PATCH 8/9 v2] vfio: moving some functions in common file
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
                   ` (6 preceding siblings ...)
  2013-11-19  5:17 ` [PATCH 7/9 v2] pci: msi: Extend msi iova setting interface to powerpc arch Bharat Bhushan
@ 2013-11-19  5:17 ` Bharat Bhushan
  2013-11-19  5:17 ` [PATCH 9/9 v2] vfio pci: Add vfio iommu implementation for FSL_PAMU Bharat Bhushan
  2013-11-20 18:47 ` [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Alex Williamson
  9 siblings, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

Some functions defined in vfio_iommu_type1.c are generic (not specific
to the type1 iommu) and we want to use them for the FSL IOMMU (PAMU) and,
going forward, in an iommu-none driver.
So I have created a new file named vfio_iommu_common.c and moved some
of the generic functions into it.

I agree (with Alex Williamson and myself :-)) that some more functions
could be moved into this new common file (with some changes in type1/fsl_pamu
and others). But in this patch I avoided those changes and
just moved the functions which are straightforward, allowing me to
get the fsl-powerpc vfio framework in place.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v1->v2
 - removed un-necessary header file inclusion
 - mark static function which are internal to *common.c

 drivers/vfio/Makefile            |    4 +-
 drivers/vfio/vfio_iommu_common.c |  227 ++++++++++++++++++++++++++++++++++++++
 drivers/vfio/vfio_iommu_common.h |   27 +++++
 drivers/vfio/vfio_iommu_type1.c  |  206 +----------------------------------
 4 files changed, 257 insertions(+), 207 deletions(-)
 create mode 100644 drivers/vfio/vfio_iommu_common.c
 create mode 100644 drivers/vfio/vfio_iommu_common.h

diff --git a/drivers/vfio/Makefile b/drivers/vfio/Makefile
index 72bfabc..c5792ec 100644
--- a/drivers/vfio/Makefile
+++ b/drivers/vfio/Makefile
@@ -1,4 +1,4 @@
 obj-$(CONFIG_VFIO) += vfio.o
-obj-$(CONFIG_VFIO_IOMMU_TYPE1) += vfio_iommu_type1.o
-obj-$(CONFIG_VFIO_IOMMU_SPAPR_TCE) += vfio_iommu_spapr_tce.o
+obj-$(CONFIG_VFIO_IOMMU_TYPE1) += vfio_iommu_common.o vfio_iommu_type1.o
+obj-$(CONFIG_VFIO_IOMMU_SPAPR_TCE) += vfio_iommu_common.o vfio_iommu_spapr_tce.o
 obj-$(CONFIG_VFIO_PCI) += pci/
diff --git a/drivers/vfio/vfio_iommu_common.c b/drivers/vfio/vfio_iommu_common.c
new file mode 100644
index 0000000..08eea71
--- /dev/null
+++ b/drivers/vfio/vfio_iommu_common.c
@@ -0,0 +1,227 @@
+/*
+ * VFIO: Common code for vfio IOMMU support
+ *
+ * Copyright (C) 2012 Red Hat, Inc.  All rights reserved.
+ *     Author: Alex Williamson <alex.williamson@redhat.com>
+ *     Author: Bharat Bhushan <bharat.bhushan@freescale.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Derived from original vfio:
+ * Copyright 2010 Cisco Systems, Inc.  All rights reserved.
+ * Author: Tom Lyon, pugs@cisco.com
+ */
+
+#include <linux/compat.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+
+static bool disable_hugepages;
+module_param_named(disable_hugepages,
+		   disable_hugepages, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(disable_hugepages,
+		 "Disable VFIO IOMMU support for IOMMU hugepages.");
+
+struct vwork {
+	struct mm_struct	*mm;
+	long			npage;
+	struct work_struct	work;
+};
+
+/* delayed decrement/increment for locked_vm */
+static void vfio_lock_acct_bg(struct work_struct *work)
+{
+	struct vwork *vwork = container_of(work, struct vwork, work);
+	struct mm_struct *mm;
+
+	mm = vwork->mm;
+	down_write(&mm->mmap_sem);
+	mm->locked_vm += vwork->npage;
+	up_write(&mm->mmap_sem);
+	mmput(mm);
+	kfree(vwork);
+}
+
+void vfio_lock_acct(long npage)
+{
+	struct vwork *vwork;
+	struct mm_struct *mm;
+
+	if (!current->mm || !npage)
+		return; /* process exited or nothing to do */
+
+	if (down_write_trylock(&current->mm->mmap_sem)) {
+		current->mm->locked_vm += npage;
+		up_write(&current->mm->mmap_sem);
+		return;
+	}
+
+	/*
+	 * Couldn't get mmap_sem lock, so must setup to update
+	 * mm->locked_vm later. If locked_vm were atomic, we
+	 * wouldn't need this silliness
+	 */
+	vwork = kmalloc(sizeof(struct vwork), GFP_KERNEL);
+	if (!vwork)
+		return;
+	mm = get_task_mm(current);
+	if (!mm) {
+		kfree(vwork);
+		return;
+	}
+	INIT_WORK(&vwork->work, vfio_lock_acct_bg);
+	vwork->mm = mm;
+	vwork->npage = npage;
+	schedule_work(&vwork->work);
+}
+
+/*
+ * Some mappings aren't backed by a struct page, for example an mmap'd
+ * MMIO range for our own or another device.  These use a different
+ * pfn conversion and shouldn't be tracked as locked pages.
+ */
+static bool is_invalid_reserved_pfn(unsigned long pfn)
+{
+	if (pfn_valid(pfn)) {
+		bool reserved;
+		struct page *tail = pfn_to_page(pfn);
+		struct page *head = compound_trans_head(tail);
+		reserved = !!(PageReserved(head));
+		if (head != tail) {
+			/*
+			 * "head" is not a dangling pointer
+			 * (compound_trans_head takes care of that)
+			 * but the hugepage may have been split
+			 * from under us (and we may not hold a
+			 * reference count on the head page so it can
+			 * be reused before we run PageReferenced), so
+			 * we've to check PageTail before returning
+			 * what we just read.
+			 */
+			smp_rmb();
+			if (PageTail(tail))
+				return reserved;
+		}
+		return PageReserved(tail);
+	}
+
+	return true;
+}
+
+static int put_pfn(unsigned long pfn, int prot)
+{
+	if (!is_invalid_reserved_pfn(pfn)) {
+		struct page *page = pfn_to_page(pfn);
+		if (prot & IOMMU_WRITE)
+			SetPageDirty(page);
+		put_page(page);
+		return 1;
+	}
+	return 0;
+}
+
+static int vaddr_get_pfn(unsigned long vaddr, int prot, unsigned long *pfn)
+{
+	struct page *page[1];
+	struct vm_area_struct *vma;
+	int ret = -EFAULT;
+
+	if (get_user_pages_fast(vaddr, 1, !!(prot & IOMMU_WRITE), page) == 1) {
+		*pfn = page_to_pfn(page[0]);
+		return 0;
+	}
+
+	down_read(&current->mm->mmap_sem);
+
+	vma = find_vma_intersection(current->mm, vaddr, vaddr + 1);
+
+	if (vma && vma->vm_flags & VM_PFNMAP) {
+		*pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+		if (is_invalid_reserved_pfn(*pfn))
+			ret = 0;
+	}
+
+	up_read(&current->mm->mmap_sem);
+
+	return ret;
+}
+
+/*
+ * Attempt to pin pages.  We really don't want to track all the pfns and
+ * the iommu can only map chunks of consecutive pfns anyway, so get the
+ * first page and all consecutive pages with the same locking.
+ */
+long vfio_pin_pages(unsigned long vaddr, long npage,
+			   int prot, unsigned long *pfn_base)
+{
+	unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+	bool lock_cap = capable(CAP_IPC_LOCK);
+	long ret, i;
+
+	if (!current->mm)
+		return -ENODEV;
+
+	ret = vaddr_get_pfn(vaddr, prot, pfn_base);
+	if (ret)
+		return ret;
+
+	if (is_invalid_reserved_pfn(*pfn_base))
+		return 1;
+
+	if (!lock_cap && current->mm->locked_vm + 1 > limit) {
+		put_pfn(*pfn_base, prot);
+		pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n", __func__,
+			limit << PAGE_SHIFT);
+		return -ENOMEM;
+	}
+
+	if (unlikely(disable_hugepages)) {
+		vfio_lock_acct(1);
+		return 1;
+	}
+
+	/* Lock all the consecutive pages from pfn_base */
+	for (i = 1, vaddr += PAGE_SIZE; i < npage; i++, vaddr += PAGE_SIZE) {
+		unsigned long pfn = 0;
+
+		ret = vaddr_get_pfn(vaddr, prot, &pfn);
+		if (ret)
+			break;
+
+		if (pfn != *pfn_base + i || is_invalid_reserved_pfn(pfn)) {
+			put_pfn(pfn, prot);
+			break;
+		}
+
+		if (!lock_cap && current->mm->locked_vm + i + 1 > limit) {
+			put_pfn(pfn, prot);
+			pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
+				__func__, limit << PAGE_SHIFT);
+			break;
+		}
+	}
+
+	vfio_lock_acct(i);
+
+	return i;
+}
+
+long vfio_unpin_pages(unsigned long pfn, long npage,
+			     int prot, bool do_accounting)
+{
+	unsigned long unlocked = 0;
+	long i;
+
+	for (i = 0; i < npage; i++)
+		unlocked += put_pfn(pfn++, prot);
+
+	if (do_accounting)
+		vfio_lock_acct(-unlocked);
+
+	return unlocked;
+}
diff --git a/drivers/vfio/vfio_iommu_common.h b/drivers/vfio/vfio_iommu_common.h
new file mode 100644
index 0000000..2566ce6
--- /dev/null
+++ b/drivers/vfio/vfio_iommu_common.h
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2012 Red Hat, Inc.  All rights reserved.
+ * Copyright (C) 2013 Freescale Semiconductor, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307 USA
+ */
+
+#ifndef _VFIO_IOMMU_COMMON_H
+#define _VFIO_IOMMU_COMMON_H
+
+void vfio_lock_acct(long npage);
+long vfio_pin_pages(unsigned long vaddr, long npage, int prot,
+		    unsigned long *pfn_base);
+long vfio_unpin_pages(unsigned long pfn, long npage,
+		      int prot, bool do_accounting);
+#endif
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index a9807de..e9a58fa 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -37,6 +37,7 @@
 #include <linux/uaccess.h>
 #include <linux/vfio.h>
 #include <linux/workqueue.h>
+#include "vfio_iommu_common.h"
 
 #define DRIVER_VERSION  "0.2"
 #define DRIVER_AUTHOR   "Alex Williamson <alex.williamson@redhat.com>"
@@ -48,12 +49,6 @@ module_param_named(allow_unsafe_interrupts,
 MODULE_PARM_DESC(allow_unsafe_interrupts,
 		 "Enable VFIO IOMMU support for on platforms without interrupt remapping support.");
 
-static bool disable_hugepages;
-module_param_named(disable_hugepages,
-		   disable_hugepages, bool, S_IRUGO | S_IWUSR);
-MODULE_PARM_DESC(disable_hugepages,
-		 "Disable VFIO IOMMU support for IOMMU hugepages.");
-
 struct vfio_iommu {
 	struct iommu_domain	*domain;
 	struct mutex		lock;
@@ -123,205 +118,6 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
 	rb_erase(&old->node, &iommu->dma_list);
 }
 
-struct vwork {
-	struct mm_struct	*mm;
-	long			npage;
-	struct work_struct	work;
-};
-
-/* delayed decrement/increment for locked_vm */
-static void vfio_lock_acct_bg(struct work_struct *work)
-{
-	struct vwork *vwork = container_of(work, struct vwork, work);
-	struct mm_struct *mm;
-
-	mm = vwork->mm;
-	down_write(&mm->mmap_sem);
-	mm->locked_vm += vwork->npage;
-	up_write(&mm->mmap_sem);
-	mmput(mm);
-	kfree(vwork);
-}
-
-static void vfio_lock_acct(long npage)
-{
-	struct vwork *vwork;
-	struct mm_struct *mm;
-
-	if (!current->mm || !npage)
-		return; /* process exited or nothing to do */
-
-	if (down_write_trylock(&current->mm->mmap_sem)) {
-		current->mm->locked_vm += npage;
-		up_write(&current->mm->mmap_sem);
-		return;
-	}
-
-	/*
-	 * Couldn't get mmap_sem lock, so must setup to update
-	 * mm->locked_vm later. If locked_vm were atomic, we
-	 * wouldn't need this silliness
-	 */
-	vwork = kmalloc(sizeof(struct vwork), GFP_KERNEL);
-	if (!vwork)
-		return;
-	mm = get_task_mm(current);
-	if (!mm) {
-		kfree(vwork);
-		return;
-	}
-	INIT_WORK(&vwork->work, vfio_lock_acct_bg);
-	vwork->mm = mm;
-	vwork->npage = npage;
-	schedule_work(&vwork->work);
-}
-
-/*
- * Some mappings aren't backed by a struct page, for example an mmap'd
- * MMIO range for our own or another device.  These use a different
- * pfn conversion and shouldn't be tracked as locked pages.
- */
-static bool is_invalid_reserved_pfn(unsigned long pfn)
-{
-	if (pfn_valid(pfn)) {
-		bool reserved;
-		struct page *tail = pfn_to_page(pfn);
-		struct page *head = compound_trans_head(tail);
-		reserved = !!(PageReserved(head));
-		if (head != tail) {
-			/*
-			 * "head" is not a dangling pointer
-			 * (compound_trans_head takes care of that)
-			 * but the hugepage may have been split
-			 * from under us (and we may not hold a
-			 * reference count on the head page so it can
-			 * be reused before we run PageReferenced), so
-			 * we've to check PageTail before returning
-			 * what we just read.
-			 */
-			smp_rmb();
-			if (PageTail(tail))
-				return reserved;
-		}
-		return PageReserved(tail);
-	}
-
-	return true;
-}
-
-static int put_pfn(unsigned long pfn, int prot)
-{
-	if (!is_invalid_reserved_pfn(pfn)) {
-		struct page *page = pfn_to_page(pfn);
-		if (prot & IOMMU_WRITE)
-			SetPageDirty(page);
-		put_page(page);
-		return 1;
-	}
-	return 0;
-}
-
-static int vaddr_get_pfn(unsigned long vaddr, int prot, unsigned long *pfn)
-{
-	struct page *page[1];
-	struct vm_area_struct *vma;
-	int ret = -EFAULT;
-
-	if (get_user_pages_fast(vaddr, 1, !!(prot & IOMMU_WRITE), page) == 1) {
-		*pfn = page_to_pfn(page[0]);
-		return 0;
-	}
-
-	down_read(&current->mm->mmap_sem);
-
-	vma = find_vma_intersection(current->mm, vaddr, vaddr + 1);
-
-	if (vma && vma->vm_flags & VM_PFNMAP) {
-		*pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
-		if (is_invalid_reserved_pfn(*pfn))
-			ret = 0;
-	}
-
-	up_read(&current->mm->mmap_sem);
-
-	return ret;
-}
-
-/*
- * Attempt to pin pages.  We really don't want to track all the pfns and
- * the iommu can only map chunks of consecutive pfns anyway, so get the
- * first page and all consecutive pages with the same locking.
- */
-static long vfio_pin_pages(unsigned long vaddr, long npage,
-			   int prot, unsigned long *pfn_base)
-{
-	unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-	bool lock_cap = capable(CAP_IPC_LOCK);
-	long ret, i;
-
-	if (!current->mm)
-		return -ENODEV;
-
-	ret = vaddr_get_pfn(vaddr, prot, pfn_base);
-	if (ret)
-		return ret;
-
-	if (is_invalid_reserved_pfn(*pfn_base))
-		return 1;
-
-	if (!lock_cap && current->mm->locked_vm + 1 > limit) {
-		put_pfn(*pfn_base, prot);
-		pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n", __func__,
-			limit << PAGE_SHIFT);
-		return -ENOMEM;
-	}
-
-	if (unlikely(disable_hugepages)) {
-		vfio_lock_acct(1);
-		return 1;
-	}
-
-	/* Lock all the consecutive pages from pfn_base */
-	for (i = 1, vaddr += PAGE_SIZE; i < npage; i++, vaddr += PAGE_SIZE) {
-		unsigned long pfn = 0;
-
-		ret = vaddr_get_pfn(vaddr, prot, &pfn);
-		if (ret)
-			break;
-
-		if (pfn != *pfn_base + i || is_invalid_reserved_pfn(pfn)) {
-			put_pfn(pfn, prot);
-			break;
-		}
-
-		if (!lock_cap && current->mm->locked_vm + i + 1 > limit) {
-			put_pfn(pfn, prot);
-			pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
-				__func__, limit << PAGE_SHIFT);
-			break;
-		}
-	}
-
-	vfio_lock_acct(i);
-
-	return i;
-}
-
-static long vfio_unpin_pages(unsigned long pfn, long npage,
-			     int prot, bool do_accounting)
-{
-	unsigned long unlocked = 0;
-	long i;
-
-	for (i = 0; i < npage; i++)
-		unlocked += put_pfn(pfn++, prot);
-
-	if (do_accounting)
-		vfio_lock_acct(-unlocked);
-
-	return unlocked;
-}
-
 static int vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
 			    dma_addr_t iova, size_t *size)
 {
-- 
1.7.0.4


* [PATCH 9/9 v2] vfio pci: Add vfio iommu implementation for FSL_PAMU
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
                   ` (7 preceding siblings ...)
  2013-11-19  5:17 ` [PATCH 8/9 v2] vfio: moving some functions in common file Bharat Bhushan
@ 2013-11-19  5:17 ` Bharat Bhushan
  2013-11-20 18:47 ` [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Alex Williamson
  9 siblings, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-19  5:17 UTC (permalink / raw)
  To: alex.williamson, joro, bhelgaas, agraf, scottwood, stuart.yoder,
	iommu, linux-pci, linuxppc-dev, linux-kernel
  Cc: Bharat Bhushan

This patch adds vfio iommu support for Freescale IOMMU (PAMU -
Peripheral Access Management Unit).

The Freescale PAMU is an aperture-based IOMMU with the following
characteristics.  Each device has an entry in a table in memory
describing the iova->phys mapping. The mapping has:
   -an overall aperture that is power of 2 sized, and has a start iova that
    is naturally aligned
   -has 1 or more windows within the aperture
   -number of windows must be a power of 2, max is 256
   -size of each window is determined by aperture size / # of windows
   -iova of each window is determined by aperture start iova / # of windows
   -the mapped region in each window can be different from
    the window size...the mapping size must be a power of 2
   -physical address of the mapping must be naturally aligned
    with the mapping size

Some of the code is derived from TYPE1 iommu (driver/vfio/vfio_iommu_type1.c).

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v1->v2
 - Use lock around msi-dma list
 - check for overlap between dma and msi-dma pages
 - Some code cleanup as per various comments

 drivers/vfio/Kconfig               |    6 +
 drivers/vfio/Makefile              |    1 +
 drivers/vfio/vfio_iommu_fsl_pamu.c | 1003 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/vfio.h          |  100 ++++
 4 files changed, 1110 insertions(+), 0 deletions(-)
 create mode 100644 drivers/vfio/vfio_iommu_fsl_pamu.c

diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
index 26b3d9d..7d1da26 100644
--- a/drivers/vfio/Kconfig
+++ b/drivers/vfio/Kconfig
@@ -8,11 +8,17 @@ config VFIO_IOMMU_SPAPR_TCE
 	depends on VFIO && SPAPR_TCE_IOMMU
 	default n
 
+config VFIO_IOMMU_FSL_PAMU
+	tristate
+	depends on VFIO
+	default n
+
 menuconfig VFIO
 	tristate "VFIO Non-Privileged userspace driver framework"
 	depends on IOMMU_API
 	select VFIO_IOMMU_TYPE1 if X86
 	select VFIO_IOMMU_SPAPR_TCE if (PPC_POWERNV || PPC_PSERIES)
+	select VFIO_IOMMU_FSL_PAMU if FSL_PAMU
 	help
 	  VFIO provides a framework for secure userspace device drivers.
 	  See Documentation/vfio.txt for more details.
diff --git a/drivers/vfio/Makefile b/drivers/vfio/Makefile
index c5792ec..7461350 100644
--- a/drivers/vfio/Makefile
+++ b/drivers/vfio/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_VFIO) += vfio.o
 obj-$(CONFIG_VFIO_IOMMU_TYPE1) += vfio_iommu_common.o vfio_iommu_type1.o
 obj-$(CONFIG_VFIO_IOMMU_SPAPR_TCE) += vfio_iommu_common.o vfio_iommu_spapr_tce.o
+obj-$(CONFIG_VFIO_IOMMU_FSL_PAMU) += vfio_iommu_common.o vfio_iommu_fsl_pamu.o
 obj-$(CONFIG_VFIO_PCI) += pci/
diff --git a/drivers/vfio/vfio_iommu_fsl_pamu.c b/drivers/vfio/vfio_iommu_fsl_pamu.c
new file mode 100644
index 0000000..66efc84
--- /dev/null
+++ b/drivers/vfio/vfio_iommu_fsl_pamu.c
@@ -0,0 +1,1003 @@
+/*
+ * VFIO: IOMMU DMA mapping support for FSL PAMU IOMMU
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright (C) 2013 Freescale Semiconductor, Inc.
+ *
+ *     Author: Bharat Bhushan <bharat.bhushan@freescale.com>
+ *
+ * This file is derived from driver/vfio/vfio_iommu_type1.c
+ *
+ * The Freescale PAMU is an aperture-based IOMMU with the following
+ * characteristics.  Each device has an entry in a table in memory
+ * describing the iova->phys mapping. The mapping has:
+ *  -an overall aperture that is power of 2 sized, and has a start iova that
+ *   is naturally aligned
+ *  -has 1 or more windows within the aperture
+ *     -number of windows must be a power of 2, max is 256
+ *     -size of each window is determined by aperture size / # of windows
+ *     -iova of each window is determined by aperture start iova / # of windows
+ *     -the mapped region in each window can be different from
+ *      the window size...the mapping size must be a power of 2
+ *     -physical address of the mapping must be naturally aligned
+ *      with the mapping size
+ */
+
+#include <linux/compat.h>
+#include <linux/device.h>
+#include <linux/fs.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/pci.h>		/* pci_bus_type */
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/vfio.h>
+#include <linux/hugetlb.h>
+#include <linux/msi.h>
+#include <asm/fsl_pamu_stash.h>
+
+#include "vfio_iommu_common.h"
+
+#define DRIVER_VERSION  "0.1"
+#define DRIVER_AUTHOR   "Bharat Bhushan <bharat.bhushan@freescale.com>"
+#define DRIVER_DESC     "FSL PAMU IOMMU driver for VFIO"
+
+struct vfio_iommu {
+	struct iommu_domain	*domain;
+	struct mutex		lock;
+	dma_addr_t		aperture_start;
+	dma_addr_t		aperture_end;
+	dma_addr_t		page_size;	/* Maximum mapped Page size */
+	int			nsubwindows;	/* Number of subwindows */
+	struct rb_root		dma_list;
+	struct list_head	msi_dma_list;
+	struct list_head	group_list;
+};
+
+struct vfio_dma {
+	struct rb_node		node;
+	dma_addr_t		iova;		/* Device address */
+	unsigned long		vaddr;		/* Process virtual addr */
+	size_t			size;		/* Map size (bytes) */
+	int			prot;		/* IOMMU_READ/WRITE */
+};
+
+struct vfio_msi_dma {
+	struct list_head	next;
+	dma_addr_t		iova;		/* Device address */
+	size_t			size;		/* MSI page size */
+	int			bank_id;
+	int			prot;		/* IOMMU_READ/WRITE */
+};
+
+struct vfio_group {
+	struct iommu_group	*iommu_group;
+	struct list_head	next;
+};
+
+static int iova_to_win(struct vfio_iommu *iommu, dma_addr_t iova)
+{
+	u64 offset = iova - iommu->aperture_start;
+	do_div(offset, iommu->page_size);
+	return (int) offset;
+}
+
+static int vfio_disable_iommu_domain(struct vfio_iommu *iommu)
+{
+	int enable = 0;
+	return iommu_domain_set_attr(iommu->domain,
+				     DOMAIN_ATTR_FSL_PAMU_ENABLE, &enable);
+}
+
+static int vfio_enable_iommu_domain(struct vfio_iommu *iommu)
+{
+	int enable = 1;
+	return iommu_domain_set_attr(iommu->domain,
+				     DOMAIN_ATTR_FSL_PAMU_ENABLE, &enable);
+}
+
+/* Unmap DMA region */
+/* This function disables the iommu if no dma mapping is set */
+static void vfio_check_and_disable_iommu(struct vfio_iommu *iommu)
+{
+	if (list_empty(&iommu->msi_dma_list) && !rb_first(&iommu->dma_list))
+		vfio_disable_iommu_domain(iommu);
+}
+
+static struct vfio_msi_dma *vfio_find_msi_dma(struct vfio_iommu *iommu,
+					      dma_addr_t start, size_t size)
+{
+	struct vfio_msi_dma *msi_dma;
+
+	/* Check MSI MAP entries */
+	list_for_each_entry(msi_dma, &iommu->msi_dma_list, next) {
+		if ((start + size) <= (msi_dma->iova))
+			continue;
+
+		if ((start >= (msi_dma->iova + msi_dma->size)))
+			continue;
+
+		return msi_dma;
+	}
+
+	return NULL;
+}
+
+static struct vfio_dma *vfio_find_dma(struct vfio_iommu *iommu,
+				      dma_addr_t start, size_t size)
+{
+	struct rb_node *node = iommu->dma_list.rb_node;
+
+	/* check DMA MAP entries */
+	while (node) {
+		struct vfio_dma *dma = rb_entry(node, struct vfio_dma, node);
+
+		if (start + size <= dma->iova)
+			node = node->rb_left;
+		else if (start >= dma->iova + dma->size)
+			node = node->rb_right;
+		else
+			return dma;
+	}
+
+	return NULL;
+}
+
+static void vfio_insert_dma(struct vfio_iommu *iommu, struct vfio_dma *new)
+{
+	struct rb_node **link = &iommu->dma_list.rb_node, *parent = NULL;
+	struct vfio_dma *dma;
+
+	while (*link) {
+		parent = *link;
+		dma = rb_entry(parent, struct vfio_dma, node);
+
+		if (new->iova + new->size <= dma->iova)
+			link = &(*link)->rb_left;
+		else
+			link = &(*link)->rb_right;
+	}
+
+	rb_link_node(&new->node, parent, link);
+	rb_insert_color(&new->node, &iommu->dma_list);
+}
+
+static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
+{
+	rb_erase(&old->node, &iommu->dma_list);
+	vfio_check_and_disable_iommu(iommu);
+}
+
+static int vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
+			    dma_addr_t iova, size_t *size)
+{
+	dma_addr_t start = iova;
+	int win, win_start, win_end;
+	long unlocked = 0;
+	unsigned int nr_pages;
+
+	nr_pages = iommu->page_size / PAGE_SIZE;
+	win_start = iova_to_win(iommu, iova);
+	win_end = iova_to_win(iommu, iova + *size - 1);
+
+	/* Release the pinned pages */
+	for (win = win_start; win <= win_end; iova += iommu->page_size, win++) {
+		unsigned long pfn;
+
+		pfn = iommu_iova_to_phys(iommu->domain, iova) >> PAGE_SHIFT;
+		if (!pfn)
+			continue;
+
+		iommu_domain_window_disable(iommu->domain, win);
+
+		unlocked += vfio_unpin_pages(pfn, nr_pages, dma->prot, 1);
+	}
+
+	vfio_lock_acct(-unlocked);
+	*size = iova - start;
+	return 0;
+}
+
+static int vfio_remove_dma_overlap(struct vfio_iommu *iommu, dma_addr_t start,
+				   size_t *size, struct vfio_dma *dma)
+{
+	size_t offset, overlap, tmp;
+	struct vfio_dma *split;
+	int ret;
+
+	if (!*size)
+		return 0;
+
+	/*
+	 * Existing dma region is completely covered, unmap all.  This is
+	 * the likely case since userspace tends to map and unmap buffers
+	 * in one shot rather than multiple mappings within a buffer.
+	 */
+	if (likely(start <= dma->iova &&
+		   start + *size >= dma->iova + dma->size)) {
+		*size = dma->size;
+		ret = vfio_unmap_unpin(iommu, dma, dma->iova, size);
+		if (ret)
+			return ret;
+
+		/*
+		 * Did we remove more than we have?  Should never happen
+		 * since a vfio_dma is contiguous in iova and vaddr.
+		 */
+		WARN_ON(*size != dma->size);
+
+		vfio_remove_dma(iommu, dma);
+		kfree(dma);
+		return 0;
+	}
+
+	/* Overlap low address of existing range */
+	if (start <= dma->iova) {
+		overlap = start + *size - dma->iova;
+		ret = vfio_unmap_unpin(iommu, dma, dma->iova, &overlap);
+		if (ret)
+			return ret;
+
+		vfio_remove_dma(iommu, dma);
+
+		/*
+	 * Check: we may have removed the whole vfio_dma.  If not,
+		 * fixup and re-insert.
+		 */
+		if (overlap < dma->size) {
+			dma->iova += overlap;
+			dma->vaddr += overlap;
+			dma->size -= overlap;
+			vfio_insert_dma(iommu, dma);
+		} else
+			kfree(dma);
+
+		*size = overlap;
+		return 0;
+	}
+
+	/* Overlap high address of existing range */
+	if (start + *size >= dma->iova + dma->size) {
+		offset = start - dma->iova;
+		overlap = dma->size - offset;
+
+		ret = vfio_unmap_unpin(iommu, dma, start, &overlap);
+		if (ret)
+			return ret;
+
+		dma->size -= overlap;
+		*size = overlap;
+		return 0;
+	}
+
+	/* Split existing */
+
+	/*
+	 * Allocate our tracking structure early even though it may not
+	 * be used.  An allocation failure later loses track of pages and
+	 * is more difficult to unwind.
+	 */
+	split = kzalloc(sizeof(*split), GFP_KERNEL);
+	if (!split)
+		return -ENOMEM;
+
+	offset = start - dma->iova;
+
+	ret = vfio_unmap_unpin(iommu, dma, start, size);
+	if (ret || !*size) {
+		kfree(split);
+		return ret;
+	}
+
+	tmp = dma->size;
+
+	/* Resize the lower vfio_dma in place, before the below insert */
+	dma->size = offset;
+
+	/* Insert new for remainder, assuming it didn't all get unmapped */
+	if (likely(offset + *size < tmp)) {
+		split->size = tmp - offset - *size;
+		split->iova = dma->iova + offset + *size;
+		split->vaddr = dma->vaddr + offset + *size;
+		split->prot = dma->prot;
+		vfio_insert_dma(iommu, split);
+	} else
+		kfree(split);
+
+	return 0;
+}
+
+/* Map DMA region */
+static int vfio_dma_map(struct vfio_iommu *iommu, dma_addr_t iova,
+			  unsigned long vaddr, long npage, int prot)
+{
+	int ret = 0, i;
+	size_t size;
+	unsigned int win, nr_subwindows;
+	dma_addr_t iovamap;
+
+	win = iova_to_win(iommu, iova);
+	if (iova != iommu->aperture_start + iommu->page_size * win) {
+		pr_err("%s iova(%llx) not aligned to window size %llx\n",
+			__func__, iova, iommu->page_size);
+		return -EINVAL;
+	}
+
+	/* total size to be mapped */
+	size = npage << PAGE_SHIFT;
+	nr_subwindows = size >> ilog2(iommu->page_size);
+	iovamap = iova;
+
+	for (i = 0; i < nr_subwindows; i++, win++) {
+		unsigned long pfn;
+		unsigned long nr_pages;
+		dma_addr_t mapsize;
+		struct vfio_dma *dma = NULL;
+
+		mapsize = min(iova + size - iovamap, iommu->page_size);
+		nr_pages = mapsize >> PAGE_SHIFT;
+
+		/* Pin a contiguous chunk of memory */
+		ret = vfio_pin_pages(vaddr, nr_pages, prot, &pfn);
+		if (ret != nr_pages) {
+			pr_err("%s unable to pin pages = %lx, pinned(%lx/%lx)\n",
+				__func__, vaddr, npage, nr_pages);
+			ret = -EINVAL;
+			break;
+		}
+
+		ret = iommu_domain_window_enable(iommu->domain, win,
+						 (phys_addr_t)pfn << PAGE_SHIFT,
+						 mapsize, prot);
+		if (ret) {
+			pr_err("%s unable to iommu_map()\n", __func__);
+			ret = -EINVAL;
+			break;
+		}
+
+		/*
+		 * Check if we abut a region below - nothing below 0.
+		 * This is the most likely case when mapping chunks of
+		 * physically contiguous regions within a virtual address
+		 * range.  Update the abutting entry in place since iova
+		 * doesn't change.
+		 */
+		if (likely(iovamap)) {
+			struct vfio_dma *tmp;
+			tmp = vfio_find_dma(iommu, iovamap - 1, 1);
+			if (tmp && tmp->prot == prot &&
+			    tmp->vaddr + tmp->size == vaddr) {
+				tmp->size += mapsize;
+				dma = tmp;
+			}
+		}
+
+		/*
+		 * Check if we abut a region above - nothing above ~0 + 1.
+		 * If we abut above and below, remove and free.  If only
+		 * abut above, remove, modify, reinsert.
+		 */
+		if (likely(iovamap + mapsize)) {
+			struct vfio_dma *tmp;
+			tmp = vfio_find_dma(iommu, iovamap + mapsize, 1);
+			if (tmp && tmp->prot == prot &&
+			    tmp->vaddr == vaddr + mapsize) {
+				vfio_remove_dma(iommu, tmp);
+				if (dma) {
+					dma->size += tmp->size;
+					kfree(tmp);
+				} else {
+					tmp->size += mapsize;
+					tmp->iova = iovamap;
+					tmp->vaddr = vaddr;
+					vfio_insert_dma(iommu, tmp);
+					dma = tmp;
+				}
+			}
+		}
+
+		if (!dma) {
+			dma = kzalloc(sizeof(*dma), GFP_KERNEL);
+			if (!dma) {
+				iommu_unmap(iommu->domain, iovamap, mapsize);
+				vfio_unpin_pages(pfn, npage, prot, true);
+				ret = -ENOMEM;
+				break;
+			}
+
+			dma->size = mapsize;
+			dma->iova = iovamap;
+			dma->vaddr = vaddr;
+			dma->prot = prot;
+			vfio_insert_dma(iommu, dma);
+		}
+
+		iovamap += mapsize;
+		vaddr += mapsize;
+	}
+
+	if (ret) {
+		struct vfio_dma *tmp;
+		while ((tmp = vfio_find_dma(iommu, iova, size))) {
+			int r = vfio_remove_dma_overlap(iommu, iova,
+							&size, tmp);
+			if (WARN_ON(r || !size))
+				break;
+		}
+		return 0;
+	}
+
+	vfio_enable_iommu_domain(iommu);
+	return 0;
+}
+
+static int vfio_dma_do_map(struct vfio_iommu *iommu,
+			   struct vfio_iommu_type1_dma_map *map)
+{
+	dma_addr_t iova = map->iova;
+	size_t size = map->size;
+	unsigned long vaddr = map->vaddr;
+	int ret = 0, prot = 0;
+	long npage;
+
+	/* READ/WRITE from device perspective */
+	if (map->flags & VFIO_DMA_MAP_FLAG_WRITE)
+		prot |= IOMMU_WRITE;
+	if (map->flags & VFIO_DMA_MAP_FLAG_READ)
+		prot |= IOMMU_READ;
+
+	if (!prot)
+		return -EINVAL; /* No READ/WRITE? */
+
+	/* Don't allow IOVA wrap */
+	if (iova + size && iova + size < iova)
+		return -EINVAL;
+
+	/* Don't allow virtual address wrap */
+	if (vaddr + size && vaddr + size < vaddr)
+		return -EINVAL;
+
+	/*
+	 * FIXME: Currently we only support mapping page-size
+	 * of subwindow-size.
+	 */
+	if (size < iommu->page_size)
+		return -EINVAL;
+
+	npage = size >> PAGE_SHIFT;
+	if (!npage)
+		return -EINVAL;
+
+	mutex_lock(&iommu->lock);
+
+	/* Check for dma mapping and msi_dma mapping */
+	if (vfio_find_dma(iommu, iova, size) ||
+	    vfio_find_msi_dma(iommu, iova, size)) {
+		ret = -EEXIST;
+		goto out_lock;
+	}
+
+	ret = vfio_dma_map(iommu, iova, vaddr, npage, prot);
+
+out_lock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
+static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
+			     struct vfio_iommu_type1_dma_unmap *unmap)
+{
+	struct vfio_dma *dma;
+	size_t unmapped = 0, size;
+	int ret = 0;
+
+	mutex_lock(&iommu->lock);
+
+	while ((dma = vfio_find_dma(iommu, unmap->iova, unmap->size))) {
+		size = unmap->size;
+		ret = vfio_remove_dma_overlap(iommu, unmap->iova, &size, dma);
+		if (ret || !size)
+			break;
+		unmapped += size;
+	}
+
+	mutex_unlock(&iommu->lock);
+
+	/*
+	 * We may unmap more than requested, update the unmap struct so
+	 * userspace can know.
+	 */
+	unmap->size = unmapped;
+
+	return ret;
+}
+
+static int vfio_handle_get_attr(struct vfio_iommu *iommu,
+			 struct vfio_pamu_attr *pamu_attr)
+{
+	int ret = 0;
+
+	switch (pamu_attr->attribute) {
+	case VFIO_ATTR_GEOMETRY: {
+		struct iommu_domain_geometry geom;
+		ret = iommu_domain_get_attr(iommu->domain,
+					  DOMAIN_ATTR_GEOMETRY, &geom);
+		pamu_attr->attr_info.attr.aperture_start = geom.aperture_start;
+		pamu_attr->attr_info.attr.aperture_end = geom.aperture_end;
+		break;
+	}
+	case VFIO_ATTR_WINDOWS: {
+		u32 count;
+		ret = iommu_domain_get_attr(iommu->domain,
+				      DOMAIN_ATTR_WINDOWS, &count);
+		pamu_attr->attr_info.windows = count;
+		break;
+	}
+	case VFIO_ATTR_PAMU_STASH: {
+		struct pamu_stash_attribute stash;
+		ret = iommu_domain_get_attr(iommu->domain,
+				      DOMAIN_ATTR_FSL_PAMU_STASH, &stash);
+		pamu_attr->attr_info.stash.cpu = stash.cpu;
+		pamu_attr->attr_info.stash.cache = stash.cache;
+		break;
+	}
+
+	default:
+		pr_err("%s Error: Invalid attribute (%d)\n",
+			 __func__, pamu_attr->attribute);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int vfio_handle_set_attr(struct vfio_iommu *iommu,
+			 struct vfio_pamu_attr *pamu_attr)
+{
+	int ret = 0;
+
+	switch (pamu_attr->attribute) {
+	case VFIO_ATTR_GEOMETRY: {
+		struct iommu_domain_geometry geom;
+
+		geom.aperture_start = pamu_attr->attr_info.attr.aperture_start;
+		geom.aperture_end = pamu_attr->attr_info.attr.aperture_end;
+		iommu->aperture_start = geom.aperture_start;
+		iommu->aperture_end = geom.aperture_end;
+		geom.force_aperture = 1;
+		ret = iommu_domain_set_attr(iommu->domain,
+					  DOMAIN_ATTR_GEOMETRY, &geom);
+		break;
+	}
+	case VFIO_ATTR_WINDOWS: {
+		u32 count = pamu_attr->attr_info.windows;
+		u64 size = iommu->aperture_end - iommu->aperture_start + 1;
+
+		ret = iommu_domain_set_attr(iommu->domain,
+				      DOMAIN_ATTR_WINDOWS, &count);
+		if (!ret) {
+			iommu->nsubwindows = pamu_attr->attr_info.windows;
+			iommu->page_size = size >> ilog2(count);
+		}
+
+		break;
+	}
+	case VFIO_ATTR_PAMU_STASH: {
+		struct pamu_stash_attribute stash;
+
+		stash.cpu = pamu_attr->attr_info.stash.cpu;
+		stash.cache = pamu_attr->attr_info.stash.cache;
+		ret = iommu_domain_set_attr(iommu->domain,
+				      DOMAIN_ATTR_FSL_PAMU_STASH, &stash);
+		break;
+	}
+
+	default:
+		pr_err("%s Error: Invalid attribute (%d)\n",
+			 __func__, pamu_attr->attribute);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+static int pci_msi_set_device_iova(struct device *dev, void *data)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct vfio_msi_dma *msi_dma = data;
+
+	return msi_set_iova(pdev, msi_dma->bank_id, msi_dma->iova, 1);
+}
+
+static int pci_msi_clear_device_iova(struct device *dev, void *data)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct vfio_msi_dma *msi_dma = data;
+
+	return msi_set_iova(pdev, msi_dma->bank_id, msi_dma->iova, 0);
+}
+
+static int vfio_iommu_set_msi_iova(struct vfio_iommu *iommu,
+				   struct vfio_msi_dma *msi_dma)
+{
+	struct vfio_group *group;
+	int ret = 0;
+
+	list_for_each_entry(group, &iommu->group_list, next) {
+		ret = iommu_group_for_each_dev(group->iommu_group, msi_dma,
+					       pci_msi_set_device_iova);
+	}
+
+	return ret;
+}
+
+static int vfio_iommu_clear_msi_iova(struct vfio_iommu *iommu,
+				     struct vfio_msi_dma *msi_dma)
+{
+	struct vfio_group *group;
+	int ret = 0;
+
+	list_for_each_entry(group, &iommu->group_list, next) {
+		ret = iommu_group_for_each_dev(group->iommu_group, msi_dma,
+					       pci_msi_clear_device_iova);
+	}
+
+	return ret;
+}
+
+static int vfio_do_msi_map(struct vfio_iommu *iommu,
+			struct vfio_pamu_msi_bank_map *msi_map)
+{
+	struct msi_region region;
+	struct vfio_msi_dma *msi_dma;
+	int window;
+	int prot = 0;
+	int ret;
+
+	/* READ/WRITE from device perspective */
+	if (msi_map->flags & VFIO_DMA_MAP_FLAG_WRITE)
+		prot |= IOMMU_WRITE;
+	if (msi_map->flags & VFIO_DMA_MAP_FLAG_READ)
+		prot |= IOMMU_READ;
+
+	if (!prot)
+		return -EINVAL; /* No READ/WRITE? */
+
+	ret = msi_get_region(msi_map->msi_bank_index, &region);
+	if (ret) {
+		pr_err("%s MSI region (%d) not found\n", __func__,
+		       msi_map->msi_bank_index);
+		return ret;
+	}
+
+	mutex_lock(&iommu->lock);
+	/* Check for an existing dma mapping and msi_dma mapping */
+	if (vfio_find_dma(iommu, msi_map->iova, region.size) ||
+	    vfio_find_msi_dma(iommu, msi_map->iova, region.size)) {
+		ret = -EEXIST;
+		goto out_lock;
+	}
+
+	window = iova_to_win(iommu, msi_map->iova);
+	ret = iommu_domain_window_enable(iommu->domain, window, region.addr,
+					 region.size, prot);
+	if (ret) {
+		pr_err("%s Error: unable to map msi region\n", __func__);
+		goto out_lock;
+	}
+
+	msi_dma = kzalloc(sizeof(*msi_dma), GFP_KERNEL);
+	if (!msi_dma) {
+		ret = -ENOMEM;
+		goto out_lock;
+	}
+
+	msi_dma->iova = msi_map->iova;
+	msi_dma->size = region.size;
+	msi_dma->bank_id = msi_map->msi_bank_index;
+	list_add(&msi_dma->next, &iommu->msi_dma_list);
+
+	/* Set the iova for all devices in the iommu group for the given msi bank */
+	ret = vfio_iommu_set_msi_iova(iommu, msi_dma);
+
+out_lock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
+static void vfio_msi_unmap(struct vfio_iommu *iommu, dma_addr_t iova)
+{
+	int window = iova_to_win(iommu, iova);
+
+	iommu_domain_window_disable(iommu->domain, window);
+}
+
+static int vfio_do_msi_unmap(struct vfio_iommu *iommu,
+			     struct vfio_pamu_msi_bank_unmap *msi_unmap)
+{
+	struct vfio_msi_dma *mdma, *mdma_tmp;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry_safe(mdma, mdma_tmp, &iommu->msi_dma_list, next) {
+		if (mdma->iova == msi_unmap->iova) {
+			/* Clear mapping for msi iova page mapping */
+			vfio_iommu_clear_msi_iova(iommu, mdma);
+			/* Unmap in iommu (PAMU) */
+			vfio_msi_unmap(iommu, mdma->iova);
+			list_del(&mdma->next);
+			vfio_check_and_disable_iommu(iommu);
+			kfree(mdma);
+			mutex_unlock(&iommu->lock);
+			return 0;
+		}
+	}
+
+	mutex_unlock(&iommu->lock);
+	return -EINVAL;
+}
+
+static void *vfio_iommu_fsl_pamu_open(unsigned long arg)
+{
+	struct vfio_iommu *iommu;
+
+	if (arg != VFIO_FSL_PAMU_IOMMU)
+		return ERR_PTR(-EINVAL);
+
+	iommu = kzalloc(sizeof(*iommu), GFP_KERNEL);
+	if (!iommu)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&iommu->group_list);
+	iommu->dma_list = RB_ROOT;
+	INIT_LIST_HEAD(&iommu->msi_dma_list);
+	mutex_init(&iommu->lock);
+
+	/*
+	 * Wish we didn't have to know about bus_type here.
+	 */
+	iommu->domain = iommu_domain_alloc(&pci_bus_type);
+	if (!iommu->domain) {
+		kfree(iommu);
+		return ERR_PTR(-EIO);
+	}
+
+	return iommu;
+}
+
+static void vfio_iommu_fsl_pamu_release(void *iommu_data)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *group, *group_tmp;
+	struct vfio_msi_dma *mdma, *mdma_tmp;
+	struct rb_node *node;
+
+	list_for_each_entry_safe(group, group_tmp, &iommu->group_list, next) {
+		iommu_detach_group(iommu->domain, group->iommu_group);
+		list_del(&group->next);
+		kfree(group);
+	}
+
+	while ((node = rb_first(&iommu->dma_list))) {
+		struct vfio_dma *dma = rb_entry(node, struct vfio_dma, node);
+		size_t size = dma->size;
+		vfio_remove_dma_overlap(iommu, dma->iova, &size, dma);
+		if (WARN_ON(!size))
+			break;
+	}
+
+	list_for_each_entry_safe(mdma, mdma_tmp, &iommu->msi_dma_list, next) {
+		vfio_msi_unmap(iommu, mdma->iova);
+		list_del(&mdma->next);
+		kfree(mdma);
+	}
+
+	/* Disable the iommu as there is no valid entry */
+	vfio_disable_iommu_domain(iommu);
+
+	iommu_domain_free(iommu->domain);
+	iommu->domain = NULL;
+	kfree(iommu);
+}
+
+static long vfio_iommu_fsl_pamu_ioctl(void *iommu_data,
+				      unsigned int cmd, unsigned long arg)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	unsigned long minsz;
+
+	if (cmd == VFIO_CHECK_EXTENSION) {
+		switch (arg) {
+		case VFIO_FSL_PAMU_IOMMU:
+			return 1;
+		default:
+			return 0;
+		}
+	} else if (cmd == VFIO_IOMMU_MAP_DMA) {
+		struct vfio_iommu_type1_dma_map map;
+		uint32_t mask = VFIO_DMA_MAP_FLAG_READ |
+				VFIO_DMA_MAP_FLAG_WRITE;
+
+		minsz = offsetofend(struct vfio_iommu_type1_dma_map, size);
+
+		if (copy_from_user(&map, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (map.argsz < minsz || map.flags & ~mask)
+			return -EINVAL;
+
+		return vfio_dma_do_map(iommu, &map);
+
+	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
+		struct vfio_iommu_type1_dma_unmap unmap;
+		long ret;
+
+		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
+
+		if (copy_from_user(&unmap, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (unmap.argsz < minsz || unmap.flags)
+			return -EINVAL;
+
+		ret = vfio_dma_do_unmap(iommu, &unmap);
+		if (ret)
+			return ret;
+
+		return copy_to_user((void __user *)arg, &unmap, minsz) ?
+			-EFAULT : 0;
+	} else if (cmd == VFIO_IOMMU_PAMU_GET_ATTR) {
+		struct vfio_pamu_attr pamu_attr;
+		long ret;
+
+		minsz = offsetofend(struct vfio_pamu_attr, attr_info);
+		if (copy_from_user(&pamu_attr, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (pamu_attr.argsz < minsz)
+			return -EINVAL;
+
+		ret = vfio_handle_get_attr(iommu, &pamu_attr);
+		if (ret)
+			return ret;
+
+		if (copy_to_user((void __user *)arg, &pamu_attr, minsz))
+			return -EFAULT;
+		return 0;
+	} else if (cmd == VFIO_IOMMU_PAMU_SET_ATTR) {
+		struct vfio_pamu_attr pamu_attr;
+
+		minsz = offsetofend(struct vfio_pamu_attr, attr_info);
+		if (copy_from_user(&pamu_attr, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (pamu_attr.argsz < minsz)
+			return -EINVAL;
+
+		return vfio_handle_set_attr(iommu, &pamu_attr);
+	} else if (cmd == VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT) {
+		return msi_get_region_count();
+	} else if (cmd == VFIO_IOMMU_PAMU_MAP_MSI_BANK) {
+		struct vfio_pamu_msi_bank_map msi_map;
+
+		minsz = offsetofend(struct vfio_pamu_msi_bank_map, iova);
+		if (copy_from_user(&msi_map, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (msi_map.argsz < minsz)
+			return -EINVAL;
+
+		return vfio_do_msi_map(iommu, &msi_map);
+	} else if (cmd == VFIO_IOMMU_PAMU_UNMAP_MSI_BANK) {
+		struct vfio_pamu_msi_bank_unmap msi_unmap;
+
+		minsz = offsetofend(struct vfio_pamu_msi_bank_unmap, iova);
+		if (copy_from_user(&msi_unmap, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (msi_unmap.argsz < minsz)
+			return -EINVAL;
+
+		return vfio_do_msi_unmap(iommu, &msi_unmap);
+
+	}
+
+	return -ENOTTY;
+}
+
+static int vfio_iommu_fsl_pamu_attach_group(void *iommu_data,
+					 struct iommu_group *iommu_group)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *group, *tmp;
+	int ret;
+
+	group = kzalloc(sizeof(*group), GFP_KERNEL);
+	if (!group)
+		return -ENOMEM;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry(tmp, &iommu->group_list, next) {
+		if (tmp->iommu_group == iommu_group) {
+			mutex_unlock(&iommu->lock);
+			kfree(group);
+			return -EINVAL;
+		}
+	}
+
+	ret = iommu_attach_group(iommu->domain, iommu_group);
+	if (ret) {
+		mutex_unlock(&iommu->lock);
+		kfree(group);
+		return ret;
+	}
+
+	group->iommu_group = iommu_group;
+	list_add(&group->next, &iommu->group_list);
+
+	mutex_unlock(&iommu->lock);
+
+	return 0;
+}
+
+static void vfio_iommu_fsl_pamu_detach_group(void *iommu_data,
+					  struct iommu_group *iommu_group)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *group;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry(group, &iommu->group_list, next) {
+		if (group->iommu_group == iommu_group) {
+			iommu_detach_group(iommu->domain, iommu_group);
+			list_del(&group->next);
+			kfree(group);
+			break;
+		}
+	}
+
+	mutex_unlock(&iommu->lock);
+}
+
+static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_fsl_pamu = {
+	.name		= "vfio-iommu-fsl_pamu",
+	.owner		= THIS_MODULE,
+	.open		= vfio_iommu_fsl_pamu_open,
+	.release	= vfio_iommu_fsl_pamu_release,
+	.ioctl		= vfio_iommu_fsl_pamu_ioctl,
+	.attach_group	= vfio_iommu_fsl_pamu_attach_group,
+	.detach_group	= vfio_iommu_fsl_pamu_detach_group,
+};
+
+static int __init vfio_iommu_fsl_pamu_init(void)
+{
+	if (!iommu_present(&pci_bus_type))
+		return -ENODEV;
+
+	return vfio_register_iommu_driver(&vfio_iommu_driver_ops_fsl_pamu);
+}
+
+static void __exit vfio_iommu_fsl_pamu_cleanup(void)
+{
+	vfio_unregister_iommu_driver(&vfio_iommu_driver_ops_fsl_pamu);
+}
+
+module_init(vfio_iommu_fsl_pamu_init);
+module_exit(vfio_iommu_fsl_pamu_cleanup);
+
+MODULE_VERSION(DRIVER_VERSION);
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 0fd47f5..d359055 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -23,6 +23,7 @@
 
 #define VFIO_TYPE1_IOMMU		1
 #define VFIO_SPAPR_TCE_IOMMU		2
+#define VFIO_FSL_PAMU_IOMMU		3
 
 /*
  * The IOCTL interface is designed for extensibility by embedding the
@@ -451,4 +452,103 @@ struct vfio_iommu_spapr_tce_info {
 
 /* ***************************************************************** */
 
+/*********** APIs for VFIO_PAMU type only ****************/
+/*
+ * VFIO_IOMMU_PAMU_GET_ATTR - _IO(VFIO_TYPE, VFIO_BASE + 17,
+ *				  struct vfio_pamu_attr)
+ *
+ * Gets the iommu attributes for the current vfio container.
+ * Caller sets argsz and attribute.  The ioctl fills in
+ * the provided struct vfio_pamu_attr based on the attribute
+ * value that was set.
+ * Return: 0 on success, -errno on failure
+ */
+struct vfio_pamu_attr {
+	__u32	argsz;
+	__u32	flags;	/* no flags currently */
+#define VFIO_ATTR_GEOMETRY	0
+#define VFIO_ATTR_WINDOWS	1
+#define VFIO_ATTR_PAMU_STASH	2
+	__u32	attribute;
+
+	union {
+		/* VFIO_ATTR_GEOMETRY */
+		struct {
+			/* first addr that can be mapped */
+			__u64 aperture_start;
+			/* last addr that can be mapped */
+			__u64 aperture_end;
+		} attr;
+
+		/* VFIO_ATTR_WINDOWS */
+		__u32 windows;  /* number of windows in the aperture
+				 * initially this will be the max number
+				 * of windows that can be set
+				 */
+		/* VFIO_ATTR_PAMU_STASH */
+		struct {
+			__u32 cpu;	/* CPU number for stashing */
+			__u32 cache;	/* cache ID for stashing */
+		} stash;
+	} attr_info;
+};
+#define VFIO_IOMMU_PAMU_GET_ATTR  _IO(VFIO_TYPE, VFIO_BASE + 17)
+
+/*
+ * VFIO_IOMMU_PAMU_SET_ATTR - _IO(VFIO_TYPE, VFIO_BASE + 18,
+ *				  struct vfio_pamu_attr)
+ *
+ * Sets the iommu attributes for the current vfio container.
+ * Caller sets struct vfio_pamu_attr, including argsz and attribute, and
+ * sets any fields that are valid for the attribute.
+ * Return: 0 on success, -errno on failure
+ */
+#define VFIO_IOMMU_PAMU_SET_ATTR  _IO(VFIO_TYPE, VFIO_BASE + 18)
+
+/*
+ * VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT - _IO(VFIO_TYPE, VFIO_BASE + 19, __u32)
+ *
+ * Returns the number of MSI banks for this platform.  This tells user space
+ * how many aperture windows should be reserved for MSI banks when setting
+ * the PAMU geometry and window count.
+ * Return: __u32 bank count on success, -errno on failure
+ */
+#define VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT _IO(VFIO_TYPE, VFIO_BASE + 19)
+
+/*
+ * VFIO_IOMMU_PAMU_MAP_MSI_BANK - _IO(VFIO_TYPE, VFIO_BASE + 20,
+ *				      struct vfio_pamu_msi_bank_map)
+ *
+ * Maps the MSI bank at the specified index and iova.  User space must
+ * call this ioctl once for each MSI bank (count of banks is returned by
+ * VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT).
+ * Caller provides struct vfio_pamu_msi_bank_map with all fields set.
+ * Return: 0 on success, -errno on failure
+ */
+
+struct vfio_pamu_msi_bank_map {
+	__u32	argsz;
+	__u32	flags;		/* no flags currently */
+	__u32	msi_bank_index;	/* the index of the MSI bank */
+	__u64	iova;		/* the iova the bank is to be mapped to */
+};
+#define VFIO_IOMMU_PAMU_MAP_MSI_BANK  _IO(VFIO_TYPE, VFIO_BASE + 20)
+
+/*
+ * VFIO_IOMMU_PAMU_UNMAP_MSI_BANK - _IO(VFIO_TYPE, VFIO_BASE + 21,
+ *					struct vfio_pamu_msi_bank_unmap)
+ *
+ * Unmaps the MSI bank at the specified iova.
+ * Caller provides struct vfio_pamu_msi_bank_unmap with all fields set.
+ * Operates on VFIO file descriptor (/dev/vfio/vfio).
+ * Return: 0 on success, -errno on failure
+ */
+
+struct vfio_pamu_msi_bank_unmap {
+	__u32	argsz;
+	__u32	flags;	/* no flags currently */
+	__u64	iova;	/* the iova to be unmapped */
+};
+#define VFIO_IOMMU_PAMU_UNMAP_MSI_BANK  _IO(VFIO_TYPE, VFIO_BASE + 21)
+
 #endif /* _UAPIVFIO_H */
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
                   ` (8 preceding siblings ...)
  2013-11-19  5:17 ` [PATCH 9/9 v2] vfio pci: Add vfio iommu implementation for FSL_PAMU Bharat Bhushan
@ 2013-11-20 18:47 ` Alex Williamson
  2013-11-21 11:20   ` Varun Sethi
  2013-11-21 11:20   ` Bharat Bhushan
  9 siblings, 2 replies; 35+ messages in thread
From: Alex Williamson @ 2013-11-20 18:47 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: linux-pci, joro, agraf, stuart.yoder, Bharat Bhushan, scottwood,
	iommu, bhelgaas, linuxppc-dev, linux-kernel

On Tue, 2013-11-19 at 10:47 +0530, Bharat Bhushan wrote:
> From: Bharat Bhushan <bharat.bhushan@freescale.com>
> 
> PAMU (FSL IOMMU) has a concept of primary window and subwindows.
> Primary window corresponds to the complete guest iova address space
> (including MSI space), with respect to IOMMU_API this is termed as
> geometry. IOVA Base of subwindow is determined from the number of
> subwindows (configurable using iommu API).
> MSI I/O page must be within the geometry and maximum supported
> subwindows, so MSI IO-page is setup just after guest memory iova space.
> 
> So patch 1/9-4/9(inclusive) are for defining the interface to get:
>   - Number of MSI regions (which is number of MSI banks for powerpc)
>   - MSI-region address range: Physical page which have the
>     address/addresses used for generating MSI interrupt
>     and size of the page.
> 
> Patch 5/9-7/9(inclusive) is defining the interface of setting up
> MSI iova-base for a msi region(bank) for a device. so that when
> msi-message will be composed then this configured iova will be used.
> Earlier we were using iommu interface for getting the configured iova
> which was not correct and Alex Williamson suggested this type of interface.
> 
> patch 8/9 moves some common functions in a separate file so that these
> can be used by FSL_PAMU implementation (next patch uses this).
> These will be used later for iommu-none implementation. I believe we
> can do more of this but will take step by step.
> 
> Finally last patch actually adds the support for FSL-PAMU :)

Patches 1-3: msi_get_region needs to return an error (probably
-EINVAL) if called on a path where there's no backend implementation.
Otherwise the caller doesn't know that the data in the region pointer
isn't valid.

Patches 5&6: same as above for msi_set_iova, return an error if no
backend implementation.

Patch 7: Why does fsl_msi_del_iova_device bother to return anything if
it's always zero?  Return -ENODEV when not found?

Patch 9:

vfio_handle_get_attr() passes random kernel data back to userspace in
the event of iommu_domain_get_attr() error.

vfio_handle_set_attr(): I don't see any data validation happening, is
iommu_domain_set_attr() really that safe?

For both of those, drop the pr_err on unknown attribute, it's sufficient
to return error.

Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user
has $COUNT regions at their disposal exclusively)?  Thanks,

Alex

> v1->v2
>  - Added interface for setting msi iova for a msi region for a device.
>    Earlier I added iommu interface for same but as per comment that is
>    removed and now created a direct interface between vfio and msi.
>  - Incorporated review comments (details is in individual patch)
> 
> Bharat Bhushan (9):
>   pci:msi: add weak function for returning msi region info
>   pci: msi: expose msi region information functions
>   powerpc: pci: Add arch specific msi region interface
>   powerpc: msi: Extend the msi region interface to get info from
>     fsl_msi
>   pci/msi: interface to set an iova for a msi region
>   powerpc: pci: Extend msi iova page setup to arch specific
>   pci: msi: Extend msi iova setting interface to powerpc arch
>   vfio: moving some functions in common file
>   vfio pci: Add vfio iommu implementation for FSL_PAMU
> 
>  arch/powerpc/include/asm/machdep.h |   10 +
>  arch/powerpc/kernel/msi.c          |   28 +
>  arch/powerpc/sysdev/fsl_msi.c      |  132 +++++-
>  arch/powerpc/sysdev/fsl_msi.h      |   25 +-
>  drivers/pci/msi.c                  |   35 ++
>  drivers/vfio/Kconfig               |    6 +
>  drivers/vfio/Makefile              |    5 +-
>  drivers/vfio/vfio_iommu_common.c   |  227 ++++++++
>  drivers/vfio/vfio_iommu_common.h   |   27 +
>  drivers/vfio/vfio_iommu_fsl_pamu.c | 1003 ++++++++++++++++++++++++++++++++++++
>  drivers/vfio/vfio_iommu_type1.c    |  206 +--------
>  include/linux/msi.h                |   14 +
>  include/linux/pci.h                |   21 +
>  include/uapi/linux/vfio.h          |  100 ++++
>  14 files changed, 1623 insertions(+), 216 deletions(-)
>  create mode 100644 drivers/vfio/vfio_iommu_common.c
>  create mode 100644 drivers/vfio/vfio_iommu_common.h
>  create mode 100644 drivers/vfio/vfio_iommu_fsl_pamu.c
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-20 18:47 ` [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Alex Williamson
@ 2013-11-21 11:20   ` Varun Sethi
  2013-11-21 11:20   ` Bharat Bhushan
  1 sibling, 0 replies; 35+ messages in thread
From: Varun Sethi @ 2013-11-21 11:20 UTC (permalink / raw)
  To: Alex Williamson, Bharat Bhushan
  Cc: linux-pci, agraf, Stuart Yoder, bhelgaas, iommu, Scott Wood,
	linuxppc-dev, linux-kernel



> -----Original Message-----
> From: iommu-bounces@lists.linux-foundation.org [mailto:iommu-
> bounces@lists.linux-foundation.org] On Behalf Of Alex Williamson
> Sent: Thursday, November 21, 2013 12:17 AM
> To: Bhushan Bharat-R65777
> Cc: linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-B08248; Wood
> Scott-B07421; iommu@lists.linux-foundation.org; bhelgaas@google.com;
> linuxppc-dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU
> (PAMU)
> 
> On Tue, 2013-11-19 at 10:47 +0530, Bharat Bhushan wrote:
> > From: Bharat Bhushan <bharat.bhushan@freescale.com>
> >
> > PAMU (FSL IOMMU) has a concept of primary window and subwindows.
> > Primary window corresponds to the complete guest iova address space
> > (including MSI space), with respect to IOMMU_API this is termed as
> > geometry. IOVA Base of subwindow is determined from the number of
> > subwindows (configurable using iommu API).
> > MSI I/O page must be within the geometry and maximum supported
> > subwindows, so MSI IO-page is setup just after guest memory iova space.
> >
> > So patch 1/9-4/9(inclusive) are for defining the interface to get:
> >   - Number of MSI regions (which is number of MSI banks for powerpc)
> >   - MSI-region address range: Physical page which have the
> >     address/addresses used for generating MSI interrupt
> >     and size of the page.
> >
> > Patch 5/9-7/9(inclusive) is defining the interface of setting up MSI
> > iova-base for a msi region(bank) for a device. so that when
> > msi-message will be composed then this configured iova will be used.
> > Earlier we were using iommu interface for getting the configured iova
> > which was not correct and Alex Williamson suggested this type of
> > interface.
> >
> > patch 8/9 moves some common functions in a separate file so that these
> > can be used by FSL_PAMU implementation (next patch uses this).
> > These will be used later for iommu-none implementation. I believe we
> > can do more of this but will take step by step.
> >
> > Finally last patch actually adds the support for FSL-PAMU :)
> 
> Patches 1-3: msi_get_region needs to return an error (probably
> -EINVAL) if called on a path where there's no backend implementation.
> Otherwise the caller doesn't know that the data in the region pointer
> isn't valid.
> 
> Patches 5&6: same as above for msi_set_iova, return an error if no
> backend implementation.
> 
> Patch 7: Why does fsl_msi_del_iova_device bother to return anything if
> it's always zero?  Return -ENODEV when not found?
> 
> Patch 9:
> 
> vfio_handle_get_attr() passes random kernel data back to userspace in the
> event of iommu_domain_get_attr() error.
> 
> vfio_handle_set_attr(): I don't see any data validation happening, is
> iommu_domain_set_attr() really that safe?
[Sethi Varun-B16395] The parameter validation can be left to the lower level
iommu driver. The attribute could be specific to a given hardware.

-Varun

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-20 18:47 ` [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Alex Williamson
  2013-11-21 11:20   ` Varun Sethi
@ 2013-11-21 11:20   ` Bharat Bhushan
  2013-11-21 20:43     ` Alex Williamson
  1 sibling, 1 reply; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-21 11:20 UTC (permalink / raw)
  To: Alex Williamson
  Cc: linux-pci, joro, agraf, Stuart Yoder, Scott Wood, iommu,
	bhelgaas, linuxppc-dev, linux-kernel

> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Thursday, November 21, 2013 12:17 AM
> To: Bhushan Bharat-R65777
> Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de; Wood Scott-B07421;
> Yoder Stuart-B08248; iommu@lists.linux-foundation.org; linux-
> pci@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
> kernel@vger.kernel.org; Bhushan Bharat-R65777
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> On Tue, 2013-11-19 at 10:47 +0530, Bharat Bhushan wrote:
> > From: Bharat Bhushan <bharat.bhushan@freescale.com>
> >
> > PAMU (FSL IOMMU) has a concept of primary window and subwindows.
> > Primary window corresponds to the complete guest iova address space
> > (including MSI space), with respect to IOMMU_API this is termed as
> > geometry. IOVA Base of subwindow is determined from the number of
> > subwindows (configurable using iommu API).
> > MSI I/O page must be within the geometry and maximum supported
> > subwindows, so MSI IO-page is setup just after guest memory iova space.
> >
> > So patch 1/9-4/9(inclusive) are for defining the interface to get:
> >   - Number of MSI regions (which is number of MSI banks for powerpc)
> >   - MSI-region address range: Physical page which have the
> >     address/addresses used for generating MSI interrupt
> >     and size of the page.
> >
> > Patch 5/9-7/9(inclusive) is defining the interface of setting up MSI
> > iova-base for a msi region(bank) for a device. so that when
> > msi-message will be composed then this configured iova will be used.
> > Earlier we were using iommu interface for getting the configured iova
> > which was not correct and Alex Williamson suggested this type of interface.
> >
> > patch 8/9 moves some common functions in a separate file so that these
> > can be used by FSL_PAMU implementation (next patch uses this).
> > These will be used later for iommu-none implementation. I believe we
> > can do more of this but will take step by step.
> >
> > Finally last patch actually adds the support for FSL-PAMU :)
> 
> Patches 1-3: msi_get_region needs to return an error (probably
> -EINVAL) if called on a path where there's no backend implementation.
> Otherwise the caller doesn't know that the data in the region pointer isn't
> valid.

will correct.

> 
> Patches 5&6: same as above for msi_set_iova, return an error if no backend
> implementation.

Ok

> 
> Patch 7: Why does fsl_msi_del_iova_device bother to return anything if it's
> always zero?  Return -ENODEV when not found?

Will make -ENODEV.

> 
> Patch 9:
> 
> vfio_handle_get_attr() passes random kernel data back to userspace in the event
> of iommu_domain_get_attr() error.

Will correct.

> 
> vfio_handle_set_attr(): I don't see any data validation happening, is
> iommu_domain_set_attr() really that safe?

We do not need any data validation here and iommu driver does whatever needed.
So yes, iommu_domain_set_attr() is safe.

> 
> For both of those, drop the pr_err on unknown attribute, it's sufficient to
> return error.

ok

> 
> Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user has
> $COUNT regions at their disposal exclusively)?

Number of msi-bank count is system wide and not per aperture, But will be
setting windows for banks in the device aperture.
So say if we are direct assigning 2 pci device (both have different iommu
group, so 2 aperture in iommu) to VM.
Now qemu can make only one call to know how many msi-banks are there but it
must set sub-windows for all banks for both pci device in its respective
aperture.

Thanks
-Bharat

>  Thanks,
> 
> Alex
> 
> > v1->v2
> >  - Added interface for setting msi iova for a msi region for a device.
> >    Earlier I added iommu interface for same but as per comment that is
> >    removed and now created a direct interface between vfio and msi.
> >  - Incorporated review comments (details is in individual patch)
> >
> > Bharat Bhushan (9):
> >   pci:msi: add weak function for returning msi region info
> >   pci: msi: expose msi region information functions
> >   powerpc: pci: Add arch specific msi region interface
> >   powerpc: msi: Extend the msi region interface to get info from
> >     fsl_msi
> >   pci/msi: interface to set an iova for a msi region
> >   powerpc: pci: Extend msi iova page setup to arch specific
> >   pci: msi: Extend msi iova setting interface to powerpc arch
> >   vfio: moving some functions in common file
> >   vfio pci: Add vfio iommu implementation for FSL_PAMU
> >
> >  arch/powerpc/include/asm/machdep.h |   10 +
> >  arch/powerpc/kernel/msi.c          |   28 +
> >  arch/powerpc/sysdev/fsl_msi.c      |  132 +++++-
> >  arch/powerpc/sysdev/fsl_msi.h      |   25 +-
> >  drivers/pci/msi.c                  |   35 ++
> >  drivers/vfio/Kconfig               |    6 +
> >  drivers/vfio/Makefile              |    5 +-
> >  drivers/vfio/vfio_iommu_common.c   |  227 ++++++++
> >  drivers/vfio/vfio_iommu_common.h   |   27 +
> >  drivers/vfio/vfio_iommu_fsl_pamu.c | 1003 ++++++++++++++++++++++++++++++++++++
> >  drivers/vfio/vfio_iommu_type1.c    |  206 +--------
> >  include/linux/msi.h                |   14 +
> >  include/linux/pci.h                |   21 +
> >  include/uapi/linux/vfio.h          |  100 ++++
> >  14 files changed, 1623 insertions(+), 216 deletions(-)
> >  create mode 100644 drivers/vfio/vfio_iommu_common.c
> >  create mode 100644 drivers/vfio/vfio_iommu_common.h
> >  create mode 100644 drivers/vfio/vfio_iommu_fsl_pamu.c

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-21 11:20   ` Bharat Bhushan
@ 2013-11-21 20:43     ` Alex Williamson
  2013-11-21 20:47       ` Scott Wood
  0 siblings, 1 reply; 35+ messages in thread
From: Alex Williamson @ 2013-11-21 20:43 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: linux-pci, joro, agraf, Stuart Yoder, Scott Wood, iommu,
	bhelgaas, linuxppc-dev, linux-kernel

On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> 
> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Thursday, November 21, 2013 12:17 AM
> > To: Bhushan Bharat-R65777
> > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de; Wood Scott-B07421;
> > Yoder Stuart-B08248; iommu@lists.linux-foundation.org; linux-
> > pci@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
> > kernel@vger.kernel.org; Bhushan Bharat-R65777
> > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> > 
> > On Tue, 2013-11-19 at 10:47 +0530, Bharat Bhushan wrote:
> > > From: Bharat Bhushan <bharat.bhushan@freescale.com>
> > >
> > > PAMU (the FSL IOMMU) has a concept of a primary window and subwindows.
> > > The primary window corresponds to the complete guest iova address space
> > > (including MSI space); with respect to the IOMMU API this is termed the
> > > geometry. The iova base of a subwindow is determined from the number of
> > > subwindows (configurable using the iommu API).
> > > The MSI I/O page must be within the geometry and the maximum supported
> > > subwindows, so the MSI I/O page is set up just after the guest memory
> > > iova space.
> > >
> > > Patches 1/9-4/9 (inclusive) define the interface to get:
> > >   - the number of MSI regions (which is the number of MSI banks for powerpc)
> > >   - the MSI-region address range: the physical page which has the
> > >     address/addresses used for generating MSI interrupts,
> > >     and the size of the page.
> > >
> > > Patches 5/9-7/9 (inclusive) define the interface for setting up the MSI
> > > iova-base for an MSI region (bank) for a device, so that this configured
> > > iova is used when the MSI message is composed.
> > > Earlier we were using the iommu interface for getting the configured iova,
> > > which was not correct, and Alex Williamson suggested this type of interface.
> > >
> > > Patch 8/9 moves some common functions into a separate file so that they
> > > can be used by the FSL_PAMU implementation (the next patch uses this).
> > > These will later be used for the iommu-none implementation. I believe we
> > > can do more of this, but will take it step by step.
> > >
> > > Finally, the last patch actually adds the support for FSL-PAMU :)
> > 
> > Patches 1-3: msi_get_region needs to return an error (probably
> > -EINVAL) if called on a path where there's no backend implementation.
> > Otherwise the caller doesn't know that the data in the region pointer isn't
> > valid.
> 
> will correct.
> 
> > 
> > Patches 5&6: same as above for msi_set_iova, return an error if no backend
> > implementation.
> 
> Ok
> 
> > 
> > Patch 7: Why does fsl_msi_del_iova_device bother to return anything if it's
> > always zero?  Return -ENODEV when not found?
> 
> Will make -ENODEV.
> 
> > 
> > Patch 9:
> > 
> > vfio_handle_get_attr() passes random kernel data back to userspace in the event
> > of iommu_domain_get_attr() error.
> 
> Will correct.
> 
> > 
> > vfio_handle_set_attr(): I don't see any data validation happening, is
> > iommu_domain_set_attr() really that safe?
> 
> We do not need any data validation here; the iommu driver does whatever
> validation is needed. So yes, iommu_domain_set_attr() is safe.
> 
> > 
> > For both of those, drop the pr_err on unknown attribute, it's sufficient to
> > return error.
> 
> ok
> 
> > 
> > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user has
> > $COUNT regions at their disposal exclusively)?
> 
> Number of msi-bank count is system wide and not per aperture, But will be setting windows for banks in the device aperture.
> So say if we are direct assigning 2 pci device (both have different iommu group, so 2 aperture in iommu) to VM.
> Now qemu can make only one call to know how many msi-banks are there but it must set sub-windows for all banks for both pci device in its respective aperture.

I'm still confused.  What I want to make sure of is that the banks are
independent per aperture.  For instance, if we have two separate
userspace processes operating independently and they both chose to use
msi bank zero for their device, that's bank zero within each aperture
and doesn't interfere.  Or another way to ask is can a malicious user
interfere with other users by using the wrong bank.  Thanks,

Alex

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-21 20:43     ` Alex Williamson
@ 2013-11-21 20:47       ` Scott Wood
  2013-11-21 21:00         ` Alex Williamson
  0 siblings, 1 reply; 35+ messages in thread
From: Scott Wood @ 2013-11-21 20:47 UTC (permalink / raw)
  To: Alex Williamson
  Cc: linux-pci, agraf, Stuart Yoder, Bharat Bhushan, iommu, bhelgaas,
	linuxppc-dev, linux-kernel

On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > 
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > Sent: Thursday, November 21, 2013 12:17 AM
> > > To: Bhushan Bharat-R65777
> > > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de; Wood Scott-B07421;
> > > Yoder Stuart-B08248; iommu@lists.linux-foundation.org; linux-
> > > pci@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
> > > kernel@vger.kernel.org; Bhushan Bharat-R65777
> > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> > > 
> > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user has
> > > $COUNT regions at their disposal exclusively)?
> > 
> > Number of msi-bank count is system wide and not per aperture, But will be setting windows for banks in the device aperture.
> > So say if we are direct assigning 2 pci device (both have different iommu group, so 2 aperture in iommu) to VM.
> > Now qemu can make only one call to know how many msi-banks are there but it must set sub-windows for all banks for both pci device in its respective aperture.
> 
> I'm still confused.  What I want to make sure of is that the banks are
> independent per aperture.  For instance, if we have two separate
> userspace processes operating independently and they both chose to use
> msi bank zero for their device, that's bank zero within each aperture
> and doesn't interfere.  Or another way to ask is can a malicious user
> interfere with other users by using the wrong bank.  Thanks,

They can interfere.  With this hardware, the only way to prevent that is
to make sure that a bank is not shared by multiple protection contexts.
For some of our users, though, I believe preventing this is less
important than the performance benefit.

-Scott

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-21 20:47       ` Scott Wood
@ 2013-11-21 21:00         ` Alex Williamson
  2013-11-25  5:33           ` Bharat Bhushan
  0 siblings, 1 reply; 35+ messages in thread
From: Alex Williamson @ 2013-11-21 21:00 UTC (permalink / raw)
  To: Scott Wood
  Cc: linux-pci, agraf, Stuart Yoder, Bharat Bhushan, iommu, bhelgaas,
	linuxppc-dev, linux-kernel

On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > 
> > > > -----Original Message-----
> > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > To: Bhushan Bharat-R65777
> > > > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de; Wood Scott-B07421;
> > > > Yoder Stuart-B08248; iommu@lists.linux-foundation.org; linux-
> > > > pci@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
> > > > kernel@vger.kernel.org; Bhushan Bharat-R65777
> > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> > > > 
> > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user has
> > > > $COUNT regions at their disposal exclusively)?
> > > 
> > > Number of msi-bank count is system wide and not per aperture, But will be setting windows for banks in the device aperture.
> > > So say if we are direct assigning 2 pci device (both have different iommu group, so 2 aperture in iommu) to VM.
> > > Now qemu can make only one call to know how many msi-banks are there but it must set sub-windows for all banks for both pci device in its respective aperture.
> > 
> > I'm still confused.  What I want to make sure of is that the banks are
> > independent per aperture.  For instance, if we have two separate
> > userspace processes operating independently and they both chose to use
> > msi bank zero for their device, that's bank zero within each aperture
> > and doesn't interfere.  Or another way to ask is can a malicious user
> > interfere with other users by using the wrong bank.  Thanks,
> 
> They can interfere.  With this hardware, the only way to prevent that is
> to make sure that a bank is not shared by multiple protection contexts.
> For some of our users, though, I believe preventing this is less
> important than the performance benefit.

I think we need some sort of ownership model around the msi banks then.
Otherwise there's nothing preventing another userspace from attempting
an MSI based attack on other users, or perhaps even on the host.  VFIO
can't allow that.  Thanks,

Alex

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-21 21:00         ` Alex Williamson
@ 2013-11-25  5:33           ` Bharat Bhushan
  2013-11-25 16:38             ` Alex Williamson
  2013-12-06  0:00             ` Scott Wood
  0 siblings, 2 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-25  5:33 UTC (permalink / raw)
  To: Alex Williamson, Scott Wood
  Cc: linux-pci, agraf, Stuart Yoder, iommu, bhelgaas, linuxppc-dev,
	linux-kernel

> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Friday, November 22, 2013 2:31 AM
> To: Wood Scott-B07421
> Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org; agraf@suse.de; Yoder
> Stuart-B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > To: Bhushan Bharat-R65777
> > > > > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de; Wood
> > > > > Scott-B07421; Yoder Stuart-B08248;
> > > > > iommu@lists.linux-foundation.org; linux- pci@vger.kernel.org;
> > > > > linuxppc-dev@lists.ozlabs.org; linux- kernel@vger.kernel.org;
> > > > > Bhushan Bharat-R65777
> > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > > IOMMU (PAMU)
> > > > >
> > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
> > > > > vfio user has $COUNT regions at their disposal exclusively)?
> > > >
> > > > Number of msi-bank count is system wide and not per aperture, But will be
> setting windows for banks in the device aperture.
> > > > So say if we are direct assigning 2 pci device (both have different iommu
> group, so 2 aperture in iommu) to VM.
> > > > Now qemu can make only one call to know how many msi-banks are there but
> it must set sub-windows for all banks for both pci device in its respective
> aperture.
> > >
> > > I'm still confused.  What I want to make sure of is that the banks
> > > are independent per aperture.  For instance, if we have two separate
> > > userspace processes operating independently and they both chose to
> > > use msi bank zero for their device, that's bank zero within each
> > > aperture and doesn't interfere.  Or another way to ask is can a
> > > malicious user interfere with other users by using the wrong bank.
> > > Thanks,
> >
> > They can interfere.

Want to be sure of how they can interfere?

>>  With this hardware, the only way to prevent that
> > is to make sure that a bank is not shared by multiple protection contexts.
> > For some of our users, though, I believe preventing this is less
> > important than the performance benefit.

So should we let this patch series in without protection?

> 
> I think we need some sort of ownership model around the msi banks then.
> Otherwise there's nothing preventing another userspace from attempting an MSI
> based attack on other users, or perhaps even on the host.  VFIO can't allow
> that.  Thanks,

We have very few (3 MSI bank on most of chips), so we can not assign one to each userspace. What we can do is host and userspace does not share a MSI bank while userspace will share a MSI bank.


Thanks
-Bharat

> 
> Alex
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-25  5:33           ` Bharat Bhushan
@ 2013-11-25 16:38             ` Alex Williamson
  2013-11-27 16:08               ` Bharat Bhushan
  2013-11-28  9:19               ` Bharat Bhushan
  2013-12-06  0:00             ` Scott Wood
  1 sibling, 2 replies; 35+ messages in thread
From: Alex Williamson @ 2013-11-25 16:38 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: linux-pci, agraf, Stuart Yoder, bhelgaas, iommu, Scott Wood,
	linuxppc-dev, linux-kernel

On Mon, 2013-11-25 at 05:33 +0000, Bharat Bhushan wrote:
> 
> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Friday, November 22, 2013 2:31 AM
> > To: Wood Scott-B07421
> > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org; agraf@suse.de; Yoder
> > Stuart-B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> > 
> > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > > To: Bhushan Bharat-R65777
> > > > > > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de; Wood
> > > > > > Scott-B07421; Yoder Stuart-B08248;
> > > > > > iommu@lists.linux-foundation.org; linux- pci@vger.kernel.org;
> > > > > > linuxppc-dev@lists.ozlabs.org; linux- kernel@vger.kernel.org;
> > > > > > Bhushan Bharat-R65777
> > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > > > IOMMU (PAMU)
> > > > > >
> > > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
> > > > > > vfio user has $COUNT regions at their disposal exclusively)?
> > > > >
> > > > > Number of msi-bank count is system wide and not per aperture, But will be
> > setting windows for banks in the device aperture.
> > > > > So say if we are direct assigning 2 pci device (both have different iommu
> > group, so 2 aperture in iommu) to VM.
> > > > > Now qemu can make only one call to know how many msi-banks are there but
> > it must set sub-windows for all banks for both pci device in its respective
> > aperture.
> > > >
> > > > I'm still confused.  What I want to make sure of is that the banks
> > > > are independent per aperture.  For instance, if we have two separate
> > > > userspace processes operating independently and they both chose to
> > > > use msi bank zero for their device, that's bank zero within each
> > > > aperture and doesn't interfere.  Or another way to ask is can a
> > > > malicious user interfere with other users by using the wrong bank.
> > > > Thanks,
> > >
> > > They can interfere.
> 
> Want to be sure of how they can interfere?

What happens if more than one user selects the same MSI bank?
Minimally, wouldn't that result in the IOMMU blocking transactions from
the previous user once the new user activates their mapping?

> >>  With this hardware, the only way to prevent that
> > > is to make sure that a bank is not shared by multiple protection contexts.
> > > For some of our users, though, I believe preventing this is less
> > > important than the performance benefit.
> 
> So should we let this patch series in without protection?

No.

> > 
> > I think we need some sort of ownership model around the msi banks then.
> > Otherwise there's nothing preventing another userspace from attempting an MSI
> > based attack on other users, or perhaps even on the host.  VFIO can't allow
> > that.  Thanks,
> 
> We have very few (3 MSI bank on most of chips), so we can not assign
> one to each userspace. What we can do is host and userspace does not
> share a MSI bank while userspace will share a MSI bank.

Then you probably need VFIO to "own" the MSI bank and program devices
into it rather than exposing the MSI banks to userspace to let them have
direct access.  Thanks,

Alex

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 1/9 v2] pci:msi: add weak function for returning msi region info
  2013-11-19  5:17 ` [PATCH 1/9 v2] pci:msi: add weak function for returning msi region info Bharat Bhushan
@ 2013-11-25 23:36   ` Bjorn Helgaas
  2013-11-28 10:08     ` Bharat Bhushan
  0 siblings, 1 reply; 35+ messages in thread
From: Bjorn Helgaas @ 2013-11-25 23:36 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: linux-pci, joro, stuart.yoder, iommu, agraf, Bharat Bhushan,
	alex.williamson, scottwood, linuxppc-dev, linux-kernel

On Tue, Nov 19, 2013 at 10:47:05AM +0530, Bharat Bhushan wrote:
> In Aperture type of IOMMU (like FSL PAMU), VFIO-iommu system need to know
> the MSI region to map its window in h/w. This patch just defines the
> required weak functions only and will be used by followup patches.
> 
> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> ---
> v1->v2
>  - Added description on "struct msi_region" 
> 
>  drivers/pci/msi.c   |   22 ++++++++++++++++++++++
>  include/linux/msi.h |   14 ++++++++++++++
>  2 files changed, 36 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index d5f90d6..2643a29 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -67,6 +67,28 @@ int __weak arch_msi_check_device(struct pci_dev *dev, int nvec, int type)
>  	return chip->check_device(chip, dev, nvec, type);
>  }
>  
> +int __weak arch_msi_get_region_count(void)
> +{
> +	return 0;
> +}
> +
> +int __weak arch_msi_get_region(int region_num, struct msi_region *region)
> +{
> +	return 0;
> +}
> +
> +int msi_get_region_count(void)
> +{
> +	return arch_msi_get_region_count();
> +}
> +EXPORT_SYMBOL(msi_get_region_count);
> +
> +int msi_get_region(int region_num, struct msi_region *region)
> +{
> +	return arch_msi_get_region(region_num, region);
> +}
> +EXPORT_SYMBOL(msi_get_region);
> +
>  int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
>  {
>  	struct msi_desc *entry;
> diff --git a/include/linux/msi.h b/include/linux/msi.h
> index b17ead8..ade1480 100644
> --- a/include/linux/msi.h
> +++ b/include/linux/msi.h
> @@ -51,6 +51,18 @@ struct msi_desc {
>  };
>  
>  /*
> + * This structure is used to get
> + * - physical address
> + * - size
> + * of a msi region
> + */
> +struct msi_region {
> +	int region_num; /* MSI region number */
> +	dma_addr_t addr; /* Address of MSI region */
> +	size_t size; /* Size of MSI region */
> +};
> +
> +/*
>   * The arch hooks to setup up msi irqs. Those functions are
>   * implemented as weak symbols so that they /can/ be overriden by
>   * architecture specific code if needed.
> @@ -64,6 +76,8 @@ void arch_restore_msi_irqs(struct pci_dev *dev, int irq);
>  
>  void default_teardown_msi_irqs(struct pci_dev *dev);
>  void default_restore_msi_irqs(struct pci_dev *dev, int irq);
> +int arch_msi_get_region_count(void);
> +int arch_msi_get_region(int region_num, struct msi_region *region);

It doesn't look like any of this (struct msi_region, msi_get_region(),
msi_get_region_count()) is actually used by drivers/pci/msi.c, so I don't
think it needs to be declared in generic code.  It looks like it's only
used in drivers/vfio/vfio_iommu_fsl_pamu.c, where you already know you have
an FSL IOMMU, and you can just call FSL-specific interfaces directly.

Bjorn

>  
>  struct msi_chip {
>  	struct module *owner;
> -- 
> 1.7.0.4
> 
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-25 16:38             ` Alex Williamson
@ 2013-11-27 16:08               ` Bharat Bhushan
  2013-11-28  9:19               ` Bharat Bhushan
  1 sibling, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-27 16:08 UTC (permalink / raw)
  To: Alex Williamson
  Cc: linux-pci, agraf, Stuart Yoder, bhelgaas, iommu, Scott Wood,
	linuxppc-dev, linux-kernel

> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Monday, November 25, 2013 10:08 PM
> To: Bhushan Bharat-R65777
> Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> On Mon, 2013-11-25 at 05:33 +0000, Bharat Bhushan wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > Sent: Friday, November 22, 2013 2:31 AM
> > > To: Wood Scott-B07421
> > > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org;
> > > agraf@suse.de; Yoder Stuart-B08248;
> > > iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > IOMMU (PAMU)
> > >
> > > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > > > To: Bhushan Bharat-R65777
> > > > > > > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de; Wood
> > > > > > > Scott-B07421; Yoder Stuart-B08248;
> > > > > > > iommu@lists.linux-foundation.org; linux- pci@vger.kernel.org;
> > > > > > > linuxppc-dev@lists.ozlabs.org; linux- kernel@vger.kernel.org;
> > > > > > > Bhushan Bharat-R65777
> > > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > > Freescale IOMMU (PAMU)
> > > > > > >
> > > > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
> > > > > > > vfio user has $COUNT regions at their disposal exclusively)?
> > > > > >
> > > > > > Number of msi-bank count is system wide and not per aperture,
> > > > > > But will be
> > > setting windows for banks in the device aperture.
> > > > > > So say if we are direct assigning 2 pci device (both have
> > > > > > different iommu
> > > group, so 2 aperture in iommu) to VM.
> > > > > > Now qemu can make only one call to know how many msi-banks are
> > > > > > there but
> > > it must set sub-windows for all banks for both pci device in its
> > > respective aperture.
> > > > >
> > > > > I'm still confused.  What I want to make sure of is that the
> > > > > banks are independent per aperture.  For instance, if we have
> > > > > two separate userspace processes operating independently and
> > > > > they both chose to use msi bank zero for their device, that's
> > > > > bank zero within each aperture and doesn't interfere.  Or
> > > > > another way to ask is can a malicious user interfere with other users by
> using the wrong bank.
> > > > > Thanks,
> > > >
> > > > They can interfere.
> >
> > Want to be sure of how they can interfere?
> 
> What happens if more than one user selects the same MSI bank?
> Minimally, wouldn't that result in the IOMMU blocking transactions from the
> previous user once the new user activates their mapping?

Yes and no; With current implementation yes but with a minor change no. Later in this response I will explain how.

> 
> > >>  With this hardware, the only way to prevent that
> > > > is to make sure that a bank is not shared by multiple protection contexts.
> > > > For some of our users, though, I believe preventing this is less
> > > > important than the performance benefit.
> >
> > So should we let this patch series in without protection?
> 
> No.
> 
> > >
> > > I think we need some sort of ownership model around the msi banks then.
> > > Otherwise there's nothing preventing another userspace from
> > > attempting an MSI based attack on other users, or perhaps even on
> > > the host.  VFIO can't allow that.  Thanks,
> >
> > We have very few (3 MSI bank on most of chips), so we can not assign
> > one to each userspace. What we can do is host and userspace does not
> > share a MSI bank while userspace will share a MSI bank.
> 
> Then you probably need VFIO to "own" the MSI bank and program devices into it
> rather than exposing the MSI banks to userspace to let them have direct access.

Overall idea of exposing the details of msi regions to userspace are
 1) User space can define the aperture size to fit MSI mapping in IOMMU.
 2) setup iova for a MSI banks; which is just after guest memory.

But currently we expose the "size" and "address" of MSI banks, passing address is of no use and can be problematic.
If we just provide the size of MSI bank to userspace then userspace cannot do anything wrong.

While it is still the responsibility of host (MSI+VFIO) to compose MSI-address and MSI-data; so I think this should look fine.

> Thanks,
> 
> Alex
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-25 16:38             ` Alex Williamson
  2013-11-27 16:08               ` Bharat Bhushan
@ 2013-11-28  9:19               ` Bharat Bhushan
  2013-12-06  0:21                 ` Scott Wood
  1 sibling, 1 reply; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-28  9:19 UTC (permalink / raw)
  To: Bharat Bhushan, Alex Williamson
  Cc: linux-pci, agraf, Stuart Yoder, bhelgaas, iommu, Scott Wood,
	linuxppc-dev, linux-kernel

> -----Original Message-----
> From: Bhushan Bharat-R65777
> Sent: Wednesday, November 27, 2013 9:39 PM
> To: 'Alex Williamson'
> Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> 
> 
> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Monday, November 25, 2013 10:08 PM
> > To: Bhushan Bharat-R65777
> > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de; Yoder
> > Stuart- B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com;
> > linuxppc- dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU
> > (PAMU)
> >
> > On Mon, 2013-11-25 at 05:33 +0000, Bharat Bhushan wrote:
> > >
> > > > -----Original Message-----
> > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > Sent: Friday, November 22, 2013 2:31 AM
> > > > To: Wood Scott-B07421
> > > > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org;
> > > > agraf@suse.de; Yoder Stuart-B08248;
> > > > iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > > > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > IOMMU (PAMU)
> > > >
> > > > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > > > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > > > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > > > > To: Bhushan Bharat-R65777
> > > > > > > > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de;
> > > > > > > > Wood Scott-B07421; Yoder Stuart-B08248;
> > > > > > > > iommu@lists.linux-foundation.org; linux-
> > > > > > > > pci@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
> > > > > > > > kernel@vger.kernel.org; Bhushan Bharat-R65777
> > > > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > > > Freescale IOMMU (PAMU)
> > > > > > > >
> > > > > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie.
> > > > > > > > each vfio user has $COUNT regions at their disposal exclusively)?
> > > > > > >
> > > > > > > Number of msi-bank count is system wide and not per
> > > > > > > aperture, But will be
> > > > setting windows for banks in the device aperture.
> > > > > > > So say if we are direct assigning 2 pci device (both have
> > > > > > > different iommu
> > > > group, so 2 aperture in iommu) to VM.
> > > > > > > Now qemu can make only one call to know how many msi-banks
> > > > > > > are there but
> > > > it must set sub-windows for all banks for both pci device in its
> > > > respective aperture.
> > > > > >
> > > > > > I'm
c3RpbGwgY29uZnVzZWQuICBXaGF0IEkgd2FudCB0byBtYWtlIHN1cmUgb2YgaXMgdGhhdCB0aGUN
Cj4gPiA+ID4gPiA+IGJhbmtzIGFyZSBpbmRlcGVuZGVudCBwZXIgYXBlcnR1cmUuICBGb3IgaW5z
dGFuY2UsIGlmIHdlIGhhdmUNCj4gPiA+ID4gPiA+IHR3byBzZXBhcmF0ZSB1c2Vyc3BhY2UgcHJv
Y2Vzc2VzIG9wZXJhdGluZyBpbmRlcGVuZGVudGx5IGFuZA0KPiA+ID4gPiA+ID4gdGhleSBib3Ro
IGNob3NlIHRvIHVzZSBtc2kgYmFuayB6ZXJvIGZvciB0aGVpciBkZXZpY2UsIHRoYXQncw0KPiA+
ID4gPiA+ID4gYmFuayB6ZXJvIHdpdGhpbiBlYWNoIGFwZXJ0dXJlIGFuZCBkb2Vzbid0IGludGVy
ZmVyZS4gIE9yDQo+ID4gPiA+ID4gPiBhbm90aGVyIHdheSB0byBhc2sgaXMgY2FuIGEgbWFsaWNp
b3VzIHVzZXIgaW50ZXJmZXJlIHdpdGgNCj4gPiA+ID4gPiA+IG90aGVyIHVzZXJzIGJ5DQo+ID4g
dXNpbmcgdGhlIHdyb25nIGJhbmsuDQo+ID4gPiA+ID4gPiBUaGFua3MsDQo+ID4gPiA+ID4NCj4g
PiA+ID4gPiBUaGV5IGNhbiBpbnRlcmZlcmUuDQo+ID4gPg0KPiA+ID4gV2FudCB0byBiZSBzdXJl
IG9mIGhvdyB0aGV5IGNhbiBpbnRlcmZlcmU/DQo+ID4NCj4gPiBXaGF0IGhhcHBlbnMgaWYgbW9y
ZSB0aGFuIG9uZSB1c2VyIHNlbGVjdHMgdGhlIHNhbWUgTVNJIGJhbms/DQo+ID4gTWluaW1hbGx5
LCB3b3VsZG4ndCB0aGF0IHJlc3VsdCBpbiB0aGUgSU9NTVUgYmxvY2tpbmcgdHJhbnNhY3Rpb25z
DQo+ID4gZnJvbSB0aGUgcHJldmlvdXMgdXNlciBvbmNlIHRoZSBuZXcgdXNlciBhY3RpdmF0ZXMg
dGhlaXIgbWFwcGluZz8NCj4gDQo+IFllcyBhbmQgbm87IFdpdGggY3VycmVudCBpbXBsZW1lbnRh
dGlvbiB5ZXMgYnV0IHdpdGggYSBtaW5vciBjaGFuZ2Ugbm8uIExhdGVyIGluDQo+IHRoaXMgcmVz
cG9uc2UgSSB3aWxsIGV4cGxhaW4gaG93Lg0KPiANCj4gPg0KPiA+ID4gPj4gIFdpdGggdGhpcyBo
YXJkd2FyZSwgdGhlIG9ubHkgd2F5IHRvIHByZXZlbnQgdGhhdA0KPiA+ID4gPiA+IGlzIHRvIG1h
a2Ugc3VyZSB0aGF0IGEgYmFuayBpcyBub3Qgc2hhcmVkIGJ5IG11bHRpcGxlIHByb3RlY3Rpb24N
Cj4gY29udGV4dHMuDQo+ID4gPiA+ID4gRm9yIHNvbWUgb2Ygb3VyIHVzZXJzLCB0aG91Z2gsIEkg
YmVsaWV2ZSBwcmV2ZW50aW5nIHRoaXMgaXMgbGVzcw0KPiA+ID4gPiA+IGltcG9ydGFudCB0aGFu
IHRoZSBwZXJmb3JtYW5jZSBiZW5lZml0Lg0KPiA+ID4NCj4gPiA+IFNvIHNob3VsZCB3ZSBsZXQg
dGhpcyBwYXRjaCBzZXJpZXMgaW4gd2l0aG91dCBwcm90ZWN0aW9uPw0KPiA+DQo+ID4gTm8uDQo+
ID4NCj4gPiA+ID4NCj4gPiA+ID4gSSB0aGluayB3ZSBuZWVkIHNvbWUgc29ydCBvZiBvd25lcnNo
aXAgbW9kZWwgYXJvdW5kIHRoZSBtc2kgYmFua3MgdGhlbi4NCj4gPiA+ID4gT3RoZXJ3aXNlIHRo
ZXJlJ3Mgbm90aGluZyBwcmV2ZW50aW5nIGFub3RoZXIgdXNlcnNwYWNlIGZyb20NCj4gPiA+ID4g
YXR0ZW1wdGluZyBhbiBNU0kgYmFzZWQgYXR0YWNrIG9uIG90aGVyIHVzZXJzLCBvciBwZXJoYXBz
IGV2ZW4gb24NCj4gPiA+ID4gdGhlIGhvc3QuICBWRklPIGNhbid0IGFsbG93IHRoYXQuICBUaGFu
a3MsDQo+ID4gPg0KPiA+ID4gV2UgaGF2ZSB2ZXJ5IGZldyAoMyBNU0kgYmFuayBvbiBtb3N0IG9m
IGNoaXBzKSwgc28gd2UgY2FuIG5vdCBhc3NpZ24NCj4gPiA+IG9uZSB0byBlYWNoIHVzZXJzcGFj
ZS4gV2hhdCB3ZSBjYW4gZG8gaXMgaG9zdCBhbmQgdXNlcnNwYWNlIGRvZXMgbm90DQo+ID4gPiBz
aGFyZSBhIE1TSSBiYW5rIHdoaWxlIHVzZXJzcGFjZSB3aWxsIHNoYXJlIGEgTVNJIGJhbmsuDQo+
ID4NCj4gPiBUaGVuIHlvdSBwcm9iYWJseSBuZWVkIFZGSU8gdG8gIm93biIgdGhlIE1TSSBiYW5r
IGFuZCBwcm9ncmFtIGRldmljZXMNCj4gPiBpbnRvIGl0IHJhdGhlciB0aGFuIGV4cG9zaW5nIHRo
ZSBNU0kgYmFua3MgdG8gdXNlcnNwYWNlIHRvIGxldCB0aGVtIGhhdmUNCj4gZGlyZWN0IGFjY2Vz
cy4NCj4gDQo+IE92ZXJhbGwgaWRlYSBvZiBleHBvc2luZyB0aGUgZGV0YWlscyBvZiBtc2kgcmVn
aW9ucyB0byB1c2Vyc3BhY2UgYXJlDQo+ICAxKSBVc2VyIHNwYWNlIGNhbiBkZWZpbmUgdGhlIGFw
ZXJ0dXJlIHNpemUgdG8gZml0IE1TSSBtYXBwaW5nIGluIElPTU1VLg0KPiAgMikgc2V0dXAgaW92
YSBmb3IgYSBNU0kgYmFua3M7IHdoaWNoIGlzIGp1c3QgYWZ0ZXIgZ3Vlc3QgbWVtb3J5Lg0KPiAN
Cj4gQnV0IGN1cnJlbnRseSB3ZSBleHBvc2UgdGhlICJzaXplIiBhbmQgImFkZHJlc3MiIG9mIE1T
SSBiYW5rcywgcGFzc2luZyBhZGRyZXNzDQo+IGlzIG9mIG5vIHVzZSBhbmQgY2FuIGJlIHByb2Js
ZW1hdGljLg0KDQpJIGFtIHNvcnJ5LCBhYm92ZSBpbmZvcm1hdGlvbiBpcyBub3QgY29ycmVjdC4g
Q3VycmVudGx5IG5laXRoZXIgd2UgZXhwb3NlICJhZGRyZXNzIiBub3IgInNpemUiIHRvIHVzZXIg
c3BhY2UuIFdlIG9ubHkgZXhwb3NlIG51bWJlciBvZiBNU0kgQkFOSyBjb3VudCBhbmQgdXNlcnNw
YWNlIGFkZHMgb25lIHN1Yi13aW5kb3cgZm9yIGVhY2ggYmFuay4NCg0KPiBJZiB3ZSBqdXN0IHBy
b3ZpZGUgdGhlIHNpemUgb2YgTVNJIGJhbmsgdG8gdXNlcnNwYWNlIHRoZW4gdXNlcnNwYWNlIGNh
bm5vdCBkbw0KPiBhbnl0aGluZyB3cm9uZy4NCg0KU28gdXNlcnNwYWNlIGRvZXMgbm90IGtub3cg
YWRkcmVzcywgc28gaXQgY2Fubm90IG1tYXAgYW5kIGNhdXNlIGFueSBpbnRlcmZlcmVuY2UgYnkg
ZGlyZWN0bHkgcmVhZGluZy93cml0aW5nLg0KV2hlbiB1c2VyIHNwYWNlIG1ha2VzIFZGSU9fREVW
SUNFX1NFVF9JUlFTIGlvY3RsIGZvciBNU0kgdHlwZSB0aGVuIFZGSU8gd2l0aCBNU0kgbGF5ZXIg
Y29tcG9zZSBhbmQgd3JpdGUgTVNJIGFkZHJlc3MgYW5kIERhdGEgaW4gYWN0dWFsIGRldmljZS4g
VGhpcyBpcyBhbGwgYWJzdHJhY3RlZCB3aXRoaW4gaG9zdCBrZXJuZWwuDQoNCkRvIHdlIHNlZSBh
bnkgaXNzdWUgd2l0aCB0aGlzIGFwcHJvYWNoPw0KDQpUaGFua3MNCi1CaGFyYXQNCg0KPiANCj4g
V2hpbGUgaXQgaXMgc3RpbGwgdGhlIHJlc3BvbnNpYmlsaXR5IG9mIGhvc3QgKE1TSStWRklPKSB0
byBjb21wb3NlIE1TSS1hZGRyZXNzDQo+IGFuZCBNU0ktZGF0YTsgc28gSSB0aGluayB0aGlzIHNo
b3VsZCBsb29rIGZpbmUuDQo+IA0KPiA+IFRoYW5rcywNCj4gPg0KPiA+IEFsZXgNCj4gPg0KDQo=

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 1/9 v2] pci:msi: add weak function for returning msi region info
  2013-11-25 23:36   ` Bjorn Helgaas
@ 2013-11-28 10:08     ` Bharat Bhushan
  0 siblings, 0 replies; 35+ messages in thread
From: Bharat Bhushan @ 2013-11-28 10:08 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-pci, joro, Stuart Yoder, iommu, agraf, alex.williamson,
	Scott Wood, linuxppc-dev, linux-kernel



> -----Original Message-----
> From: linux-pci-owner@vger.kernel.org [mailto:linux-pci-owner@vger.kernel.org]
> On Behalf Of Bjorn Helgaas
> Sent: Tuesday, November 26, 2013 5:06 AM
> To: Bhushan Bharat-R65777
> Cc: alex.williamson@redhat.com; joro@8bytes.org; agraf@suse.de; Wood Scott-
> B07421; Yoder Stuart-B08248; iommu@lists.linux-foundation.org; linux-
> pci@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
> kernel@vger.kernel.org; Bhushan Bharat-R65777
> Subject: Re: [PATCH 1/9 v2] pci:msi: add weak function for returning msi region
> info
> 
> On Tue, Nov 19, 2013 at 10:47:05AM +0530, Bharat Bhushan wrote:
> > In Aperture type of IOMMU (like FSL PAMU), VFIO-iommu system need to
> > know the MSI region to map its window in h/w. This patch just defines
> > the required weak functions only and will be used by followup patches.
> >
> > Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> > ---
> > v1->v2
> >  - Added description on "struct msi_region"
> >
> >  drivers/pci/msi.c   |   22 ++++++++++++++++++++++
> >  include/linux/msi.h |   14 ++++++++++++++
> >  2 files changed, 36 insertions(+), 0 deletions(-)
> >
> > diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> > index d5f90d6..2643a29 100644
> > --- a/drivers/pci/msi.c
> > +++ b/drivers/pci/msi.c
> > @@ -67,6 +67,28 @@ int __weak arch_msi_check_device(struct pci_dev *dev, int nvec, int type)
> >  	return chip->check_device(chip, dev, nvec, type);
> >  }
> >
> > +int __weak arch_msi_get_region_count(void)
> > +{
> > +	return 0;
> > +}
> > +
> > +int __weak arch_msi_get_region(int region_num, struct msi_region *region)
> > +{
> > +	return 0;
> > +}
> > +
> > +int msi_get_region_count(void)
> > +{
> > +	return arch_msi_get_region_count();
> > +}
> > +EXPORT_SYMBOL(msi_get_region_count);
> > +
> > +int msi_get_region(int region_num, struct msi_region *region)
> > +{
> > +	return arch_msi_get_region(region_num, region);
> > +}
> > +EXPORT_SYMBOL(msi_get_region);
> > +
> >  int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
> >  {
> >  	struct msi_desc *entry;
> > diff --git a/include/linux/msi.h b/include/linux/msi.h
> > index b17ead8..ade1480 100644
> > --- a/include/linux/msi.h
> > +++ b/include/linux/msi.h
> > @@ -51,6 +51,18 @@ struct msi_desc {
> >  };
> >
> >  /*
> > + * This structure is used to get
> > + * - physical address
> > + * - size
> > + * of a msi region
> > + */
> > +struct msi_region {
> > +	int region_num; /* MSI region number */
> > +	dma_addr_t addr; /* Address of MSI region */
> > +	size_t size; /* Size of MSI region */
> > +};
> > +
> > +/*
> >   * The arch hooks to setup up msi irqs. Those functions are
> >   * implemented as weak symbols so that they /can/ be overriden by
> >   * architecture specific code if needed.
> > @@ -64,6 +76,8 @@ void arch_restore_msi_irqs(struct pci_dev *dev, int
> > irq);
> >
> >  void default_teardown_msi_irqs(struct pci_dev *dev);
> >  void default_restore_msi_irqs(struct pci_dev *dev, int irq);
> > +int arch_msi_get_region_count(void);
> > +int arch_msi_get_region(int region_num, struct msi_region *region);
> 
> It doesn't look like any of this (struct msi_region, msi_get_region(),
> msi_get_region_count()) is actually used by drivers/pci/msi.c, so I don't think
> it needs to be declared in generic code.  It looks like it's only used in
> drivers/vfio/vfio_iommu_fsl_pamu.c, where you already know you have an FSL
> IOMMU, and you can just call FSL-specific interfaces directly.

Thanks Bjorn,

Want to be sure of what you are suggesting.

What I understood is that we define these (struct msi_region, msi_get_region(), msi_get_region_count()) in arch/powerpc/include/fsl_msi.h (a new file). Include this header file directly in drivers/vfio/vfio_iommu_fsl_pamu.c

Same also applies for msi_set_iova() in patch-5?

-Bharat

> 
> Bjorn
> 
> >
> >  struct msi_chip {
> >  	struct module *owner;
> > --
> > 1.7.0.4
> >
> >

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-25  5:33           ` Bharat Bhushan
  2013-11-25 16:38             ` Alex Williamson
@ 2013-12-06  0:00             ` Scott Wood
  2013-12-06  4:17               ` Bharat Bhushan
  1 sibling, 1 reply; 35+ messages in thread
From: Scott Wood @ 2013-12-06  0:00 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: linux-pci, agraf, iommu, Yoder Stuart-B08248, Alex Williamson,
	bhelgaas, linuxppc-dev, linux-kernel

On Sun, 2013-11-24 at 23:33 -0600, Bharat Bhushan wrote:
> 
> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Friday, November 22, 2013 2:31 AM
> > To: Wood Scott-B07421
> > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org; agraf@suse.de; Yoder
> > Stuart-B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> >
> > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > > To: Bhushan Bharat-R65777
> > > > > > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de; Wood
> > > > > > Scott-B07421; Yoder Stuart-B08248;
> > > > > > iommu@lists.linux-foundation.org; linux- pci@vger.kernel.org;
> > > > > > linuxppc-dev@lists.ozlabs.org; linux- kernel@vger.kernel.org;
> > > > > > Bhushan Bharat-R65777
> > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > > > IOMMU (PAMU)
> > > > > >
> > > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
> > > > > > vfio user has $COUNT regions at their disposal exclusively)?
> > > > >
> > > > > Number of msi-bank count is system wide and not per aperture, But will be
> > setting windows for banks in the device aperture.
> > > > > So say if we are direct assigning 2 pci device (both have different iommu
> > group, so 2 aperture in iommu) to VM.
> > > > > Now qemu can make only one call to know how many msi-banks are there but
> > it must set sub-windows for all banks for both pci device in its respective
> > aperture.
> > > >
> > > > I'm still confused.  What I want to make sure of is that the banks
> > > > are independent per aperture.  For instance, if we have two separate
> > > > userspace processes operating independently and they both chose to
> > > > use msi bank zero for their device, that's bank zero within each
> > > > aperture and doesn't interfere.  Or another way to ask is can a
> > > > malicious user interfere with other users by using the wrong bank.
> > > > Thanks,
> > >
> > > They can interfere.
> 
> Want to be sure of how they can interfere?

If more than one VFIO user shares the same MSI group, one of the users
can send MSIs to another user, by using the wrong interrupt within the
bank.  Unexpected MSIs could cause misbehavior or denial of service.

> >>  With this hardware, the only way to prevent that
> > > is to make sure that a bank is not shared by multiple protection contexts.
> > > For some of our users, though, I believe preventing this is less
> > > important than the performance benefit.
> 
> So should we let this patch series in without protection?

No, there should be some sort of opt-in mechanism similar to IOMMU-less
VFIO -- but not the same exact one, since one is a much more serious
loss of isolation than the other.

> > I think we need some sort of ownership model around the msi banks then.
> > Otherwise there's nothing preventing another userspace from attempting an MSI
> > based attack on other users, or perhaps even on the host.  VFIO can't allow
> > that.  Thanks,
> 
> We have very few (3 MSI bank on most of chips), so we can not assign
> one to each userspace.

That depends on how many users there are.

-Scott

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-11-28  9:19               ` Bharat Bhushan
@ 2013-12-06  0:21                 ` Scott Wood
  2013-12-06  4:11                   ` Bharat Bhushan
  0 siblings, 1 reply; 35+ messages in thread
From: Scott Wood @ 2013-12-06  0:21 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: linux-pci, agraf, iommu, Yoder Stuart-B08248, Alex Williamson,
	bhelgaas, linuxppc-dev, linux-kernel

On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
> 
> > -----Original Message-----
> > From: Bhushan Bharat-R65777
> > Sent: Wednesday, November 27, 2013 9:39 PM
> > To: 'Alex Williamson'
> > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> > B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> >
> >
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > Sent: Monday, November 25, 2013 10:08 PM
> > > To: Bhushan Bharat-R65777
> > > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de; Yoder
> > > Stuart- B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com;
> > > linuxppc- dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU
> > > (PAMU)
> > >
> > > On Mon, 2013-11-25 at 05:33 +0000, Bharat Bhushan wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > Sent: Friday, November 22, 2013 2:31 AM
> > > > > To: Wood Scott-B07421
> > > > > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org;
> > > > > agraf@suse.de; Yoder Stuart-B08248;
> > > > > iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > > > > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > > IOMMU (PAMU)
> > > > >
> > > > > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > > > > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > > > > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > > > > > To: Bhushan Bharat-R65777
> > > > > > > > > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de;
> > > > > > > > > Wood Scott-B07421; Yoder Stuart-B08248;
> > > > > > > > > iommu@lists.linux-foundation.org; linux-
> > > > > > > > > pci@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
> > > > > > > > > kernel@vger.kernel.org; Bhushan Bharat-R65777
> > > > > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > > > > Freescale IOMMU (PAMU)
> > > > > > > > >
> > > > > > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie.
> > > > > > > > > each vfio user has $COUNT regions at their disposal exclusively)?
> > > > > > > >
> > > > > > > > Number of msi-bank count is system wide and not per
> > > > > > > > aperture, But will be
> > > > > setting windows for banks in the device aperture.
> > > > > > > > So say if we are direct assigning 2 pci device (both have
> > > > > > > > different iommu
> > > > > group, so 2 aperture in iommu) to VM.
> > > > > > > > Now qemu can make only one call to know how many msi-banks
> > > > > > > > are there but
> > > > > it must set sub-windows for all banks for both pci device in its
> > > > > respective aperture.
> > > > > > >
> > > > > > > I'm still confused.  What I want to make sure of is that the
> > > > > > > banks are independent per aperture.  For instance, if we have
> > > > > > > two separate userspace processes operating independently and
> > > > > > > they both chose to use msi bank zero for their device, that's
> > > > > > > bank zero within each aperture and doesn't interfere.  Or
> > > > > > > another way to ask is can a malicious user interfere with
> > > > > > > other users by
> > > using the wrong bank.
> > > > > > > Thanks,
> > > > > >
> > > > > > They can interfere.
> > > >
> > > > Want to be sure of how they can interfere?
> > >
> > > What happens if more than one user selects the same MSI bank?
> > > Minimally, wouldn't that result in the IOMMU blocking transactions
> > > from the previous user once the new user activates their mapping?
> >
> > Yes and no; With current implementation yes but with a minor change no. Later in
> > this response I will explain how.
> >
> > >
> > > > >>  With this hardware, the only way to prevent that
> > > > > > is to make sure that a bank is not shared by multiple protection
> > contexts.
> > > > > > For some of our users, though, I believe preventing this is less
> > > > > > important than the performance benefit.
> > > >
> > > > So should we let this patch series in without protection?
> > >
> > > No.
> > >
> > > > >
> > > > > I think we need some sort of ownership model around the msi banks then.
> > > > > Otherwise there's nothing preventing another userspace from
> > > > > attempting an MSI based attack on other users, or perhaps even on
> > > > > the host.  VFIO can't allow that.  Thanks,
> > > >
> > > > We have very few (3 MSI bank on most of chips), so we can not assign
> > > > one to each userspace. What we can do is host and userspace does not
> > > > share a MSI bank while userspace will share a MSI bank.
> > >
> > > Then you probably need VFIO to "own" the MSI bank and program devices
> > > into it rather than exposing the MSI banks to userspace to let them have
> > direct access.
> >
> > Overall idea of exposing the details of msi regions to userspace are
> >  1) User space can define the aperture size to fit MSI mapping in IOMMU.
> >  2) setup iova for a MSI banks; which is just after guest memory.
> >
> > But currently we expose the "size" and "address" of MSI banks, passing address
> > is of no use and can be problematic.
> 
> I am sorry, above information is not correct. Currently neither we expose "address" nor "size" to user space. We only expose number of MSI BANK count and userspace adds one sub-window for each bank.
> 
> > If we just provide the size of MSI bank to userspace then userspace cannot do
> > anything wrong.
> 
> So userspace does not know address, so it cannot mmap and cause any interference by directly reading/writing.

That's security through obscurity...  Couldn't the malicious user find
out the address via other means, such as experimentation on another
system over which they have full control?  What would happen if the user
reads from their device's PCI config space?  Or gets the information via
some back door in the PCI device they own?  Or pokes throughout the
address space looking for something that generates an interrupt to its
own device?

-Scott

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-06  0:21                 ` Scott Wood
@ 2013-12-06  4:11                   ` Bharat Bhushan
  2013-12-06 18:59                     ` Scott Wood
  0 siblings, 1 reply; 35+ messages in thread
From: Bharat Bhushan @ 2013-12-06  4:11 UTC (permalink / raw)
  To: Scott Wood
  Cc: linux-pci, agraf, iommu, Stuart Yoder, Alex Williamson, bhelgaas,
	linuxppc-dev, linux-kernel



> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Friday, December 06, 2013 5:52 AM
> To: Bhushan Bharat-R65777
> Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
> >
> > > -----Original Message-----
> > > From: Bhushan Bharat-R65777
> > > Sent: Wednesday, November 27, 2013 9:39 PM
> > > To: 'Alex Williamson'
> > > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de;
> > > Yoder Stuart- B08248; iommu@lists.linux-foundation.org;
> > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > linux-kernel@vger.kernel.org
> > > Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > IOMMU (PAMU)
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > Sent: Monday, November 25, 2013 10:08 PM
> > > > To: Bhushan Bharat-R65777
> > > > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de;
> > > > Yoder
> > > > Stuart- B08248; iommu@lists.linux-foundation.org;
> > > > bhelgaas@google.com;
> > > > linuxppc- dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > IOMMU
> > > > (PAMU)
> > > >
> > > > On Mon, 2013-11-25 at 05:33 +0000, Bharat Bhushan wrote:
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > > Sent: Friday, November 22, 2013 2:31 AM
> > > > > > To: Wood Scott-B07421
> > > > > > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org;
> > > > > > agraf@suse.de; Yoder Stuart-B08248;
> > > > > > iommu@lists.linux-foundation.org; bhelgaas@google.com;
> > > > > > linuxppc- dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > Freescale IOMMU (PAMU)
> > > > > >
> > > > > > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > > > > > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > > > > > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: Alex Williamson
> > > > > > > > > > [mailto:alex.williamson@redhat.com]
> > > > > > > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > > > > > > To: Bhushan Bharat-R65777
> > > > > > > > > > Cc: joro@8bytes.org; bhelgaas@google.com;
> > > > > > > > > > agraf@suse.de; Wood Scott-B07421; Yoder Stuart-B08248;
> > > > > > > > > > iommu@lists.linux-foundation.org; linux-
> > > > > > > > > > pci@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
> > > > > > > > > > linux- kernel@vger.kernel.org; Bhushan Bharat-R65777
> > > > > > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > > > > > Freescale IOMMU (PAMU)
> > > > > > > > > >
> > > > > > > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie.
> > > > > > > > > > each vfio user has $COUNT regions at their disposal
> exclusively)?
> > > > > > > > >
> > > > > > > > > Number of msi-bank count is system wide and not per
> > > > > > > > > aperture, But will be
> > > > > > setting windows for banks in the device aperture.
> > > > > > > > > So say if we are direct assigning 2 pci device (both
> > > > > > > > > have different iommu
> > > > > > group, so 2 aperture in iommu) to VM.
> > > > > > > > > Now qemu can make only one call to know how many
> > > > > > > > > msi-banks are there but
> > > > > > it must set sub-windows for all banks for both pci device in
> > > > > > its respective aperture.
> > > > > > > >
> > > > > > > > I'm still confused.  What I want to make sure of is that
> > > > > > > > the banks are independent per aperture.  For instance, if
> > > > > > > > we have two separate userspace processes operating
> > > > > > > > independently and they both chose to use msi bank zero for
> > > > > > > > their device, that's bank zero within each aperture and
> > > > > > > > doesn't interfere.  Or another way to ask is can a
> > > > > > > > malicious user interfere with other users by
> > > > using the wrong bank.
> > > > > > > > Thanks,
> > > > > > >
> > > > > > > They can interfere.
> > > > >
> > > > > Want to be sure of how they can interfere?
> > > >
> > > > What happens if more than one user selects the same MSI bank?
> > > > Minimally, wouldn't that result in the IOMMU blocking transactions
> > > > from the previous user once the new user activates their mapping?
> > >
> > > Yes and no; With current implementation yes but with a minor change
> > > no. Later in this response I will explain how.
> > >
> > > >
> > > > > >>  With this hardware, the only way to prevent that
> > > > > > > is to make sure that a bank is not shared by multiple
> > > > > > > protection
> > > contexts.
> > > > > > > For some of our users, though, I believe preventing this is
> > > > > > > less important than the performance benefit.
> > > > >
> > > > > So should we let this patch series in without protection?
> > > >
> > > > No.
> > > >
> > > > > >
> > > > > > I think we need some sort of ownership model around the msi banks
> then.
> > > > > > Otherwise there's nothing preventing another userspace from
> > > > > > attempting an MSI based attack on other users, or perhaps even
> > > > > > on the host.  VFIO can't allow that.  Thanks,
> > > > >
> > > > > We have very few (3 MSI bank on most of chips), so we can not
> > > > > assign one to each userspace. What we can do is host and
> > > > > userspace does not share a MSI bank while userspace will share a MSI
> bank.
> > > >
> > > > Then you probably need VFIO to "own" the MSI bank and program
> > > > devices into it rather than exposing the MSI banks to userspace to
> > > > let them have
> > > direct access.
> > >
> > > Overall idea of exposing the details of msi regions to userspace are
> > >  1) User space can define the aperture size to fit MSI mapping in IOMMU.
> > >  2) setup iova for a MSI banks; which is just after guest memory.
> > >
> > > But currently we expose the "size" and "address" of MSI banks,
> > > passing address is of no use and can be problematic.
> >
> > I am sorry, above information is not correct. Currently neither we expose
> "address" nor "size" to user space. We only expose number of MSI BANK count and
> userspace adds one sub-window for each bank.
> >
> > > If we just provide the size of MSI bank to userspace then userspace
> > > cannot do anything wrong.
> >
> > So userspace does not know address, so it cannot mmap and cause any
> interference by directly reading/writing.
> 
> That's security through obscurity...  Couldn't the malicious user find out the
> address via other means, such as experimentation on another system over which
> they have full control?  What would happen if the user reads from their device's
> PCI config space?  Or gets the information via some back door in the PCI device
> they own?  Or pokes throughout the address space looking for something that
> generates an interrupt to its own device?

So how to solve this problem, Any suggestion ?

We have to map one window in PAMU for MSIs and a malicious user can ask its device to do DMA to MSI window region with any pair of address and data, which can lead to unexpected MSIs in system?

Thanks
-Bharat

> 
> -Scott
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-06  0:00             ` Scott Wood
@ 2013-12-06  4:17               ` Bharat Bhushan
  2013-12-06 19:25                 ` Scott Wood
  0 siblings, 1 reply; 35+ messages in thread
From: Bharat Bhushan @ 2013-12-06  4:17 UTC (permalink / raw)
  To: Scott Wood
  Cc: linux-pci, agraf, iommu, Stuart Yoder, Alex Williamson, bhelgaas,
	linuxppc-dev, linux-kernel



> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Friday, December 06, 2013 5:31 AM
> To: Bhushan Bharat-R65777
> Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> On Sun, 2013-11-24 at 23:33 -0600, Bharat Bhushan wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > Sent: Friday, November 22, 2013 2:31 AM
> > > To: Wood Scott-B07421
> > > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org; agraf@suse.de;
> > > Yoder Stuart-B08248; iommu@lists.linux-foundation.org;
> > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > IOMMU (PAMU)
> > >
> > > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > > On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
> > > > > On Thu, 2013-11-21 at 11:20 +0000, Bharat Bhushan wrote:
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > > > Sent: Thursday, November 21, 2013 12:17 AM
> > > > > > > To: Bhushan Bharat-R65777
> > > > > > > Cc: joro@8bytes.org; bhelgaas@google.com; agraf@suse.de;
> > > > > > > Wood Scott-B07421; Yoder Stuart-B08248;
> > > > > > > iommu@lists.linux-foundation.org; linux-
> > > > > > > pci@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
> > > > > > > kernel@vger.kernel.org; Bhushan Bharat-R65777
> > > > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > > Freescale IOMMU (PAMU)
> > > > > > >
> > > > > > > Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
> > > > > > > vfio user has $COUNT regions at their disposal exclusively)?
> > > > > >
> > > > > > Number of msi-bank count is system wide and not per aperture,
> > > > > > But will be
> > > setting windows for banks in the device aperture.
> > > > > > So say if we are direct assigning 2 pci device (both have
> > > > > > different iommu
> > > group, so 2 aperture in iommu) to VM.
> > > > > > Now qemu can make only one call to know how many msi-banks are
> > > > > > there but
> > > it must set sub-windows for all banks for both pci device in its
> > > respective aperture.
> > > > >
> > > > > I'm still confused.  What I want to make sure of is that the
> > > > > banks are independent per aperture.  For instance, if we have
> > > > > two separate userspace processes operating independently and
> > > > > they both chose to use msi bank zero for their device, that's
> > > > > bank zero within each aperture and doesn't interfere.  Or
> > > > > another way to ask is can a malicious user interfere with other users by
> using the wrong bank.
> > > > > Thanks,
> > > >
> > > > They can interfere.
> >
> > Want to be sure of how they can interfere?
> 
> If more than one VFIO user shares the same MSI group, one of the users can send
> MSIs to another user, by using the wrong interrupt within the bank.  Unexpected
> MSIs could cause misbehavior or denial of service.
> 
> > >>  With this hardware, the only way to prevent that
> > > > is to make sure that a bank is not shared by multiple protection contexts.
> > > > For some of our users, though, I believe preventing this is less
> > > > important than the performance benefit.
> >
> > So should we let this patch series in without protection?
> 
> No, there should be some sort of opt-in mechanism similar to IOMMU-less VFIO --
> but not the same exact one, since one is a much more serious loss of isolation
> than the other.

Can you please elaborate "opt-in mechanism"?

> 
> > > I think we need some sort of ownership model around the msi banks then.
> > > Otherwise there's nothing preventing another userspace from
> > > attempting an MSI based attack on other users, or perhaps even on
> > > the host.  VFIO can't allow that.  Thanks,
> >
> > We have very few (3 MSI bank on most of chips), so we can not assign
> > one to each userspace.
> 
> That depends on how many users there are.

What I think we can do is:
 - Reserve one MSI region for host. Host will not share MSI region with Guest.
 - For upto 2 Guest (MAX msi with host - 1) give then separate MSI sub regions
 - Additional Guest will share MSI region with other guest.

Any better suggestion are most welcome.

Thanks
-Bharat
> 
> -Scott
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-06  4:11                   ` Bharat Bhushan
@ 2013-12-06 18:59                     ` Scott Wood
  2013-12-06 19:30                       ` Alex Williamson
  0 siblings, 1 reply; 35+ messages in thread
From: Scott Wood @ 2013-12-06 18:59 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: linux-pci, agraf, iommu, Yoder Stuart-B08248, Alex Williamson,
	bhelgaas, linuxppc-dev, linux-kernel

On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
> 
> > -----Original Message-----
> > From: Wood Scott-B07421
> > Sent: Friday, December 06, 2013 5:52 AM
> > To: Bhushan Bharat-R65777
> > Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> > B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> >
> > On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
> > >
> > > > -----Original Message-----
> > > > From: Bhushan Bharat-R65777
> > > > Sent: Wednesday, November 27, 2013 9:39 PM
> > > > To: 'Alex Williamson'
> > > > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de;
> > > > Yoder Stuart- B08248; iommu@lists.linux-foundation.org;
> > > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > > linux-kernel@vger.kernel.org
> > > > Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > IOMMU (PAMU)
> > > >
> > > > If we just provide the size of MSI bank to userspace then userspace
> > > > cannot do anything wrong.
> > >
> > > So userspace does not know address, so it cannot mmap and cause any
> > interference by directly reading/writing.
> >
> > That's security through obscurity...  Couldn't the malicious user find out the
> > address via other means, such as experimentation on another system over which
> > they have full control?  What would happen if the user reads from their device's
> > PCI config space?  Or gets the information via some back door in the PCI device
> > they own?  Or pokes throughout the address space looking for something that
> > generates an interrupt to its own device?
> 
> So how to solve this problem, Any suggestion ?
> 
> We have to map one window in PAMU for MSIs and a malicious user can ask
> its device to do DMA to MSI window region with any pair of address and
> data, which can lead to unexpected MSIs in system?

I don't think there are any solutions other than to limit each bank to
one user, unless the admin turns some knob that says they're OK with the
partial loss of isolation.

-Scott
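[Editor's note] The interference Scott and Alex describe can be sketched in a few lines. An MSI bank is a single DMA-writable page: any device that can write to it can fire any interrupt in that bank just by supplying that interrupt's data value, and the hardware cannot tell a legitimate MSI write from a forged one. The model below is purely illustrative; the `MsiBank` class and its names are invented here, not taken from the patches or the kernel.

```python
class MsiBank:
    """Toy model of one MSI bank: a DMA-writable page where writing an
    interrupt's 'data' value fires that interrupt at whoever owns it."""

    def __init__(self):
        self.irq_owner = {}   # irq number -> user that registered it
        self.delivered = []   # (user, irq) interrupt events observed

    def register(self, user, irq):
        self.irq_owner[irq] = user

    def dma_write(self, writer, data):
        # Hardware cannot check *who* wrote: any (address, data) pair a
        # device DMAs into the mapped MSI page fires the matching interrupt.
        irq = data
        if irq in self.irq_owner:
            self.delivered.append((self.irq_owner[irq], irq))

bank = MsiBank()
bank.register("guest_a", irq=5)
bank.register("guest_b", irq=9)

# guest_a's device maliciously DMA-writes guest_b's data value into the
# shared MSI page: guest_b receives an interrupt it never asked for.
bank.dma_write("guest_a", data=9)
print(bank.delivered)   # [('guest_b', 9)]
```

As Scott says, with this hardware the only full fix is to not map the same bank into two protection contexts at all.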

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-06  4:17               ` Bharat Bhushan
@ 2013-12-06 19:25                 ` Scott Wood
  2013-12-10  5:37                   ` Bharat.Bhushan
  0 siblings, 1 reply; 35+ messages in thread
From: Scott Wood @ 2013-12-06 19:25 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: linux-pci, agraf, iommu, Yoder Stuart-B08248, Alex Williamson,
	bhelgaas, linuxppc-dev, linux-kernel

On Thu, 2013-12-05 at 22:17 -0600, Bharat Bhushan wrote:
> 
> > -----Original Message-----
> > From: Wood Scott-B07421
> > Sent: Friday, December 06, 2013 5:31 AM
> > To: Bhushan Bharat-R65777
> > Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> > B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> >
> > On Sun, 2013-11-24 at 23:33 -0600, Bharat Bhushan wrote:
> > >
> > > > -----Original Message-----
> > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > Sent: Friday, November 22, 2013 2:31 AM
> > > > To: Wood Scott-B07421
> > > > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org; agraf@suse.de;
> > > > Yoder Stuart-B08248; iommu@lists.linux-foundation.org;
> > > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > > linux-kernel@vger.kernel.org
> > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > IOMMU (PAMU)
> > > >
> > > > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > > > They can interfere.
> > >
> > > Want to be sure of how they can interfere?
> >
> > If more than one VFIO user shares the same MSI group, one of the users can send
> > MSIs to another user, by using the wrong interrupt within the bank.  Unexpected
> > MSIs could cause misbehavior or denial of service.
> >
> > > >>  With this hardware, the only way to prevent that
> > > > > is to make sure that a bank is not shared by multiple protection contexts.
> > > > > For some of our users, though, I believe preventing this is less
> > > > > important than the performance benefit.
> > >
> > > So should we let this patch series in without protection?
> >
> > No, there should be some sort of opt-in mechanism similar to IOMMU-less VFIO --
> > but not the same exact one, since one is a much more serious loss of isolation
> > than the other.
> 
> Can you please elaborate "opt-in mechanism"?

The system should be secure by default.  If the administrator wants to
relax protection in order to accomplish some functionality, that should
require an explicit request such as a write to a sysfs file.

> > > > I think we need some sort of ownership model around the msi banks then.
> > > > Otherwise there's nothing preventing another userspace from
> > > > attempting an MSI based attack on other users, or perhaps even on
> > > > the host.  VFIO can't allow that.  Thanks,
> > >
> > > We have very few (3 MSI bank on most of chips), so we can not assign
> > > one to each userspace.
> >
> > That depends on how many users there are.
> 
> What I think we can do is:
>  - Reserve one MSI region for host. Host will not share MSI region with Guest.
>  - For upto 2 Guest (MAX msi with host - 1) give then separate MSI sub regions
>  - Additional Guest will share MSI region with other guest.
> 
> Any better suggestion are most welcome.

If the administrator does not opt into this partial loss of isolation,
then once you run out of MSI groups, new users should not be able to set
up MSIs.

-Scott
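[Editor's note] The policy Scott lays out across this exchange has three parts: secure by default (one user per bank), sharing only behind an explicit admin opt-in, and hard failure for new users once banks run out without that opt-in. A minimal Python sketch of that policy, with all names invented here rather than taken from the patches:

```python
class MsiBankAllocator:
    """Toy model of the allocation policy under discussion: banks are
    exclusive by default; sharing requires an explicit admin opt-in; and
    without the opt-in, new users are refused once banks are exhausted."""

    def __init__(self, num_banks=3, allow_unsafe_sharing=False):
        self.banks = {b: [] for b in range(num_banks)}  # bank -> users
        self.allow_unsafe_sharing = allow_unsafe_sharing

    def alloc(self, user):
        # Prefer a free bank, which the user then owns exclusively.
        for bank, users in self.banks.items():
            if not users:
                users.append(user)
                return bank
        # Banks exhausted: share only if the admin opted in.
        if not self.allow_unsafe_sharing:
            return None  # refuse to set up MSIs for this user
        bank = min(self.banks, key=lambda b: len(self.banks[b]))
        self.banks[bank].append(user)
        return bank

secure = MsiBankAllocator(num_banks=3)
assert [secure.alloc(u) for u in "ABC"] == [0, 1, 2]
assert secure.alloc("D") is None          # secure by default: refuse

relaxed = MsiBankAllocator(num_banks=3, allow_unsafe_sharing=True)
[relaxed.alloc(u) for u in "ABC"]
assert relaxed.alloc("D") == 0            # opt-in: share a bank
```

Bharat's "reserve one bank for the host" proposal maps onto this model by simply creating the allocator with one fewer bank than the hardware provides.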

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-06 18:59                     ` Scott Wood
@ 2013-12-06 19:30                       ` Alex Williamson
  2013-12-07  0:22                         ` Scott Wood
  2013-12-10  5:37                         ` Bharat.Bhushan
  0 siblings, 2 replies; 35+ messages in thread
From: Alex Williamson @ 2013-12-06 19:30 UTC (permalink / raw)
  To: Scott Wood
  Cc: linux-pci, agraf, Yoder Stuart-B08248, Bharat Bhushan, iommu,
	bhelgaas, linuxppc-dev, linux-kernel

On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
> On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
> > 
> > > -----Original Message-----
> > > From: Wood Scott-B07421
> > > Sent: Friday, December 06, 2013 5:52 AM
> > > To: Bhushan Bharat-R65777
> > > Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> > > B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> > >
> > > On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Bhushan Bharat-R65777
> > > > > Sent: Wednesday, November 27, 2013 9:39 PM
> > > > > To: 'Alex Williamson'
> > > > > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de;
> > > > > Yoder Stuart- B08248; iommu@lists.linux-foundation.org;
> > > > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > > > linux-kernel@vger.kernel.org
> > > > > Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > > IOMMU (PAMU)
> > > > >
> > > > > If we just provide the size of MSI bank to userspace then userspace
> > > > > cannot do anything wrong.
> > > >
> > > > So userspace does not know address, so it cannot mmap and cause any
> > > interference by directly reading/writing.
> > >
> > > That's security through obscurity...  Couldn't the malicious user find out the
> > > address via other means, such as experimentation on another system over which
> > > they have full control?  What would happen if the user reads from their device's
> > > PCI config space?  Or gets the information via some back door in the PCI device
> > > they own?  Or pokes throughout the address space looking for something that
> > > generates an interrupt to its own device?
> > 
> > So how to solve this problem, Any suggestion ?
> > 
> > We have to map one window in PAMU for MSIs and a malicious user can ask
> > its device to do DMA to MSI window region with any pair of address and
> > data, which can lead to unexpected MSIs in system?
> 
> I don't think there are any solutions other than to limit each bank to
> one user, unless the admin turns some knob that says they're OK with the
> partial loss of isolation.

Even if the admin does opt-in to an allow_unsafe_interrupts options, it
should still be reasonably difficult for one guest to interfere with the
other.  I don't think we want to rely on the blind luck of making the
full MSI bank accessible to multiple guests and hoping they don't step
on each other.  That probably means that vfio needs to manage the space
rather than the guest.  Thanks,

Alex
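[Editor's note] One way to read Alex's "vfio needs to manage the space rather than the guest": when a bank must be shared, the host side (vfio together with the MSI driver) should pick which interrupt slot within the bank each device uses, instead of letting guests choose and hoping they don't collide. A hypothetical sketch of that idea; the names here are illustrative, not from the patches:

```python
class HostManagedBank:
    """Toy model of host-managed allocation within one shared MSI bank:
    the host, not the guest, assigns each device's interrupt slot, so
    cooperating guests can never collide by accident."""

    def __init__(self, num_irqs=8):
        self.free = list(range(num_irqs))
        self.owner = {}   # irq slot -> (guest, device)

    def setup_msi(self, guest, device):
        if not self.free:
            raise RuntimeError("MSI bank exhausted")
        irq = self.free.pop(0)          # host chooses the slot
        self.owner[irq] = (guest, device)
        return irq

bank = HostManagedBank()
irq_a = bank.setup_msi("guest_a", "dev0")
irq_b = bank.setup_msi("guest_b", "dev1")
assert irq_a != irq_b   # no accidental overlap, by construction
```

This removes the "blind luck" Alex objects to for well-behaved guests; it does not stop a malicious device from DMA-writing another slot's data value, which is why the explicit opt-in knob is still required for sharing at all.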

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-06 19:30                       ` Alex Williamson
@ 2013-12-07  0:22                         ` Scott Wood
  2013-12-10  5:37                         ` Bharat.Bhushan
  1 sibling, 0 replies; 35+ messages in thread
From: Scott Wood @ 2013-12-07  0:22 UTC (permalink / raw)
  To: Alex Williamson
  Cc: linux-pci, agraf, Yoder Stuart-B08248, iommu, bhelgaas,
	linuxppc-dev, linux-kernel

On Fri, 2013-12-06 at 12:30 -0700, Alex Williamson wrote:
> On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
> > On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
> > > 
> > > > -----Original Message-----
> > > > From: Wood Scott-B07421
> > > > Sent: Friday, December 06, 2013 5:52 AM
> > > > To: Bhushan Bharat-R65777
> > > > Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> > > > B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > > > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> > > >
> > > > On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Bhushan Bharat-R65777
> > > > > > Sent: Wednesday, November 27, 2013 9:39 PM
> > > > > > To: 'Alex Williamson'
> > > > > > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de;
> > > > > > Yoder Stuart- B08248; iommu@lists.linux-foundation.org;
> > > > > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > > > > linux-kernel@vger.kernel.org
> > > > > > Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > > > IOMMU (PAMU)
> > > > > >
> > > > > > If we just provide the size of MSI bank to userspace then userspace
> > > > > > cannot do anything wrong.
> > > > >
> > > > > So userspace does not know address, so it cannot mmap and cause any
> > > > interference by directly reading/writing.
> > > >
> > > > That's security through obscurity...  Couldn't the malicious user find out the
> > > > address via other means, such as experimentation on another system over which
> > > > they have full control?  What would happen if the user reads from their device's
> > > > PCI config space?  Or gets the information via some back door in the PCI device
> > > > they own?  Or pokes throughout the address space looking for something that
> > > > generates an interrupt to its own device?
> > > 
> > > So how to solve this problem, Any suggestion ?
> > > 
> > > We have to map one window in PAMU for MSIs and a malicious user can ask
> > > its device to do DMA to MSI window region with any pair of address and
> > > data, which can lead to unexpected MSIs in system?
> > 
> > I don't think there are any solutions other than to limit each bank to
> > one user, unless the admin turns some knob that says they're OK with the
> > partial loss of isolation.
> 
> Even if the admin does opt-in to an allow_unsafe_interrupts options, it
> should still be reasonably difficult for one guest to interfere with the
> other.  I don't think we want to rely on the blind luck of making the
> full MSI bank accessible to multiple guests and hoping they don't step
> on each other.  That probably means that vfio needs to manage the space
> rather than the guest.  Thanks,

Yes, the MSIs within a given bank would be allocated by the host kernel
in any case (presumably by the MSI driver, not VFIO itself).  This is
just about what happens if the MSI page is written to outside of the
normal mechanism.

-Scott

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-06 19:30                       ` Alex Williamson
  2013-12-07  0:22                         ` Scott Wood
@ 2013-12-10  5:37                         ` Bharat.Bhushan
  2013-12-10  5:53                           ` Alex Williamson
  1 sibling, 1 reply; 35+ messages in thread
From: Bharat.Bhushan @ 2013-12-10  5:37 UTC (permalink / raw)
  To: Alex Williamson, Scott Wood
  Cc: linux-pci, agraf, Stuart Yoder, Bharat.Bhushan, iommu, bhelgaas,
	linuxppc-dev, linux-kernel



> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Saturday, December 07, 2013 1:00 AM
> To: Wood Scott-B07421
> Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org; agraf@suse.de; Yoder
> Stuart-B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
> > On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
> > >
> > > > -----Original Message-----
> > > > From: Wood Scott-B07421
> > > > Sent: Friday, December 06, 2013 5:52 AM
> > > > To: Bhushan Bharat-R65777
> > > > Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de;
> > > > Yoder Stuart- B08248; iommu@lists.linux-foundation.org;
> > > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > > linux-kernel@vger.kernel.org
> > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > IOMMU (PAMU)
> > > >
> > > > On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Bhushan Bharat-R65777
> > > > > > Sent: Wednesday, November 27, 2013 9:39 PM
> > > > > > To: 'Alex Williamson'
> > > > > > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org;
> > > > > > agraf@suse.de; Yoder Stuart- B08248;
> > > > > > iommu@lists.linux-foundation.org; bhelgaas@google.com;
> > > > > > linuxppc- dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > > > > Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > Freescale IOMMU (PAMU)
> > > > > >
> > > > > > If we just provide the size of MSI bank to userspace then
> > > > > > userspace cannot do anything wrong.
> > > > >
> > > > > So userspace does not know address, so it cannot mmap and cause
> > > > > any
> > > > interference by directly reading/writing.
> > > >
> > > > That's security through obscurity...  Couldn't the malicious user
> > > > find out the address via other means, such as experimentation on
> > > > another system over which they have full control?  What would
> > > > happen if the user reads from their device's PCI config space?  Or
> > > > gets the information via some back door in the PCI device they
> > > > own?  Or pokes throughout the address space looking for something that
> generates an interrupt to its own device?
> > >
> > > So how to solve this problem, Any suggestion ?
> > >
> > > We have to map one window in PAMU for MSIs and a malicious user can
> > > ask its device to do DMA to MSI window region with any pair of
> > > address and data, which can lead to unexpected MSIs in system?
> >
> > I don't think there are any solutions other than to limit each bank to
> > one user, unless the admin turns some knob that says they're OK with
> > the partial loss of isolation.
> 
> Even if the admin does opt-in to an allow_unsafe_interrupts options, it should
> still be reasonably difficult for one guest to interfere with the other.  I
> don't think we want to rely on the blind luck of making the full MSI bank
> accessible to multiple guests and hoping they don't step on each other.

Not sure how to solve in this case (sharing MSI page)

>  That probably means that vfio needs to manage the space rather than the guest.

What you mean by " vfio needs to manage the space rather than the guest"?

Thanks
-Bharat

> Thanks,
> 
> Alex
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-06 19:25                 ` Scott Wood
@ 2013-12-10  5:37                   ` Bharat.Bhushan
  2013-12-10 20:29                     ` Scott Wood
  0 siblings, 1 reply; 35+ messages in thread
From: Bharat.Bhushan @ 2013-12-10  5:37 UTC (permalink / raw)
  To: 'Wood Scott-B07421', Bharat.Bhushan
  Cc: linux-pci, agraf, iommu, Stuart Yoder, Alex Williamson, bhelgaas,
	linuxppc-dev, linux-kernel



> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Saturday, December 07, 2013 12:55 AM
> To: Bhushan Bharat-R65777
> Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> On Thu, 2013-12-05 at 22:17 -0600, Bharat Bhushan wrote:
> >
> > > -----Original Message-----
> > > From: Wood Scott-B07421
> > > Sent: Friday, December 06, 2013 5:31 AM
> > > To: Bhushan Bharat-R65777
> > > Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de; Yoder
> > > Stuart- B08248; iommu@lists.linux-foundation.org;
> > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > IOMMU (PAMU)
> > >
> > > On Sun, 2013-11-24 at 23:33 -0600, Bharat Bhushan wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > Sent: Friday, November 22, 2013 2:31 AM
> > > > > To: Wood Scott-B07421
> > > > > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org;
> > > > > agraf@suse.de; Yoder Stuart-B08248;
> > > > > iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > > > > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > > IOMMU (PAMU)
> > > > >
> > > > > On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
> > > > > > They can interfere.
> > > >
> > > > Want to be sure of how they can interfere?
> > >
> > > If more than one VFIO user shares the same MSI group, one of the
> > > users can send MSIs to another user, by using the wrong interrupt
> > > within the bank.  Unexpected MSIs could cause misbehavior or denial of
> service.
> > >
> > > > >>  With this hardware, the only way to prevent that
> > > > > > is to make sure that a bank is not shared by multiple protection
> contexts.
> > > > > > For some of our users, though, I believe preventing this is
> > > > > > less important than the performance benefit.
> > > >
> > > > So should we let this patch series in without protection?
> > >
> > > No, there should be some sort of opt-in mechanism similar to
> > > IOMMU-less VFIO -- but not the same exact one, since one is a much
> > > more serious loss of isolation than the other.
> >
> > Can you please elaborate "opt-in mechanism"?
> 
> The system should be secure by default.  If the administrator wants to relax
> protection in order to accomplish some functionality, that should require an
> explicit request such as a write to a sysfs file.
> 
> > > > > I think we need some sort of ownership model around the msi banks then.
> > > > > Otherwise there's nothing preventing another userspace from
> > > > > attempting an MSI based attack on other users, or perhaps even
> > > > > on the host.  VFIO can't allow that.  Thanks,
> > > >
> > > > We have very few (3 MSI bank on most of chips), so we can not
> > > > assign one to each userspace.
> > >
> > > That depends on how many users there are.
> >
> > What I think we can do is:
> >  - Reserve one MSI region for host. Host will not share MSI region with Guest.
> >  - For upto 2 Guest (MAX msi with host - 1) give then separate MSI sub
> > regions
> >  - Additional Guest will share MSI region with other guest.
> >
> > Any better suggestion are most welcome.
> 
> If the administrator does not opt into this partial loss of isolation, then once
> you run out of MSI groups, new users should not be able to set up MSIs.

So mean vfio should use Legacy when out of MSI banks?

Thanks
-Bharat

> 
> -Scott
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-10  5:37                         ` Bharat.Bhushan
@ 2013-12-10  5:53                           ` Alex Williamson
  2013-12-10  9:09                             ` Bharat.Bhushan
  0 siblings, 1 reply; 35+ messages in thread
From: Alex Williamson @ 2013-12-10  5:53 UTC (permalink / raw)
  To: Bharat.Bhushan
  Cc: linux-pci, agraf, Stuart Yoder, bhelgaas, iommu, Scott Wood,
	linuxppc-dev, linux-kernel

On Tue, 2013-12-10 at 05:37 +0000, Bharat.Bhushan@freescale.com wrote:
> 
> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Saturday, December 07, 2013 1:00 AM
> > To: Wood Scott-B07421
> > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org; agraf@suse.de; Yoder
> > Stuart-B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> > 
> > On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
> > > On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Wood Scott-B07421
> > > > > Sent: Friday, December 06, 2013 5:52 AM
> > > > > To: Bhushan Bharat-R65777
> > > > > Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de;
> > > > > Yoder Stuart- B08248; iommu@lists.linux-foundation.org;
> > > > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > > > linux-kernel@vger.kernel.org
> > > > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > > > IOMMU (PAMU)
> > > > >
> > > > > On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Bhushan Bharat-R65777
> > > > > > > Sent: Wednesday, November 27, 2013 9:39 PM
> > > > > > > To: 'Alex Williamson'
> > > > > > > Cc: Wood Scott-B07421; linux-pci@vger.kernel.org;
> > > > > > > agraf@suse.de; Yoder Stuart- B08248;
> > > > > > > iommu@lists.linux-foundation.org; bhelgaas@google.com;
> > > > > > > linuxppc- dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > > > > > > Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for
> > > > > > > Freescale IOMMU (PAMU)
> > > > > > >
> > > > > > > If we just provide the size of MSI bank to userspace then
> > > > > > > userspace cannot do anything wrong.
> > > > > >
> > > > > > So userspace does not know address, so it cannot mmap and cause
> > > > > > any
> > > > > interference by directly reading/writing.
> > > > >
> > > > > That's security through obscurity...  Couldn't the malicious user
> > > > > find out the address via other means, such as experimentation on
> > > > > another system over which they have full control?  What would
> > > > > happen if the user reads from their device's PCI config space?  Or
> > > > > gets the information via some back door in the PCI device they
> > > > > own?  Or pokes throughout the address space looking for something that
> > generates an interrupt to its own device?
> > > >
> > > > So how to solve this problem, Any suggestion ?
> > > >
> > > > We have to map one window in PAMU for MSIs and a malicious user can
> > > > ask its device to do DMA to MSI window region with any pair of
> > > > address and data, which can lead to unexpected MSIs in system?
> > >
> > > I don't think there are any solutions other than to limit each bank to
> > > one user, unless the admin turns some knob that says they're OK with
> > > the partial loss of isolation.
> > 
> > Even if the admin does opt-in to an allow_unsafe_interrupts options, it should
> > still be reasonably difficult for one guest to interfere with the other.  I
> > don't think we want to rely on the blind luck of making the full MSI bank
> > accessible to multiple guests and hoping they don't step on each other.
> 
> Not sure how to solve in this case (sharing MSI page)
> 
> >  That probably means that vfio needs to manage the space rather than the guest.
> 
> What you mean by " vfio needs to manage the space rather than the guest"?

I mean there needs to be some kernel component managing the contents of
the MSI page rather than just handing it out to the user and hoping for
the best.  The user API also needs to remain the same whether the user
has the MSI page exclusively or it's shared with others (kernel or
users).  Thanks,

Alex


* RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-10  5:53                           ` Alex Williamson
@ 2013-12-10  9:09                             ` Bharat.Bhushan
  0 siblings, 0 replies; 35+ messages in thread
From: Bharat.Bhushan @ 2013-12-10  9:09 UTC (permalink / raw)
  To: Alex Williamson
  Cc: linux-pci, agraf, Stuart Yoder, bhelgaas, iommu, Scott Wood,
	linuxppc-dev, linux-kernel


> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Tuesday, December 10, 2013 11:23 AM
> To: Bhushan Bharat-R65777
> Cc: Wood Scott-B07421; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> 
> On Tue, 2013-12-10 at 05:37 +0000, Bharat.Bhushan@freescale.com wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > Sent: Saturday, December 07, 2013 1:00 AM
> > > To: Wood Scott-B07421
> > > Cc: Bhushan Bharat-R65777; linux-pci@vger.kernel.org; agraf@suse.de;
> > > Yoder Stuart-B08248; iommu@lists.linux-foundation.org;
> > > bhelgaas@google.com; linuxppc- dev@lists.ozlabs.org;
> > > linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
> > > IOMMU (PAMU)
> > >
> > > On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
> > > > On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
> > > > >
> > > > > > That's security through obscurity...  Couldn't the malicious
> > > > > > user find out the address via other means, such as
> > > > > > experimentation on another system over which they have full
> > > > > > control?  What would happen if the user reads from their
> > > > > > device's PCI config space?  Or gets the information via some
> > > > > > back door in the PCI device they own?  Or pokes throughout the
> > > > > > address space looking for something that
> > > generates an interrupt to its own device?
> > > > >
> > > > > So how to solve this problem, Any suggestion ?
> > > > >
> > > > > We have to map one window in PAMU for MSIs and a malicious user
> > > > > can ask its device to do DMA to MSI window region with any pair
> > > > > of address and data, which can lead to unexpected MSIs in system?
> > > >
> > > > I don't think there are any solutions other than to limit each
> > > > bank to one user, unless the admin turns some knob that says
> > > > they're OK with the partial loss of isolation.
> > >
> > > Even if the admin does opt-in to an allow_unsafe_interrupts options,
> > > it should still be reasonably difficult for one guest to interfere
> > > with the other.  I don't think we want to rely on the blind luck of
> > > making the full MSI bank accessible to multiple guests and hoping they don't
> step on each other.
> >
> > Not sure how to solve in this case (sharing MSI page)
> >
> > >  That probably means that vfio needs to manage the space rather than the
> guest.
> >
> > What you mean by " vfio needs to manage the space rather than the guest"?
> 
> I mean there needs to be some kernel component managing the contents of the MSI
> page rather than just handing it out to the user and hoping for the best.  The
> user API also needs to remain the same whether the user has the MSI page
> exclusively or it's shared with others (kernel or users).  Thanks,

We have a limited number of MSI banks, so we cannot provide an exclusive MSI bank to each VM.
Below is the summary of the msi allocation/ownership model I am thinking of:

Option-1: User-space aware of MSI banks
=========
1) Userspace will make GET_MSI_REGION(request number of MSI banks)
	- VFIO will allocate the requested number of MSI banks;
	- If allocation succeeds then return number of banks
	- If allocation fails then check opt-in flag set by administrator (allow_unsafe_interrupts);
	  allow_unsafe_interrupts == 0: not allowed to share; return FAIL (-ENODEV)
	  else share MSI bank of another VM.

2) Userspace will adjust geometry size as per number of banks and call SET_GEOMETRY

3) Userspace will do DMA_MAP for its memory

4) Userspace will do MSI_MAP for the number of banks it has
	- MSI_MAP(iova, bank number);
	- Should iova be passed by userspace or not? I think we should pass iova, as VFIO does not know whether userspace will call DMA_MAP for the same iova later on.
	  VFIO can somehow find a magic IOVA within the geometry, but it would have to assume that userspace will not make a DMA_MAP there later on.

Option-2: Userspace-transparent MSI banks
=========
1) Userspace sets up the geometry of its memory (call it "userspace-geometry") (SET_GEOMETRY)
	- VFIO will allocate MSI bank/s; how many?
	- Error out if not available (shared and/or exclusive, same as in option-1 above)
	- VFIO will adjust the geometry accordingly (call it "actual-geometry").

2) Userspace will do DMA_MAP for its memory.
	- VFIO allows mappings only within "userspace-geometry".

3) Userspace will do MSI_MAP after all DMA_MAP calls complete
	- VFIO will find a magic IOVA after "userspace-geometry" but within "actual-geometry".
	- MSI bank/s allocated in step 1 are mapped in the IOMMU

=========

Note: Irrespective of which option we use, a malicious userspace can interfere with another userspace by programming device DMA wrongly.

Option-1 looks flexible and good to me, but I am open to suggestions.

Thanks
-Bharat


> 
> Alex
> 
> 


* Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  2013-12-10  5:37                   ` Bharat.Bhushan
@ 2013-12-10 20:29                     ` Scott Wood
  0 siblings, 0 replies; 35+ messages in thread
From: Scott Wood @ 2013-12-10 20:29 UTC (permalink / raw)
  To: Bharat.Bhushan
  Cc: Stuart Yoder, linux-pci, Alex Williamson, agraf, bhelgaas, iommu,
	'Wood Scott-B07421',
	linuxppc-dev, linux-kernel

My e-mail address is <scottwood@freescale.com>, not
<IMCEAEX-_O=MMS_OU=EXTERNAL+20+28FYDIBOHF25SPDLT
+29_CN=RECIPIENTS_CN=F0FAAC8D7E74473A9EE1C45B068D838A@namprd03.prod.outlook.com>

On Tue, 2013-12-10 at 05:37 +0000, Bharat.Bhushan@freescale.com wrote:
> 
> > -----Original Message-----
> > From: Wood Scott-B07421
> > Sent: Saturday, December 07, 2013 12:55 AM
> > To: Bhushan Bharat-R65777
> > Cc: Alex Williamson; linux-pci@vger.kernel.org; agraf@suse.de; Yoder Stuart-
> > B08248; iommu@lists.linux-foundation.org; bhelgaas@google.com; linuxppc-
> > dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
> > 
> > If the administrator does not opt into this partial loss of isolation, then once
> > you run out of MSI groups, new users should not be able to set up MSIs.
> 
> So mean vfio should use Legacy when out of MSI banks?

Yes, if the administrator hasn't granted permission to share.

-Scott


end of thread, other threads:[~2013-12-10 22:47 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-11-19  5:17 [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Bharat Bhushan
2013-11-19  5:17 ` [PATCH 1/9 v2] pci:msi: add weak function for returning msi region info Bharat Bhushan
2013-11-25 23:36   ` Bjorn Helgaas
2013-11-28 10:08     ` Bharat Bhushan
2013-11-19  5:17 ` [PATCH 2/9 v2] pci: msi: expose msi region information functions Bharat Bhushan
2013-11-19  5:17 ` [PATCH 3/9 v2] powerpc: pci: Add arch specific msi region interface Bharat Bhushan
2013-11-19  5:17 ` [PATCH 4/9 v2] powerpc: msi: Extend the msi region interface to get info from fsl_msi Bharat Bhushan
2013-11-19  5:17 ` [PATCH 5/9 v2] pci/msi: interface to set an iova for a msi region Bharat Bhushan
2013-11-19  5:17 ` [PATCH 6/9 v2] powerpc: pci: Extend msi iova page setup to arch specific Bharat Bhushan
2013-11-19  5:17 ` [PATCH 7/9 v2] pci: msi: Extend msi iova setting interface to powerpc arch Bharat Bhushan
2013-11-19  5:17 ` [PATCH 8/9 v2] vfio: moving some functions in common file Bharat Bhushan
2013-11-19  5:17 ` [PATCH 9/9 v2] vfio pci: Add vfio iommu implementation for FSL_PAMU Bharat Bhushan
2013-11-20 18:47 ` [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU) Alex Williamson
2013-11-21 11:20   ` Varun Sethi
2013-11-21 11:20   ` Bharat Bhushan
2013-11-21 20:43     ` Alex Williamson
2013-11-21 20:47       ` Scott Wood
2013-11-21 21:00         ` Alex Williamson
2013-11-25  5:33           ` Bharat Bhushan
2013-11-25 16:38             ` Alex Williamson
2013-11-27 16:08               ` Bharat Bhushan
2013-11-28  9:19               ` Bharat Bhushan
2013-12-06  0:21                 ` Scott Wood
2013-12-06  4:11                   ` Bharat Bhushan
2013-12-06 18:59                     ` Scott Wood
2013-12-06 19:30                       ` Alex Williamson
2013-12-07  0:22                         ` Scott Wood
2013-12-10  5:37                         ` Bharat.Bhushan
2013-12-10  5:53                           ` Alex Williamson
2013-12-10  9:09                             ` Bharat.Bhushan
2013-12-06  0:00             ` Scott Wood
2013-12-06  4:17               ` Bharat Bhushan
2013-12-06 19:25                 ` Scott Wood
2013-12-10  5:37                   ` Bharat.Bhushan
2013-12-10 20:29                     ` Scott Wood
