* [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink
@ 2016-04-29  8:55 Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 01/11] vfio_pci: Test for extended capabilities if config space > 256 bytes Alexey Kardashevskiy
                   ` (10 more replies)
  0 siblings, 11 replies; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

IBM POWER8 NVLink systems contain the usual Tesla K40-ish GPUs but also
a couple of really fast links between the GPU and the CPU. These links
are exposed to userspace by the OPAL firmware as PCI bridges.
In order to make these links work when the GPU is passed through to
a guest, these bridges need to be passed through as well; otherwise
performance will degrade. More details are in patch 11/11.

This reworks the existing NPU support in the powernv platform and adds
VFIO support on top of that.

v4 has a new patch "powerpc/powernv/npu: Add set/unset window helpers"
and a bunch of cleanups.

"vfio_pci: Test for extended capabilities if config space > 256 bytes" is
included here if anyone decides to test the patchset which will crash
without it.

This was tested on a POWER8NVL platform (PVR 0x004c0100).


Please comment. Thanks.

Alex, I guess we will need your Acked-by on
"vfio/spapr: Relax the IOMMU compatibility check" to proceed.


Alexey Kardashevskiy (11):
  vfio_pci: Test for extended capabilities if config space > 256 bytes
  vfio/spapr: Relax the IOMMU compatibility check
  powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire
  powerpc/powernv: Define TCE Kill flags
  powerpc/powernv/npu: TCE Kill helpers cleanup
  powerpc/powernv/npu: Use the correct IOMMU page size
  powerpc/powernv/npu: Simplify DMA setup
  powerpc/powernv/ioda2: Export debug helper pe_level_printk()
  powerpc/powernv/npu: Add set/unset window helpers
  powerpc/powernv/npu: Rework TCE Kill handling
  powerpc/powernv/npu: Enable NVLink pass through

 arch/powerpc/platforms/powernv/npu-dma.c  | 287 ++++++++++++++++--------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 224 +++++++++++++++--------
 arch/powerpc/platforms/powernv/pci.h      |  31 ++--
 drivers/vfio/pci/vfio_pci_config.c        |  17 +-
 drivers/vfio/vfio_iommu_spapr_tce.c       |   3 +-
 5 files changed, 327 insertions(+), 235 deletions(-)

-- 
2.5.0.rc3


* [PATCH kernel v4 01/11] vfio_pci: Test for extended capabilities if config space > 256 bytes
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-04-29 15:42   ` Alex Williamson
  2016-04-29  8:55 ` [PATCH kernel v4 02/11] vfio/spapr: Relax the IOMMU compatibility check Alexey Kardashevskiy
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

The PCI Express spec says that reading 4 bytes at offset 100h should
return zero if there are no extended capabilities, so VFIO reads this
dword to find out whether extended capabilities are present.

However it is not always possible to access the extended config space,
so the generic PCI code in pci_cfg_space_size_ext() checks whether
pci_read_config_dword() can read beyond 100h; if that check fails, it
sets the config space size to 100h.

VFIO does its own extended capabilities check by reading at offset 100h,
which may produce 0xffffffff. VFIO treats this as a sign that the
extended config space is present and calls vfio_ecap_init(), which fails
to parse the capabilities (which is expected) but, right before exiting,
writes zero at offset 100h. That offset is beyond the buffer allocated
for vdev->vconfig (which is 256 bytes), leading to random memory
corruption.

This makes VFIO check for extended capabilities only if the discovered
config space size is more than 256 bytes.
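
For illustration, a minimal sketch of the failure path on a device whose
config space is only 256 bytes (not the actual VFIO code, just the flow
described above):

	u32 dword;

	/* On a 256-byte config space this read returns all ones */
	pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &dword);
	vdev->extended_caps = (dword != 0);	/* 0xffffffff -> wrongly "true" */

	if (vdev->extended_caps)
		vfio_ecap_init(vdev);	/* fails to parse, then stores 0 at
					 * vconfig + 0x100, past the 256-byte
					 * vdev->vconfig buffer */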

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v2:
* instead of checking for 0xffffffff, this only does the check if
device's config size is big enough
---
 drivers/vfio/pci/vfio_pci_config.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
index 142c533..d0c4358 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++ b/drivers/vfio/pci/vfio_pci_config.c
@@ -1124,9 +1124,12 @@ static int vfio_cap_len(struct vfio_pci_device *vdev, u8 cap, u8 pos)
 			return pcibios_err_to_errno(ret);
 
 		if (PCI_X_CMD_VERSION(word)) {
-			/* Test for extended capabilities */
-			pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &dword);
-			vdev->extended_caps = (dword != 0);
+			if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) {
+				/* Test for extended capabilities */
+				pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE,
+						&dword);
+				vdev->extended_caps = (dword != 0);
+			}
 			return PCI_CAP_PCIX_SIZEOF_V2;
 		} else
 			return PCI_CAP_PCIX_SIZEOF_V0;
@@ -1138,9 +1141,11 @@ static int vfio_cap_len(struct vfio_pci_device *vdev, u8 cap, u8 pos)
 
 		return byte;
 	case PCI_CAP_ID_EXP:
-		/* Test for extended capabilities */
-		pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &dword);
-		vdev->extended_caps = (dword != 0);
+		if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) {
+			/* Test for extended capabilities */
+			pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &dword);
+			vdev->extended_caps = dword != 0;
+		}
 
 		/* length based on version */
 		if ((pcie_caps_reg(pdev) & PCI_EXP_FLAGS_VERS) == 1)
-- 
2.5.0.rc3


* [PATCH kernel v4 02/11] vfio/spapr: Relax the IOMMU compatibility check
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 01/11] vfio_pci: Test for extended capabilities if config space > 256 bytes Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-04-29 15:41   ` Alex Williamson
  2016-05-10 21:48   ` [kernel,v4,02/11] " Michael Ellerman
  2016-04-29  8:55 ` [PATCH kernel v4 03/11] powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire Alexey Kardashevskiy
                   ` (8 subsequent siblings)
  10 siblings, 2 replies; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

We are going to have multiple different types of PHB on the same system
with POWER8 + NVLink, and these PHBs will have different IOMMU ops.
However we only really care about one callback - create_table - so we
can relax the compatibility check here.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 drivers/vfio/vfio_iommu_spapr_tce.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 0582b72..3054e3f 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -1188,7 +1188,8 @@ static int tce_iommu_attach_group(void *iommu_data,
 			goto unlock_exit;
 		}
 		table_group_tmp = iommu_group_get_iommudata(tcegrp->grp);
-		if (table_group_tmp->ops != table_group->ops) {
+		if (table_group_tmp->ops->create_table !=
+				table_group->ops->create_table) {
 			pr_warn("tce_vfio: Group %d is incompatible with group %d\n",
 					iommu_group_id(iommu_group),
 					iommu_group_id(tcegrp->grp));
-- 
2.5.0.rc3


* [PATCH kernel v4 03/11] powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 01/11] vfio_pci: Test for extended capabilities if config space > 256 bytes Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 02/11] vfio/spapr: Relax the IOMMU compatibility check Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 04/11] powerpc/powernv: Define TCE Kill flags Alexey Kardashevskiy
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

Since pnv_pci_ioda2_tce_invalidate_entire() actually invalidates TCEs
for a specific PE rather than the entire cache, rename it to
pnv_pci_ioda2_tce_invalidate_pe(). A later patch will add a proper
pnv_pci_ioda2_tce_invalidate_entire().

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/platforms/powernv/pci-ioda.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index c5baaf3..ca307b6 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1811,7 +1811,7 @@ static struct iommu_table_ops pnv_ioda1_iommu_ops = {
 	.get = pnv_tce_get,
 };
 
-static inline void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_ioda_pe *pe)
+static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 {
 	/* 01xb - invalidate TCEs that match the specified PE# */
 	unsigned long val = (0x4ull << 60) | (pe->pe_number & 0xFF);
@@ -2075,7 +2075,7 @@ static long pnv_pci_ioda2_set_window(struct iommu_table_group *table_group,
 
 	pnv_pci_link_table_and_group(phb->hose->node, num,
 			tbl, &pe->table_group);
-	pnv_pci_ioda2_tce_invalidate_entire(pe);
+	pnv_pci_ioda2_tce_invalidate_pe(pe);
 
 	return 0;
 }
@@ -2219,7 +2219,7 @@ static long pnv_pci_ioda2_unset_window(struct iommu_table_group *table_group,
 	if (ret)
 		pe_warn(pe, "Unmapping failed, ret = %ld\n", ret);
 	else
-		pnv_pci_ioda2_tce_invalidate_entire(pe);
+		pnv_pci_ioda2_tce_invalidate_pe(pe);
 
 	pnv_pci_unlink_table_and_group(table_group->tables[num], table_group);
 
-- 
2.5.0.rc3


* [PATCH kernel v4 04/11] powerpc/powernv: Define TCE Kill flags
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (2 preceding siblings ...)
  2016-04-29  8:55 ` [PATCH kernel v4 03/11] powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 05/11] powerpc/powernv/npu: TCE Kill helpers cleanup Alexey Kardashevskiy
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

This replaces the magic constants used for the IODA2 TCE Kill register
with macros.
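
For reference, assuming the usual asm/bitops.h definition of PPC_BIT()
(IBM big-endian bit numbering, PPC_BIT(n) == 1UL << (63 - n)), the new
macros expand to the same values as the old magic constants:

	#define TCE_KILL_INVAL_PE	PPC_BIT(1)	/* 1UL << 62 == 0x4ull << 60 */
	#define TCE_KILL_INVAL_TCE	PPC_BIT(2)	/* 1UL << 61 == 0x2ull << 60 */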

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/platforms/powernv/pci-ioda.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index ca307b6..03be25d 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1811,10 +1811,13 @@ static struct iommu_table_ops pnv_ioda1_iommu_ops = {
 	.get = pnv_tce_get,
 };
 
+#define TCE_KILL_INVAL_PE   PPC_BIT(1)
+#define TCE_KILL_INVAL_TCE  PPC_BIT(2)
+
 static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 {
 	/* 01xb - invalidate TCEs that match the specified PE# */
-	unsigned long val = (0x4ull << 60) | (pe->pe_number & 0xFF);
+	unsigned long val = TCE_KILL_INVAL_PE | (pe->pe_number & 0xFF);
 	struct pnv_phb *phb = pe->phb;
 	struct pnv_ioda_pe *npe;
 	int i;
@@ -1842,7 +1845,7 @@ static void pnv_pci_ioda2_do_tce_invalidate(unsigned pe_number, bool rm,
 	unsigned long start, end, inc;
 
 	/* We'll invalidate DMA address in PE scope */
-	start = 0x2ull << 60;
+	start = TCE_KILL_INVAL_TCE;
 	start |= (pe_number & 0xFF);
 	end = start;
 
-- 
2.5.0.rc3


* [PATCH kernel v4 05/11] powerpc/powernv/npu: TCE Kill helpers cleanup
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (3 preceding siblings ...)
  2016-04-29  8:55 ` [PATCH kernel v4 04/11] powerpc/powernv: Define TCE Kill flags Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 06/11] powerpc/powernv/npu: Use the correct IOMMU page size Alexey Kardashevskiy
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

The NPU PHB TCE Kill register is exactly the same as on the rest of
POWER8, so let's reuse the existing code for the NPU. The only missing
bit is a helper to reset the entire TCE cache, so this moves such
a helper out of the NPU code and renames it.

Since pnv_npu_tce_invalidate() really does invalidate the entire cache,
this uses pnv_pci_ioda2_tce_invalidate_entire() directly for the NPU.
This also adds an explicit comment about the workaround of invalidating
the entire NPU TCE cache.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
---
 arch/powerpc/platforms/powernv/npu-dma.c  | 41 -------------------------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 29 ++++++++++++++++++----
 arch/powerpc/platforms/powernv/pci.h      |  7 +-----
 3 files changed, 25 insertions(+), 52 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 7229acd..778570c 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -25,8 +25,6 @@
  * Other types of TCE cache invalidation are not functional in the
  * hardware.
  */
-#define TCE_KILL_INVAL_ALL PPC_BIT(0)
-
 static struct pci_dev *get_pci_dev(struct device_node *dn)
 {
 	return PCI_DN(dn)->pcidev;
@@ -161,45 +159,6 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
 	return pe;
 }
 
-void pnv_npu_tce_invalidate_entire(struct pnv_ioda_pe *npe)
-{
-	struct pnv_phb *phb = npe->phb;
-
-	if (WARN_ON(phb->type != PNV_PHB_NPU ||
-		    !phb->ioda.tce_inval_reg ||
-		    !(npe->flags & PNV_IODA_PE_DEV)))
-		return;
-
-	mb(); /* Ensure previous TCE table stores are visible */
-	__raw_writeq(cpu_to_be64(TCE_KILL_INVAL_ALL),
-		phb->ioda.tce_inval_reg);
-}
-
-void pnv_npu_tce_invalidate(struct pnv_ioda_pe *npe,
-				struct iommu_table *tbl,
-				unsigned long index,
-				unsigned long npages,
-				bool rm)
-{
-	struct pnv_phb *phb = npe->phb;
-
-	/* We can only invalidate the whole cache on NPU */
-	unsigned long val = TCE_KILL_INVAL_ALL;
-
-	if (WARN_ON(phb->type != PNV_PHB_NPU ||
-		    !phb->ioda.tce_inval_reg ||
-		    !(npe->flags & PNV_IODA_PE_DEV)))
-		return;
-
-	mb(); /* Ensure previous TCE table stores are visible */
-	if (rm)
-		__raw_rm_writeq(cpu_to_be64(val),
-		  (__be64 __iomem *) phb->ioda.tce_inval_reg_phys);
-	else
-		__raw_writeq(cpu_to_be64(val),
-			phb->ioda.tce_inval_reg);
-}
-
 void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
 {
 	struct pnv_ioda_pe *gpe;
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 03be25d..a67d51e 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1811,9 +1811,23 @@ static struct iommu_table_ops pnv_ioda1_iommu_ops = {
 	.get = pnv_tce_get,
 };
 
+#define TCE_KILL_INVAL_ALL  PPC_BIT(0)
 #define TCE_KILL_INVAL_PE   PPC_BIT(1)
 #define TCE_KILL_INVAL_TCE  PPC_BIT(2)
 
+void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm)
+{
+	const unsigned long val = TCE_KILL_INVAL_ALL;
+
+	mb(); /* Ensure previous TCE table stores are visible */
+	if (rm)
+		__raw_rm_writeq(cpu_to_be64(val),
+				(__be64 __iomem *)
+				phb->ioda.tce_inval_reg_phys);
+	else
+		__raw_writeq(cpu_to_be64(val), phb->ioda.tce_inval_reg);
+}
+
 static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 {
 	/* 01xb - invalidate TCEs that match the specified PE# */
@@ -1834,7 +1848,7 @@ static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 			if (!npe || npe->phb->type != PNV_PHB_NPU)
 				continue;
 
-			pnv_npu_tce_invalidate_entire(npe);
+			pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
 		}
 }
 
@@ -1883,14 +1897,19 @@ static void pnv_pci_ioda2_tce_invalidate(struct iommu_table *tbl,
 			index, npages);
 
 		if (pe->flags & PNV_IODA_PE_PEER)
-			/* Invalidate PEs using the same TCE table */
+			/*
+			 * The NVLink hardware does not support TCE kill
+			 * per TCE entry so we have to invalidate
+			 * the entire cache for it.
+			 */
 			for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
 				npe = pe->peers[i];
-				if (!npe || npe->phb->type != PNV_PHB_NPU)
+				if (!npe || npe->phb->type != PNV_PHB_NPU ||
+						!npe->phb->ioda.tce_inval_reg)
 					continue;
 
-				pnv_npu_tce_invalidate(npe, tbl, index,
-							npages, rm);
+				pnv_pci_ioda2_tce_invalidate_entire(npe->phb,
+						rm);
 			}
 	}
 }
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 3f814f3..0b89a4c 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -237,15 +237,10 @@ extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
 extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 
 /* Nvlink functions */
-extern void pnv_npu_tce_invalidate_entire(struct pnv_ioda_pe *npe);
-extern void pnv_npu_tce_invalidate(struct pnv_ioda_pe *npe,
-				       struct iommu_table *tbl,
-				       unsigned long index,
-				       unsigned long npages,
-				       bool rm);
 extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
 extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
 extern int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe, bool enabled);
 extern int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask);
+extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 
 #endif /* __POWERNV_PCI_H */
-- 
2.5.0.rc3


* [PATCH kernel v4 06/11] powerpc/powernv/npu: Use the correct IOMMU page size
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (4 preceding siblings ...)
  2016-04-29  8:55 ` [PATCH kernel v4 05/11] powerpc/powernv/npu: TCE Kill helpers cleanup Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 07/11] powerpc/powernv/npu: Simplify DMA setup Alexey Kardashevskiy
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

This uses the page size from the iommu_table instead of the hard-coded
4K. This should cause no change in behavior.

While we are here, move bits around to prepare for further rework which
will define and use iommu_table_group_ops.
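
A quick sanity check of why this is a no-op today (a sketch; the macro
expansion below is my reading of asm/iommu.h, not something the patch
states):

	/*
	 * IOMMU_PAGE_SIZE(tbl) is 1UL << tbl->it_page_shift, so for the
	 * current 4K tables nothing changes:
	 *
	 *	tbl->it_page_shift == 12  ->  IOMMU_PAGE_SIZE(tbl) == 0x1000
	 *
	 * The window length passed to OPAL stays tbl->it_size << 3, i.e.
	 * the number of TCEs times 8 bytes per TCE.
	 */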

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
---
 arch/powerpc/platforms/powernv/npu-dma.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 778570c..5bd5fee 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -204,8 +204,7 @@ static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
 	struct pnv_phb *phb = npe->phb;
 	struct pci_dev *gpdev;
 	struct pnv_ioda_pe *gpe;
-	void *addr;
-	unsigned int size;
+	struct iommu_table *tbl;
 	int64_t rc;
 
 	/*
@@ -219,11 +218,11 @@ static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
 	if (!gpe)
 		return;
 
-	addr = (void *)gpe->table_group.tables[0]->it_base;
-	size = gpe->table_group.tables[0]->it_size << 3;
+	tbl = gpe->table_group.tables[0];
 	rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
-					npe->pe_number, 1, __pa(addr),
-					size, 0x1000);
+					npe->pe_number, 1, __pa(tbl->it_base),
+					tbl->it_size << 3,
+					IOMMU_PAGE_SIZE(tbl));
 	if (rc != OPAL_SUCCESS)
 		pr_warn("%s: Error %lld setting DMA window on PHB#%d-PE#%d\n",
 			__func__, rc, phb->hose->global_number, npe->pe_number);
-- 
2.5.0.rc3


* [PATCH kernel v4 07/11] powerpc/powernv/npu: Simplify DMA setup
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (5 preceding siblings ...)
  2016-04-29  8:55 ` [PATCH kernel v4 06/11] powerpc/powernv/npu: Use the correct IOMMU page size Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 08/11] powerpc/powernv/ioda2: Export debug helper pe_level_printk() Alexey Kardashevskiy
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

NPU devices are emulated in firmware and are mainly used for NVLink
training; there is one NPU device per hardware link. Their DMA/TCE setup
must match the GPU, which is connected via both PCIe and NVLink, so any
change to the DMA/TCE setup of the GPU PCIe device needs to be
propagated to the NVLink device: this is what device drivers expect and
it does not make much sense to do anything else.

This makes the NPU DMA setup explicit.
pnv_npu_ioda_controller_ops::pnv_npu_dma_set_mask is moved to
pci-ioda.c, made static and now only prints a warning, as
dma_set_mask() should never be called on an NPU device - in any case it
would not configure the GPU - so we make this explicit.

Instead of using PNV_IODA_PE_PEER and peers[] (which the next patch will
remove), we check every PCI device for corresponding NVLink devices.
If there are any, we propagate bypass mode to the NPU devices just found
by calling the setup helper directly (which takes @bypass), avoiding the
need to guess (i.e. calculate from the DMA mask) whether bypass is
required on the NPU devices. Since DMA setup happens on very rare
occasions, this will not slow down booting or VFIO start/stop much.

This renames pnv_npu_disable_bypass to pnv_npu_dma_set_32 to make it
clearer what the function really does, which is programming the 32bit
table address into the TVT ("disabling bypass" means writing zeroes to
the TVT).

This removes pnv_npu_dma_set_bypass() from pnv_npu_ioda_fixup() as the
DMA configuration of the NPU does not matter until dma_set_mask() is
called on the GPU, which will then do the NPU DMA configuration.

This removes the phb->dma_dev_setup initialization for the NPU as
pnv_pci_ioda_dma_dev_setup() is a no-op for it anyway.

This stops using npe->tce_bypass_base as it never changes and values
other than zero are not supported.
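
For orientation, a rough sketch of the resulting call flow (my reading
of the diff below, not wording from the original changelog):

	/*
	 * dma_set_mask(&gpdev->dev, mask)                   GPU PCIe device
	 *   -> pnv_pci_ioda_dma_set_mask(gpdev, mask)       IODA2 PHB hook
	 *        bypass = (mask covers the whole bypass window)
	 *        ... configure the GPU PE itself ...
	 *        -> pnv_npu_try_dma_set_bypass(gpdev, bypass)
	 *             for each NPU device linked to this GPU:
	 *                 bypass ? pnv_npu_dma_set_bypass(npe)   64bit bypass
	 *                        : pnv_npu_dma_set_32(npe)       32bit window
	 */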

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
---
Changes:
v2:
* changed first paragraph of the commit log from Alistair comment
* removed npe->tce_bypass_base
---
 arch/powerpc/platforms/powernv/npu-dma.c  | 89 ++++++++++++++-----------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 30 +++++------
 arch/powerpc/platforms/powernv/pci.h      |  3 +-
 3 files changed, 53 insertions(+), 69 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 5bd5fee..bec9267 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -196,10 +196,9 @@ void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
 }
 
 /*
- * For the NPU we want to point the TCE table at the same table as the
- * real PCI device.
+ * Enables 32 bit DMA on NPU.
  */
-static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
+static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
 {
 	struct pnv_phb *phb = npe->phb;
 	struct pci_dev *gpdev;
@@ -235,72 +234,62 @@ static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
 }
 
 /*
- * Enable/disable bypass mode on the NPU. The NPU only supports one
+ * Enables bypass mode on the NPU. The NPU only supports one
  * window per link, so bypass needs to be explicitly enabled or
  * disabled. Unlike for a PHB3 bypass and non-bypass modes can't be
  * active at the same time.
  */
-int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe, bool enable)
+static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe)
 {
 	struct pnv_phb *phb = npe->phb;
 	int64_t rc = 0;
+	phys_addr_t top = memblock_end_of_DRAM();
 
 	if (phb->type != PNV_PHB_NPU || !npe->pdev)
 		return -EINVAL;
 
-	if (enable) {
-		/* Enable the bypass window */
-		phys_addr_t top = memblock_end_of_DRAM();
+	/* Enable the bypass window */
 
-		npe->tce_bypass_base = 0;
-		top = roundup_pow_of_two(top);
-		dev_info(&npe->pdev->dev, "Enabling bypass for PE %d\n",
-			 npe->pe_number);
-		rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
-					npe->pe_number, npe->pe_number,
-					npe->tce_bypass_base, top);
-	} else {
-		/*
-		 * Disable the bypass window by replacing it with the
-		 * TCE32 window.
-		 */
-		pnv_npu_disable_bypass(npe);
-	}
+	top = roundup_pow_of_two(top);
+	dev_info(&npe->pdev->dev, "Enabling bypass for PE %d\n",
+			npe->pe_number);
+	rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
+			npe->pe_number, npe->pe_number,
+			0 /* bypass base */, top);
 
 	return rc;
 }
 
-int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask)
+void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
 {
-	struct pci_controller *hose = pci_bus_to_host(npdev->bus);
-	struct pnv_phb *phb = hose->private_data;
-	struct pci_dn *pdn = pci_get_pdn(npdev);
-	struct pnv_ioda_pe *npe, *gpe;
-	struct pci_dev *gpdev;
-	uint64_t top;
-	bool bypass = false;
+	int i;
+	struct pnv_phb *phb;
+	struct pci_dn *pdn;
+	struct pnv_ioda_pe *npe;
+	struct pci_dev *npdev;
 
-	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
-		return -ENXIO;
+	for (i = 0; ; ++i) {
+		npdev = pnv_pci_get_npu_dev(gpdev, i);
 
-	/* We only do bypass if it's enabled on the linked device */
-	npe = &phb->ioda.pe_array[pdn->pe_number];
-	gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
-	if (!gpe)
-		return -ENODEV;
+		if (!npdev)
+			break;
 
-	if (gpe->tce_bypass_enabled) {
-		top = gpe->tce_bypass_base + memblock_end_of_DRAM() - 1;
-		bypass = (dma_mask >= top);
+		pdn = pci_get_pdn(npdev);
+		if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
+			return;
+
+		phb = pci_bus_to_host(npdev->bus)->private_data;
+
+		/* We only do bypass if it's enabled on the linked device */
+		npe = &phb->ioda.pe_array[pdn->pe_number];
+
+		if (bypass) {
+			dev_info(&npdev->dev,
+					"Using 64-bit DMA iommu bypass\n");
+			pnv_npu_dma_set_bypass(npe);
+		} else {
+			dev_info(&npdev->dev, "Using 32-bit DMA via iommu\n");
+			pnv_npu_dma_set_32(npe);
+		}
 	}
-
-	if (bypass)
-		dev_info(&npdev->dev, "Using 64-bit DMA iommu bypass\n");
-	else
-		dev_info(&npdev->dev, "Using 32-bit DMA via iommu\n");
-
-	pnv_npu_dma_set_bypass(npe, bypass);
-	*npdev->dev.dma_mask = dma_mask;
-
-	return 0;
 }
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index a67d51e..272521e 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1640,8 +1640,6 @@ static int pnv_pci_ioda_dma_set_mask(struct pci_dev *pdev, u64 dma_mask)
 	struct pnv_ioda_pe *pe;
 	uint64_t top;
 	bool bypass = false;
-	struct pci_dev *linked_npu_dev;
-	int i;
 
 	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
 		return -ENODEV;;
@@ -1662,15 +1660,7 @@ static int pnv_pci_ioda_dma_set_mask(struct pci_dev *pdev, u64 dma_mask)
 	*pdev->dev.dma_mask = dma_mask;
 
 	/* Update peer npu devices */
-	if (pe->flags & PNV_IODA_PE_PEER)
-		for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
-			if (!pe->peers[i])
-				continue;
-
-			linked_npu_dev = pe->peers[i]->pdev;
-			if (dma_get_mask(&linked_npu_dev->dev) != dma_mask)
-				dma_set_mask(&linked_npu_dev->dev, dma_mask);
-		}
+	pnv_npu_try_dma_set_bypass(pdev, bypass);
 
 	return 0;
 }
@@ -3094,7 +3084,6 @@ static void pnv_npu_ioda_fixup(void)
 			enable_bypass = dma_get_mask(&pe->pdev->dev) ==
 				DMA_BIT_MASK(64);
 			pnv_npu_init_dma_pe(pe);
-			pnv_npu_dma_set_bypass(pe, enable_bypass);
 		}
 	}
 }
@@ -3246,6 +3235,14 @@ static const struct pci_controller_ops pnv_pci_ioda_controller_ops = {
        .shutdown = pnv_pci_ioda_shutdown,
 };
 
+static int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask)
+{
+	dev_err_once(&npdev->dev,
+			"%s operation unsupported for NVLink devices\n",
+			__func__);
+	return -EPERM;
+}
+
 static const struct pci_controller_ops pnv_npu_ioda_controller_ops = {
 	.dma_dev_setup = pnv_pci_dma_dev_setup,
 #ifdef CONFIG_PCI_MSI
@@ -3402,9 +3399,6 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
 	/* Setup RID -> PE mapping function */
 	phb->bdfn_to_pe = pnv_ioda_bdfn_to_pe;
 
-	/* Setup TCEs */
-	phb->dma_dev_setup = pnv_pci_ioda_dma_dev_setup;
-
 	/* Setup MSI support */
 	pnv_pci_init_ioda_msis(phb);
 
@@ -3417,10 +3411,12 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
 	 */
 	ppc_md.pcibios_fixup = pnv_pci_ioda_fixup;
 
-	if (phb->type == PNV_PHB_NPU)
+	if (phb->type == PNV_PHB_NPU) {
 		hose->controller_ops = pnv_npu_ioda_controller_ops;
-	else
+	} else {
+		phb->dma_dev_setup = pnv_pci_ioda_dma_dev_setup;
 		hose->controller_ops = pnv_pci_ioda_controller_ops;
+	}
 
 #ifdef CONFIG_PCI_IOV
 	ppc_md.pcibios_fixup_sriov = pnv_pci_ioda_fixup_iov_resources;
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 0b89a4c..d574a9d 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -239,8 +239,7 @@ extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 /* Nvlink functions */
 extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
 extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
-extern int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe, bool enabled);
-extern int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask);
+extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 
 #endif /* __POWERNV_PCI_H */
-- 
2.5.0.rc3


* [PATCH kernel v4 08/11] powerpc/powernv/ioda2: Export debug helper pe_level_printk()
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (6 preceding siblings ...)
  2016-04-29  8:55 ` [PATCH kernel v4 07/11] powerpc/powernv/npu: Simplify DMA setup Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-05-03  5:46   ` Alistair Popple
  2016-04-29  8:55 ` [PATCH kernel v4 09/11] powerpc/powernv/npu: Add set/unset window helpers Alexey Kardashevskiy
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

This exports the debugging helper pe_level_printk() and the
corresponding macros so they can be used in npu-dma.c.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 arch/powerpc/platforms/powernv/pci-ioda.c | 9 +--------
 arch/powerpc/platforms/powernv/pci.h      | 9 +++++++++
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 272521e..db7695f 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -56,7 +56,7 @@
 
 static void pnv_pci_ioda2_table_free_pages(struct iommu_table *tbl);
 
-static void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
+void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 			    const char *fmt, ...)
 {
 	struct va_format vaf;
@@ -87,13 +87,6 @@ static void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 	va_end(args);
 }
 
-#define pe_err(pe, fmt, ...)					\
-	pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
-#define pe_warn(pe, fmt, ...)					\
-	pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
-#define pe_info(pe, fmt, ...)					\
-	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
-
 static bool pnv_iommu_bypass_disabled __read_mostly;
 
 static int __init iommu_setup(char *str)
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index d574a9d..485e5b1 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -236,6 +236,15 @@ extern void pnv_pci_dma_bus_setup(struct pci_bus *bus);
 extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
 extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 
+extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
+			    const char *fmt, ...);
+#define pe_err(pe, fmt, ...)					\
+	pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
+#define pe_warn(pe, fmt, ...)					\
+	pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
+#define pe_info(pe, fmt, ...)					\
+	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
+
 /* Nvlink functions */
 extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
 extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
-- 
2.5.0.rc3


* [PATCH kernel v4 09/11] powerpc/powernv/npu: Add set/unset window helpers
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (7 preceding siblings ...)
  2016-04-29  8:55 ` [PATCH kernel v4 08/11] powerpc/powernv/ioda2: Export debug helper pe_level_printk() Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-05-03  6:25   ` Alistair Popple
  2016-04-29  8:55 ` [PATCH kernel v4 10/11] powerpc/powernv/npu: Rework TCE Kill handling Alexey Kardashevskiy
  2016-04-29  8:55 ` [PATCH kernel v4 11/11] powerpc/powernv/npu: Enable NVLink pass through Alexey Kardashevskiy
  10 siblings, 1 reply; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

The upcoming NVLink passthrough support will require the NPU code to
cope with two DMA windows.

This adds a pnv_npu_set_window() helper which programs a 32bit window
into the hardware. It also adds multilevel TCE support.

This adds a pnv_npu_unset_window() helper which removes the DMA window
from the hardware. This does not make a difference now as the caller -
pnv_npu_dma_set_bypass() - enables bypass in the hardware anyway, but
the next patch will use it to manage TCE table lists for TCE Kill
handling.
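
As an aside, the arguments pnv_npu_set_window() passes to
opal_pci_map_pe_dma_window() decode roughly as follows (a sketch
inferred from the diff below; the 8-bytes-per-TCE detail is my own
annotation, not wording from the changelog):

	/*
	 * opal_pci_map_pe_dma_window(phb->opal_id,
	 *	npe->pe_number, npe->pe_number,
	 *	tbl->it_indirect_levels + 1,	number of TCE levels, 1 = flat
	 *	__pa(tbl->it_base),		physical address of the top level
	 *	size << 3,			level size in bytes, 8 per TCE
	 *	IOMMU_PAGE_SIZE(tbl));		IOMMU page size, e.g. 4K or 64K
	 */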

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 arch/powerpc/platforms/powernv/npu-dma.c | 65 +++++++++++++++++++++++++++-----
 1 file changed, 55 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index bec9267..800d70f 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -159,6 +159,56 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
 	return pe;
 }
 
+static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
+		struct iommu_table *tbl)
+{
+	struct pnv_phb *phb = npe->phb;
+	int64_t rc;
+	const unsigned long size = tbl->it_indirect_levels ?
+		tbl->it_level_size : tbl->it_size;
+	const __u64 start_addr = tbl->it_offset << tbl->it_page_shift;
+	const __u64 win_size = tbl->it_size << tbl->it_page_shift;
+
+	pe_info(npe, "Setting up window %llx..%llx pg=%lx\n",
+			start_addr, start_addr + win_size - 1,
+			IOMMU_PAGE_SIZE(tbl));
+
+	rc = opal_pci_map_pe_dma_window(phb->opal_id,
+			npe->pe_number,
+			npe->pe_number,
+			tbl->it_indirect_levels + 1,
+			__pa(tbl->it_base),
+			size << 3,
+			IOMMU_PAGE_SIZE(tbl));
+	if (rc) {
+		pe_err(npe, "Failed to configure TCE table, err %lld\n", rc);
+		return rc;
+	}
+	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
+
+	return 0;
+}
+
+static long pnv_npu_unset_window(struct pnv_ioda_pe *npe)
+{
+	struct pnv_phb *phb = npe->phb;
+	int64_t rc;
+
+	pe_info(npe, "Removing DMA window\n");
+
+	rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
+			npe->pe_number,
+			0/* levels */, 0/* table address */,
+			0/* table size */, 0/* page size */);
+	if (rc) {
+		pe_err(npe, "Unmapping failed, ret = %lld\n", rc);
+		return rc;
+	}
+	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
+
+	return 0;
+}
+
 void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
 {
 	struct pnv_ioda_pe *gpe;
@@ -200,10 +250,8 @@ void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
  */
 static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
 {
-	struct pnv_phb *phb = npe->phb;
 	struct pci_dev *gpdev;
 	struct pnv_ioda_pe *gpe;
-	struct iommu_table *tbl;
 	int64_t rc;
 
 	/*
@@ -217,14 +265,7 @@ static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
 	if (!gpe)
 		return;
 
-	tbl = gpe->table_group.tables[0];
-	rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
-					npe->pe_number, 1, __pa(tbl->it_base),
-					tbl->it_size << 3,
-					IOMMU_PAGE_SIZE(tbl));
-	if (rc != OPAL_SUCCESS)
-		pr_warn("%s: Error %lld setting DMA window on PHB#%d-PE#%d\n",
-			__func__, rc, phb->hose->global_number, npe->pe_number);
+	rc = pnv_npu_set_window(npe, gpe->table_group.tables[0]);
 
 	/*
 	 * We don't initialise npu_pe->tce32_table as we always use
@@ -248,6 +289,10 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe)
 	if (phb->type != PNV_PHB_NPU || !npe->pdev)
 		return -EINVAL;
 
+	rc = pnv_npu_unset_window(npe);
+	if (rc != OPAL_SUCCESS)
+		return rc;
+
 	/* Enable the bypass window */
 
 	top = roundup_pow_of_two(top);
-- 
2.5.0.rc3


* [PATCH kernel v4 10/11] powerpc/powernv/npu: Rework TCE Kill handling
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (8 preceding siblings ...)
  2016-04-29  8:55 ` [PATCH kernel v4 09/11] powerpc/powernv/npu: Add set/unset window helpers Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-05-03  7:37   ` Alistair Popple
  2016-04-29  8:55 ` [PATCH kernel v4 11/11] powerpc/powernv/npu: Enable NVLink pass through Alexey Kardashevskiy
  10 siblings, 1 reply; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

The pnv_ioda_pe struct keeps an array of peers. At the moment it is only
used to link the GPU and the NPU, for two purposes:

1. Accessing the NPU quickly when configuring DMA for the GPU - this was
addressed in the previous patch by removing that use, as DMA setup is
not something the kernel does constantly.

2. Invalidating the TCE cache for the NPU when it is invalidated for
the GPU. The GPU and the NPU are in different PEs. There is already
a mechanism for attaching multiple iommu_table_group structs to the same
iommu_table (used for VFIO), so this patch reuses it here.

This gets rid of the peers[] array and the PNV_IODA_PE_PEER flag as they
are not needed anymore.

While we are here, add TCE cache invalidation after enabling bypass.
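
In other words (a sketch of the mechanism as implemented in the diff
below), once the NPU's table_group is linked to the GPU's iommu_table,
a single TCE invalidation walks both PEs:

	/*
	 * tbl->it_group_list:  GPU PE table_group  ->  NPU PE table_group
	 *
	 * pnv_pci_ioda2_tce_invalidate(tbl, index, npages, rm)
	 *	for each table_group link on tbl:
	 *		if (pe->phb->type == PNV_PHB_NPU)
	 *			pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
	 *		else
	 *			pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, ...);
	 */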

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* reworked as "powerpc/powernv/npu: Add set/unset window helpers" has been
added
---
 arch/powerpc/platforms/powernv/npu-dma.c  | 69 +++++++++----------------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 57 ++++---------------------
 arch/powerpc/platforms/powernv/pci.h      |  6 ---
 3 files changed, 26 insertions(+), 106 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 800d70f..cb2d1da 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -136,22 +136,17 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
 	struct pnv_ioda_pe *pe;
 	struct pci_dn *pdn;
 
-	if (npe->flags & PNV_IODA_PE_PEER) {
-		pe = npe->peers[0];
-		pdev = pe->pdev;
-	} else {
-		pdev = pnv_pci_get_gpu_dev(npe->pdev);
-		if (!pdev)
-			return NULL;
+	pdev = pnv_pci_get_gpu_dev(npe->pdev);
+	if (!pdev)
+		return NULL;
 
-		pdn = pci_get_pdn(pdev);
-		if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
-			return NULL;
+	pdn = pci_get_pdn(pdev);
+	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
+		return NULL;
 
-		hose = pci_bus_to_host(pdev->bus);
-		phb = hose->private_data;
-		pe = &phb->ioda.pe_array[pdn->pe_number];
-	}
+	hose = pci_bus_to_host(pdev->bus);
+	phb = hose->private_data;
+	pe = &phb->ioda.pe_array[pdn->pe_number];
 
 	if (gpdev)
 		*gpdev = pdev;
@@ -186,6 +181,10 @@ static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
 	}
 	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
 
+	/* Add the table to the list so its TCE cache will get invalidated */
+	pnv_pci_link_table_and_group(phb->hose->node, 0,
+			tbl, &npe->table_group);
+
 	return 0;
 }
 
@@ -206,45 +205,12 @@ static long pnv_npu_unset_window(struct pnv_ioda_pe *npe)
 	}
 	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
 
+	pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
+			&npe->table_group);
+
 	return 0;
 }
 
-void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
-{
-	struct pnv_ioda_pe *gpe;
-	struct pci_dev *gpdev;
-	int i, avail = -1;
-
-	if (!npe->pdev || !(npe->flags & PNV_IODA_PE_DEV))
-		return;
-
-	gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
-	if (!gpe)
-		return;
-
-	for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
-		/* Nothing to do if the PE is already connected. */
-		if (gpe->peers[i] == npe)
-			return;
-
-		if (!gpe->peers[i])
-			avail = i;
-	}
-
-	if (WARN_ON(avail < 0))
-		return;
-
-	gpe->peers[avail] = npe;
-	gpe->flags |= PNV_IODA_PE_PEER;
-
-	/*
-	 * We assume that the NPU devices only have a single peer PE
-	 * (the GPU PCIe device PE).
-	 */
-	npe->peers[0] = gpe;
-	npe->flags |= PNV_IODA_PE_PEER;
-}
-
 /*
  * Enables 32 bit DMA on NPU.
  */
@@ -302,6 +268,9 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe)
 			npe->pe_number, npe->pe_number,
 			0 /* bypass base */, top);
 
+	if (rc == OPAL_SUCCESS)
+		pnv_pci_ioda2_tce_invalidate_entire(phb, false);
+
 	return rc;
 }
 
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index db7695f..922c74c 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1816,23 +1816,12 @@ static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 	/* 01xb - invalidate TCEs that match the specified PE# */
 	unsigned long val = TCE_KILL_INVAL_PE | (pe->pe_number & 0xFF);
 	struct pnv_phb *phb = pe->phb;
-	struct pnv_ioda_pe *npe;
-	int i;
 
 	if (!phb->ioda.tce_inval_reg)
 		return;
 
 	mb(); /* Ensure above stores are visible */
 	__raw_writeq(cpu_to_be64(val), phb->ioda.tce_inval_reg);
-
-	if (pe->flags & PNV_IODA_PE_PEER)
-		for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
-			npe = pe->peers[i];
-			if (!npe || npe->phb->type != PNV_PHB_NPU)
-				continue;
-
-			pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
-		}
 }
 
 static void pnv_pci_ioda2_do_tce_invalidate(unsigned pe_number, bool rm,
@@ -1867,33 +1856,24 @@ static void pnv_pci_ioda2_tce_invalidate(struct iommu_table *tbl,
 	struct iommu_table_group_link *tgl;
 
 	list_for_each_entry_rcu(tgl, &tbl->it_group_list, next) {
-		struct pnv_ioda_pe *npe;
 		struct pnv_ioda_pe *pe = container_of(tgl->table_group,
 				struct pnv_ioda_pe, table_group);
 		__be64 __iomem *invalidate = rm ?
 			(__be64 __iomem *)pe->phb->ioda.tce_inval_reg_phys :
 			pe->phb->ioda.tce_inval_reg;
-		int i;
 
-		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
-			invalidate, tbl->it_page_shift,
-			index, npages);
-
-		if (pe->flags & PNV_IODA_PE_PEER)
+		if (pe->phb->type == PNV_PHB_NPU) {
 			/*
 			 * The NVLink hardware does not support TCE kill
 			 * per TCE entry so we have to invalidate
 			 * the entire cache for it.
 			 */
-			for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
-				npe = pe->peers[i];
-				if (!npe || npe->phb->type != PNV_PHB_NPU ||
-						!npe->phb->ioda.tce_inval_reg)
-					continue;
-
-				pnv_pci_ioda2_tce_invalidate_entire(npe->phb,
-						rm);
-			}
+			pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
+			continue;
+		}
+		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
+			invalidate, tbl->it_page_shift,
+			index, npages);
 	}
 }
 
@@ -3061,26 +3041,6 @@ static void pnv_pci_ioda_create_dbgfs(void)
 #endif /* CONFIG_DEBUG_FS */
 }
 
-static void pnv_npu_ioda_fixup(void)
-{
-	bool enable_bypass;
-	struct pci_controller *hose, *tmp;
-	struct pnv_phb *phb;
-	struct pnv_ioda_pe *pe;
-
-	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
-		phb = hose->private_data;
-		if (phb->type != PNV_PHB_NPU)
-			continue;
-
-		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
-			enable_bypass = dma_get_mask(&pe->pdev->dev) ==
-				DMA_BIT_MASK(64);
-			pnv_npu_init_dma_pe(pe);
-		}
-	}
-}
-
 static void pnv_pci_ioda_fixup(void)
 {
 	pnv_pci_ioda_setup_PEs();
@@ -3093,9 +3053,6 @@ static void pnv_pci_ioda_fixup(void)
 	eeh_init();
 	eeh_addr_cache_build();
 #endif
-
-	/* Link NPU IODA tables to their PCI devices. */
-	pnv_npu_ioda_fixup();
 }
 
 /*
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 485e5b1..e117bd8 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -24,7 +24,6 @@ enum pnv_phb_model {
 #define PNV_IODA_PE_MASTER	(1 << 3)	/* Master PE in compound case	*/
 #define PNV_IODA_PE_SLAVE	(1 << 4)	/* Slave PE in compound case	*/
 #define PNV_IODA_PE_VF		(1 << 5)	/* PE for one VF 		*/
-#define PNV_IODA_PE_PEER	(1 << 6)	/* PE has peers			*/
 
 /* Data associated with a PE, including IOMMU tracking etc.. */
 struct pnv_phb;
@@ -32,9 +31,6 @@ struct pnv_ioda_pe {
 	unsigned long		flags;
 	struct pnv_phb		*phb;
 
-#define PNV_IODA_MAX_PEER_PES	8
-	struct pnv_ioda_pe	*peers[PNV_IODA_MAX_PEER_PES];
-
 	/* A PE can be associated with a single device or an
 	 * entire bus (& children). In the former case, pdev
 	 * is populated, in the later case, pbus is.
@@ -246,8 +242,6 @@ extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
 
 /* Nvlink functions */
-extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
-extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 
-- 
2.5.0.rc3


* [PATCH kernel v4 11/11] powerpc/powernv/npu: Enable NVLink pass through
  2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (9 preceding siblings ...)
  2016-04-29  8:55 ` [PATCH kernel v4 10/11] powerpc/powernv/npu: Rework TCE Kill handling Alexey Kardashevskiy
@ 2016-04-29  8:55 ` Alexey Kardashevskiy
  2016-05-03 14:08   ` Alistair Popple
  10 siblings, 1 reply; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-29  8:55 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Dan Carpenter, Daniel Axtens,
	David Gibson, Gavin Shan, Russell Currey, kvm, linux-kernel

IBM POWER8 NVLink systems come with Tesla K40-ish GPUs, each of which
also has a couple of high-speed links (NVLink). The interface to these
links is exposed as an emulated PCI bridge which is included in the same
IOMMU group as the corresponding GPU.

In the kernel, NPUs get a separate PHB of the PNV_PHB_NPU type and a PE
which behave pretty much like a standard IODA2 PHB, except that the NPU
PHB has just a single TVE in the hardware, which means it can have
a 32bit window, a 64bit window or DMA bypass, but never two of these at
the same time.

In order to make these links work when the GPU is passed through to
a guest, these bridges need to be passed through as well; otherwise
performance will degrade.

This implements and exports an API to manage the NPU state in regard to
VFIO; it replicates iommu_table_group_ops.

This defines a new pnv_pci_ioda2_npu_ops which is assigned to the IODA2
bridge if there are NPUs for a GPU on the bridge. The new callbacks call
the default IODA2 callbacks plus the new NPU API. This adds
a gpe_table_group_to_npe() helper to find the NPU PE for an IODA2
table_group; it is not expected to fail as the helper is only called
from pnv_pci_ioda2_npu_ops.

This does not define an NPU-specific .release_ownership(), so after VFIO
is finished, DMA on the NPU is disabled. This is ok as the NVIDIA driver
sets the DMA mask when probing, which enables 32bit or 64bit DMA on
the NPU.

This adds a pnv_pci_npu_setup_iommu() helper which adds NPUs to the GPU
group if any are found. The helper uses other helpers to look for the
"ibm,gpu" property in the device tree, which is a phandle of the
corresponding GPU.

This adds an additional loop over PEs in pnv_ioda_setup_dma() as the
main loop skips NPU PEs because they do not have 32bit DMA segments.

As pnv_npu_set_window() and pnv_npu_unset_window() are now used by the
new IODA2-NPU IOMMU group, this makes those helpers public and adds
a DMA window number parameter.
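
For illustration, a minimal sketch of how an NPU device could be matched
to its GPU through the "ibm,gpu" phandle (a hypothetical helper written
for this note; the series itself relies on the existing
pnv_pci_get_gpu_dev()):

	/* Hypothetical illustration only - not part of the patch */
	static struct pci_dev *example_npu_to_gpu(struct pci_dev *npdev)
	{
		struct device_node *dn = pci_device_to_OF_node(npdev);
		struct device_node *gpudn;
		struct pci_dev *gpdev;
		u32 gpu_phandle;

		/* "ibm,gpu" holds the phandle of the GPU behind this link */
		if (!dn || of_property_read_u32(dn, "ibm,gpu", &gpu_phandle))
			return NULL;

		gpudn = of_find_node_by_phandle(gpu_phandle);
		if (!gpudn)
			return NULL;

		gpdev = PCI_DN(gpudn) ? PCI_DN(gpudn)->pcidev : NULL;
		of_node_put(gpudn);

		return gpdev;
	}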

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* reused pnv_npu_set_window/pnv_npu_unset_window where possible
* added comments, changed commit log

v3:
* moved NPU-to-GPU IOMMU grouping later after all PHBs are discovered
* removed hack to make iommu_add_device() work, iommu_group_add_device()
is used instead
* cleanup in gpe_table_group_to_npe_cb()

v2:
* reimplemented to support NPU + GPU in the same group
* merged "powerpc/powernv/npu: Add NPU devices to IOMMU group" and
"powerpc/powernv/npu: Enable passing through via VFIO" into this patch
---
 arch/powerpc/platforms/powernv/npu-dma.c  |  64 +++++++++++++++++--
 arch/powerpc/platforms/powernv/pci-ioda.c | 102 ++++++++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/pci.h      |   6 ++
 3 files changed, 166 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index cb2d1da..0459e10 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -12,6 +12,7 @@
 #include <linux/export.h>
 #include <linux/pci.h>
 #include <linux/memblock.h>
+#include <linux/iommu.h>
 
 #include <asm/iommu.h>
 #include <asm/pnv-pci.h>
@@ -154,7 +155,7 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
 	return pe;
 }
 
-static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
+long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
 		struct iommu_table *tbl)
 {
 	struct pnv_phb *phb = npe->phb;
@@ -182,13 +183,13 @@ static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
 	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
 
 	/* Add the table to the list so its TCE cache will get invalidated */
-	pnv_pci_link_table_and_group(phb->hose->node, 0,
+	pnv_pci_link_table_and_group(phb->hose->node, num,
 			tbl, &npe->table_group);
 
 	return 0;
 }
 
-static long pnv_npu_unset_window(struct pnv_ioda_pe *npe)
+long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
 {
 	struct pnv_phb *phb = npe->phb;
 	int64_t rc;
@@ -205,7 +206,7 @@ static long pnv_npu_unset_window(struct pnv_ioda_pe *npe)
 	}
 	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
 
-	pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
+	pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
 			&npe->table_group);
 
 	return 0;
@@ -231,7 +232,7 @@ static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
 	if (!gpe)
 		return;
 
-	rc = pnv_npu_set_window(npe, gpe->table_group.tables[0]);
+	rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]);
 
 	/*
 	 * We don't initialise npu_pe->tce32_table as we always use
@@ -255,7 +256,7 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe)
 	if (phb->type != PNV_PHB_NPU || !npe->pdev)
 		return -EINVAL;
 
-	rc = pnv_npu_unset_window(npe);
+	rc = pnv_npu_unset_window(npe, 0);
 	if (rc != OPAL_SUCCESS)
 		return rc;
 
@@ -307,3 +308,54 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
 		}
 	}
 }
+
+/* Switch ownership from platform code to external user (e.g. VFIO) */
+void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
+{
+	struct pnv_phb *phb = npe->phb;
+	int64_t rc;
+
+	/*
+	 * Note: NPU has just a single TVE in the hardware which means that
+	 * while used by the kernel, it can have either 32bit window or
+	 * DMA bypass but never both. So we deconfigure 32bit window only
+	 * if it was enabled at the moment of ownership change.
+	 */
+	if (npe->table_group.tables[0]) {
+		pnv_npu_unset_window(npe, 0);
+		return;
+	}
+
+	/* Disable bypass */
+	rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
+			npe->pe_number, npe->pe_number,
+			0 /* bypass base */, 0);
+	if (rc) {
+		pe_err(npe, "Failed to disable bypass, err %lld\n", rc);
+		return;
+	}
+	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+}
+
+struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
+{
+	struct pnv_phb *phb = npe->phb;
+	struct pci_bus *pbus = phb->hose->bus;
+	struct pci_dev *npdev, *gpdev = NULL, *gptmp;
+	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
+
+	if (!gpe || !gpdev)
+		return NULL;
+
+	list_for_each_entry(npdev, &pbus->devices, bus_list) {
+		gptmp = pnv_pci_get_gpu_dev(npdev);
+
+		if (gptmp != gpdev)
+			continue;
+
+		pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev));
+		iommu_group_add_device(gpe->table_group.group, &npdev->dev);
+	}
+
+	return gpe;
+}
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 922c74c..feabe50 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -2273,6 +2273,90 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops = {
 	.take_ownership = pnv_ioda2_take_ownership,
 	.release_ownership = pnv_ioda2_release_ownership,
 };
+
+static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
+{
+	struct pci_controller *hose;
+	struct pnv_phb *phb;
+	struct pnv_ioda_pe **ptmppe = opaque;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	struct pci_dn *pdn = pci_get_pdn(pdev);
+
+	if (!pdn || pdn->pe_number == IODA_INVALID_PE)
+		return 0;
+
+	hose = pci_bus_to_host(pdev->bus);
+	phb = hose->private_data;
+	if (phb->type != PNV_PHB_NPU)
+		return 0;
+
+	*ptmppe = &phb->ioda.pe_array[pdn->pe_number];
+
+	return 1;
+}
+
+/*
+ * This returns the PE of the associated NPU.
+ * This assumes that the NPU is in the same IOMMU group as the GPU and that
+ * there are no other PEs in the group.
+ */
+static struct pnv_ioda_pe *gpe_table_group_to_npe(
+		struct iommu_table_group *table_group)
+{
+	struct pnv_ioda_pe *npe = NULL;
+	int ret = iommu_group_for_each_dev(table_group->group, &npe,
+			gpe_table_group_to_npe_cb);
+
+	BUG_ON(!ret || !npe);
+
+	return npe;
+}
+
+static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group *table_group,
+		int num, struct iommu_table *tbl)
+{
+	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
+
+	if (ret)
+		return ret;
+
+	ret = pnv_npu_set_window(gpe_table_group_to_npe(table_group), num, tbl);
+	if (ret)
+		pnv_pci_ioda2_unset_window(table_group, num);
+
+	return ret;
+}
+
+static long pnv_pci_ioda2_npu_unset_window(
+		struct iommu_table_group *table_group,
+		int num)
+{
+	long ret = pnv_pci_ioda2_unset_window(table_group, num);
+
+	if (ret)
+		return ret;
+
+	return pnv_npu_unset_window(gpe_table_group_to_npe(table_group), num);
+}
+
+static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
+{
+	/*
+	 * Detach NPU first as pnv_ioda2_take_ownership() will destroy
+	 * the iommu_table if 32bit DMA is enabled.
+	 */
+	pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
+	pnv_ioda2_take_ownership(table_group);
+}
+
+static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
+	.get_table_size = pnv_pci_ioda2_get_table_size,
+	.create_table = pnv_pci_ioda2_create_table,
+	.set_window = pnv_pci_ioda2_npu_set_window,
+	.unset_window = pnv_pci_ioda2_npu_unset_window,
+	.take_ownership = pnv_ioda2_npu_take_ownership,
+	.release_ownership = pnv_ioda2_release_ownership,
+};
 #endif
 
 static void pnv_pci_ioda_setup_opal_tce_kill(struct pnv_phb *phb)
@@ -3012,6 +3096,7 @@ static void pnv_pci_ioda_setup_DMA(void)
 {
 	struct pci_controller *hose, *tmp;
 	struct pnv_phb *phb;
+	struct pnv_ioda_pe *pe, *gpe;
 
 	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
 		pnv_ioda_setup_dma(hose->private_data);
@@ -3020,6 +3105,23 @@ static void pnv_pci_ioda_setup_DMA(void)
 		phb = hose->private_data;
 		phb->initialized = 1;
 	}
+
+	/*
+	 * Now we have all PHBs discovered, time to add NPU devices to
+	 * the corresponding IOMMU groups.
+	 */
+	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+		phb = hose->private_data;
+
+		if (phb->type != PNV_PHB_NPU)
+			continue;
+
+		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
+			gpe = pnv_pci_npu_setup_iommu(pe);
+			if (gpe)
+				gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
+		}
+	}
 }
 
 static void pnv_pci_ioda_create_dbgfs(void)
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index e117bd8..ebc6ee4 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -244,5 +244,11 @@ extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 /* Nvlink functions */
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
+extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe);
+extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
+		struct iommu_table *tbl);
+extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
+extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
+extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
 
 #endif /* __POWERNV_PCI_H */
-- 
2.5.0.rc3

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 02/11] vfio/spapr: Relax the IOMMU compatibility check
  2016-04-29  8:55 ` [PATCH kernel v4 02/11] vfio/spapr: Relax the IOMMU compatibility check Alexey Kardashevskiy
@ 2016-04-29 15:41   ` Alex Williamson
  2016-05-10 21:48   ` [kernel,v4,02/11] " Michael Ellerman
  1 sibling, 0 replies; 24+ messages in thread
From: Alex Williamson @ 2016-04-29 15:41 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alistair Popple, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

On Fri, 29 Apr 2016 18:55:15 +1000
Alexey Kardashevskiy <aik@ozlabs.ru> wrote:

> We are going to have multiple different types of PHB on the same system
> with POWER8 + NVLink and PHBs will have different IOMMU ops. However
> we only really care about one callback - create_table - so we can
> relax the compatibility check here.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  drivers/vfio/vfio_iommu_spapr_tce.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
> index 0582b72..3054e3f 100644
> --- a/drivers/vfio/vfio_iommu_spapr_tce.c
> +++ b/drivers/vfio/vfio_iommu_spapr_tce.c
> @@ -1188,7 +1188,8 @@ static int tce_iommu_attach_group(void *iommu_data,
>  			goto unlock_exit;
>  		}
>  		table_group_tmp = iommu_group_get_iommudata(tcegrp->grp);
> -		if (table_group_tmp->ops != table_group->ops) {
> +		if (table_group_tmp->ops->create_table !=
> +				table_group->ops->create_table) {
>  			pr_warn("tce_vfio: Group %d is incompatible with group %d\n",
>  					iommu_group_id(iommu_group),
>  					iommu_group_id(tcegrp->grp));

Acked-by: Alex Williamson <alex.williamson@redhat.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 01/11] vfio_pci: Test for extended capabilities if config space > 256 bytes
  2016-04-29  8:55 ` [PATCH kernel v4 01/11] vfio_pci: Test for extended capabilities if config space > 256 bytes Alexey Kardashevskiy
@ 2016-04-29 15:42   ` Alex Williamson
  0 siblings, 0 replies; 24+ messages in thread
From: Alex Williamson @ 2016-04-29 15:42 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alistair Popple, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

On Fri, 29 Apr 2016 18:55:14 +1000
Alexey Kardashevskiy <aik@ozlabs.ru> wrote:

> PCI-Express spec says that reading 4 bytes at offset 100h should return
> zero if there is no extended capability so VFIO reads this dword to
> know if there are extended capabilities.
> 
> However it is not always possible to access the extended space so
> generic PCI code in pci_cfg_space_size_ext() checks if
> pci_read_config_dword() can read beyond 100h and if the check fails,
> it sets the config space size to 100h.
> 
> VFIO does its own extended capabilities check by reading at offset 100h
> which may produce 0xffffffff which VFIO treats as the extended config
> space presence and calls vfio_ecap_init() which fails to parse
> capabilities (which is expected) but right before the exit, it writes
> zero at offset 100h which is beyond the buffer allocated for
> vdev->vconfig (which is 256 bytes) which leads to random memory
> corruption.
> 
> This makes VFIO only check for the extended capabilities if
> the discovered config size is more than 256 bytes.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v2:
> * instead of checking for 0xffffffff, this only does the check if
> device's config size is big enough
> ---

I'll take this patch separately, please drop from this series.  Thanks,

Alex

>  drivers/vfio/pci/vfio_pci_config.c | 17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
> index 142c533..d0c4358 100644
> --- a/drivers/vfio/pci/vfio_pci_config.c
> +++ b/drivers/vfio/pci/vfio_pci_config.c
> @@ -1124,9 +1124,12 @@ static int vfio_cap_len(struct vfio_pci_device *vdev, u8 cap, u8 pos)
>  			return pcibios_err_to_errno(ret);
>  
>  		if (PCI_X_CMD_VERSION(word)) {
> -			/* Test for extended capabilities */
> -			pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &dword);
> -			vdev->extended_caps = (dword != 0);
> +			if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) {
> +				/* Test for extended capabilities */
> +				pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE,
> +						&dword);
> +				vdev->extended_caps = (dword != 0);
> +			}
>  			return PCI_CAP_PCIX_SIZEOF_V2;
>  		} else
>  			return PCI_CAP_PCIX_SIZEOF_V0;
> @@ -1138,9 +1141,11 @@ static int vfio_cap_len(struct vfio_pci_device *vdev, u8 cap, u8 pos)
>  
>  		return byte;
>  	case PCI_CAP_ID_EXP:
> -		/* Test for extended capabilities */
> -		pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &dword);
> -		vdev->extended_caps = (dword != 0);
> +		if (pdev->cfg_size > PCI_CFG_SPACE_SIZE) {
> +			/* Test for extended capabilities */
> +			pci_read_config_dword(pdev, PCI_CFG_SPACE_SIZE, &dword);
> +			vdev->extended_caps = dword != 0;
> +		}
>  
>  		/* length based on version */
>  		if ((pcie_caps_reg(pdev) & PCI_EXP_FLAGS_VERS) == 1)

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 08/11] powerpc/powernv/ioda2: Export debug helper pe_level_printk()
  2016-04-29  8:55 ` [PATCH kernel v4 08/11] powerpc/powernv/ioda2: Export debug helper pe_level_printk() Alexey Kardashevskiy
@ 2016-05-03  5:46   ` Alistair Popple
  2016-05-03  5:58     ` Alistair Popple
  0 siblings, 1 reply; 24+ messages in thread
From: Alistair Popple @ 2016-05-03  5:46 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

There's one call to pr_warn() in pnv_npu_disable_bypass() that could arguably 
be converted to pe_warn(), but we can clean that up later as the patch looks 
fine and I'm assuming subsequent patches make use of these.
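
For reference, the conversion would look roughly like the sketch below once
pe_warn() is visible from npu-dma.c; the message text is made up for
illustration rather than copied from pnv_npu_disable_bypass():

	/* before: plain pr_warn(), the PE identity spelled out by hand */
	pr_warn("%s: Error %lld disabling bypass on PE#%d\n",
		__func__, rc, npe->pe_number);

	/* after: pe_warn() adds the PE prefix via pe_level_printk() */
	pe_warn(npe, "Error %lld disabling bypass\n", rc);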

Reviewed-By: Alistair Popple <alistair@popple.id.au>

On Fri, 29 Apr 2016 18:55:21 Alexey Kardashevskiy wrote:
> This exports the debugging helper pe_level_printk() and corresponding macros
> so they can be used in npu-dma.c.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
>  arch/powerpc/platforms/powernv/pci-ioda.c | 9 +--------
>  arch/powerpc/platforms/powernv/pci.h      | 9 +++++++++
>  2 files changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c 
b/arch/powerpc/platforms/powernv/pci-ioda.c
> index 272521e..db7695f 100644
> --- a/arch/powerpc/platforms/powernv/pci-ioda.c
> +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
> @@ -56,7 +56,7 @@
>  
>  static void pnv_pci_ioda2_table_free_pages(struct iommu_table *tbl);
>  
> -static void pe_level_printk(const struct pnv_ioda_pe *pe, const char 
*level,
> +void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
>  			    const char *fmt, ...)
>  {
>  	struct va_format vaf;
> @@ -87,13 +87,6 @@ static void pe_level_printk(const struct pnv_ioda_pe *pe, 
const char *level,
>  	va_end(args);
>  }
>  
> -#define pe_err(pe, fmt, ...)					\
> -	pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
> -#define pe_warn(pe, fmt, ...)					\
> -	pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
> -#define pe_info(pe, fmt, ...)					\
> -	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
> -
>  static bool pnv_iommu_bypass_disabled __read_mostly;
>  
>  static int __init iommu_setup(char *str)
> diff --git a/arch/powerpc/platforms/powernv/pci.h 
b/arch/powerpc/platforms/powernv/pci.h
> index d574a9d..485e5b1 100644
> --- a/arch/powerpc/platforms/powernv/pci.h
> +++ b/arch/powerpc/platforms/powernv/pci.h
> @@ -236,6 +236,15 @@ extern void pnv_pci_dma_bus_setup(struct pci_bus *bus);
>  extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
>  extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
>  
> +extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char 
*level,
> +			    const char *fmt, ...);
> +#define pe_err(pe, fmt, ...)					\
> +	pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
> +#define pe_warn(pe, fmt, ...)					\
> +	pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
> +#define pe_info(pe, fmt, ...)					\
> +	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
> +
>  /* Nvlink functions */
>  extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
>  extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 08/11] powerpc/powernv/ioda2: Export debug helper pe_level_printk()
  2016-05-03  5:46   ` Alistair Popple
@ 2016-05-03  5:58     ` Alistair Popple
  0 siblings, 0 replies; 24+ messages in thread
From: Alistair Popple @ 2016-05-03  5:58 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

On Tue, 3 May 2016 15:46:33 Alistair Popple wrote:
> There's one call to pr_warn() in pnv_npu_disable_bypass() that could 
arguably 
> be converted to pe_warn(), but we can clean that up later as the patch looks 
> fine and I'm assuming subsequent patches make use of these.

And inevitably the next patch in the series cleans that up anyway. Feel free 
to ignore the noise above :-)

> Reviewed-By: Alistair Popple <alistair@popple.id.au>
> 
> On Fri, 29 Apr 2016 18:55:21 Alexey Kardashevskiy wrote:
> > This exports the debugging helper pe_level_printk() and corresponding macros
> > so they can be used in npu-dma.c.
> > 
> > Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> > ---
> >  arch/powerpc/platforms/powernv/pci-ioda.c | 9 +--------
> >  arch/powerpc/platforms/powernv/pci.h      | 9 +++++++++
> >  2 files changed, 10 insertions(+), 8 deletions(-)
> > 
> > diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c 
> b/arch/powerpc/platforms/powernv/pci-ioda.c
> > index 272521e..db7695f 100644
> > --- a/arch/powerpc/platforms/powernv/pci-ioda.c
> > +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
> > @@ -56,7 +56,7 @@
> >  
> >  static void pnv_pci_ioda2_table_free_pages(struct iommu_table *tbl);
> >  
> > -static void pe_level_printk(const struct pnv_ioda_pe *pe, const char 
> *level,
> > +void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
> >  			    const char *fmt, ...)
> >  {
> >  	struct va_format vaf;
> > @@ -87,13 +87,6 @@ static void pe_level_printk(const struct pnv_ioda_pe 
*pe, 
> const char *level,
> >  	va_end(args);
> >  }
> >  
> > -#define pe_err(pe, fmt, ...)					\
> > -	pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
> > -#define pe_warn(pe, fmt, ...)					\
> > -	pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
> > -#define pe_info(pe, fmt, ...)					\
> > -	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
> > -
> >  static bool pnv_iommu_bypass_disabled __read_mostly;
> >  
> >  static int __init iommu_setup(char *str)
> > diff --git a/arch/powerpc/platforms/powernv/pci.h 
> b/arch/powerpc/platforms/powernv/pci.h
> > index d574a9d..485e5b1 100644
> > --- a/arch/powerpc/platforms/powernv/pci.h
> > +++ b/arch/powerpc/platforms/powernv/pci.h
> > @@ -236,6 +236,15 @@ extern void pnv_pci_dma_bus_setup(struct pci_bus 
*bus);
> >  extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
> >  extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
> >  
> > +extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char 
> *level,
> > +			    const char *fmt, ...);
> > +#define pe_err(pe, fmt, ...)					\
> > +	pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
> > +#define pe_warn(pe, fmt, ...)					\
> > +	pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
> > +#define pe_info(pe, fmt, ...)					\
> > +	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
> > +
> >  /* Nvlink functions */
> >  extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
> >  extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
> > 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 09/11] powerpc/powernv/npu: Add set/unset window helpers
  2016-04-29  8:55 ` [PATCH kernel v4 09/11] powerpc/powernv/npu: Add set/unset window helpers Alexey Kardashevskiy
@ 2016-05-03  6:25   ` Alistair Popple
  0 siblings, 0 replies; 24+ messages in thread
From: Alistair Popple @ 2016-05-03  6:25 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel


On Fri, 29 Apr 2016 18:55:22 Alexey Kardashevskiy wrote:
> The upcoming NVLink passthrough support will require NPU code to cope
> with two DMA windows.
> 
> This adds a pnv_npu_set_window() helper which programs 32bit window to
> the hardware. This also adds multilevel TCE support.
> 
> This adds a pnv_npu_unset_window() helper which removes the DMA window
> from the hardware. This does not make difference now as the caller -
> pnv_npu_dma_set_bypass() - enables bypass in the hardware but the next
> patch will use it to manage TCE table lists for TCE Kill handling.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
>  arch/powerpc/platforms/powernv/npu-dma.c | 65 
+++++++++++++++++++++++++++-----
>  1 file changed, 55 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c 
b/arch/powerpc/platforms/powernv/npu-dma.c
> index bec9267..800d70f 100644
> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> @@ -159,6 +159,56 @@ static struct pnv_ioda_pe 
*get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
>  	return pe;
>  }
>  
> +static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
> +		struct iommu_table *tbl)
> +{
> +	struct pnv_phb *phb = npe->phb;
> +	int64_t rc;
> +	const unsigned long size = tbl->it_indirect_levels ?
> +		tbl->it_level_size : tbl->it_size;
> +	const __u64 start_addr = tbl->it_offset << tbl->it_page_shift;
> +	const __u64 win_size = tbl->it_size << tbl->it_page_shift;
> +
> +	pe_info(npe, "Setting up window %llx..%llx pg=%lx\n",
> +			start_addr, start_addr + win_size - 1,
> +			IOMMU_PAGE_SIZE(tbl));
> +
> +	rc = opal_pci_map_pe_dma_window(phb->opal_id,
> +			npe->pe_number,
> +			npe->pe_number,
> +			tbl->it_indirect_levels + 1,
> +			__pa(tbl->it_base),
> +			size << 3,
> +			IOMMU_PAGE_SIZE(tbl));
> +	if (rc) {
> +		pe_err(npe, "Failed to configure TCE table, err %lld\n", rc);
> +		return rc;
> +	}
> +	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
> +
> +	return 0;
> +}
> +
> +static long pnv_npu_unset_window(struct pnv_ioda_pe *npe)
> +{
> +	struct pnv_phb *phb = npe->phb;
> +	int64_t rc;
> +
> +	pe_info(npe, "Removing DMA window\n");
> +
> +	rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
> +			npe->pe_number,
> +			0/* levels */, 0/* table address */,
> +			0/* table size */, 0/* page size */);
> +	if (rc) {
> +		pe_err(npe, "Unmapping failed, ret = %lld\n", rc);
> +		return rc;
> +	}
> +	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
> +
> +	return 0;
> +}
> +
>  void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
>  {
>  	struct pnv_ioda_pe *gpe;
> @@ -200,10 +250,8 @@ void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
>   */
>  static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
>  {
> -	struct pnv_phb *phb = npe->phb;
>  	struct pci_dev *gpdev;
>  	struct pnv_ioda_pe *gpe;
> -	struct iommu_table *tbl;
>  	int64_t rc;
>  
>  	/*
> @@ -217,14 +265,7 @@ static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
>  	if (!gpe)
>  		return;
>  
> -	tbl = gpe->table_group.tables[0];
> -	rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
> -					npe->pe_number, 1, __pa(tbl->it_base),
> -					tbl->it_size << 3,
> -					IOMMU_PAGE_SIZE(tbl));
> -	if (rc != OPAL_SUCCESS)
> -		pr_warn("%s: Error %lld setting DMA window on PHB#%d-PE#%d\n",
> -			__func__, rc, phb->hose->global_number, npe-
>pe_number);
> +	rc = pnv_npu_set_window(npe, gpe->table_group.tables[0]);
>  
>  	/*
>  	 * We don't initialise npu_pe->tce32_table as we always use
> @@ -248,6 +289,10 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe 
*npe)
>  	if (phb->type != PNV_PHB_NPU || !npe->pdev)
>  		return -EINVAL;
>  
> +	rc = pnv_npu_unset_window(npe);

As noted in the commit message you technically don't need to do this as the 
subsequent OPAL call to enable the bypass window below will just overwrite the 
same TVE. However I like the logic of clearing the existing TVE before writing 
a new one anyway as it also ensures the TCE cache gets properly invalidated.

Other than that this patch looks like it doesn't change any existing 
behaviour.

Reviewed-By: Alistair Popple <alistair@popple.id.au>

> +	if (rc != OPAL_SUCCESS)
> +		return rc;
> +
>  	/* Enable the bypass window */
>  
>  	top = roundup_pow_of_two(top);
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 10/11] powerpc/powernv/npu: Rework TCE Kill handling
  2016-04-29  8:55 ` [PATCH kernel v4 10/11] powerpc/powernv/npu: Rework TCE Kill handling Alexey Kardashevskiy
@ 2016-05-03  7:37   ` Alistair Popple
  2016-05-05  4:23     ` Alexey Kardashevskiy
  0 siblings, 1 reply; 24+ messages in thread
From: Alistair Popple @ 2016-05-03  7:37 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

On Fri, 29 Apr 2016 18:55:23 Alexey Kardashevskiy wrote:
> The pnv_ioda_pe struct keeps an array of peers. At the moment it is only
> used to link GPU and NPU for 2 purposes:
> 
> 1. Access NPU quickly when configuring DMA for GPU - this was addressed
> in the previos patch by removing use of it as DMA setup is not what
> the kernel would constantly do.

Agreed. It was used here because the peer array was added to deal with (2) 
below ...

> 2. Invalidate TCE cache for NPU when it is invalidated for GPU.
> GPU and NPU are in different PEs. There is already a mechanism to
> attach multiple iommu_table_group to the same iommu_table (used for VFIO),
> which this patch reuses here.

... because we weren't aware iommu_table_group could be used to do this 
instead.

> This gets rid of peers[] array and PNV_IODA_PE_PEER flag as they are
> not needed anymore.

Happy to see it go. I'm not too familiar with iommu groups but based on the 
code and what you have described to me both here and offline everything looks 
good to me. One pretty minor style comment below.
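
To make the reuse concrete: once pnv_npu_set_window() calls
pnv_pci_link_table_and_group(), the NPU's table_group sits on the GPU
table's it_group_list, so the existing invalidation walk naturally covers
both PEs. A simplified sketch of that walk as it looks after this patch
(the invalidate register setup is omitted):

	list_for_each_entry_rcu(tgl, &tbl->it_group_list, next) {
		struct pnv_ioda_pe *pe = container_of(tgl->table_group,
				struct pnv_ioda_pe, table_group);

		if (pe->phb->type == PNV_PHB_NPU) {
			/* No per-entry TCE kill on NPU, flush its whole cache */
			pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
			continue;
		}

		/* Regular IODA2 PE: invalidate only the affected entries */
		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm, invalidate,
				tbl->it_page_shift, index, npages);
	}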

Reviewed-By: Alistair Popple <alistair@popple.id.au>

> While we are here, add TCE cache invalidation after enabling bypass.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v4:
> * reworked as "powerpc/powernv/npu: Add set/unset window helpers" has been
> added
> ---
>  arch/powerpc/platforms/powernv/npu-dma.c  | 69 
+++++++++----------------------
>  arch/powerpc/platforms/powernv/pci-ioda.c | 57 ++++---------------------
>  arch/powerpc/platforms/powernv/pci.h      |  6 ---
>  3 files changed, 26 insertions(+), 106 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c 
b/arch/powerpc/platforms/powernv/npu-dma.c
> index 800d70f..cb2d1da 100644
> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> @@ -136,22 +136,17 @@ static struct pnv_ioda_pe 
*get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
>  	struct pnv_ioda_pe *pe;
>  	struct pci_dn *pdn;
>  
> -	if (npe->flags & PNV_IODA_PE_PEER) {
> -		pe = npe->peers[0];
> -		pdev = pe->pdev;
> -	} else {
> -		pdev = pnv_pci_get_gpu_dev(npe->pdev);
> -		if (!pdev)
> -			return NULL;
> +	pdev = pnv_pci_get_gpu_dev(npe->pdev);
> +	if (!pdev)
> +		return NULL;
>  
> -		pdn = pci_get_pdn(pdev);
> -		if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
> -			return NULL;
> +	pdn = pci_get_pdn(pdev);
> +	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
> +		return NULL;
>  
> -		hose = pci_bus_to_host(pdev->bus);
> -		phb = hose->private_data;
> -		pe = &phb->ioda.pe_array[pdn->pe_number];
> -	}
> +	hose = pci_bus_to_host(pdev->bus);
> +	phb = hose->private_data;
> +	pe = &phb->ioda.pe_array[pdn->pe_number];
>  
>  	if (gpdev)
>  		*gpdev = pdev;
> @@ -186,6 +181,10 @@ static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
>  	}
>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
>  
> +	/* Add the table to the list so its TCE cache will get invalidated */
> +	pnv_pci_link_table_and_group(phb->hose->node, 0,
> +			tbl, &npe->table_group);

Where tbl is associated with the GPU and is what links the NPU and GPU PEs.

>  	return 0;
>  }
>  
> @@ -206,45 +205,12 @@ static long pnv_npu_unset_window(struct pnv_ioda_pe 
*npe)
>  	}
>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
>  
> +	pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
> +			&npe->table_group);
> +
>  	return 0;
>  }
>  
> -void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
> -{
> -	struct pnv_ioda_pe *gpe;
> -	struct pci_dev *gpdev;
> -	int i, avail = -1;
> -
> -	if (!npe->pdev || !(npe->flags & PNV_IODA_PE_DEV))
> -		return;
> -
> -	gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
> -	if (!gpe)
> -		return;
> -
> -	for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
> -		/* Nothing to do if the PE is already connected. */
> -		if (gpe->peers[i] == npe)
> -			return;
> -
> -		if (!gpe->peers[i])
> -			avail = i;
> -	}
> -
> -	if (WARN_ON(avail < 0))
> -		return;
> -
> -	gpe->peers[avail] = npe;
> -	gpe->flags |= PNV_IODA_PE_PEER;
> -
> -	/*
> -	 * We assume that the NPU devices only have a single peer PE
> -	 * (the GPU PCIe device PE).
> -	 */
> -	npe->peers[0] = gpe;
> -	npe->flags |= PNV_IODA_PE_PEER;
> -}
> -
>  /*
>   * Enables 32 bit DMA on NPU.
>   */
> @@ -302,6 +268,9 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe 
*npe)
>  			npe->pe_number, npe->pe_number,
>  			0 /* bypass base */, top);
>  
> +	if (rc == OPAL_SUCCESS)
> +		pnv_pci_ioda2_tce_invalidate_entire(phb, false);
> +
>  	return rc;
>  }
>  
> diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c 
b/arch/powerpc/platforms/powernv/pci-ioda.c
> index db7695f..922c74c 100644
> --- a/arch/powerpc/platforms/powernv/pci-ioda.c
> +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
> @@ -1816,23 +1816,12 @@ static inline void 
pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
>  	/* 01xb - invalidate TCEs that match the specified PE# */
>  	unsigned long val = TCE_KILL_INVAL_PE | (pe->pe_number & 0xFF);
>  	struct pnv_phb *phb = pe->phb;
> -	struct pnv_ioda_pe *npe;
> -	int i;
>  
>  	if (!phb->ioda.tce_inval_reg)
>  		return;
>  
>  	mb(); /* Ensure above stores are visible */
>  	__raw_writeq(cpu_to_be64(val), phb->ioda.tce_inval_reg);
> -
> -	if (pe->flags & PNV_IODA_PE_PEER)
> -		for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
> -			npe = pe->peers[i];
> -			if (!npe || npe->phb->type != PNV_PHB_NPU)
> -				continue;
> -
> -			pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
> -		}
>  }
>  
>  static void pnv_pci_ioda2_do_tce_invalidate(unsigned pe_number, bool rm,
> @@ -1867,33 +1856,24 @@ static void pnv_pci_ioda2_tce_invalidate(struct 
iommu_table *tbl,
>  	struct iommu_table_group_link *tgl;
>  
>  	list_for_each_entry_rcu(tgl, &tbl->it_group_list, next) {
> -		struct pnv_ioda_pe *npe;
>  		struct pnv_ioda_pe *pe = container_of(tgl->table_group,
>  				struct pnv_ioda_pe, table_group);
>  		__be64 __iomem *invalidate = rm ?
>  			(__be64 __iomem *)pe->phb->ioda.tce_inval_reg_phys :
>  			pe->phb->ioda.tce_inval_reg;
> -		int i;
>  
> -		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
> -			invalidate, tbl->it_page_shift,
> -			index, npages);
> -
> -		if (pe->flags & PNV_IODA_PE_PEER)
> +		if (pe->phb->type == PNV_PHB_NPU) {
>  			/*
>  			 * The NVLink hardware does not support TCE kill
>  			 * per TCE entry so we have to invalidate
>  			 * the entire cache for it.
>  			 */
> -			for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
> -				npe = pe->peers[i];
> -				if (!npe || npe->phb->type != PNV_PHB_NPU ||
> -						!npe->phb->ioda.tce_inval_reg)
> -					continue;
> -
> -				pnv_pci_ioda2_tce_invalidate_entire(npe->phb,
> -						rm);
> -			}
> +			pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
> +			continue;
> +		}

Personally I think an if/else instead of the continue would be cleaner here.

> +		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
> +			invalidate, tbl->it_page_shift,
> +			index, npages);
>  	}
>  }
>  
> @@ -3061,26 +3041,6 @@ static void pnv_pci_ioda_create_dbgfs(void)
>  #endif /* CONFIG_DEBUG_FS */
>  }
>  
> -static void pnv_npu_ioda_fixup(void)
> -{
> -	bool enable_bypass;
> -	struct pci_controller *hose, *tmp;
> -	struct pnv_phb *phb;
> -	struct pnv_ioda_pe *pe;
> -
> -	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
> -		phb = hose->private_data;
> -		if (phb->type != PNV_PHB_NPU)
> -			continue;
> -
> -		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
> -			enable_bypass = dma_get_mask(&pe->pdev->dev) ==
> -				DMA_BIT_MASK(64);
> -			pnv_npu_init_dma_pe(pe);
> -		}
> -	}
> -}
> -
>  static void pnv_pci_ioda_fixup(void)
>  {
>  	pnv_pci_ioda_setup_PEs();
> @@ -3093,9 +3053,6 @@ static void pnv_pci_ioda_fixup(void)
>  	eeh_init();
>  	eeh_addr_cache_build();
>  #endif
> -
> -	/* Link NPU IODA tables to their PCI devices. */
> -	pnv_npu_ioda_fixup();
>  }
>  
>  /*
> diff --git a/arch/powerpc/platforms/powernv/pci.h 
b/arch/powerpc/platforms/powernv/pci.h
> index 485e5b1..e117bd8 100644
> --- a/arch/powerpc/platforms/powernv/pci.h
> +++ b/arch/powerpc/platforms/powernv/pci.h
> @@ -24,7 +24,6 @@ enum pnv_phb_model {
>  #define PNV_IODA_PE_MASTER	(1 << 3)	/* Master PE in compound case	
*/
>  #define PNV_IODA_PE_SLAVE	(1 << 4)	/* Slave PE in compound case	
*/
>  #define PNV_IODA_PE_VF		(1 << 5)	/* PE for one VF 	
	*/
> -#define PNV_IODA_PE_PEER	(1 << 6)	/* PE has peers			
*/
>  
>  /* Data associated with a PE, including IOMMU tracking etc.. */
>  struct pnv_phb;
> @@ -32,9 +31,6 @@ struct pnv_ioda_pe {
>  	unsigned long		flags;
>  	struct pnv_phb		*phb;
>  
> -#define PNV_IODA_MAX_PEER_PES	8
> -	struct pnv_ioda_pe	*peers[PNV_IODA_MAX_PEER_PES];
> -
>  	/* A PE can be associated with a single device or an
>  	 * entire bus (& children). In the former case, pdev
>  	 * is populated, in the later case, pbus is.
> @@ -246,8 +242,6 @@ extern void pe_level_printk(const struct pnv_ioda_pe 
*pe, const char *level,
>  	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
>  
>  /* Nvlink functions */
> -extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
> -extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
>  extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
>  extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool 
rm);
>  
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 11/11] powerpc/powernv/npu: Enable NVLink pass through
  2016-04-29  8:55 ` [PATCH kernel v4 11/11] powerpc/powernv/npu: Enable NVLink pass through Alexey Kardashevskiy
@ 2016-05-03 14:08   ` Alistair Popple
  2016-05-05  5:49     ` Alexey Kardashevskiy
  0 siblings, 1 reply; 24+ messages in thread
From: Alistair Popple @ 2016-05-03 14:08 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

Hi Alexey,

On Fri, 29 Apr 2016 18:55:24 Alexey Kardashevskiy wrote:
> IBM POWER8 NVlink systems come with Tesla K40-ish GPUs each of which
> also has a couple of high-speed links (NVLink). The interface to the links
> is exposed as an emulated PCI bridge which is included into the same
> IOMMU group as the corresponding GPU.
> 
> In the kernel, NPUs get a separate PHB of the PNV_PHB_NPU type and a PE
> which behaves pretty much as the standard IODA2 PHB, except the NPU PHB has
> just a single TVE in the hardware which means it can have either
> 32bit window or 64bit window or DMA bypass but never two of these.
> 
> In order to make these links work when GPU is passed to the guest,
> these bridges need to be passed as well; otherwise performance will
> degrade.
> 
> This implements and exports API to manage NPU state in regard to VFIO;
> it replicates iommu_table_group_ops.
> 
> This defines a new pnv_pci_ioda2_npu_ops which is assigned to
> the IODA2 bridge if there are NPUs for a GPU on the bridge.
> The new callbacks call the default IODA2 callbacks plus new NPU API.
> This adds a gpe_table_group_to_npe() helper to find NPU PE for the IODA2
> table_group, it is not expected to fail as the helper is only called
> from the pnv_pci_ioda2_npu_ops.
> 
> This does not define NPU-specific .release_ownership() so after
> VFIO is finished, DMA on NPU is disabled which is ok as the nvidia
> driver sets the DMA mask when probing, which enables 32 or 64bit DMA on the NPU.
> 
> This adds a pnv_pci_npu_setup_iommu() helper which adds NPUs to
> the GPU group if any are found. It uses helper functions to look for
> the "ibm,gpu" property in the device tree, which is a phandle of
> the corresponding GPU.
> 
> This adds an additional loop over PEs in pnv_ioda_setup_dma() as the main
> loop skips NPU PEs, which do not have 32bit DMA segments.
> 
> As pnv_npu_set_window() and pnv_npu_unset_window() are now used
> by the new IODA2-NPU IOMMU group, this makes the helpers public and
> adds the DMA window number parameter.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v4:
> * reused pnv_npu_set_window/pnv_npu_unset_window where possible
> * added comments, changed commit log
> 
> v3:
> * moved NPU-to-GPU IOMMU grouping later after all PHBs are discovered
> * removed hack to make iommu_add_device() work, iommu_group_add_device()
> is used instead
> * cleanup in gpe_table_group_to_npe_cb()
> 
> v2:
> * reimplemented to support NPU + GPU in the same group
> * merged "powerpc/powernv/npu: Add NPU devices to IOMMU group" and
> "powerpc/powernv/npu: Enable passing through via VFIO" into this patch
> ---
>  arch/powerpc/platforms/powernv/npu-dma.c  |  64 +++++++++++++++++--
>  arch/powerpc/platforms/powernv/pci-ioda.c | 102 
++++++++++++++++++++++++++++++
>  arch/powerpc/platforms/powernv/pci.h      |   6 ++
>  3 files changed, 166 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c 
b/arch/powerpc/platforms/powernv/npu-dma.c
> index cb2d1da..0459e10 100644
> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> @@ -12,6 +12,7 @@
>  #include <linux/export.h>
>  #include <linux/pci.h>
>  #include <linux/memblock.h>
> +#include <linux/iommu.h>
>  
>  #include <asm/iommu.h>
>  #include <asm/pnv-pci.h>
> @@ -154,7 +155,7 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct 
pnv_ioda_pe *npe,
>  	return pe;
>  }
>  
> -static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
> +long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
>  		struct iommu_table *tbl)
>  {
>  	struct pnv_phb *phb = npe->phb;
> @@ -182,13 +183,13 @@ static long pnv_npu_set_window(struct pnv_ioda_pe 
*npe,
>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
>  
>  	/* Add the table to the list so its TCE cache will get invalidated */
> -	pnv_pci_link_table_and_group(phb->hose->node, 0,
> +	pnv_pci_link_table_and_group(phb->hose->node, num,
>  			tbl, &npe->table_group);
>  
>  	return 0;
>  }
>  
> -static long pnv_npu_unset_window(struct pnv_ioda_pe *npe)
> +long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
>  {
>  	struct pnv_phb *phb = npe->phb;
>  	int64_t rc;
> @@ -205,7 +206,7 @@ static long pnv_npu_unset_window(struct pnv_ioda_pe 
*npe)
>  	}
>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
>  
> -	pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
> +	pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
>  			&npe->table_group);
>  
>  	return 0;
> @@ -231,7 +232,7 @@ static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
>  	if (!gpe)
>  		return;
>  
> -	rc = pnv_npu_set_window(npe, gpe->table_group.tables[0]);
> +	rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]);
>  
>  	/*
>  	 * We don't initialise npu_pe->tce32_table as we always use
> @@ -255,7 +256,7 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe 
*npe)
>  	if (phb->type != PNV_PHB_NPU || !npe->pdev)
>  		return -EINVAL;
>  
> -	rc = pnv_npu_unset_window(npe);
> +	rc = pnv_npu_unset_window(npe, 0);
>  	if (rc != OPAL_SUCCESS)
>  		return rc;
>  
> @@ -307,3 +308,54 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, 
bool bypass)
>  		}
>  	}
>  }
> +
> +/* Switch ownership from platform code to external user (e.g. VFIO) */
> +void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
> +{
> +	struct pnv_phb *phb = npe->phb;
> +	int64_t rc;
> +
> +	/*
> +	 * Note: NPU has just a single TVE in the hardware which means that
> +	 * while used by the kernel, it can have either 32bit window or
> +	 * DMA bypass but never both. So we deconfigure 32bit window only
> +	 * if it was enabled at the moment of ownership change.
> +	 */
> +	if (npe->table_group.tables[0]) {
> +		pnv_npu_unset_window(npe, 0);
> +		return;
> +	}
> +
> +	/* Disable bypass */
> +	rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
> +			npe->pe_number, npe->pe_number,
> +			0 /* bypass base */, 0);
> +	if (rc) {
> +		pe_err(npe, "Failed to disable bypass, err %lld\n", rc);
> +		return;
> +	}
> +	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
> +}
> +
> +struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
> +{
> +	struct pnv_phb *phb = npe->phb;
> +	struct pci_bus *pbus = phb->hose->bus;
> +	struct pci_dev *npdev, *gpdev = NULL, *gptmp;
> +	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
> +
> +	if (!gpe || !gpdev)
> +		return NULL;
> +
> +	list_for_each_entry(npdev, &pbus->devices, bus_list) {
> +		gptmp = pnv_pci_get_gpu_dev(npdev);
> +
> +		if (gptmp != gpdev)
> +			continue;

If I'm not mistaken it looks like you are trying to find all the NPU PEs also 
attached to the same GPU on the same (firmware emulated) NPU PHB? I'm just 
curious - does this work if the GPU has say 2 NPU PE#s (ie. links) on 
different NPU PHB's?

> +		pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev));
> +		iommu_group_add_device(gpe->table_group.group, &npdev->dev);
> +	}
> +
> +	return gpe;
> +}
> diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c 
b/arch/powerpc/platforms/powernv/pci-ioda.c
> index 922c74c..feabe50 100644
> --- a/arch/powerpc/platforms/powernv/pci-ioda.c
> +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
> @@ -2273,6 +2273,90 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops 
= {
>  	.take_ownership = pnv_ioda2_take_ownership,
>  	.release_ownership = pnv_ioda2_release_ownership,
>  };
> +
> +static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
> +{
> +	struct pci_controller *hose;
> +	struct pnv_phb *phb;
> +	struct pnv_ioda_pe **ptmppe = opaque;
> +	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
> +	struct pci_dn *pdn = pci_get_pdn(pdev);
> +
> +	if (!pdn || pdn->pe_number == IODA_INVALID_PE)
> +		return 0;
> +
> +	hose = pci_bus_to_host(pdev->bus);
> +	phb = hose->private_data;
> +	if (phb->type != PNV_PHB_NPU)
> +		return 0;
> +
> +	*ptmppe = &phb->ioda.pe_array[pdn->pe_number];
> +
> +	return 1;
> +}
> +
> +/*
> + * This returns the PE of the associated NPU.
> + * This assumes that the NPU is in the same IOMMU group as the GPU and that
> + * there are no other PEs in the group.
> + */

Which answers my above question as this doesn't look like it supports GPUs 
with multiple NPU PE#s attached. I don't think this is actually a problem 
though as no hardware I know of will ever do this, even though theoretically 
it's possible.

It might be nice if we could warn if this configuration is detected but not 
really a big issue. Everything else looks reasonable as far as I can tell 
(although again I'm no vfio iommu groups expert) so happy for you to add my 
reviewed-by:

Reviewed-By: Alistair Popple <alistair@popple.id.au>
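
A purely illustrative sketch of the warning suggested above; the
npe_phb_check struct and npe_phb_check_cb() callback are made up for the
example and only show the idea of flagging a GPU whose NPU PEs sit behind
more than one NPU PHB:

	struct npe_phb_check {
		struct pnv_phb *npu_phb;
		bool mixed;
	};

	static int npe_phb_check_cb(struct device *dev, void *opaque)
	{
		struct npe_phb_check *check = opaque;
		struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
		struct pci_dn *pdn = pci_get_pdn(pdev);
		struct pnv_phb *phb;

		if (!pdn || pdn->pe_number == IODA_INVALID_PE)
			return 0;

		phb = pci_bus_to_host(pdev->bus)->private_data;
		if (phb->type != PNV_PHB_NPU)
			return 0;

		if (!check->npu_phb)
			check->npu_phb = phb;
		else if (check->npu_phb != phb)
			check->mixed = true;

		return 0;	/* keep walking so every NPU device is inspected */
	}

The caller would then do something like:

	struct npe_phb_check check = { NULL, false };

	iommu_group_for_each_dev(table_group->group, &check, npe_phb_check_cb);
	WARN_ON(check.mixed);	/* NPU PEs on different NPU PHBs, unsupported */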

> +static struct pnv_ioda_pe *gpe_table_group_to_npe(
> +		struct iommu_table_group *table_group)
> +{
> +	struct pnv_ioda_pe *npe = NULL;
> +	int ret = iommu_group_for_each_dev(table_group->group, &npe,
> +			gpe_table_group_to_npe_cb);
> +
> +	BUG_ON(!ret || !npe);
> +
> +	return npe;
> +}
> +
> +static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group 
*table_group,
> +		int num, struct iommu_table *tbl)
> +{
> +	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
> +
> +	if (ret)
> +		return ret;
> +
> +	ret = pnv_npu_set_window(gpe_table_group_to_npe(table_group), num, 
tbl);
> +	if (ret)
> +		pnv_pci_ioda2_unset_window(table_group, num);
> +
> +	return ret;
> +}
> +
> +static long pnv_pci_ioda2_npu_unset_window(
> +		struct iommu_table_group *table_group,
> +		int num)
> +{
> +	long ret = pnv_pci_ioda2_unset_window(table_group, num);
> +
> +	if (ret)
> +		return ret;
> +
> +	return pnv_npu_unset_window(gpe_table_group_to_npe(table_group), num);
> +}
> +
> +static void pnv_ioda2_npu_take_ownership(struct iommu_table_group 
*table_group)
> +{
> +	/*
> +	 * Detach NPU first as pnv_ioda2_take_ownership() will destroy
> +	 * the iommu_table if 32bit DMA is enabled.
> +	 */
> +	pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
> +	pnv_ioda2_take_ownership(table_group);
> +}
> +
> +static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
> +	.get_table_size = pnv_pci_ioda2_get_table_size,
> +	.create_table = pnv_pci_ioda2_create_table,
> +	.set_window = pnv_pci_ioda2_npu_set_window,
> +	.unset_window = pnv_pci_ioda2_npu_unset_window,
> +	.take_ownership = pnv_ioda2_npu_take_ownership,
> +	.release_ownership = pnv_ioda2_release_ownership,
> +};
>  #endif
>  
>  static void pnv_pci_ioda_setup_opal_tce_kill(struct pnv_phb *phb)
> @@ -3012,6 +3096,7 @@ static void pnv_pci_ioda_setup_DMA(void)
>  {
>  	struct pci_controller *hose, *tmp;
>  	struct pnv_phb *phb;
> +	struct pnv_ioda_pe *pe, *gpe;
>  
>  	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
>  		pnv_ioda_setup_dma(hose->private_data);
> @@ -3020,6 +3105,23 @@ static void pnv_pci_ioda_setup_DMA(void)
>  		phb = hose->private_data;
>  		phb->initialized = 1;
>  	}
> +
> +	/*
> +	 * Now we have all PHBs discovered, time to add NPU devices to
> +	 * the corresponding IOMMU groups.
> +	 */
> +	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
> +		phb = hose->private_data;
> +
> +		if (phb->type != PNV_PHB_NPU)
> +			continue;
> +
> +		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
> +			gpe = pnv_pci_npu_setup_iommu(pe);
> +			if (gpe)
> +				gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
> +		}
> +	}
>  }
>  
>  static void pnv_pci_ioda_create_dbgfs(void)
> diff --git a/arch/powerpc/platforms/powernv/pci.h 
b/arch/powerpc/platforms/powernv/pci.h
> index e117bd8..ebc6ee4 100644
> --- a/arch/powerpc/platforms/powernv/pci.h
> +++ b/arch/powerpc/platforms/powernv/pci.h
> @@ -244,5 +244,11 @@ extern void pe_level_printk(const struct pnv_ioda_pe 
*pe, const char *level,
>  /* Nvlink functions */
>  extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
>  extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool 
rm);
> +extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe 
*npe);
> +extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
> +		struct iommu_table *tbl);
> +extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
> +extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
> +extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
>  
>  #endif /* __POWERNV_PCI_H */
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 10/11] powerpc/powernv/npu: Rework TCE Kill handling
  2016-05-03  7:37   ` Alistair Popple
@ 2016-05-05  4:23     ` Alexey Kardashevskiy
  2016-05-06  1:11       ` Alistair Popple
  0 siblings, 1 reply; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-05-05  4:23 UTC (permalink / raw)
  To: Alistair Popple
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

On 05/03/2016 05:37 PM, Alistair Popple wrote:
> On Fri, 29 Apr 2016 18:55:23 Alexey Kardashevskiy wrote:
>> The pnv_ioda_pe struct keeps an array of peers. At the moment it is only
>> used to link GPU and NPU for 2 purposes:
>>
>> 1. Access NPU quickly when configuring DMA for GPU - this was addressed
>> in the previous patch by removing use of it, as DMA setup is not what
>> the kernel would constantly do.
>
> Agreed. It was used here because the peer array was added to deal with (2)
> below ...
>
>> 2. Invalidate TCE cache for NPU when it is invalidated for GPU.
>> GPU and NPU are in different PEs. There is already a mechanism to
>> attach multiple iommu_table_group to the same iommu_table (used for VFIO),
>> which this patch reuses here.
>
> ... because we weren't aware iommu_table_group could be used to do this
> instead.
>
>> This gets rid of peers[] array and PNV_IODA_PE_PEER flag as they are
>> not needed anymore.
>
> Happy to see it go. I'm not too familiar with iommu groups but based on the
> code and what you have described to me both here and offline everything looks
> good to me. One pretty minor style comment below.
>
> Reviewed-By: Alistair Popple <alistair@popple.id.au>
>
>> While we are here, add TCE cache invalidation after enabling bypass.
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> ---
>> Changes:
>> v4:
>> * reworked as "powerpc/powernv/npu: Add set/unset window helpers" has been
>> added
>> ---
>>  arch/powerpc/platforms/powernv/npu-dma.c  | 69
> +++++++++----------------------
>>  arch/powerpc/platforms/powernv/pci-ioda.c | 57 ++++---------------------
>>  arch/powerpc/platforms/powernv/pci.h      |  6 ---
>>  3 files changed, 26 insertions(+), 106 deletions(-)
>>
>> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c
> b/arch/powerpc/platforms/powernv/npu-dma.c
>> index 800d70f..cb2d1da 100644
>> --- a/arch/powerpc/platforms/powernv/npu-dma.c
>> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
>> @@ -136,22 +136,17 @@ static struct pnv_ioda_pe
> *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
>>  	struct pnv_ioda_pe *pe;
>>  	struct pci_dn *pdn;
>>
>> -	if (npe->flags & PNV_IODA_PE_PEER) {
>> -		pe = npe->peers[0];
>> -		pdev = pe->pdev;
>> -	} else {
>> -		pdev = pnv_pci_get_gpu_dev(npe->pdev);
>> -		if (!pdev)
>> -			return NULL;
>> +	pdev = pnv_pci_get_gpu_dev(npe->pdev);
>> +	if (!pdev)
>> +		return NULL;
>>
>> -		pdn = pci_get_pdn(pdev);
>> -		if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
>> -			return NULL;
>> +	pdn = pci_get_pdn(pdev);
>> +	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
>> +		return NULL;
>>
>> -		hose = pci_bus_to_host(pdev->bus);
>> -		phb = hose->private_data;
>> -		pe = &phb->ioda.pe_array[pdn->pe_number];
>> -	}
>> +	hose = pci_bus_to_host(pdev->bus);
>> +	phb = hose->private_data;
>> +	pe = &phb->ioda.pe_array[pdn->pe_number];
>>
>>  	if (gpdev)
>>  		*gpdev = pdev;
>> @@ -186,6 +181,10 @@ static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
>>  	}
>>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
>>
>> +	/* Add the table to the list so its TCE cache will get invalidated */
>> +	pnv_pci_link_table_and_group(phb->hose->node, 0,
>> +			tbl, &npe->table_group);
>
> Where tbl is associated with the GPU and is what links the NPU and GPU PEs.
>
>>  	return 0;
>>  }
>>
>> @@ -206,45 +205,12 @@ static long pnv_npu_unset_window(struct pnv_ioda_pe
> *npe)
>>  	}
>>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
>>
>> +	pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
>> +			&npe->table_group);
>> +
>>  	return 0;
>>  }
>>
>> -void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
>> -{
>> -	struct pnv_ioda_pe *gpe;
>> -	struct pci_dev *gpdev;
>> -	int i, avail = -1;
>> -
>> -	if (!npe->pdev || !(npe->flags & PNV_IODA_PE_DEV))
>> -		return;
>> -
>> -	gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
>> -	if (!gpe)
>> -		return;
>> -
>> -	for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
>> -		/* Nothing to do if the PE is already connected. */
>> -		if (gpe->peers[i] == npe)
>> -			return;
>> -
>> -		if (!gpe->peers[i])
>> -			avail = i;
>> -	}
>> -
>> -	if (WARN_ON(avail < 0))
>> -		return;
>> -
>> -	gpe->peers[avail] = npe;
>> -	gpe->flags |= PNV_IODA_PE_PEER;
>> -
>> -	/*
>> -	 * We assume that the NPU devices only have a single peer PE
>> -	 * (the GPU PCIe device PE).
>> -	 */
>> -	npe->peers[0] = gpe;
>> -	npe->flags |= PNV_IODA_PE_PEER;
>> -}
>> -
>>  /*
>>   * Enables 32 bit DMA on NPU.
>>   */
>> @@ -302,6 +268,9 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe
> *npe)
>>  			npe->pe_number, npe->pe_number,
>>  			0 /* bypass base */, top);
>>
>> +	if (rc == OPAL_SUCCESS)
>> +		pnv_pci_ioda2_tce_invalidate_entire(phb, false);
>> +
>>  	return rc;
>>  }
>>
>> diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c
> b/arch/powerpc/platforms/powernv/pci-ioda.c
>> index db7695f..922c74c 100644
>> --- a/arch/powerpc/platforms/powernv/pci-ioda.c
>> +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
>> @@ -1816,23 +1816,12 @@ static inline void
> pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
>>  	/* 01xb - invalidate TCEs that match the specified PE# */
>>  	unsigned long val = TCE_KILL_INVAL_PE | (pe->pe_number & 0xFF);
>>  	struct pnv_phb *phb = pe->phb;
>> -	struct pnv_ioda_pe *npe;
>> -	int i;
>>
>>  	if (!phb->ioda.tce_inval_reg)
>>  		return;
>>
>>  	mb(); /* Ensure above stores are visible */
>>  	__raw_writeq(cpu_to_be64(val), phb->ioda.tce_inval_reg);
>> -
>> -	if (pe->flags & PNV_IODA_PE_PEER)
>> -		for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
>> -			npe = pe->peers[i];
>> -			if (!npe || npe->phb->type != PNV_PHB_NPU)
>> -				continue;
>> -
>> -			pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
>> -		}
>>  }
>>
>>  static void pnv_pci_ioda2_do_tce_invalidate(unsigned pe_number, bool rm,
>> @@ -1867,33 +1856,24 @@ static void pnv_pci_ioda2_tce_invalidate(struct
> iommu_table *tbl,
>>  	struct iommu_table_group_link *tgl;
>>
>>  	list_for_each_entry_rcu(tgl, &tbl->it_group_list, next) {
>> -		struct pnv_ioda_pe *npe;
>>  		struct pnv_ioda_pe *pe = container_of(tgl->table_group,
>>  				struct pnv_ioda_pe, table_group);
>>  		__be64 __iomem *invalidate = rm ?
>>  			(__be64 __iomem *)pe->phb->ioda.tce_inval_reg_phys :
>>  			pe->phb->ioda.tce_inval_reg;
>> -		int i;
>>
>> -		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
>> -			invalidate, tbl->it_page_shift,
>> -			index, npages);
>> -
>> -		if (pe->flags & PNV_IODA_PE_PEER)
>> +		if (pe->phb->type == PNV_PHB_NPU) {
>>  			/*
>>  			 * The NVLink hardware does not support TCE kill
>>  			 * per TCE entry so we have to invalidate
>>  			 * the entire cache for it.
>>  			 */
>> -			for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
>> -				npe = pe->peers[i];
>> -				if (!npe || npe->phb->type != PNV_PHB_NPU ||
>> -						!npe->phb->ioda.tce_inval_reg)
>> -					continue;
>> -
>> -				pnv_pci_ioda2_tce_invalidate_entire(npe->phb,
>> -						rm);
>> -			}
>> +			pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
>> +			continue;
>> +		}
>
> Personally I think an if/else instead of the continue would be cleaner here.


I see it as a hack, and what is nice about the 'continue' is that it does not
touch the correct code (which enclosing that code in an "else {}" would).

Also, that would increase the indentation of the 
pnv_pci_ioda2_do_tce_invalidate() below for no good reason.
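
For completeness, the two shapes being discussed; they behave identically,
the only difference is where the NPU special case lives and how deep the
common invalidate call ends up indented:

	/* shape used in the patch: NPU handled as an early bail-out */
	if (pe->phb->type == PNV_PHB_NPU) {
		pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
		continue;
	}
	pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm, invalidate,
			tbl->it_page_shift, index, npages);

	/* alternative with if/else: one more indent level for the common path */
	if (pe->phb->type == PNV_PHB_NPU) {
		pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
	} else {
		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm, invalidate,
				tbl->it_page_shift, index, npages);
	}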


>
>> +		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
>> +			invalidate, tbl->it_page_shift,
>> +			index, npages);
>>  	}
>>  }
>>
>> @@ -3061,26 +3041,6 @@ static void pnv_pci_ioda_create_dbgfs(void)
>>  #endif /* CONFIG_DEBUG_FS */
>>  }
>>
>> -static void pnv_npu_ioda_fixup(void)
>> -{
>> -	bool enable_bypass;
>> -	struct pci_controller *hose, *tmp;
>> -	struct pnv_phb *phb;
>> -	struct pnv_ioda_pe *pe;
>> -
>> -	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
>> -		phb = hose->private_data;
>> -		if (phb->type != PNV_PHB_NPU)
>> -			continue;
>> -
>> -		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
>> -			enable_bypass = dma_get_mask(&pe->pdev->dev) ==
>> -				DMA_BIT_MASK(64);
>> -			pnv_npu_init_dma_pe(pe);
>> -		}
>> -	}
>> -}
>> -
>>  static void pnv_pci_ioda_fixup(void)
>>  {
>>  	pnv_pci_ioda_setup_PEs();
>> @@ -3093,9 +3053,6 @@ static void pnv_pci_ioda_fixup(void)
>>  	eeh_init();
>>  	eeh_addr_cache_build();
>>  #endif
>> -
>> -	/* Link NPU IODA tables to their PCI devices. */
>> -	pnv_npu_ioda_fixup();
>>  }
>>
>>  /*
>> diff --git a/arch/powerpc/platforms/powernv/pci.h
> b/arch/powerpc/platforms/powernv/pci.h
>> index 485e5b1..e117bd8 100644
>> --- a/arch/powerpc/platforms/powernv/pci.h
>> +++ b/arch/powerpc/platforms/powernv/pci.h
>> @@ -24,7 +24,6 @@ enum pnv_phb_model {
>>  #define PNV_IODA_PE_MASTER	(1 << 3)	/* Master PE in compound case	
> */
>>  #define PNV_IODA_PE_SLAVE	(1 << 4)	/* Slave PE in compound case	
> */
>>  #define PNV_IODA_PE_VF		(1 << 5)	/* PE for one VF 	
> 	*/
>> -#define PNV_IODA_PE_PEER	(1 << 6)	/* PE has peers			
> */
>>
>>  /* Data associated with a PE, including IOMMU tracking etc.. */
>>  struct pnv_phb;
>> @@ -32,9 +31,6 @@ struct pnv_ioda_pe {
>>  	unsigned long		flags;
>>  	struct pnv_phb		*phb;
>>
>> -#define PNV_IODA_MAX_PEER_PES	8
>> -	struct pnv_ioda_pe	*peers[PNV_IODA_MAX_PEER_PES];
>> -
>>  	/* A PE can be associated with a single device or an
>>  	 * entire bus (& children). In the former case, pdev
>>  	 * is populated, in the later case, pbus is.
>> @@ -246,8 +242,6 @@ extern void pe_level_printk(const struct pnv_ioda_pe
> *pe, const char *level,
>>  	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
>>
>>  /* Nvlink functions */
>> -extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
>> -extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
>>  extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
>>  extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool
> rm);
>>
>>
>


-- 
Alexey

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 11/11] powerpc/powernv/npu: Enable NVLink pass through
  2016-05-03 14:08   ` Alistair Popple
@ 2016-05-05  5:49     ` Alexey Kardashevskiy
  2016-05-06  1:02       ` Alistair Popple
  0 siblings, 1 reply; 24+ messages in thread
From: Alexey Kardashevskiy @ 2016-05-05  5:49 UTC (permalink / raw)
  To: Alistair Popple
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

On 05/04/2016 12:08 AM, Alistair Popple wrote:
> Hi Alexey,
>
> On Fri, 29 Apr 2016 18:55:24 Alexey Kardashevskiy wrote:
>> IBM POWER8 NVlink systems come with Tesla K40-ish GPUs each of which
>> also has a couple of high-speed links (NVLink). The interface to the links
>> is exposed as an emulated PCI bridge which is included into the same
>> IOMMU group as the corresponding GPU.
>>
>> In the kernel, NPUs get a separate PHB of the PNV_PHB_NPU type and a PE
>> which behaves pretty much as the standard IODA2 PHB, except the NPU PHB has
>> just a single TVE in the hardware which means it can have either
>> 32bit window or 64bit window or DMA bypass but never two of these.
>>
>> In order to make these links work when GPU is passed to the guest,
>> these bridges need to be passed as well; otherwise performance will
>> degrade.
>>
>> This implements and exports API to manage NPU state in regard to VFIO;
>> it replicates iommu_table_group_ops.
>>
>> This defines a new pnv_pci_ioda2_npu_ops which is assigned to
>> the IODA2 bridge if there are NPUs for a GPU on the bridge.
>> The new callbacks call the default IODA2 callbacks plus new NPU API.
>> This adds a gpe_table_group_to_npe() helper to find NPU PE for the IODA2
>> table_group, it is not expected to fail as the helper is only called
>> from the pnv_pci_ioda2_npu_ops.
>>
>> This does not define NPU-specific .release_ownership() so after
>> VFIO is finished, DMA on NPU is disabled which is ok as the nvidia
>> driver sets the DMA mask when probing, which enables 32 or 64bit DMA on the NPU.
>>
>> This adds a pnv_pci_npu_setup_iommu() helper which adds NPUs to
>> the GPU group if any found. The helper uses helpers to look for
>> the "ibm,gpu" property in the device tree which is a phandle of
>> the corresponding GPU.
>>
>> This adds an additional loop over PEs in pnv_ioda_setup_dma() as the main
>> loop skips NPU PEs as they do not have 32bit DMA segments.
>>
>> As pnv_npu_set_window() and pnv_npu_unset_window() are started being used
>> by the new IODA2-NPU IOMMU group, this makes the helpers public and
>> adds the DMA window number parameter.
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> ---
>> Changes:
>> v4:
>> * reused pnv_npu_set_window/pnv_npu_unset_window where possible
>> * added comments, changed commit log
>>
>> v3:
>> * moved NPU-to-GPU IOMMU grouping later after all PHBs are discovered
>> * removed hack to make iommu_add_device() work, iommu_group_add_device()
>> is used instead
>> * cleanup in gpe_table_group_to_npe_cb()
>>
>> v2:
>> * reimplemented to support NPU + GPU in the same group
>> * merged "powerpc/powernv/npu: Add NPU devices to IOMMU group" and
>> "powerpc/powernv/npu: Enable passing through via VFIO" into this patch
>> ---
>>  arch/powerpc/platforms/powernv/npu-dma.c  |  64 +++++++++++++++++--
>>  arch/powerpc/platforms/powernv/pci-ioda.c | 102
> ++++++++++++++++++++++++++++++
>>  arch/powerpc/platforms/powernv/pci.h      |   6 ++
>>  3 files changed, 166 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c
> b/arch/powerpc/platforms/powernv/npu-dma.c
>> index cb2d1da..0459e10 100644
>> --- a/arch/powerpc/platforms/powernv/npu-dma.c
>> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
>> @@ -12,6 +12,7 @@
>>  #include <linux/export.h>
>>  #include <linux/pci.h>
>>  #include <linux/memblock.h>
>> +#include <linux/iommu.h>
>>
>>  #include <asm/iommu.h>
>>  #include <asm/pnv-pci.h>
>> @@ -154,7 +155,7 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct
> pnv_ioda_pe *npe,
>>  	return pe;
>>  }
>>
>> -static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
>> +long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
>>  		struct iommu_table *tbl)
>>  {
>>  	struct pnv_phb *phb = npe->phb;
>> @@ -182,13 +183,13 @@ static long pnv_npu_set_window(struct pnv_ioda_pe
> *npe,
>>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
>>
>>  	/* Add the table to the list so its TCE cache will get invalidated */
>> -	pnv_pci_link_table_and_group(phb->hose->node, 0,
>> +	pnv_pci_link_table_and_group(phb->hose->node, num,
>>  			tbl, &npe->table_group);
>>
>>  	return 0;
>>  }
>>
>> -static long pnv_npu_unset_window(struct pnv_ioda_pe *npe)
>> +long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
>>  {
>>  	struct pnv_phb *phb = npe->phb;
>>  	int64_t rc;
>> @@ -205,7 +206,7 @@ static long pnv_npu_unset_window(struct pnv_ioda_pe
> *npe)
>>  	}
>>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
>>
>> -	pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
>> +	pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
>>  			&npe->table_group);
>>
>>  	return 0;
>> @@ -231,7 +232,7 @@ static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
>>  	if (!gpe)
>>  		return;
>>
>> -	rc = pnv_npu_set_window(npe, gpe->table_group.tables[0]);
>> +	rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]);
>>
>>  	/*
>>  	 * We don't initialise npu_pe->tce32_table as we always use
>> @@ -255,7 +256,7 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe
> *npe)
>>  	if (phb->type != PNV_PHB_NPU || !npe->pdev)
>>  		return -EINVAL;
>>
>> -	rc = pnv_npu_unset_window(npe);
>> +	rc = pnv_npu_unset_window(npe, 0);
>>  	if (rc != OPAL_SUCCESS)
>>  		return rc;
>>
>> @@ -307,3 +308,54 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev,
> bool bypass)
>>  		}
>>  	}
>>  }
>> +
>> +/* Switch ownership from platform code to external user (e.g. VFIO) */
>> +void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
>> +{
>> +	struct pnv_phb *phb = npe->phb;
>> +	int64_t rc;
>> +
>> +	/*
>> +	 * Note: NPU has just a single TVE in the hardware which means that
>> +	 * while used by the kernel, it can have either 32bit window or
>> +	 * DMA bypass but never both. So we deconfigure 32bit window only
>> +	 * if it was enabled at the moment of ownership change.
>> +	 */
>> +	if (npe->table_group.tables[0]) {
>> +		pnv_npu_unset_window(npe, 0);
>> +		return;
>> +	}
>> +
>> +	/* Disable bypass */
>> +	rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
>> +			npe->pe_number, npe->pe_number,
>> +			0 /* bypass base */, 0);
>> +	if (rc) {
>> +		pe_err(npe, "Failed to disable bypass, err %lld\n", rc);
>> +		return;
>> +	}
>> +	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
>> +}
>> +
>> +struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
>> +{
>> +	struct pnv_phb *phb = npe->phb;
>> +	struct pci_bus *pbus = phb->hose->bus;
>> +	struct pci_dev *npdev, *gpdev = NULL, *gptmp;
>> +	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
>> +
>> +	if (!gpe || !gpdev)
>> +		return NULL;
>> +
>> +	list_for_each_entry(npdev, &pbus->devices, bus_list) {
>> +		gptmp = pnv_pci_get_gpu_dev(npdev);
>> +
>> +		if (gptmp != gpdev)
>> +			continue;
>
> If I'm not mistaken it looks like you are trying to find all the NPU PEs also
> attached to the same GPU on the same (firmware emulated) NPU PHB? I'm just
> curious - does this work if the GPU has say 2 NPU PE#s (ie. links) on
> different NPU PHB's?
>
>> +		pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev));
>> +		iommu_group_add_device(gpe->table_group.group, &npdev->dev);
>> +	}
>> +
>> +	return gpe;
>> +}
>> diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c
> b/arch/powerpc/platforms/powernv/pci-ioda.c
>> index 922c74c..feabe50 100644
>> --- a/arch/powerpc/platforms/powernv/pci-ioda.c
>> +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
>> @@ -2273,6 +2273,90 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops
> = {
>>  	.take_ownership = pnv_ioda2_take_ownership,
>>  	.release_ownership = pnv_ioda2_release_ownership,
>>  };
>> +
>> +static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
>> +{
>> +	struct pci_controller *hose;
>> +	struct pnv_phb *phb;
>> +	struct pnv_ioda_pe **ptmppe = opaque;
>> +	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
>> +	struct pci_dn *pdn = pci_get_pdn(pdev);
>> +
>> +	if (!pdn || pdn->pe_number == IODA_INVALID_PE)
>> +		return 0;
>> +
>> +	hose = pci_bus_to_host(pdev->bus);
>> +	phb = hose->private_data;
>> +	if (phb->type != PNV_PHB_NPU)
>> +		return 0;
>> +
>> +	*ptmppe = &phb->ioda.pe_array[pdn->pe_number];
>> +
>> +	return 1;
>> +}
>> +
>> +/*
>> + * This returns PE of associated NPU.
>> + * This assumes that NPU is in the same IOMMU group with GPU and there is
>> + * no other PEs.
>> + */
>
> Which answers my above question as this doesn't look like it supports GPUs
> with multiple NPU PE#s attached. I don't think this is actually a problem
> though as no hardware I know of will ever do this, even though theoretically
> it's possible.


I believe that if such a situation arises in the future, it will be on a
different PVR, with PHB4 (or 5, or whatever) and IODA3 (or 4, or ...).

I could write the code on the assumption that there can be more than one NPU
PE per GPU PE, but it does not make much sense (to me) to design the hardware
this way, and when/if it is designed this way, the details will most likely
differ from what I can imagine today.



>
> It might be nice if we could warn if this configuration is detected but not
> really a big issue.
 >
> Everything else looks reasonable as far as I can tell
> (although again I'm no vfio iommu groups expert) so happy for you to add my
> reviewed-by:
>
> Reviewed-By: Alistair Popple <alistair@popple.id.au>


Thanks!

>
>> +static struct pnv_ioda_pe *gpe_table_group_to_npe(
>> +		struct iommu_table_group *table_group)
>> +{
>> +	struct pnv_ioda_pe *npe = NULL;
>> +	int ret = iommu_group_for_each_dev(table_group->group, &npe,
>> +			gpe_table_group_to_npe_cb);
>> +
>> +	BUG_ON(!ret || !npe);
>> +
>> +	return npe;
>> +}
>> +
>> +static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group
> *table_group,
>> +		int num, struct iommu_table *tbl)
>> +{
>> +	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	ret = pnv_npu_set_window(gpe_table_group_to_npe(table_group), num,
> tbl);
>> +	if (ret)
>> +		pnv_pci_ioda2_unset_window(table_group, num);
>> +
>> +	return ret;
>> +}
>> +
>> +static long pnv_pci_ioda2_npu_unset_window(
>> +		struct iommu_table_group *table_group,
>> +		int num)
>> +{
>> +	long ret = pnv_pci_ioda2_unset_window(table_group, num);
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	return pnv_npu_unset_window(gpe_table_group_to_npe(table_group), num);
>> +}
>> +
>> +static void pnv_ioda2_npu_take_ownership(struct iommu_table_group
> *table_group)
>> +{
>> +	/*
>> +	 * Detach NPU first as pnv_ioda2_take_ownership() will destroy
>> +	 * the iommu_table if 32bit DMA is enabled.
>> +	 */
>> +	pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
>> +	pnv_ioda2_take_ownership(table_group);
>> +}
>> +
>> +static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
>> +	.get_table_size = pnv_pci_ioda2_get_table_size,
>> +	.create_table = pnv_pci_ioda2_create_table,
>> +	.set_window = pnv_pci_ioda2_npu_set_window,
>> +	.unset_window = pnv_pci_ioda2_npu_unset_window,
>> +	.take_ownership = pnv_ioda2_npu_take_ownership,
>> +	.release_ownership = pnv_ioda2_release_ownership,
>> +};
>>  #endif
>>
>>  static void pnv_pci_ioda_setup_opal_tce_kill(struct pnv_phb *phb)
>> @@ -3012,6 +3096,7 @@ static void pnv_pci_ioda_setup_DMA(void)
>>  {
>>  	struct pci_controller *hose, *tmp;
>>  	struct pnv_phb *phb;
>> +	struct pnv_ioda_pe *pe, *gpe;
>>
>>  	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
>>  		pnv_ioda_setup_dma(hose->private_data);
>> @@ -3020,6 +3105,23 @@ static void pnv_pci_ioda_setup_DMA(void)
>>  		phb = hose->private_data;
>>  		phb->initialized = 1;
>>  	}
>> +
>> +	/*
>> +	 * Now we have all PHBs discovered, time to add NPU devices to
>> +	 * the corresponding IOMMU groups.
>> +	 */
>> +	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
>> +		phb = hose->private_data;
>> +
>> +		if (phb->type != PNV_PHB_NPU)
>> +			continue;
>> +
>> +		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
>> +			gpe = pnv_pci_npu_setup_iommu(pe);
>> +			if (gpe)
>> +				gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
>> +		}
>> +	}
>>  }
>>
>>  static void pnv_pci_ioda_create_dbgfs(void)
>> diff --git a/arch/powerpc/platforms/powernv/pci.h
> b/arch/powerpc/platforms/powernv/pci.h
>> index e117bd8..ebc6ee4 100644
>> --- a/arch/powerpc/platforms/powernv/pci.h
>> +++ b/arch/powerpc/platforms/powernv/pci.h
>> @@ -244,5 +244,11 @@ extern void pe_level_printk(const struct pnv_ioda_pe
> *pe, const char *level,
>>  /* Nvlink functions */
>>  extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
>>  extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool
> rm);
>> +extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe
> *npe);
>> +extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
>> +		struct iommu_table *tbl);
>> +extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
>> +extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
>> +extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
>>
>>  #endif /* __POWERNV_PCI_H */
>>
>


-- 
Alexey

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 11/11] powerpc/powernv/npu: Enable NVLink pass through
  2016-05-05  5:49     ` Alexey Kardashevskiy
@ 2016-05-06  1:02       ` Alistair Popple
  0 siblings, 0 replies; 24+ messages in thread
From: Alistair Popple @ 2016-05-06  1:02 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

On Thu, 5 May 2016 15:49:18 Alexey Kardashevskiy wrote:
> On 05/04/2016 12:08 AM, Alistair Popple wrote:
> > Hi Alexey,
> >
> > On Fri, 29 Apr 2016 18:55:24 Alexey Kardashevskiy wrote:
> >> IBM POWER8 NVlink systems come with Tesla K40-ish GPUs each of which
> >> also has a couple of fast speed links (NVLink). The interface to links
> >> is exposed as an emulated PCI bridge which is included into the same
> >> IOMMU group as the corresponding GPU.
> >>
> >> In the kernel, NPUs get a separate PHB of the PNV_PHB_NPU type and a PE
> >> which behave pretty much as the standard IODA2 PHB except NPU PHB has
> >> just a single TVE in the hardware which means it can have either
> >> 32bit window or 64bit window or DMA bypass but never two of these.
> >>
> >> In order to make these links work when GPU is passed to the guest,
> >> these bridges need to be passed as well; otherwise performance will
> >> degrade.
> >>
> >> This implements and exports API to manage NPU state in regard to VFIO;
> >> it replicates iommu_table_group_ops.
> >>
> >> This defines a new pnv_pci_ioda2_npu_ops which is assigned to
> >> the IODA2 bridge if there are NPUs for a GPU on the bridge.
> >> The new callbacks call the default IODA2 callbacks plus new NPU API.
> >> This adds a gpe_table_group_to_npe() helper to find NPU PE for the IODA2
> >> table_group, it is not expected to fail as the helper is only called
> >> from the pnv_pci_ioda2_npu_ops.
> >>
> >> This does not define NPU-specific .release_ownership() so after
> >> VFIO is finished, DMA on NPU is disabled which is ok as the nvidia
> >> driver sets DMA mask when probing which enable 32 or 64bit DMA on NPU.
> >>
> >> This adds a pnv_pci_npu_setup_iommu() helper which adds NPUs to
> >> the GPU group if any found. The helper uses helpers to look for
> >> the "ibm,gpu" property in the device tree which is a phandle of
> >> the corresponding GPU.
> >>
> >> This adds an additional loop over PEs in pnv_ioda_setup_dma() as the main
> >> loop skips NPU PEs as they do not have 32bit DMA segments.
> >>
> >> As pnv_npu_set_window() and pnv_npu_unset_window() are started being used
> >> by the new IODA2-NPU IOMMU group, this makes the helpers public and
> >> adds the DMA window number parameter.
> >>
> >> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >> ---
> >> Changes:
> >> v4:
> >> * reused pnv_npu_set_window/pnv_npu_unset_window where possible
> >> * added comments, changed commit log
> >>
> >> v3:
> >> * moved NPU-to-GPU IOMMU grouping later after all PHBs are discovered
> >> * removed hack to make iommu_add_device() work, iommu_group_add_device()
> >> is used instead
> >> * cleanup in gpe_table_group_to_npe_cb()
> >>
> >> v2:
> >> * reimplemented to support NPU + GPU in the same group
> >> * merged "powerpc/powernv/npu: Add NPU devices to IOMMU group" and
> >> "powerpc/powernv/npu: Enable passing through via VFIO" into this patch
> >> ---
> >>  arch/powerpc/platforms/powernv/npu-dma.c  |  64 +++++++++++++++++--
> >>  arch/powerpc/platforms/powernv/pci-ioda.c | 102
> > ++++++++++++++++++++++++++++++
> >>  arch/powerpc/platforms/powernv/pci.h      |   6 ++
> >>  3 files changed, 166 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c
> > b/arch/powerpc/platforms/powernv/npu-dma.c
> >> index cb2d1da..0459e10 100644
> >> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> >> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> >> @@ -12,6 +12,7 @@
> >>  #include <linux/export.h>
> >>  #include <linux/pci.h>
> >>  #include <linux/memblock.h>
> >> +#include <linux/iommu.h>
> >>
> >>  #include <asm/iommu.h>
> >>  #include <asm/pnv-pci.h>
> >> @@ -154,7 +155,7 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct
> > pnv_ioda_pe *npe,
> >>  	return pe;
> >>  }
> >>
> >> -static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
> >> +long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
> >>  		struct iommu_table *tbl)
> >>  {
> >>  	struct pnv_phb *phb = npe->phb;
> >> @@ -182,13 +183,13 @@ static long pnv_npu_set_window(struct pnv_ioda_pe
> > *npe,
> >>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
> >>
> >>  	/* Add the table to the list so its TCE cache will get invalidated */
> >> -	pnv_pci_link_table_and_group(phb->hose->node, 0,
> >> +	pnv_pci_link_table_and_group(phb->hose->node, num,
> >>  			tbl, &npe->table_group);
> >>
> >>  	return 0;
> >>  }
> >>
> >> -static long pnv_npu_unset_window(struct pnv_ioda_pe *npe)
> >> +long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
> >>  {
> >>  	struct pnv_phb *phb = npe->phb;
> >>  	int64_t rc;
> >> @@ -205,7 +206,7 @@ static long pnv_npu_unset_window(struct pnv_ioda_pe
> > *npe)
> >>  	}
> >>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
> >>
> >> -	pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
> >> +	pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
> >>  			&npe->table_group);
> >>
> >>  	return 0;
> >> @@ -231,7 +232,7 @@ static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
> >>  	if (!gpe)
> >>  		return;
> >>
> >> -	rc = pnv_npu_set_window(npe, gpe->table_group.tables[0]);
> >> +	rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]);
> >>
> >>  	/*
> >>  	 * We don't initialise npu_pe->tce32_table as we always use
> >> @@ -255,7 +256,7 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe
> > *npe)
> >>  	if (phb->type != PNV_PHB_NPU || !npe->pdev)
> >>  		return -EINVAL;
> >>
> >> -	rc = pnv_npu_unset_window(npe);
> >> +	rc = pnv_npu_unset_window(npe, 0);
> >>  	if (rc != OPAL_SUCCESS)
> >>  		return rc;
> >>
> >> @@ -307,3 +308,54 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev,
> > bool bypass)
> >>  		}
> >>  	}
> >>  }
> >> +
> >> +/* Switch ownership from platform code to external user (e.g. VFIO) */
> >> +void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
> >> +{
> >> +	struct pnv_phb *phb = npe->phb;
> >> +	int64_t rc;
> >> +
> >> +	/*
> >> +	 * Note: NPU has just a single TVE in the hardware which means that
> >> +	 * while used by the kernel, it can have either 32bit window or
> >> +	 * DMA bypass but never both. So we deconfigure 32bit window only
> >> +	 * if it was enabled at the moment of ownership change.
> >> +	 */
> >> +	if (npe->table_group.tables[0]) {
> >> +		pnv_npu_unset_window(npe, 0);
> >> +		return;
> >> +	}
> >> +
> >> +	/* Disable bypass */
> >> +	rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
> >> +			npe->pe_number, npe->pe_number,
> >> +			0 /* bypass base */, 0);
> >> +	if (rc) {
> >> +		pe_err(npe, "Failed to disable bypass, err %lld\n", rc);
> >> +		return;
> >> +	}
> >> +	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
> >> +}
> >> +
> >> +struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
> >> +{
> >> +	struct pnv_phb *phb = npe->phb;
> >> +	struct pci_bus *pbus = phb->hose->bus;
> >> +	struct pci_dev *npdev, *gpdev = NULL, *gptmp;
> >> +	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
> >> +
> >> +	if (!gpe || !gpdev)
> >> +		return NULL;
> >> +
> >> +	list_for_each_entry(npdev, &pbus->devices, bus_list) {
> >> +		gptmp = pnv_pci_get_gpu_dev(npdev);
> >> +
> >> +		if (gptmp != gpdev)
> >> +			continue;
> >
> > If I'm not mistaken it looks like you are trying to find all the NPU PEs also
> > attached to the same GPU on the same (firmware emulated) NPU PHB? I'm just
> > curious - does this work if the GPU has say 2 NPU PE#s (ie. links) on
> > different NPU PHB's?
> >
> >> +		pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev));
> >> +		iommu_group_add_device(gpe->table_group.group, &npdev->dev);
> >> +	}
> >> +
> >> +	return gpe;
> >> +}
> >> diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c
> > b/arch/powerpc/platforms/powernv/pci-ioda.c
> >> index 922c74c..feabe50 100644
> >> --- a/arch/powerpc/platforms/powernv/pci-ioda.c
> >> +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
> >> @@ -2273,6 +2273,90 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops
> > = {
> >>  	.take_ownership = pnv_ioda2_take_ownership,
> >>  	.release_ownership = pnv_ioda2_release_ownership,
> >>  };
> >> +
> >> +static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
> >> +{
> >> +	struct pci_controller *hose;
> >> +	struct pnv_phb *phb;
> >> +	struct pnv_ioda_pe **ptmppe = opaque;
> >> +	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
> >> +	struct pci_dn *pdn = pci_get_pdn(pdev);
> >> +
> >> +	if (!pdn || pdn->pe_number == IODA_INVALID_PE)
> >> +		return 0;
> >> +
> >> +	hose = pci_bus_to_host(pdev->bus);
> >> +	phb = hose->private_data;
> >> +	if (phb->type != PNV_PHB_NPU)
> >> +		return 0;
> >> +
> >> +	*ptmppe = &phb->ioda.pe_array[pdn->pe_number];
> >> +
> >> +	return 1;
> >> +}
> >> +
> >> +/*
> >> + * This returns PE of associated NPU.
> >> + * This assumes that NPU is in the same IOMMU group with GPU and there is
> >> + * no other PEs.
> >> + */
> >
> > Which answers my above question as this doesn't look like it supports GPUs
> > with multiple NPU PE#s attached. I don't think this is actually a problem
> > though as no hardware I know of will ever do this, even though theoretically
> > it's possible.
> 
> 
> I believe that if such a situation arises in the future, it will be on a
> different PVR, with PHB4 (or 5, or whatever) and IODA3 (or 4, or ...).

In *theory* it could still happen on PHB3/IODA2.
 
> I could write the code on the assumption that there can be more than one NPU
> PE per GPU PE, but it does not make much sense (to me) to design the hardware
> this way, and when/if it is designed this way, the details will most likely
> differ from what I can imagine today.

I completely agree. There is no point in reworking it and adding complexity
for a situation that most likely will never exist. I just wanted to get an
understanding of any restrictions in case something does change in future.
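
If such a warning were ever wanted, a detection-only check could stay fairly
small. A hypothetical sketch, reusing the iommu_group_for_each_dev() walk the
patch already relies on (the helper names and the WARN_ONCE() placement are
assumptions, not part of the posted series):

/*
 * Hypothetical sketch, not in the posted series: warn if a GPU PE ends up
 * with NPU devices behind more than one NPU PHB.
 */
static int npu_phb_count_cb(struct device *dev, void *opaque)
{
	struct pnv_phb **last_phb = opaque;
	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
	struct pnv_phb *phb = hose->private_data;

	if (phb->type != PNV_PHB_NPU)
		return 0;

	/* Stop the walk as soon as a second, different NPU PHB shows up */
	if (*last_phb && *last_phb != phb)
		return 1;

	*last_phb = phb;
	return 0;
}

static void pnv_npu_warn_on_multiple_phbs(struct pnv_ioda_pe *gpe)
{
	struct pnv_phb *phb = NULL;

	/* Non-zero means the callback found NPU devices behind two PHBs */
	WARN_ONCE(iommu_group_for_each_dev(gpe->table_group.group, &phb,
				npu_phb_count_cb),
		  "GPU PE has NPU links behind more than one NPU PHB\n");
}

Called once after the NPU devices have been added to the GPU group (e.g. at
the end of pnv_pci_npu_setup_iommu()), it would flag the unsupported topology
without changing any behaviour.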

> >
> > It might be nice if we could warn if this configuration is detected but not
> > really a big issue.
>  >
> > Everything else looks reasonable as far as I can tell
> > (although again I'm no vfio iommu groups expert) so happy for you to add my
> > reviewed-by:
> >
> > Reviewed-By: Alistair Popple <alistair@popple.id.au>
> 
> 
> Thanks!
> 
> >
> >> +static struct pnv_ioda_pe *gpe_table_group_to_npe(
> >> +		struct iommu_table_group *table_group)
> >> +{
> >> +	struct pnv_ioda_pe *npe = NULL;
> >> +	int ret = iommu_group_for_each_dev(table_group->group, &npe,
> >> +			gpe_table_group_to_npe_cb);
> >> +
> >> +	BUG_ON(!ret || !npe);
> >> +
> >> +	return npe;
> >> +}
> >> +
> >> +static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group
> > *table_group,
> >> +		int num, struct iommu_table *tbl)
> >> +{
> >> +	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
> >> +
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	ret = pnv_npu_set_window(gpe_table_group_to_npe(table_group), num,
> > tbl);
> >> +	if (ret)
> >> +		pnv_pci_ioda2_unset_window(table_group, num);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +static long pnv_pci_ioda2_npu_unset_window(
> >> +		struct iommu_table_group *table_group,
> >> +		int num)
> >> +{
> >> +	long ret = pnv_pci_ioda2_unset_window(table_group, num);
> >> +
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	return pnv_npu_unset_window(gpe_table_group_to_npe(table_group), num);
> >> +}
> >> +
> >> +static void pnv_ioda2_npu_take_ownership(struct iommu_table_group
> > *table_group)
> >> +{
> >> +	/*
> >> +	 * Detach NPU first as pnv_ioda2_take_ownership() will destroy
> >> +	 * the iommu_table if 32bit DMA is enabled.
> >> +	 */
> >> +	pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
> >> +	pnv_ioda2_take_ownership(table_group);
> >> +}
> >> +
> >> +static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
> >> +	.get_table_size = pnv_pci_ioda2_get_table_size,
> >> +	.create_table = pnv_pci_ioda2_create_table,
> >> +	.set_window = pnv_pci_ioda2_npu_set_window,
> >> +	.unset_window = pnv_pci_ioda2_npu_unset_window,
> >> +	.take_ownership = pnv_ioda2_npu_take_ownership,
> >> +	.release_ownership = pnv_ioda2_release_ownership,
> >> +};
> >>  #endif
> >>
> >>  static void pnv_pci_ioda_setup_opal_tce_kill(struct pnv_phb *phb)
> >> @@ -3012,6 +3096,7 @@ static void pnv_pci_ioda_setup_DMA(void)
> >>  {
> >>  	struct pci_controller *hose, *tmp;
> >>  	struct pnv_phb *phb;
> >> +	struct pnv_ioda_pe *pe, *gpe;
> >>
> >>  	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
> >>  		pnv_ioda_setup_dma(hose->private_data);
> >> @@ -3020,6 +3105,23 @@ static void pnv_pci_ioda_setup_DMA(void)
> >>  		phb = hose->private_data;
> >>  		phb->initialized = 1;
> >>  	}
> >> +
> >> +	/*
> >> +	 * Now we have all PHBs discovered, time to add NPU devices to
> >> +	 * the corresponding IOMMU groups.
> >> +	 */
> >> +	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
> >> +		phb = hose->private_data;
> >> +
> >> +		if (phb->type != PNV_PHB_NPU)
> >> +			continue;
> >> +
> >> +		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
> >> +			gpe = pnv_pci_npu_setup_iommu(pe);
> >> +			if (gpe)
> >> +				gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
> >> +		}
> >> +	}
> >>  }
> >>
> >>  static void pnv_pci_ioda_create_dbgfs(void)
> >> diff --git a/arch/powerpc/platforms/powernv/pci.h
> > b/arch/powerpc/platforms/powernv/pci.h
> >> index e117bd8..ebc6ee4 100644
> >> --- a/arch/powerpc/platforms/powernv/pci.h
> >> +++ b/arch/powerpc/platforms/powernv/pci.h
> >> @@ -244,5 +244,11 @@ extern void pe_level_printk(const struct pnv_ioda_pe
> > *pe, const char *level,
> >>  /* Nvlink functions */
> >>  extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
> >>  extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool
> > rm);
> >> +extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe
> > *npe);
> >> +extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
> >> +		struct iommu_table *tbl);
> >> +extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
> >> +extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
> >> +extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
> >>
> >>  #endif /* __POWERNV_PCI_H */
> >>
> >
> 
> 
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH kernel v4 10/11] powerpc/powernv/npu: Rework TCE Kill handling
  2016-05-05  4:23     ` Alexey Kardashevskiy
@ 2016-05-06  1:11       ` Alistair Popple
  0 siblings, 0 replies; 24+ messages in thread
From: Alistair Popple @ 2016-05-06  1:11 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Dan Carpenter, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey, kvm, linux-kernel

On Thu, 5 May 2016 14:23:20 Alexey Kardashevskiy wrote:
> On 05/03/2016 05:37 PM, Alistair Popple wrote:
> > On Fri, 29 Apr 2016 18:55:23 Alexey Kardashevskiy wrote:
> >> The pnv_ioda_pe struct keeps an array of peers. At the moment it is only
> >> used to link GPU and NPU for 2 purposes:
> >>
> >> 1. Access NPU quickly when configuring DMA for GPU - this was addressed
> >> in the previos patch by removing use of it as DMA setup is not what
> >> the kernel would constantly do.
> >
> > Agreed. It was used here because the peer array was added to deal with (2)
> > below ...
> >
> >> 2. Invalidate TCE cache for NPU when it is invalidated for GPU.
> >> GPU and NPU are in different PE. There is already a mechanism to
> >> attach multiple iommu_table_group to the same iommu_table (used for VFIO),
> >> we can reuse it here so does this patch.
> >
> > ... because we weren't aware iommu_table_group could be used to do this
> > instead.
> >
> >> This gets rid of peers[] array and PNV_IODA_PE_PEER flag as they are
> >> not needed anymore.
> >
> > Happy to see it go. I'm not too familiar with iommu groups but based on the
> > code and what you have described to me both here and offline everything looks
> > good to me. One pretty minor style comment below.
> >
> > Reviewed-By: Alistair Popple <alistair@popple.id.au>
> >
> >> While we are here, add TCE cache invalidation after enabling bypass.
> >>
> >> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >> ---
> >> Changes:
> >> v4:
> >> * reworked as "powerpc/powernv/npu: Add set/unset window helpers" has been
> >> added
> >> ---
> >>  arch/powerpc/platforms/powernv/npu-dma.c  | 69
> > +++++++++----------------------
> >>  arch/powerpc/platforms/powernv/pci-ioda.c | 57 ++++---------------------
> >>  arch/powerpc/platforms/powernv/pci.h      |  6 ---
> >>  3 files changed, 26 insertions(+), 106 deletions(-)
> >>
> >> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c
> > b/arch/powerpc/platforms/powernv/npu-dma.c
> >> index 800d70f..cb2d1da 100644
> >> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> >> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> >> @@ -136,22 +136,17 @@ static struct pnv_ioda_pe
> > *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
> >>  	struct pnv_ioda_pe *pe;
> >>  	struct pci_dn *pdn;
> >>
> >> -	if (npe->flags & PNV_IODA_PE_PEER) {
> >> -		pe = npe->peers[0];
> >> -		pdev = pe->pdev;
> >> -	} else {
> >> -		pdev = pnv_pci_get_gpu_dev(npe->pdev);
> >> -		if (!pdev)
> >> -			return NULL;
> >> +	pdev = pnv_pci_get_gpu_dev(npe->pdev);
> >> +	if (!pdev)
> >> +		return NULL;
> >>
> >> -		pdn = pci_get_pdn(pdev);
> >> -		if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
> >> -			return NULL;
> >> +	pdn = pci_get_pdn(pdev);
> >> +	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
> >> +		return NULL;
> >>
> >> -		hose = pci_bus_to_host(pdev->bus);
> >> -		phb = hose->private_data;
> >> -		pe = &phb->ioda.pe_array[pdn->pe_number];
> >> -	}
> >> +	hose = pci_bus_to_host(pdev->bus);
> >> +	phb = hose->private_data;
> >> +	pe = &phb->ioda.pe_array[pdn->pe_number];
> >>
> >>  	if (gpdev)
> >>  		*gpdev = pdev;
> >> @@ -186,6 +181,10 @@ static long pnv_npu_set_window(struct pnv_ioda_pe *npe,
> >>  	}
> >>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
> >>
> >> +	/* Add the table to the list so its TCE cache will get invalidated */
> >> +	pnv_pci_link_table_and_group(phb->hose->node, 0,
> >> +			tbl, &npe->table_group);
> >
> > Where tbl is associated with the GPU and is what links the NPU and GPU PEs.
> >
> >>  	return 0;
> >>  }
> >>
> >> @@ -206,45 +205,12 @@ static long pnv_npu_unset_window(struct pnv_ioda_pe
> > *npe)
> >>  	}
> >>  	pnv_pci_ioda2_tce_invalidate_entire(phb, false);
> >>
> >> +	pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
> >> +			&npe->table_group);
> >> +
> >>  	return 0;
> >>  }
> >>
> >> -void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
> >> -{
> >> -	struct pnv_ioda_pe *gpe;
> >> -	struct pci_dev *gpdev;
> >> -	int i, avail = -1;
> >> -
> >> -	if (!npe->pdev || !(npe->flags & PNV_IODA_PE_DEV))
> >> -		return;
> >> -
> >> -	gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
> >> -	if (!gpe)
> >> -		return;
> >> -
> >> -	for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
> >> -		/* Nothing to do if the PE is already connected. */
> >> -		if (gpe->peers[i] == npe)
> >> -			return;
> >> -
> >> -		if (!gpe->peers[i])
> >> -			avail = i;
> >> -	}
> >> -
> >> -	if (WARN_ON(avail < 0))
> >> -		return;
> >> -
> >> -	gpe->peers[avail] = npe;
> >> -	gpe->flags |= PNV_IODA_PE_PEER;
> >> -
> >> -	/*
> >> -	 * We assume that the NPU devices only have a single peer PE
> >> -	 * (the GPU PCIe device PE).
> >> -	 */
> >> -	npe->peers[0] = gpe;
> >> -	npe->flags |= PNV_IODA_PE_PEER;
> >> -}
> >> -
> >>  /*
> >>   * Enables 32 bit DMA on NPU.
> >>   */
> >> @@ -302,6 +268,9 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe
> > *npe)
> >>  			npe->pe_number, npe->pe_number,
> >>  			0 /* bypass base */, top);
> >>
> >> +	if (rc == OPAL_SUCCESS)
> >> +		pnv_pci_ioda2_tce_invalidate_entire(phb, false);
> >> +
> >>  	return rc;
> >>  }
> >>
> >> diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c
> > b/arch/powerpc/platforms/powernv/pci-ioda.c
> >> index db7695f..922c74c 100644
> >> --- a/arch/powerpc/platforms/powernv/pci-ioda.c
> >> +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
> >> @@ -1816,23 +1816,12 @@ static inline void
> > pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
> >>  	/* 01xb - invalidate TCEs that match the specified PE# */
> >>  	unsigned long val = TCE_KILL_INVAL_PE | (pe->pe_number & 0xFF);
> >>  	struct pnv_phb *phb = pe->phb;
> >> -	struct pnv_ioda_pe *npe;
> >> -	int i;
> >>
> >>  	if (!phb->ioda.tce_inval_reg)
> >>  		return;
> >>
> >>  	mb(); /* Ensure above stores are visible */
> >>  	__raw_writeq(cpu_to_be64(val), phb->ioda.tce_inval_reg);
> >> -
> >> -	if (pe->flags & PNV_IODA_PE_PEER)
> >> -		for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
> >> -			npe = pe->peers[i];
> >> -			if (!npe || npe->phb->type != PNV_PHB_NPU)
> >> -				continue;
> >> -
> >> -			pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
> >> -		}
> >>  }
> >>
> >>  static void pnv_pci_ioda2_do_tce_invalidate(unsigned pe_number, bool rm,
> >> @@ -1867,33 +1856,24 @@ static void pnv_pci_ioda2_tce_invalidate(struct
> > iommu_table *tbl,
> >>  	struct iommu_table_group_link *tgl;
> >>
> >>  	list_for_each_entry_rcu(tgl, &tbl->it_group_list, next) {
> >> -		struct pnv_ioda_pe *npe;
> >>  		struct pnv_ioda_pe *pe = container_of(tgl->table_group,
> >>  				struct pnv_ioda_pe, table_group);
> >>  		__be64 __iomem *invalidate = rm ?
> >>  			(__be64 __iomem *)pe->phb->ioda.tce_inval_reg_phys :
> >>  			pe->phb->ioda.tce_inval_reg;
> >> -		int i;
> >>
> >> -		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
> >> -			invalidate, tbl->it_page_shift,
> >> -			index, npages);
> >> -
> >> -		if (pe->flags & PNV_IODA_PE_PEER)
> >> +		if (pe->phb->type == PNV_PHB_NPU) {
> >>  			/*
> >>  			 * The NVLink hardware does not support TCE kill
> >>  			 * per TCE entry so we have to invalidate
> >>  			 * the entire cache for it.
> >>  			 */
> >> -			for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
> >> -				npe = pe->peers[i];
> >> -				if (!npe || npe->phb->type != PNV_PHB_NPU ||
> >> -						!npe->phb->ioda.tce_inval_reg)
> >> -					continue;
> >> -
> >> -				pnv_pci_ioda2_tce_invalidate_entire(npe->phb,
> >> -						rm);
> >> -			}
> >> +			pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
> >> +			continue;
> >> +		}
> >
> > Personally I think an if/else instead of the continue would be cleaner here.
> 
> 
> I see it as a hack which is nice to have in a way that it does not touch 
> the correct code (in this case - by enclosing it into "else{}").
> 
> Also, that would increase the indentation of the 
> pnv_pci_ioda2_do_tce_invalidate() below for no good reason.

As I said, it really is just a matter of personal preference. My only reason
for preferring if/else is that it reduces the chance of someone (read: me)
adding something common that needs to happen after either invalidate, missing
the continue, and then having it not work on NPU.

On the other hand, it's still obvious enough, and anyone modifying the code
should probably read the existing code first anyway. It doesn't actually need
to be changed.
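
For anyone reading along, the two shapes differ only in the loop body of
pnv_pci_ioda2_tce_invalidate(); the if/else variant being discussed would
look roughly like this (a sketch for comparison only, not what was posted
or merged):

/* if/else form of the loop; functionally equivalent to the patch's version */
list_for_each_entry_rcu(tgl, &tbl->it_group_list, next) {
	struct pnv_ioda_pe *pe = container_of(tgl->table_group,
			struct pnv_ioda_pe, table_group);
	__be64 __iomem *invalidate = rm ?
		(__be64 __iomem *)pe->phb->ioda.tce_inval_reg_phys :
		pe->phb->ioda.tce_inval_reg;

	/*
	 * NVLink has no per-entry TCE kill, so the whole TCE cache
	 * has to be invalidated for the NPU.
	 */
	if (pe->phb->type == PNV_PHB_NPU)
		pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
	else
		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
				invalidate, tbl->it_page_shift,
				index, npages);
}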

- Alistair

> >
> >> +		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
> >> +			invalidate, tbl->it_page_shift,
> >> +			index, npages);
> >>  	}
> >>  }
> >>
> >> @@ -3061,26 +3041,6 @@ static void pnv_pci_ioda_create_dbgfs(void)
> >>  #endif /* CONFIG_DEBUG_FS */
> >>  }
> >>
> >> -static void pnv_npu_ioda_fixup(void)
> >> -{
> >> -	bool enable_bypass;
> >> -	struct pci_controller *hose, *tmp;
> >> -	struct pnv_phb *phb;
> >> -	struct pnv_ioda_pe *pe;
> >> -
> >> -	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
> >> -		phb = hose->private_data;
> >> -		if (phb->type != PNV_PHB_NPU)
> >> -			continue;
> >> -
> >> -		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
> >> -			enable_bypass = dma_get_mask(&pe->pdev->dev) ==
> >> -				DMA_BIT_MASK(64);
> >> -			pnv_npu_init_dma_pe(pe);
> >> -		}
> >> -	}
> >> -}
> >> -
> >>  static void pnv_pci_ioda_fixup(void)
> >>  {
> >>  	pnv_pci_ioda_setup_PEs();
> >> @@ -3093,9 +3053,6 @@ static void pnv_pci_ioda_fixup(void)
> >>  	eeh_init();
> >>  	eeh_addr_cache_build();
> >>  #endif
> >> -
> >> -	/* Link NPU IODA tables to their PCI devices. */
> >> -	pnv_npu_ioda_fixup();
> >>  }
> >>
> >>  /*
> >> diff --git a/arch/powerpc/platforms/powernv/pci.h
> > b/arch/powerpc/platforms/powernv/pci.h
> >> index 485e5b1..e117bd8 100644
> >> --- a/arch/powerpc/platforms/powernv/pci.h
> >> +++ b/arch/powerpc/platforms/powernv/pci.h
> >> @@ -24,7 +24,6 @@ enum pnv_phb_model {
> >>  #define PNV_IODA_PE_MASTER	(1 << 3)	/* Master PE in compound case	
> > */
> >>  #define PNV_IODA_PE_SLAVE	(1 << 4)	/* Slave PE in compound case	
> > */
> >>  #define PNV_IODA_PE_VF		(1 << 5)	/* PE for one VF 	
> > 	*/
> >> -#define PNV_IODA_PE_PEER	(1 << 6)	/* PE has peers			
> > */
> >>
> >>  /* Data associated with a PE, including IOMMU tracking etc.. */
> >>  struct pnv_phb;
> >> @@ -32,9 +31,6 @@ struct pnv_ioda_pe {
> >>  	unsigned long		flags;
> >>  	struct pnv_phb		*phb;
> >>
> >> -#define PNV_IODA_MAX_PEER_PES	8
> >> -	struct pnv_ioda_pe	*peers[PNV_IODA_MAX_PEER_PES];
> >> -
> >>  	/* A PE can be associated with a single device or an
> >>  	 * entire bus (& children). In the former case, pdev
> >>  	 * is populated, in the later case, pbus is.
> >> @@ -246,8 +242,6 @@ extern void pe_level_printk(const struct pnv_ioda_pe
> > *pe, const char *level,
> >>  	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
> >>
> >>  /* Nvlink functions */
> >> -extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
> >> -extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
> >>  extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
> >>  extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool
> > rm);
> >>
> >>
> >
> 
> 
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [kernel,v4,02/11] vfio/spapr: Relax the IOMMU compatibility check
  2016-04-29  8:55 ` [PATCH kernel v4 02/11] vfio/spapr: Relax the IOMMU compatibility check Alexey Kardashevskiy
  2016-04-29 15:41   ` Alex Williamson
@ 2016-05-10 21:48   ` Michael Ellerman
  1 sibling, 0 replies; 24+ messages in thread
From: Michael Ellerman @ 2016-05-10 21:48 UTC (permalink / raw)
  To: Alexey Kardashevskiy, linuxppc-dev
  Cc: kvm, Alexey Kardashevskiy, Alistair Popple, Gavin Shan,
	linux-kernel, Alex Williamson, David Gibson, Dan Carpenter,
	Daniel Axtens

On Fri, 2016-29-04 at 08:55:15 UTC, Alexey Kardashevskiy wrote:
> We are going to have multiple different types of PHB on the same system
> with POWER8 + NVLink and PHBs will have different IOMMU ops. However
> we only really care about one callback - create_table - so we can
> relax the compatibility check here.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> Acked-by: Alex Williamson <alex.williamson@redhat.com>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/45295687a90a31135ab575803e

cheers

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2016-05-10 21:48 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-29  8:55 [PATCH kernel v4 00/11] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
2016-04-29  8:55 ` [PATCH kernel v4 01/11] vfio_pci: Test for extended capabilities if config space > 256 bytes Alexey Kardashevskiy
2016-04-29 15:42   ` Alex Williamson
2016-04-29  8:55 ` [PATCH kernel v4 02/11] vfio/spapr: Relax the IOMMU compatibility check Alexey Kardashevskiy
2016-04-29 15:41   ` Alex Williamson
2016-05-10 21:48   ` [kernel,v4,02/11] " Michael Ellerman
2016-04-29  8:55 ` [PATCH kernel v4 03/11] powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire Alexey Kardashevskiy
2016-04-29  8:55 ` [PATCH kernel v4 04/11] powerpc/powernv: Define TCE Kill flags Alexey Kardashevskiy
2016-04-29  8:55 ` [PATCH kernel v4 05/11] powerpc/powernv/npu: TCE Kill helpers cleanup Alexey Kardashevskiy
2016-04-29  8:55 ` [PATCH kernel v4 06/11] powerpc/powernv/npu: Use the correct IOMMU page size Alexey Kardashevskiy
2016-04-29  8:55 ` [PATCH kernel v4 07/11] powerpc/powernv/npu: Simplify DMA setup Alexey Kardashevskiy
2016-04-29  8:55 ` [PATCH kernel v4 08/11] powerpc/powernv/ioda2: Export debug helper pe_level_printk() Alexey Kardashevskiy
2016-05-03  5:46   ` Alistair Popple
2016-05-03  5:58     ` Alistair Popple
2016-04-29  8:55 ` [PATCH kernel v4 09/11] powerpc/powernv/npu: Add set/unset window helpers Alexey Kardashevskiy
2016-05-03  6:25   ` Alistair Popple
2016-04-29  8:55 ` [PATCH kernel v4 10/11] powerpc/powernv/npu: Rework TCE Kill handling Alexey Kardashevskiy
2016-05-03  7:37   ` Alistair Popple
2016-05-05  4:23     ` Alexey Kardashevskiy
2016-05-06  1:11       ` Alistair Popple
2016-04-29  8:55 ` [PATCH kernel v4 11/11] powerpc/powernv/npu: Enable NVLink pass through Alexey Kardashevskiy
2016-05-03 14:08   ` Alistair Popple
2016-05-05  5:49     ` Alexey Kardashevskiy
2016-05-06  1:02       ` Alistair Popple

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).