* [PATCH kernel v4 00/19] powerpc/powernv/npu, vfio: NVIDIA V100 + P9 passthrough
@ 2018-11-23  5:52 ` Alexey Kardashevskiy
  0 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson


This is for passing through NVIDIA V100 GPUs on POWER9 systems.
Patch 19/19 has the details of the hardware setup.

This implements support for the NVIDIA V100 GPU with coherent memory and
the NPU/ATS support available in the POWER9 CPU. The aim is to support an
unmodified vendor driver in the guest.

This is pushed to (both guest and host kernels):
https://github.com/aik/linux/tree/nv2-stage7

The matching QEMU is pushed to GitHub:
https://github.com/aik/qemu/tree/nv2-stage6

Skiboot bits are here:
https://github.com/aik/skiboot/tree/nv2-stage4

The individual patches have changelogs. Some patches were dropped from
this version as no longer required or not useful.


Please comment. Thanks.



Alexey Kardashevskiy (19):
  powerpc/ioda/npu: Call skiboot's hot reset hook when disabling NPU2
  powerpc/mm/iommu/vfio_spapr_tce: Change mm_iommu_get to reference a
    region
  powerpc/vfio/iommu/kvm: Do not pin device memory
  powerpc/powernv: Move npu struct from pnv_phb to pci_controller
  powerpc/powernv/npu: Move OPAL calls away from context manipulation
  powerpc/pseries/iommu: Use memory@ nodes in max RAM address
    calculation
  powerpc/pseries/npu: Enable platform support
  powerpc/pseries: Remove IOMMU API support for non-LPAR systems
  powerpc/powernv/pseries: Rework device adding to IOMMU groups
  powerpc/iommu_api: Move IOMMU groups setup to a single place
  powerpc/powernv: Reference iommu_table while it is linked to a group
  powerpc/powernv: Add purge cache OPAL call
  powerpc/powernv/npu: Move single TVE handling to NPU PE
  powerpc/powernv/npu: Convert NPU IOMMU helpers to
    iommu_table_group_ops
  powerpc/powernv/npu: Add compound IOMMU groups
  powerpc/powernv/npu: Add release_ownership hook
  vfio_pci: Allow mapping extra regions
  vfio_pci: Allow regions to add own capabilities
  vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver

 drivers/vfio/pci/Makefile                     |   1 +
 arch/powerpc/include/asm/iommu.h              |  17 +-
 arch/powerpc/include/asm/mmu_context.h        |   9 +-
 arch/powerpc/include/asm/opal-api.h           |   3 +-
 arch/powerpc/include/asm/opal.h               |   1 +
 arch/powerpc/include/asm/pci-bridge.h         |   1 +
 arch/powerpc/include/asm/pci.h                |   4 +
 arch/powerpc/platforms/powernv/pci.h          |  30 +-
 drivers/vfio/pci/trace.h                      | 102 ++++
 drivers/vfio/pci/vfio_pci_private.h           |   8 +
 include/uapi/linux/vfio.h                     |  27 +
 arch/powerpc/kernel/iommu.c                   |  67 +--
 arch/powerpc/kvm/book3s_64_vio.c              |  18 +-
 arch/powerpc/mm/mmu_context_iommu.c           | 103 +++-
 arch/powerpc/platforms/powernv/npu-dma.c      | 529 +++++++++++++++---
 arch/powerpc/platforms/powernv/opal.c         |   1 +
 arch/powerpc/platforms/powernv/pci-ioda-tce.c |   3 +-
 arch/powerpc/platforms/powernv/pci-ioda.c     | 231 ++++----
 arch/powerpc/platforms/powernv/pci.c          |  43 +-
 arch/powerpc/platforms/pseries/iommu.c        |  89 ++-
 arch/powerpc/platforms/pseries/pci.c          |  22 +
 drivers/vfio/pci/vfio_pci.c                   |  52 +-
 drivers/vfio/pci/vfio_pci_nvlink2.c           | 448 +++++++++++++++
 drivers/vfio/vfio_iommu_spapr_tce.c           |  65 ++-
 .../powerpc/platforms/powernv/opal-wrappers.S |   1 +
 drivers/vfio/pci/Kconfig                      |   6 +
 26 files changed, 1492 insertions(+), 389 deletions(-)
 create mode 100644 drivers/vfio/pci/trace.h
 create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c

-- 
2.17.1



* [PATCH kernel v4 01/19] powerpc/ioda/npu: Call skiboot's hot reset hook when disabling NPU2
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

The skiboot firmware has a hot reset handler which fences the NVIDIA V100
GPU RAM on Witherspoon systems and makes accesses a no-op instead of
throwing HMIs:
https://github.com/open-power/skiboot/commit/fca2b2b839a67

We are now going to pass the V100 through via VFIO, which most commonly
involves KVM guests; these are often terminated without getting a chance
to offline GPU RAM, so we end up with a running machine with misconfigured
memory. Accessing this memory produces hardware management interrupts
(HMIs) which bring the host down.

To suppress the HMIs, this wires the hot reset hook up to
vfio_pci_disable() via pci_disable_device(), which switches the NPU2 to
a safe mode and prevents HMIs.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Alistair Popple <alistair@popple.id.au>
---
Changes:
v2:
* updated the commit log
---
 arch/powerpc/platforms/powernv/pci-ioda.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 9ee7a30..29c6837 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -3676,6 +3676,15 @@ static void pnv_pci_release_device(struct pci_dev *pdev)
 		pnv_ioda_release_pe(pe);
 }
 
+static void pnv_npu_disable_device(struct pci_dev *pdev)
+{
+	struct eeh_dev *edev = pci_dev_to_eeh_dev(pdev);
+	struct eeh_pe *eehpe = edev ? edev->pe : NULL;
+
+	if (eehpe && eeh_ops && eeh_ops->reset)
+		eeh_ops->reset(eehpe, EEH_RESET_HOT);
+}
+
 static void pnv_pci_ioda_shutdown(struct pci_controller *hose)
 {
 	struct pnv_phb *phb = hose->private_data;
@@ -3720,6 +3729,7 @@ static const struct pci_controller_ops pnv_npu_ioda_controller_ops = {
 	.reset_secondary_bus	= pnv_pci_reset_secondary_bus,
 	.dma_set_mask		= pnv_npu_dma_set_mask,
 	.shutdown		= pnv_pci_ioda_shutdown,
+	.disable_device		= pnv_npu_disable_device,
 };
 
 static const struct pci_controller_ops pnv_npu_ocapi_ioda_controller_ops = {
-- 
2.17.1



* [PATCH kernel v4 02/19] powerpc/mm/iommu/vfio_spapr_tce: Change mm_iommu_get to reference a region
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

Normally mm_iommu_get() is supposed to add a reference and
mm_iommu_put() to remove it. Historically, however, mm_iommu_find() did
the referencing while mm_iommu_get() did both allocation and referencing.

We are going to add another helper to preregister device memory, so
instead of a single helper which both pre-registers normal memory and
references the region, we need separate helpers for pre-registering and
for referencing.

This renames:
- mm_iommu_get to mm_iommu_new;
- mm_iommu_find to mm_iommu_get.

To make the mm_iommu_get name reflect what it is supposed to do, this
changes mm_iommu_get() to reference the region, so from now on every
mm_iommu_get() needs a matching mm_iommu_put().

This also removes the check for an exact match as the check for overlap
is now sufficient.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* squashed "powerpc/mm/iommu: Make mm_iommu_new() fail on existing regions" into this

v2:
* merged 2 patches into one
---
 arch/powerpc/include/asm/mmu_context.h |  4 +--
 arch/powerpc/mm/mmu_context_iommu.c    | 19 +++++++------
 drivers/vfio/vfio_iommu_spapr_tce.c    | 37 +++++++++++++++++---------
 3 files changed, 35 insertions(+), 25 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 0381394..2d6b00d 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -21,7 +21,7 @@ struct mm_iommu_table_group_mem_t;
 
 extern int isolate_lru_page(struct page *page);	/* from internal.h */
 extern bool mm_iommu_preregistered(struct mm_struct *mm);
-extern long mm_iommu_get(struct mm_struct *mm,
+extern long mm_iommu_new(struct mm_struct *mm,
 		unsigned long ua, unsigned long entries,
 		struct mm_iommu_table_group_mem_t **pmem);
 extern long mm_iommu_put(struct mm_struct *mm,
@@ -32,7 +32,7 @@ extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup(struct mm_struct *mm,
 		unsigned long ua, unsigned long size);
 extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm(
 		struct mm_struct *mm, unsigned long ua, unsigned long size);
-extern struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm,
+extern struct mm_iommu_table_group_mem_t *mm_iommu_get(struct mm_struct *mm,
 		unsigned long ua, unsigned long entries);
 extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 		unsigned long ua, unsigned int pageshift, unsigned long *hpa);
diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
index 1d5161f..580d89e 100644
--- a/arch/powerpc/mm/mmu_context_iommu.c
+++ b/arch/powerpc/mm/mmu_context_iommu.c
@@ -89,7 +89,7 @@ bool mm_iommu_preregistered(struct mm_struct *mm)
 }
 EXPORT_SYMBOL_GPL(mm_iommu_preregistered);
 
-long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 		struct mm_iommu_table_group_mem_t **pmem)
 {
 	struct mm_iommu_table_group_mem_t *mem;
@@ -102,12 +102,6 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 
 	list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list,
 			next) {
-		if ((mem->ua == ua) && (mem->entries == entries)) {
-			++mem->used;
-			*pmem = mem;
-			goto unlock_exit;
-		}
-
 		/* Overlap? */
 		if ((mem->ua < (ua + (entries << PAGE_SHIFT))) &&
 				(ua < (mem->ua +
@@ -202,7 +196,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(mm_iommu_get);
+EXPORT_SYMBOL_GPL(mm_iommu_new);
 
 static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
 {
@@ -318,21 +312,26 @@ struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm(struct mm_struct *mm,
 	return ret;
 }
 
-struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm,
+struct mm_iommu_table_group_mem_t *mm_iommu_get(struct mm_struct *mm,
 		unsigned long ua, unsigned long entries)
 {
 	struct mm_iommu_table_group_mem_t *mem, *ret = NULL;
 
+	mutex_lock(&mem_list_mutex);
+
 	list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list, next) {
 		if ((mem->ua == ua) && (mem->entries == entries)) {
 			ret = mem;
+			++mem->used;
 			break;
 		}
 	}
 
+	mutex_unlock(&mem_list_mutex);
+
 	return ret;
 }
-EXPORT_SYMBOL_GPL(mm_iommu_find);
+EXPORT_SYMBOL_GPL(mm_iommu_get);
 
 long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 		unsigned long ua, unsigned int pageshift, unsigned long *hpa)
diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index ad63725..56db071 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -151,12 +151,13 @@ static long tce_iommu_unregister_pages(struct tce_container *container,
 {
 	struct mm_iommu_table_group_mem_t *mem;
 	struct tce_iommu_prereg *tcemem;
-	bool found = false;
+	bool found;
+	long ret;
 
 	if ((vaddr & ~PAGE_MASK) || (size & ~PAGE_MASK))
 		return -EINVAL;
 
-	mem = mm_iommu_find(container->mm, vaddr, size >> PAGE_SHIFT);
+	mem = mm_iommu_get(container->mm, vaddr, size >> PAGE_SHIFT);
 	if (!mem)
 		return -ENOENT;
 
@@ -168,9 +169,13 @@ static long tce_iommu_unregister_pages(struct tce_container *container,
 	}
 
 	if (!found)
-		return -ENOENT;
+		ret = -ENOENT;
+	else
+		ret = tce_iommu_prereg_free(container, tcemem);
 
-	return tce_iommu_prereg_free(container, tcemem);
+	mm_iommu_put(container->mm, mem);
+
+	return ret;
 }
 
 static long tce_iommu_register_pages(struct tce_container *container,
@@ -185,22 +190,24 @@ static long tce_iommu_register_pages(struct tce_container *container,
 			((vaddr + size) < vaddr))
 		return -EINVAL;
 
-	mem = mm_iommu_find(container->mm, vaddr, entries);
+	mem = mm_iommu_get(container->mm, vaddr, entries);
 	if (mem) {
 		list_for_each_entry(tcemem, &container->prereg_list, next) {
-			if (tcemem->mem == mem)
-				return -EBUSY;
+			if (tcemem->mem == mem) {
+				ret = -EBUSY;
+				goto put_exit;
+			}
 		}
+	} else {
+		ret = mm_iommu_new(container->mm, vaddr, entries, &mem);
+		if (ret)
+			return ret;
 	}
 
-	ret = mm_iommu_get(container->mm, vaddr, entries, &mem);
-	if (ret)
-		return ret;
-
 	tcemem = kzalloc(sizeof(*tcemem), GFP_KERNEL);
 	if (!tcemem) {
-		mm_iommu_put(container->mm, mem);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto put_exit;
 	}
 
 	tcemem->mem = mem;
@@ -209,6 +216,10 @@ static long tce_iommu_register_pages(struct tce_container *container,
 	container->enabled = true;
 
 	return 0;
+
+put_exit:
+	mm_iommu_put(container->mm, mem);
+	return ret;
 }
 
 static bool tce_page_is_contained(struct page *page, unsigned page_shift)
-- 
2.17.1



* [PATCH kernel v4 03/19] powerpc/vfio/iommu/kvm: Do not pin device memory
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

This new memory does not have page structs as it is not plugged into
the host, so gup() will fail on it anyway.

This adds 2 helpers:
- mm_iommu_newdev() to preregister the "memory device" memory so
the rest of the API can still be used;
- mm_iommu_is_devmem() to know whether a physical address is in one of
these new regions, which we must avoid unpinning.

This adds @mm to tce_page_is_contained() and iommu_tce_xchg() to test
whether the memory is device memory and avoid pfn_to_page().

This adds a check for device memory in mm_iommu_ua_mark_dirty_rm() which
does delayed page dirtying.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* added device memory check in the real mode path
---
 arch/powerpc/include/asm/iommu.h       |  5 +-
 arch/powerpc/include/asm/mmu_context.h |  5 ++
 arch/powerpc/kernel/iommu.c            |  9 ++-
 arch/powerpc/kvm/book3s_64_vio.c       | 18 +++---
 arch/powerpc/mm/mmu_context_iommu.c    | 86 +++++++++++++++++++++++---
 drivers/vfio/vfio_iommu_spapr_tce.c    | 28 ++++++---
 6 files changed, 119 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h
index 35db0cb..a8aeac0 100644
--- a/arch/powerpc/include/asm/iommu.h
+++ b/arch/powerpc/include/asm/iommu.h
@@ -218,8 +218,9 @@ extern void iommu_register_group(struct iommu_table_group *table_group,
 extern int iommu_add_device(struct device *dev);
 extern void iommu_del_device(struct device *dev);
 extern int __init tce_iommu_bus_notifier_init(void);
-extern long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry,
-		unsigned long *hpa, enum dma_data_direction *direction);
+extern long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl,
+		unsigned long entry, unsigned long *hpa,
+		enum dma_data_direction *direction);
 #else
 static inline void iommu_register_group(struct iommu_table_group *table_group,
 					int pci_domain_number,
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 2d6b00d..f0f9f3d 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -24,6 +24,9 @@ extern bool mm_iommu_preregistered(struct mm_struct *mm);
 extern long mm_iommu_new(struct mm_struct *mm,
 		unsigned long ua, unsigned long entries,
 		struct mm_iommu_table_group_mem_t **pmem);
+extern long mm_iommu_newdev(struct mm_struct *mm, unsigned long ua,
+		unsigned long entries, unsigned long dev_hpa,
+		struct mm_iommu_table_group_mem_t **pmem);
 extern long mm_iommu_put(struct mm_struct *mm,
 		struct mm_iommu_table_group_mem_t *mem);
 extern void mm_iommu_init(struct mm_struct *mm);
@@ -39,6 +42,8 @@ extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
 		unsigned long ua, unsigned int pageshift, unsigned long *hpa);
 extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua);
+extern bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa,
+		unsigned int pageshift);
 extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
 extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem);
 #endif
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index f0dc680..8ccfdd9 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -47,6 +47,7 @@
 #include <asm/fadump.h>
 #include <asm/vio.h>
 #include <asm/tce.h>
+#include <asm/mmu_context.h>
 
 #define DBG(...)
 
@@ -993,15 +994,17 @@ int iommu_tce_check_gpa(unsigned long page_shift, unsigned long gpa)
 }
 EXPORT_SYMBOL_GPL(iommu_tce_check_gpa);
 
-long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry,
-		unsigned long *hpa, enum dma_data_direction *direction)
+long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl,
+		unsigned long entry, unsigned long *hpa,
+		enum dma_data_direction *direction)
 {
 	long ret;
 
 	ret = tbl->it_ops->exchange(tbl, entry, hpa, direction);
 
 	if (!ret && ((*direction == DMA_FROM_DEVICE) ||
-			(*direction == DMA_BIDIRECTIONAL)))
+			(*direction == DMA_BIDIRECTIONAL)) &&
+			!mm_iommu_is_devmem(mm, *hpa, tbl->it_page_shift))
 		SetPageDirty(pfn_to_page(*hpa >> PAGE_SHIFT));
 
 	/* if (unlikely(ret))
diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
index 62a8d03..532ab797 100644
--- a/arch/powerpc/kvm/book3s_64_vio.c
+++ b/arch/powerpc/kvm/book3s_64_vio.c
@@ -397,12 +397,13 @@ static long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt,
 	return H_SUCCESS;
 }
 
-static void kvmppc_clear_tce(struct iommu_table *tbl, unsigned long entry)
+static void kvmppc_clear_tce(struct mm_struct *mm, struct iommu_table *tbl,
+		unsigned long entry)
 {
 	unsigned long hpa = 0;
 	enum dma_data_direction dir = DMA_NONE;
 
-	iommu_tce_xchg(tbl, entry, &hpa, &dir);
+	iommu_tce_xchg(mm, tbl, entry, &hpa, &dir);
 }
 
 static long kvmppc_tce_iommu_mapped_dec(struct kvm *kvm,
@@ -433,7 +434,7 @@ static long kvmppc_tce_iommu_do_unmap(struct kvm *kvm,
 	unsigned long hpa = 0;
 	long ret;
 
-	if (WARN_ON_ONCE(iommu_tce_xchg(tbl, entry, &hpa, &dir)))
+	if (WARN_ON_ONCE(iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir)))
 		return H_TOO_HARD;
 
 	if (dir == DMA_NONE)
@@ -441,7 +442,7 @@ static long kvmppc_tce_iommu_do_unmap(struct kvm *kvm,
 
 	ret = kvmppc_tce_iommu_mapped_dec(kvm, tbl, entry);
 	if (ret != H_SUCCESS)
-		iommu_tce_xchg(tbl, entry, &hpa, &dir);
+		iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir);
 
 	return ret;
 }
@@ -487,7 +488,7 @@ long kvmppc_tce_iommu_do_map(struct kvm *kvm, struct iommu_table *tbl,
 	if (mm_iommu_mapped_inc(mem))
 		return H_TOO_HARD;
 
-	ret = iommu_tce_xchg(tbl, entry, &hpa, &dir);
+	ret = iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir);
 	if (WARN_ON_ONCE(ret)) {
 		mm_iommu_mapped_dec(mem);
 		return H_TOO_HARD;
@@ -566,7 +567,7 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 					entry, ua, dir);
 
 		if (ret != H_SUCCESS) {
-			kvmppc_clear_tce(stit->tbl, entry);
+			kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
 			goto unlock_exit;
 		}
 	}
@@ -655,7 +656,8 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
 					iommu_tce_direction(tce));
 
 			if (ret != H_SUCCESS) {
-				kvmppc_clear_tce(stit->tbl, entry);
+				kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl,
+						entry);
 				goto unlock_exit;
 			}
 		}
@@ -704,7 +706,7 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
 				return ret;
 
 			WARN_ON_ONCE(1);
-			kvmppc_clear_tce(stit->tbl, entry);
+			kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
 		}
 	}
 
diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
index 580d89e..663feb0 100644
--- a/arch/powerpc/mm/mmu_context_iommu.c
+++ b/arch/powerpc/mm/mmu_context_iommu.c
@@ -47,6 +47,8 @@ struct mm_iommu_table_group_mem_t {
 		struct page **hpages;	/* vmalloc'ed */
 		phys_addr_t *hpas;
 	};
+#define MM_IOMMU_TABLE_INVALID_HPA	((uint64_t)-1)
+	u64 dev_hpa;		/* Device memory base address */
 };
 
 static long mm_iommu_adjust_locked_vm(struct mm_struct *mm,
@@ -89,7 +91,8 @@ bool mm_iommu_preregistered(struct mm_struct *mm)
 }
 EXPORT_SYMBOL_GPL(mm_iommu_preregistered);
 
-long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
+		unsigned long entries, unsigned long dev_hpa,
 		struct mm_iommu_table_group_mem_t **pmem)
 {
 	struct mm_iommu_table_group_mem_t *mem;
@@ -112,11 +115,13 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 
 	}
 
-	ret = mm_iommu_adjust_locked_vm(mm, entries, true);
-	if (ret)
-		goto unlock_exit;
+	if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) {
+		ret = mm_iommu_adjust_locked_vm(mm, entries, true);
+		if (ret)
+			goto unlock_exit;
 
-	locked_entries = entries;
+		locked_entries = entries;
+	}
 
 	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
 	if (!mem) {
@@ -124,6 +129,13 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 		goto unlock_exit;
 	}
 
+	if (dev_hpa != MM_IOMMU_TABLE_INVALID_HPA) {
+		mem->pageshift = __ffs(dev_hpa | (entries << PAGE_SHIFT));
+		mem->dev_hpa = dev_hpa;
+		goto good_exit;
+	}
+	mem->dev_hpa = MM_IOMMU_TABLE_INVALID_HPA;
+
 	/*
 	 * For a starting point for a maximum page size calculation
 	 * we use @ua and @entries natural alignment to allow IOMMU pages
@@ -180,6 +192,7 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 
 	}
 
+good_exit:
 	atomic64_set(&mem->mapped, 1);
 	mem->used = 1;
 	mem->ua = ua;
@@ -196,13 +209,31 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 
 	return ret;
 }
+
+long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+		struct mm_iommu_table_group_mem_t **pmem)
+{
+	return mm_iommu_do_alloc(mm, ua, entries, MM_IOMMU_TABLE_INVALID_HPA,
+			pmem);
+}
 EXPORT_SYMBOL_GPL(mm_iommu_new);
 
+long mm_iommu_newdev(struct mm_struct *mm, unsigned long ua,
+		unsigned long entries, unsigned long dev_hpa,
+		struct mm_iommu_table_group_mem_t **pmem)
+{
+	return mm_iommu_do_alloc(mm, ua, entries, dev_hpa, pmem);
+}
+EXPORT_SYMBOL_GPL(mm_iommu_newdev);
+
 static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
 {
 	long i;
 	struct page *page = NULL;
 
+	if (!mem->hpas)
+		return;
+
 	for (i = 0; i < mem->entries; ++i) {
 		if (!mem->hpas[i])
 			continue;
@@ -244,6 +275,7 @@ static void mm_iommu_release(struct mm_iommu_table_group_mem_t *mem)
 long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem)
 {
 	long ret = 0;
+	unsigned long entries, dev_hpa;
 
 	mutex_lock(&mem_list_mutex);
 
@@ -265,9 +297,12 @@ long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem)
 	}
 
 	/* @mapped became 0 so now mappings are disabled, release the region */
+	entries = mem->entries;
+	dev_hpa = mem->dev_hpa;
 	mm_iommu_release(mem);
 
-	mm_iommu_adjust_locked_vm(mm, mem->entries, false);
+	if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA)
+		mm_iommu_adjust_locked_vm(mm, entries, false);
 
 unlock_exit:
 	mutex_unlock(&mem_list_mutex);
@@ -337,7 +372,7 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 		unsigned long ua, unsigned int pageshift, unsigned long *hpa)
 {
 	const long entry = (ua - mem->ua) >> PAGE_SHIFT;
-	u64 *va = &mem->hpas[entry];
+	u64 *va;
 
 	if (entry >= mem->entries)
 		return -EFAULT;
@@ -345,6 +380,12 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 	if (pageshift > mem->pageshift)
 		return -EFAULT;
 
+	if (!mem->hpas) {
+		*hpa = mem->dev_hpa + (ua - mem->ua);
+		return 0;
+	}
+
+	va = &mem->hpas[entry];
 	*hpa = (*va & MM_IOMMU_TABLE_GROUP_PAGE_MASK) | (ua & ~PAGE_MASK);
 
 	return 0;
@@ -355,7 +396,6 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
 		unsigned long ua, unsigned int pageshift, unsigned long *hpa)
 {
 	const long entry = (ua - mem->ua) >> PAGE_SHIFT;
-	void *va = &mem->hpas[entry];
 	unsigned long *pa;
 
 	if (entry >= mem->entries)
@@ -364,7 +404,12 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
 	if (pageshift > mem->pageshift)
 		return -EFAULT;
 
-	pa = (void *) vmalloc_to_phys(va);
+	if (!mem->hpas) {
+		*hpa = mem->dev_hpa + (ua - mem->ua);
+		return 0;
+	}
+
+	pa = (void *) vmalloc_to_phys(&mem->hpas[entry]);
 	if (!pa)
 		return -EFAULT;
 
@@ -384,6 +429,9 @@ extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua)
 	if (!mem)
 		return;
 
+	if (mem->dev_hpa != MM_IOMMU_TABLE_INVALID_HPA)
+		return;
+
 	entry = (ua - mem->ua) >> PAGE_SHIFT;
 	va = &mem->hpas[entry];
 
@@ -394,6 +442,26 @@ extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua)
 	*pa |= MM_IOMMU_TABLE_GROUP_PAGE_DIRTY;
 }
 
+extern bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa,
+		unsigned int pageshift)
+{
+	struct mm_iommu_table_group_mem_t *mem;
+	const unsigned long pagesize = 1UL << pageshift;
+
+	list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list, next) {
+		if (mem->dev_hpa == MM_IOMMU_TABLE_INVALID_HPA)
+			continue;
+
+		if ((mem->dev_hpa <= hpa) &&
+				(hpa + pagesize <= mem->dev_hpa +
+				 (mem->entries << PAGE_SHIFT)))
+			return true;
+	}
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(mm_iommu_is_devmem);
+
 long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem)
 {
 	if (atomic64_inc_not_zero(&mem->mapped))
diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 56db071..ed89137 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -222,8 +222,15 @@ static long tce_iommu_register_pages(struct tce_container *container,
 	return ret;
 }
 
-static bool tce_page_is_contained(struct page *page, unsigned page_shift)
+static bool tce_page_is_contained(struct mm_struct *mm, unsigned long hpa,
+		unsigned int page_shift)
 {
+	struct page *page;
+
+	if (mm_iommu_is_devmem(mm, hpa, page_shift))
+		return true;
+
+	page = pfn_to_page(hpa >> PAGE_SHIFT);
 	/*
 	 * Check that the TCE table granularity is not bigger than the size of
 	 * a page we just found. Otherwise the hardware can get access to
@@ -499,7 +506,8 @@ static int tce_iommu_clear(struct tce_container *container,
 
 		direction = DMA_NONE;
 		oldhpa = 0;
-		ret = iommu_tce_xchg(tbl, entry, &oldhpa, &direction);
+		ret = iommu_tce_xchg(container->mm, tbl, entry, &oldhpa,
+				&direction);
 		if (ret)
 			continue;
 
@@ -537,7 +545,6 @@ static long tce_iommu_build(struct tce_container *container,
 		enum dma_data_direction direction)
 {
 	long i, ret = 0;
-	struct page *page;
 	unsigned long hpa;
 	enum dma_data_direction dirtmp;
 
@@ -548,15 +555,16 @@ static long tce_iommu_build(struct tce_container *container,
 		if (ret)
 			break;
 
-		page = pfn_to_page(hpa >> PAGE_SHIFT);
-		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
+		if (!tce_page_is_contained(container->mm, hpa,
+				tbl->it_page_shift)) {
 			ret = -EPERM;
 			break;
 		}
 
 		hpa |= offset;
 		dirtmp = direction;
-		ret = iommu_tce_xchg(tbl, entry + i, &hpa, &dirtmp);
+		ret = iommu_tce_xchg(container->mm, tbl, entry + i, &hpa,
+				&dirtmp);
 		if (ret) {
 			tce_iommu_unuse_page(container, hpa);
 			pr_err("iommu_tce: %s failed ioba=%lx, tce=%lx, ret=%ld\n",
@@ -583,7 +591,6 @@ static long tce_iommu_build_v2(struct tce_container *container,
 		enum dma_data_direction direction)
 {
 	long i, ret = 0;
-	struct page *page;
 	unsigned long hpa;
 	enum dma_data_direction dirtmp;
 
@@ -596,8 +603,8 @@ static long tce_iommu_build_v2(struct tce_container *container,
 		if (ret)
 			break;
 
-		page = pfn_to_page(hpa >> PAGE_SHIFT);
-		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
+		if (!tce_page_is_contained(container->mm, hpa,
+				tbl->it_page_shift)) {
 			ret = -EPERM;
 			break;
 		}
@@ -610,7 +617,8 @@ static long tce_iommu_build_v2(struct tce_container *container,
 		if (mm_iommu_mapped_inc(mem))
 			break;
 
-		ret = iommu_tce_xchg(tbl, entry + i, &hpa, &dirtmp);
+		ret = iommu_tce_xchg(container->mm, tbl, entry + i, &hpa,
+				&dirtmp);
 		if (ret) {
 			/* dirtmp cannot be DMA_NONE here */
 			tce_iommu_unuse_page_v2(container, tbl, entry + i);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH kernel v4 03/19] powerpc/vfio/iommu/kvm: Do not pin device memory
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  0 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

This new memory does not have page structs as it is not plugged into
the host, so gup() will fail on it anyway.

This adds 2 helpers:
- mm_iommu_newdev() to preregister the "memory device" memory so
the rest of the API can still be used;
- mm_iommu_is_devmem() to know if a physical address is within one of
these new regions, which we must not unpin.

This adds @mm to tce_page_is_contained() and iommu_tce_xchg() so they
can test whether the memory is device memory and avoid pfn_to_page().

This adds a check for device memory in mm_iommu_ua_mark_dirty_rm() which
does delayed page dirtying.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* added device memory check in the real mode path
---
 arch/powerpc/include/asm/iommu.h       |  5 +-
 arch/powerpc/include/asm/mmu_context.h |  5 ++
 arch/powerpc/kernel/iommu.c            |  9 ++-
 arch/powerpc/kvm/book3s_64_vio.c       | 18 +++---
 arch/powerpc/mm/mmu_context_iommu.c    | 86 +++++++++++++++++++++++---
 drivers/vfio/vfio_iommu_spapr_tce.c    | 28 ++++++---
 6 files changed, 119 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h
index 35db0cb..a8aeac0 100644
--- a/arch/powerpc/include/asm/iommu.h
+++ b/arch/powerpc/include/asm/iommu.h
@@ -218,8 +218,9 @@ extern void iommu_register_group(struct iommu_table_group *table_group,
 extern int iommu_add_device(struct device *dev);
 extern void iommu_del_device(struct device *dev);
 extern int __init tce_iommu_bus_notifier_init(void);
-extern long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry,
-		unsigned long *hpa, enum dma_data_direction *direction);
+extern long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl,
+		unsigned long entry, unsigned long *hpa,
+		enum dma_data_direction *direction);
 #else
 static inline void iommu_register_group(struct iommu_table_group *table_group,
 					int pci_domain_number,
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 2d6b00d..f0f9f3d 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -24,6 +24,9 @@ extern bool mm_iommu_preregistered(struct mm_struct *mm);
 extern long mm_iommu_new(struct mm_struct *mm,
 		unsigned long ua, unsigned long entries,
 		struct mm_iommu_table_group_mem_t **pmem);
+extern long mm_iommu_newdev(struct mm_struct *mm, unsigned long ua,
+		unsigned long entries, unsigned long dev_hpa,
+		struct mm_iommu_table_group_mem_t **pmem);
 extern long mm_iommu_put(struct mm_struct *mm,
 		struct mm_iommu_table_group_mem_t *mem);
 extern void mm_iommu_init(struct mm_struct *mm);
@@ -39,6 +42,8 @@ extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
 		unsigned long ua, unsigned int pageshift, unsigned long *hpa);
 extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua);
+extern bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa,
+		unsigned int pageshift);
 extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
 extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem);
 #endif
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index f0dc680..8ccfdd9 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -47,6 +47,7 @@
 #include <asm/fadump.h>
 #include <asm/vio.h>
 #include <asm/tce.h>
+#include <asm/mmu_context.h>
 
 #define DBG(...)
 
@@ -993,15 +994,17 @@ int iommu_tce_check_gpa(unsigned long page_shift, unsigned long gpa)
 }
 EXPORT_SYMBOL_GPL(iommu_tce_check_gpa);
 
-long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry,
-		unsigned long *hpa, enum dma_data_direction *direction)
+long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl,
+		unsigned long entry, unsigned long *hpa,
+		enum dma_data_direction *direction)
 {
 	long ret;
 
 	ret = tbl->it_ops->exchange(tbl, entry, hpa, direction);
 
 	if (!ret && ((*direction == DMA_FROM_DEVICE) ||
-			(*direction == DMA_BIDIRECTIONAL)))
+			(*direction == DMA_BIDIRECTIONAL)) &&
+			!mm_iommu_is_devmem(mm, *hpa, tbl->it_page_shift))
 		SetPageDirty(pfn_to_page(*hpa >> PAGE_SHIFT));
 
 	/* if (unlikely(ret))
diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
index 62a8d03..532ab797 100644
--- a/arch/powerpc/kvm/book3s_64_vio.c
+++ b/arch/powerpc/kvm/book3s_64_vio.c
@@ -397,12 +397,13 @@ static long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt,
 	return H_SUCCESS;
 }
 
-static void kvmppc_clear_tce(struct iommu_table *tbl, unsigned long entry)
+static void kvmppc_clear_tce(struct mm_struct *mm, struct iommu_table *tbl,
+		unsigned long entry)
 {
 	unsigned long hpa = 0;
 	enum dma_data_direction dir = DMA_NONE;
 
-	iommu_tce_xchg(tbl, entry, &hpa, &dir);
+	iommu_tce_xchg(mm, tbl, entry, &hpa, &dir);
 }
 
 static long kvmppc_tce_iommu_mapped_dec(struct kvm *kvm,
@@ -433,7 +434,7 @@ static long kvmppc_tce_iommu_do_unmap(struct kvm *kvm,
 	unsigned long hpa = 0;
 	long ret;
 
-	if (WARN_ON_ONCE(iommu_tce_xchg(tbl, entry, &hpa, &dir)))
+	if (WARN_ON_ONCE(iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir)))
 		return H_TOO_HARD;
 
 	if (dir == DMA_NONE)
@@ -441,7 +442,7 @@ static long kvmppc_tce_iommu_do_unmap(struct kvm *kvm,
 
 	ret = kvmppc_tce_iommu_mapped_dec(kvm, tbl, entry);
 	if (ret != H_SUCCESS)
-		iommu_tce_xchg(tbl, entry, &hpa, &dir);
+		iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir);
 
 	return ret;
 }
@@ -487,7 +488,7 @@ long kvmppc_tce_iommu_do_map(struct kvm *kvm, struct iommu_table *tbl,
 	if (mm_iommu_mapped_inc(mem))
 		return H_TOO_HARD;
 
-	ret = iommu_tce_xchg(tbl, entry, &hpa, &dir);
+	ret = iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir);
 	if (WARN_ON_ONCE(ret)) {
 		mm_iommu_mapped_dec(mem);
 		return H_TOO_HARD;
@@ -566,7 +567,7 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 					entry, ua, dir);
 
 		if (ret != H_SUCCESS) {
-			kvmppc_clear_tce(stit->tbl, entry);
+			kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
 			goto unlock_exit;
 		}
 	}
@@ -655,7 +656,8 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
 					iommu_tce_direction(tce));
 
 			if (ret != H_SUCCESS) {
-				kvmppc_clear_tce(stit->tbl, entry);
+				kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl,
+						entry);
 				goto unlock_exit;
 			}
 		}
@@ -704,7 +706,7 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
 				return ret;
 
 			WARN_ON_ONCE(1);
-			kvmppc_clear_tce(stit->tbl, entry);
+			kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
 		}
 	}
 
diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
index 580d89e..663feb0 100644
--- a/arch/powerpc/mm/mmu_context_iommu.c
+++ b/arch/powerpc/mm/mmu_context_iommu.c
@@ -47,6 +47,8 @@ struct mm_iommu_table_group_mem_t {
 		struct page **hpages;	/* vmalloc'ed */
 		phys_addr_t *hpas;
 	};
+#define MM_IOMMU_TABLE_INVALID_HPA	((uint64_t)-1)
+	u64 dev_hpa;		/* Device memory base address */
 };
 
 static long mm_iommu_adjust_locked_vm(struct mm_struct *mm,
@@ -89,7 +91,8 @@ bool mm_iommu_preregistered(struct mm_struct *mm)
 }
 EXPORT_SYMBOL_GPL(mm_iommu_preregistered);
 
-long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
+		unsigned long entries, unsigned long dev_hpa,
 		struct mm_iommu_table_group_mem_t **pmem)
 {
 	struct mm_iommu_table_group_mem_t *mem;
@@ -112,11 +115,13 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 
 	}
 
-	ret = mm_iommu_adjust_locked_vm(mm, entries, true);
-	if (ret)
-		goto unlock_exit;
+	if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) {
+		ret = mm_iommu_adjust_locked_vm(mm, entries, true);
+		if (ret)
+			goto unlock_exit;
 
-	locked_entries = entries;
+		locked_entries = entries;
+	}
 
 	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
 	if (!mem) {
@@ -124,6 +129,13 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 		goto unlock_exit;
 	}
 
+	if (dev_hpa != MM_IOMMU_TABLE_INVALID_HPA) {
+		mem->pageshift = __ffs(dev_hpa | (entries << PAGE_SHIFT));
+		mem->dev_hpa = dev_hpa;
+		goto good_exit;
+	}
+	mem->dev_hpa = MM_IOMMU_TABLE_INVALID_HPA;
+
 	/*
 	 * For a starting point for a maximum page size calculation
 	 * we use @ua and @entries natural alignment to allow IOMMU pages
@@ -180,6 +192,7 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 
 	}
 
+good_exit:
 	atomic64_set(&mem->mapped, 1);
 	mem->used = 1;
 	mem->ua = ua;
@@ -196,13 +209,31 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
 
 	return ret;
 }
+
+long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+		struct mm_iommu_table_group_mem_t **pmem)
+{
+	return mm_iommu_do_alloc(mm, ua, entries, MM_IOMMU_TABLE_INVALID_HPA,
+			pmem);
+}
 EXPORT_SYMBOL_GPL(mm_iommu_new);
 
+long mm_iommu_newdev(struct mm_struct *mm, unsigned long ua,
+		unsigned long entries, unsigned long dev_hpa,
+		struct mm_iommu_table_group_mem_t **pmem)
+{
+	return mm_iommu_do_alloc(mm, ua, entries, dev_hpa, pmem);
+}
+EXPORT_SYMBOL_GPL(mm_iommu_newdev);
+
 static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
 {
 	long i;
 	struct page *page = NULL;
 
+	if (!mem->hpas)
+		return;
+
 	for (i = 0; i < mem->entries; ++i) {
 		if (!mem->hpas[i])
 			continue;
@@ -244,6 +275,7 @@ static void mm_iommu_release(struct mm_iommu_table_group_mem_t *mem)
 long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem)
 {
 	long ret = 0;
+	unsigned long entries, dev_hpa;
 
 	mutex_lock(&mem_list_mutex);
 
@@ -265,9 +297,12 @@ long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem)
 	}
 
 	/* @mapped became 0 so now mappings are disabled, release the region */
+	entries = mem->entries;
+	dev_hpa = mem->dev_hpa;
 	mm_iommu_release(mem);
 
-	mm_iommu_adjust_locked_vm(mm, mem->entries, false);
+	if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA)
+		mm_iommu_adjust_locked_vm(mm, entries, false);
 
 unlock_exit:
 	mutex_unlock(&mem_list_mutex);
@@ -337,7 +372,7 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 		unsigned long ua, unsigned int pageshift, unsigned long *hpa)
 {
 	const long entry = (ua - mem->ua) >> PAGE_SHIFT;
-	u64 *va = &mem->hpas[entry];
+	u64 *va;
 
 	if (entry >= mem->entries)
 		return -EFAULT;
@@ -345,6 +380,12 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 	if (pageshift > mem->pageshift)
 		return -EFAULT;
 
+	if (!mem->hpas) {
+		*hpa = mem->dev_hpa + (ua - mem->ua);
+		return 0;
+	}
+
+	va = &mem->hpas[entry];
 	*hpa = (*va & MM_IOMMU_TABLE_GROUP_PAGE_MASK) | (ua & ~PAGE_MASK);
 
 	return 0;
@@ -355,7 +396,6 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
 		unsigned long ua, unsigned int pageshift, unsigned long *hpa)
 {
 	const long entry = (ua - mem->ua) >> PAGE_SHIFT;
-	void *va = &mem->hpas[entry];
 	unsigned long *pa;
 
 	if (entry >= mem->entries)
@@ -364,7 +404,12 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
 	if (pageshift > mem->pageshift)
 		return -EFAULT;
 
-	pa = (void *) vmalloc_to_phys(va);
+	if (!mem->hpas) {
+		*hpa = mem->dev_hpa + (ua - mem->ua);
+		return 0;
+	}
+
+	pa = (void *) vmalloc_to_phys(&mem->hpas[entry]);
 	if (!pa)
 		return -EFAULT;
 
@@ -384,6 +429,9 @@ extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua)
 	if (!mem)
 		return;
 
+	if (mem->dev_hpa != MM_IOMMU_TABLE_INVALID_HPA)
+		return;
+
 	entry = (ua - mem->ua) >> PAGE_SHIFT;
 	va = &mem->hpas[entry];
 
@@ -394,6 +442,26 @@ extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua)
 	*pa |= MM_IOMMU_TABLE_GROUP_PAGE_DIRTY;
 }
 
+extern bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa,
+		unsigned int pageshift)
+{
+	struct mm_iommu_table_group_mem_t *mem;
+	const unsigned long pagesize = 1UL << pageshift;
+
+	list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list, next) {
+		if (mem->dev_hpa == MM_IOMMU_TABLE_INVALID_HPA)
+			continue;
+
+		if ((mem->dev_hpa <= hpa) &&
+				(hpa + pagesize <= mem->dev_hpa +
+				 (mem->entries << PAGE_SHIFT)))
+			return true;
+	}
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(mm_iommu_is_devmem);
+
 long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem)
 {
 	if (atomic64_inc_not_zero(&mem->mapped))
diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 56db071..ed89137 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -222,8 +222,15 @@ static long tce_iommu_register_pages(struct tce_container *container,
 	return ret;
 }
 
-static bool tce_page_is_contained(struct page *page, unsigned page_shift)
+static bool tce_page_is_contained(struct mm_struct *mm, unsigned long hpa,
+		unsigned int page_shift)
 {
+	struct page *page;
+
+	if (mm_iommu_is_devmem(mm, hpa, page_shift))
+		return true;
+
+	page = pfn_to_page(hpa >> PAGE_SHIFT);
 	/*
 	 * Check that the TCE table granularity is not bigger than the size of
 	 * a page we just found. Otherwise the hardware can get access to
@@ -499,7 +506,8 @@ static int tce_iommu_clear(struct tce_container *container,
 
 		direction = DMA_NONE;
 		oldhpa = 0;
-		ret = iommu_tce_xchg(tbl, entry, &oldhpa, &direction);
+		ret = iommu_tce_xchg(container->mm, tbl, entry, &oldhpa,
+				&direction);
 		if (ret)
 			continue;
 
@@ -537,7 +545,6 @@ static long tce_iommu_build(struct tce_container *container,
 		enum dma_data_direction direction)
 {
 	long i, ret = 0;
-	struct page *page;
 	unsigned long hpa;
 	enum dma_data_direction dirtmp;
 
@@ -548,15 +555,16 @@ static long tce_iommu_build(struct tce_container *container,
 		if (ret)
 			break;
 
-		page = pfn_to_page(hpa >> PAGE_SHIFT);
-		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
+		if (!tce_page_is_contained(container->mm, hpa,
+				tbl->it_page_shift)) {
 			ret = -EPERM;
 			break;
 		}
 
 		hpa |= offset;
 		dirtmp = direction;
-		ret = iommu_tce_xchg(tbl, entry + i, &hpa, &dirtmp);
+		ret = iommu_tce_xchg(container->mm, tbl, entry + i, &hpa,
+				&dirtmp);
 		if (ret) {
 			tce_iommu_unuse_page(container, hpa);
 			pr_err("iommu_tce: %s failed ioba=%lx, tce=%lx, ret=%ld\n",
@@ -583,7 +591,6 @@ static long tce_iommu_build_v2(struct tce_container *container,
 		enum dma_data_direction direction)
 {
 	long i, ret = 0;
-	struct page *page;
 	unsigned long hpa;
 	enum dma_data_direction dirtmp;
 
@@ -596,8 +603,8 @@ static long tce_iommu_build_v2(struct tce_container *container,
 		if (ret)
 			break;
 
-		page = pfn_to_page(hpa >> PAGE_SHIFT);
-		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
+		if (!tce_page_is_contained(container->mm, hpa,
+				tbl->it_page_shift)) {
 			ret = -EPERM;
 			break;
 		}
@@ -610,7 +617,8 @@ static long tce_iommu_build_v2(struct tce_container *container,
 		if (mm_iommu_mapped_inc(mem))
 			break;
 
-		ret = iommu_tce_xchg(tbl, entry + i, &hpa, &dirtmp);
+		ret = iommu_tce_xchg(container->mm, tbl, entry + i, &hpa,
+				&dirtmp);
 		if (ret) {
 			/* dirtmp cannot be DMA_NONE here */
 			tce_iommu_unuse_page_v2(container, tbl, entry + i);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

The powernv PCI code stores NPU data in the pnv_phb struct. The latter
is referenced by pci_controller::private_data. We are going to have NPU2
support in the pseries platform as well but it does not store any
private_data in the pci_controller struct; and even if it did,
it would be a different data structure.

This makes npu a pointer and stores it one level higher in
the pci_controller struct.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
* got rid of global list of npus - store them now in pci_controller
* got rid of npdev_to_npu() helper
---
 arch/powerpc/include/asm/pci-bridge.h    |  1 +
 arch/powerpc/platforms/powernv/pci.h     | 16 -----
 arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
 3 files changed, 64 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
index 94d4490..aee4fcc 100644
--- a/arch/powerpc/include/asm/pci-bridge.h
+++ b/arch/powerpc/include/asm/pci-bridge.h
@@ -129,6 +129,7 @@ struct pci_controller {
 #endif	/* CONFIG_PPC64 */
 
 	void *private_data;
+	struct npu *npu;
 };
 
 /* These are used for config access before all the PCI probing
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 2131373..f2d50974 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -8,9 +8,6 @@
 
 struct pci_dn;
 
-/* Maximum possible number of ATSD MMIO registers per NPU */
-#define NV_NMMU_ATSD_REGS 8
-
 enum pnv_phb_type {
 	PNV_PHB_IODA1		= 0,
 	PNV_PHB_IODA2		= 1,
@@ -176,19 +173,6 @@ struct pnv_phb {
 	unsigned int		diag_data_size;
 	u8			*diag_data;
 
-	/* Nvlink2 data */
-	struct npu {
-		int index;
-		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
-		unsigned int mmio_atsd_count;
-
-		/* Bitmask for MMIO register usage */
-		unsigned long mmio_atsd_usage;
-
-		/* Do we need to explicitly flush the nest mmu? */
-		bool nmmu_flush;
-	} npu;
-
 	int p2p_target_count;
 };
 
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 91d488f..7dd5c0e5 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -327,6 +327,25 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
 	return gpe;
 }
 
+/*
+ * NPU2 ATS
+ */
+/* Maximum possible number of ATSD MMIO registers per NPU */
+#define NV_NMMU_ATSD_REGS 8
+
+/* An NPU descriptor, valid for POWER9 only */
+struct npu {
+	int index;
+	__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
+	unsigned int mmio_atsd_count;
+
+	/* Bitmask for MMIO register usage */
+	unsigned long mmio_atsd_usage;
+
+	/* Do we need to explicitly flush the nest mmu? */
+	bool nmmu_flush;
+};
+
 /* Maximum number of nvlinks per npu */
 #define NV_MAX_LINKS 6
 
@@ -478,7 +497,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
 	int i, j;
 	struct npu *npu;
 	struct pci_dev *npdev;
-	struct pnv_phb *nphb;
 
 	for (i = 0; i <= max_npu2_index; i++) {
 		mmio_atsd_reg[i].reg = -1;
@@ -493,8 +511,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
 			if (!npdev)
 				continue;
 
-			nphb = pci_bus_to_host(npdev->bus)->private_data;
-			npu = &nphb->npu;
+			npu = pci_bus_to_host(npdev->bus)->npu;
+			if (!npu)
+				continue;
+
 			mmio_atsd_reg[i].npu = npu;
 			mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu);
 			while (mmio_atsd_reg[i].reg < 0) {
@@ -662,6 +682,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	struct pnv_phb *nphb;
 	struct npu *npu;
 	struct npu_context *npu_context;
+	struct pci_controller *hose;
 
 	/*
 	 * At present we don't support GPUs connected to multiple NPUs and I'm
@@ -689,8 +710,11 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 		return ERR_PTR(-EINVAL);
 	}
 
-	nphb = pci_bus_to_host(npdev->bus)->private_data;
-	npu = &nphb->npu;
+	hose = pci_bus_to_host(npdev->bus);
+	nphb = hose->private_data;
+	npu = hose->npu;
+	if (!npu)
+		return ERR_PTR(-ENODEV);
 
 	/*
 	 * Setup the NPU context table for a particular GPU. These need to be
@@ -764,7 +788,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	 */
 	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], npdev);
 
-	if (!nphb->npu.nmmu_flush) {
+	if (!npu->nmmu_flush) {
 		/*
 		 * If we're not explicitly flushing ourselves we need to mark
 		 * the thread for global flushes
@@ -802,6 +826,7 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
 	struct device_node *nvlink_dn;
 	u32 nvlink_index;
+	struct pci_controller *hose;
 
 	if (WARN_ON(!npdev))
 		return;
@@ -809,8 +834,11 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 	if (!firmware_has_feature(FW_FEATURE_OPAL))
 		return;
 
-	nphb = pci_bus_to_host(npdev->bus)->private_data;
-	npu = &nphb->npu;
+	hose = pci_bus_to_host(npdev->bus);
+	nphb = hose->private_data;
+	npu = hose->npu;
+	if (!npu)
+		return;
 	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
 	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
 							&nvlink_index)))
@@ -888,9 +916,15 @@ int pnv_npu2_init(struct pnv_phb *phb)
 	struct pci_dev *gpdev;
 	static int npu_index;
 	uint64_t rc = 0;
+	struct pci_controller *hose = phb->hose;
+	struct npu *npu;
+	int ret;
 
-	phb->npu.nmmu_flush =
-		of_property_read_bool(phb->hose->dn, "ibm,nmmu-flush");
+	npu = kzalloc(sizeof(*npu), GFP_KERNEL);
+	if (!npu)
+		return -ENOMEM;
+
+	npu->nmmu_flush = of_property_read_bool(hose->dn, "ibm,nmmu-flush");
 	for_each_child_of_node(phb->hose->dn, dn) {
 		gpdev = pnv_pci_get_gpu_dev(get_pci_dev(dn));
 		if (gpdev) {
@@ -904,18 +938,29 @@ int pnv_npu2_init(struct pnv_phb *phb)
 		}
 	}
 
-	for (i = 0; !of_property_read_u64_index(phb->hose->dn, "ibm,mmio-atsd",
+	for (i = 0; !of_property_read_u64_index(hose->dn, "ibm,mmio-atsd",
 							i, &mmio_atsd); i++)
-		phb->npu.mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
+		npu->mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
 
-	pr_info("NPU%lld: Found %d MMIO ATSD registers", phb->opal_id, i);
-	phb->npu.mmio_atsd_count = i;
-	phb->npu.mmio_atsd_usage = 0;
+	pr_info("NPU%d: Found %d MMIO ATSD registers", hose->global_number, i);
+	npu->mmio_atsd_count = i;
+	npu->mmio_atsd_usage = 0;
 	npu_index++;
-	if (WARN_ON(npu_index >= NV_MAX_NPUS))
-		return -ENOSPC;
+	if (WARN_ON(npu_index >= NV_MAX_NPUS)) {
+		ret = -ENOSPC;
+		goto fail_exit;
+	}
 	max_npu2_index = npu_index;
-	phb->npu.index = npu_index;
+	npu->index = npu_index;
+	hose->npu = npu;
 
 	return 0;
+
+fail_exit:
+	for (i = 0; i < npu->mmio_atsd_count; ++i)
+		iounmap(npu->mmio_atsd_regs[i]);
+
+	kfree(npu);
+
+	return ret;
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  0 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

The powernv PCI code stores NPU data in the pnv_phb struct. The latter
is referenced by pci_controller::private_data. We are going to have NPU2
support in the pseries platform as well but it does not store any
private_data in the pci_controller struct; and even if it did,
it would be a different data structure.

This makes npu a pointer and stores it one level higher in
the pci_controller struct.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
* got rid of global list of npus - store them now in pci_controller
* got rid of npdev_to_npu() helper
---
 arch/powerpc/include/asm/pci-bridge.h    |  1 +
 arch/powerpc/platforms/powernv/pci.h     | 16 -----
 arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
 3 files changed, 64 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
index 94d4490..aee4fcc 100644
--- a/arch/powerpc/include/asm/pci-bridge.h
+++ b/arch/powerpc/include/asm/pci-bridge.h
@@ -129,6 +129,7 @@ struct pci_controller {
 #endif	/* CONFIG_PPC64 */
 
 	void *private_data;
+	struct npu *npu;
 };
 
 /* These are used for config access before all the PCI probing
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 2131373..f2d50974 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -8,9 +8,6 @@
 
 struct pci_dn;
 
-/* Maximum possible number of ATSD MMIO registers per NPU */
-#define NV_NMMU_ATSD_REGS 8
-
 enum pnv_phb_type {
 	PNV_PHB_IODA1		= 0,
 	PNV_PHB_IODA2		= 1,
@@ -176,19 +173,6 @@ struct pnv_phb {
 	unsigned int		diag_data_size;
 	u8			*diag_data;
 
-	/* Nvlink2 data */
-	struct npu {
-		int index;
-		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
-		unsigned int mmio_atsd_count;
-
-		/* Bitmask for MMIO register usage */
-		unsigned long mmio_atsd_usage;
-
-		/* Do we need to explicitly flush the nest mmu? */
-		bool nmmu_flush;
-	} npu;
-
 	int p2p_target_count;
 };
 
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 91d488f..7dd5c0e5 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -327,6 +327,25 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
 	return gpe;
 }
 
+/*
+ * NPU2 ATS
+ */
+/* Maximum possible number of ATSD MMIO registers per NPU */
+#define NV_NMMU_ATSD_REGS 8
+
+/* An NPU descriptor, valid for POWER9 only */
+struct npu {
+	int index;
+	__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
+	unsigned int mmio_atsd_count;
+
+	/* Bitmask for MMIO register usage */
+	unsigned long mmio_atsd_usage;
+
+	/* Do we need to explicitly flush the nest mmu? */
+	bool nmmu_flush;
+};
+
 /* Maximum number of nvlinks per npu */
 #define NV_MAX_LINKS 6
 
@@ -478,7 +497,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
 	int i, j;
 	struct npu *npu;
 	struct pci_dev *npdev;
-	struct pnv_phb *nphb;
 
 	for (i = 0; i <= max_npu2_index; i++) {
 		mmio_atsd_reg[i].reg = -1;
@@ -493,8 +511,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
 			if (!npdev)
 				continue;
 
-			nphb = pci_bus_to_host(npdev->bus)->private_data;
-			npu = &nphb->npu;
+			npu = pci_bus_to_host(npdev->bus)->npu;
+			if (!npu)
+				continue;
+
 			mmio_atsd_reg[i].npu = npu;
 			mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu);
 			while (mmio_atsd_reg[i].reg < 0) {
@@ -662,6 +682,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	struct pnv_phb *nphb;
 	struct npu *npu;
 	struct npu_context *npu_context;
+	struct pci_controller *hose;
 
 	/*
 	 * At present we don't support GPUs connected to multiple NPUs and I'm
@@ -689,8 +710,11 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 		return ERR_PTR(-EINVAL);
 	}
 
-	nphb = pci_bus_to_host(npdev->bus)->private_data;
-	npu = &nphb->npu;
+	hose = pci_bus_to_host(npdev->bus);
+	nphb = hose->private_data;
+	npu = hose->npu;
+	if (!npu)
+		return ERR_PTR(-ENODEV);
 
 	/*
 	 * Setup the NPU context table for a particular GPU. These need to be
@@ -764,7 +788,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	 */
 	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], npdev);
 
-	if (!nphb->npu.nmmu_flush) {
+	if (!npu->nmmu_flush) {
 		/*
 		 * If we're not explicitly flushing ourselves we need to mark
 		 * the thread for global flushes
@@ -802,6 +826,7 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
 	struct device_node *nvlink_dn;
 	u32 nvlink_index;
+	struct pci_controller *hose;
 
 	if (WARN_ON(!npdev))
 		return;
@@ -809,8 +834,11 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 	if (!firmware_has_feature(FW_FEATURE_OPAL))
 		return;
 
-	nphb = pci_bus_to_host(npdev->bus)->private_data;
-	npu = &nphb->npu;
+	hose = pci_bus_to_host(npdev->bus);
+	nphb = hose->private_data;
+	npu = hose->npu;
+	if (!npu)
+		return;
 	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
 	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
 							&nvlink_index)))
@@ -888,9 +916,15 @@ int pnv_npu2_init(struct pnv_phb *phb)
 	struct pci_dev *gpdev;
 	static int npu_index;
 	uint64_t rc = 0;
+	struct pci_controller *hose = phb->hose;
+	struct npu *npu;
+	int ret;
 
-	phb->npu.nmmu_flush =
-		of_property_read_bool(phb->hose->dn, "ibm,nmmu-flush");
+	npu = kzalloc(sizeof(*npu), GFP_KERNEL);
+	if (!npu)
+		return -ENOMEM;
+
+	npu->nmmu_flush = of_property_read_bool(hose->dn, "ibm,nmmu-flush");
 	for_each_child_of_node(phb->hose->dn, dn) {
 		gpdev = pnv_pci_get_gpu_dev(get_pci_dev(dn));
 		if (gpdev) {
@@ -904,18 +938,29 @@ int pnv_npu2_init(struct pnv_phb *phb)
 		}
 	}
 
-	for (i = 0; !of_property_read_u64_index(phb->hose->dn, "ibm,mmio-atsd",
+	for (i = 0; !of_property_read_u64_index(hose->dn, "ibm,mmio-atsd",
 							i, &mmio_atsd); i++)
-		phb->npu.mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
+		npu->mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
 
-	pr_info("NPU%lld: Found %d MMIO ATSD registers", phb->opal_id, i);
-	phb->npu.mmio_atsd_count = i;
-	phb->npu.mmio_atsd_usage = 0;
+	pr_info("NPU%d: Found %d MMIO ATSD registers", hose->global_number, i);
+	npu->mmio_atsd_count = i;
+	npu->mmio_atsd_usage = 0;
 	npu_index++;
-	if (WARN_ON(npu_index >= NV_MAX_NPUS))
-		return -ENOSPC;
+	if (WARN_ON(npu_index >= NV_MAX_NPUS)) {
+		ret = -ENOSPC;
+		goto fail_exit;
+	}
 	max_npu2_index = npu_index;
-	phb->npu.index = npu_index;
+	npu->index = npu_index;
+	hose->npu = npu;
 
 	return 0;
+
+fail_exit:
+	for (i = 0; i < npu->mmio_atsd_count; ++i)
+		iounmap(npu->mmio_atsd_regs[i]);
+
+	kfree(npu);
+
+	return ret;
 }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH kernel v4 05/19] powerpc/powernv/npu: Move OPAL calls away from context manipulation
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

When introduced, the NPU context init/destroy helpers called OPAL which
enabled/disabled PID (a userspace memory context ID) filtering in an NPU
per GPU; this was a requirement for P9 DD1.0. However, newer chip
revisions added PID wildcard support so there is no longer a need to
call OPAL every time a new context is initialized. Also, since the PID
wildcard support was added, skiboot does not clear wildcard entries
in the NPU so these remain in the hardware until the system reboots.

This moves LPID and wildcard programming to the PE setup code which
executes once during the booting process so NPU2 context init/destroy
won't need to do additional configuration.

This removes the check for FW_FEATURE_OPAL as pnv_npu2_init_context/
pnv_npu2_release_context/pnv_npu2_init do not call OPAL anymore.

This moves the pnv_npu2_init() declaration as pseries should be able to
use it. This keeps pnv_npu2_map_lpar() in powernv as pseries is not
allowed to call it. This exports pnv_npu2_map_lpar_dev() as the
following patches will use it from the VFIO driver.

While at it, replace redundant list_for_each_entry_safe() with
a simpler list_for_each_entry().

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* add flags check in pnv_npu2_init_context()
---
 arch/powerpc/include/asm/pci.h            |   3 +
 arch/powerpc/platforms/powernv/pci.h      |   2 +-
 arch/powerpc/platforms/powernv/npu-dma.c  | 104 +++++++++++-----------
 arch/powerpc/platforms/powernv/pci-ioda.c |  15 +++-
 4 files changed, 70 insertions(+), 54 deletions(-)

diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h
index 2af9ded..baf2886 100644
--- a/arch/powerpc/include/asm/pci.h
+++ b/arch/powerpc/include/asm/pci.h
@@ -129,5 +129,8 @@ extern void pcibios_scan_phb(struct pci_controller *hose);
 
 extern struct pci_dev *pnv_pci_get_gpu_dev(struct pci_dev *npdev);
 extern struct pci_dev *pnv_pci_get_npu_dev(struct pci_dev *gpdev, int index);
+extern int pnv_npu2_init(struct pci_controller *hose);
+extern int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
+		unsigned long msr);
 
 #endif /* __ASM_POWERPC_PCI_H */
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index f2d50974..ddb4f02 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -190,6 +190,7 @@ extern void pnv_pci_init_ioda_hub(struct device_node *np);
 extern void pnv_pci_init_ioda2_phb(struct device_node *np);
 extern void pnv_pci_init_npu_phb(struct device_node *np);
 extern void pnv_pci_init_npu2_opencapi_phb(struct device_node *np);
+extern void pnv_npu2_map_lpar(struct pnv_ioda_pe *gpe, unsigned long msr);
 extern void pnv_pci_reset_secondary_bus(struct pci_dev *dev);
 extern int pnv_eeh_phb_reset(struct pci_controller *hose, int option);
 
@@ -220,7 +221,6 @@ extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
 extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
 extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
 extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
-extern int pnv_npu2_init(struct pnv_phb *phb);
 
 /* pci-ioda-tce.c */
 #define POWERNV_IOMMU_DEFAULT_LEVELS	1
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 7dd5c0e5..ef1457f 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -679,7 +679,6 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	u32 nvlink_index;
 	struct device_node *nvlink_dn;
 	struct mm_struct *mm = current->mm;
-	struct pnv_phb *nphb;
 	struct npu *npu;
 	struct npu_context *npu_context;
 	struct pci_controller *hose;
@@ -690,13 +689,14 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	 */
 	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
 
-	if (!firmware_has_feature(FW_FEATURE_OPAL))
-		return ERR_PTR(-ENODEV);
-
 	if (!npdev)
 		/* No nvlink associated with this GPU device */
 		return ERR_PTR(-ENODEV);
 
+	/* We only support DR/PR/HV in pnv_npu2_map_lpar_dev() */
+	if (flags & ~(MSR_DR | MSR_PR | MSR_HV))
+		return ERR_PTR(-EINVAL);
+
 	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
 	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
 							&nvlink_index)))
@@ -711,23 +711,10 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	}
 
 	hose = pci_bus_to_host(npdev->bus);
-	nphb = hose->private_data;
 	npu = hose->npu;
 	if (!npu)
 		return ERR_PTR(-ENODEV);
 
-	/*
-	 * Setup the NPU context table for a particular GPU. These need to be
-	 * per-GPU as we need the tables to filter ATSDs when there are no
-	 * active contexts on a particular GPU. It is safe for these to be
-	 * called concurrently with destroy as the OPAL call takes appropriate
-	 * locks and refcounts on init/destroy.
-	 */
-	rc = opal_npu_init_context(nphb->opal_id, mm->context.id, flags,
-				PCI_DEVID(gpdev->bus->number, gpdev->devfn));
-	if (rc < 0)
-		return ERR_PTR(-ENOSPC);
-
 	/*
 	 * We store the npu pci device so we can more easily get at the
 	 * associated npus.
@@ -738,9 +725,6 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 		if (npu_context->release_cb != cb ||
 			npu_context->priv != priv) {
 			spin_unlock(&npu_context_lock);
-			opal_npu_destroy_context(nphb->opal_id, mm->context.id,
-						PCI_DEVID(gpdev->bus->number,
-							gpdev->devfn));
 			return ERR_PTR(-EINVAL);
 		}
 
@@ -766,9 +750,6 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 
 		if (rc) {
 			kfree(npu_context);
-			opal_npu_destroy_context(nphb->opal_id, mm->context.id,
-					PCI_DEVID(gpdev->bus->number,
-						gpdev->devfn));
 			return ERR_PTR(rc);
 		}
 
@@ -821,7 +802,6 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 			struct pci_dev *gpdev)
 {
 	int removed;
-	struct pnv_phb *nphb;
 	struct npu *npu;
 	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
 	struct device_node *nvlink_dn;
@@ -831,11 +811,7 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 	if (WARN_ON(!npdev))
 		return;
 
-	if (!firmware_has_feature(FW_FEATURE_OPAL))
-		return;
-
 	hose = pci_bus_to_host(npdev->bus);
-	nphb = hose->private_data;
 	npu = hose->npu;
 	if (!npu)
 		return;
@@ -844,8 +820,6 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 							&nvlink_index)))
 		return;
 	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], NULL);
-	opal_npu_destroy_context(nphb->opal_id, npu_context->mm->context.id,
-				PCI_DEVID(gpdev->bus->number, gpdev->devfn));
 	spin_lock(&npu_context_lock);
 	removed = kref_put(&npu_context->kref, pnv_npu2_release_context);
 	spin_unlock(&npu_context_lock);
@@ -877,9 +851,6 @@ int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
 	/* mmap_sem should be held so the struct_mm must be present */
 	struct mm_struct *mm = context->mm;
 
-	if (!firmware_has_feature(FW_FEATURE_OPAL))
-		return -ENODEV;
-
 	WARN_ON(!rwsem_is_locked(&mm->mmap_sem));
 
 	for (i = 0; i < count; i++) {
@@ -908,15 +879,11 @@ int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
 }
 EXPORT_SYMBOL(pnv_npu2_handle_fault);
 
-int pnv_npu2_init(struct pnv_phb *phb)
+int pnv_npu2_init(struct pci_controller *hose)
 {
 	unsigned int i;
 	u64 mmio_atsd;
-	struct device_node *dn;
-	struct pci_dev *gpdev;
 	static int npu_index;
-	uint64_t rc = 0;
-	struct pci_controller *hose = phb->hose;
 	struct npu *npu;
 	int ret;
 
@@ -925,18 +892,6 @@ int pnv_npu2_init(struct pnv_phb *phb)
 		return -ENOMEM;
 
 	npu->nmmu_flush = of_property_read_bool(hose->dn, "ibm,nmmu-flush");
-	for_each_child_of_node(phb->hose->dn, dn) {
-		gpdev = pnv_pci_get_gpu_dev(get_pci_dev(dn));
-		if (gpdev) {
-			rc = opal_npu_map_lpar(phb->opal_id,
-				PCI_DEVID(gpdev->bus->number, gpdev->devfn),
-				0, 0);
-			if (rc)
-				dev_err(&gpdev->dev,
-					"Error %lld mapping device to LPAR\n",
-					rc);
-		}
-	}
 
 	for (i = 0; !of_property_read_u64_index(hose->dn, "ibm,mmio-atsd",
 							i, &mmio_atsd); i++)
@@ -964,3 +919,52 @@ int pnv_npu2_init(struct pnv_phb *phb)
 
 	return ret;
 }
+
+int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
+		unsigned long msr)
+{
+	int ret;
+	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
+	struct pci_controller *hose;
+	struct pnv_phb *nphb;
+
+	if (!npdev)
+		return -ENODEV;
+
+	hose = pci_bus_to_host(npdev->bus);
+	nphb = hose->private_data;
+
+	dev_dbg(&gpdev->dev, "Map LPAR opalid=%llu lparid=%u\n",
+			nphb->opal_id, lparid);
+	/*
+	 * Currently we only support radix and non-zero LPCR only makes sense
+	 * for hash tables so skiboot expects the LPCR parameter to be a zero.
+	 */
+	ret = opal_npu_map_lpar(nphb->opal_id,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn), lparid,
+			0 /* LPCR bits */);
+	if (ret) {
+		dev_err(&gpdev->dev, "Error %d mapping device to LPAR\n", ret);
+		return ret;
+	}
+
+	dev_dbg(&gpdev->dev, "init context opalid=%llu msr=%lx\n",
+			nphb->opal_id, msr);
+	ret = opal_npu_init_context(nphb->opal_id, 0/*__unused*/, msr,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn));
+	if (ret < 0)
+		dev_err(&gpdev->dev, "Failed to init context: %d\n", ret);
+	else
+		ret = 0;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(pnv_npu2_map_lpar_dev);
+
+void pnv_npu2_map_lpar(struct pnv_ioda_pe *gpe, unsigned long msr)
+{
+	struct pci_dev *gpdev;
+
+	list_for_each_entry(gpdev, &gpe->pbus->devices, bus_list)
+		pnv_npu2_map_lpar_dev(gpdev, 0, msr);
+}
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 29c6837..2f9eb43 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1271,19 +1271,20 @@ static void pnv_ioda_setup_npu_PEs(struct pci_bus *bus)
 
 static void pnv_pci_ioda_setup_PEs(void)
 {
-	struct pci_controller *hose, *tmp;
+	struct pci_controller *hose;
 	struct pnv_phb *phb;
 	struct pci_bus *bus;
 	struct pci_dev *pdev;
+	struct pnv_ioda_pe *pe;
 
-	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+	list_for_each_entry(hose, &hose_list, list_node) {
 		phb = hose->private_data;
 		if (phb->type == PNV_PHB_NPU_NVLINK) {
 			/* PE#0 is needed for error reporting */
 			pnv_ioda_reserve_pe(phb, 0);
 			pnv_ioda_setup_npu_PEs(hose->bus);
 			if (phb->model == PNV_PHB_MODEL_NPU2)
-				pnv_npu2_init(phb);
+				pnv_npu2_init(hose);
 		}
 		if (phb->type == PNV_PHB_NPU_OCAPI) {
 			bus = hose->bus;
@@ -1291,6 +1292,14 @@ static void pnv_pci_ioda_setup_PEs(void)
 				pnv_ioda_setup_dev_PE(pdev);
 		}
 	}
+	list_for_each_entry(hose, &hose_list, list_node) {
+		phb = hose->private_data;
+		if (phb->type != PNV_PHB_IODA2)
+			continue;
+
+		list_for_each_entry(pe, &phb->ioda.pe_list, list)
+			pnv_npu2_map_lpar(pe, MSR_DR | MSR_PR | MSR_HV);
+	}
 }
 
 #ifdef CONFIG_PCI_IOV
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH kernel v4 05/19] powerpc/powernv/npu: Move OPAL calls away from context manipulation
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  0 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

When introduced, the NPU context init/destroy helpers called OPAL which
enabled/disabled PID (a userspace memory context ID) filtering in an NPU
per GPU; this was a requirement for P9 DD1.0. However, newer chip
revisions added PID wildcard support so there is no longer a need to
call OPAL every time a new context is initialized. Also, since the PID
wildcard support was added, skiboot does not clear wildcard entries
in the NPU so these remain in the hardware until the system reboots.

This moves LPID and wildcard programming to the PE setup code which
executes once during the booting process so NPU2 context init/destroy
won't need to do additional configuration.

This removes the check for FW_FEATURE_OPAL as pnv_npu2_init_context/
pnv_npu2_release_context/pnv_npu2_init do not call OPAL anymore.

This moves the pnv_npu2_init() declaration as pseries should be able to
use it. This keeps pnv_npu2_map_lpar() in powernv as pseries is not
allowed to call it. This exports pnv_npu2_map_lpar_dev() as the
following patches will use it from the VFIO driver.

While at it, replace redundant list_for_each_entry_safe() with
a simpler list_for_each_entry().

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* add flags check in pnv_npu2_init_context()
---
 arch/powerpc/include/asm/pci.h            |   3 +
 arch/powerpc/platforms/powernv/pci.h      |   2 +-
 arch/powerpc/platforms/powernv/npu-dma.c  | 104 +++++++++++-----------
 arch/powerpc/platforms/powernv/pci-ioda.c |  15 +++-
 4 files changed, 70 insertions(+), 54 deletions(-)

diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h
index 2af9ded..baf2886 100644
--- a/arch/powerpc/include/asm/pci.h
+++ b/arch/powerpc/include/asm/pci.h
@@ -129,5 +129,8 @@ extern void pcibios_scan_phb(struct pci_controller *hose);
 
 extern struct pci_dev *pnv_pci_get_gpu_dev(struct pci_dev *npdev);
 extern struct pci_dev *pnv_pci_get_npu_dev(struct pci_dev *gpdev, int index);
+extern int pnv_npu2_init(struct pci_controller *hose);
+extern int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
+		unsigned long msr);
 
 #endif /* __ASM_POWERPC_PCI_H */
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index f2d50974..ddb4f02 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -190,6 +190,7 @@ extern void pnv_pci_init_ioda_hub(struct device_node *np);
 extern void pnv_pci_init_ioda2_phb(struct device_node *np);
 extern void pnv_pci_init_npu_phb(struct device_node *np);
 extern void pnv_pci_init_npu2_opencapi_phb(struct device_node *np);
+extern void pnv_npu2_map_lpar(struct pnv_ioda_pe *gpe, unsigned long msr);
 extern void pnv_pci_reset_secondary_bus(struct pci_dev *dev);
 extern int pnv_eeh_phb_reset(struct pci_controller *hose, int option);
 
@@ -220,7 +221,6 @@ extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
 extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
 extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
 extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
-extern int pnv_npu2_init(struct pnv_phb *phb);
 
 /* pci-ioda-tce.c */
 #define POWERNV_IOMMU_DEFAULT_LEVELS	1
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 7dd5c0e5..ef1457f 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -679,7 +679,6 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	u32 nvlink_index;
 	struct device_node *nvlink_dn;
 	struct mm_struct *mm = current->mm;
-	struct pnv_phb *nphb;
 	struct npu *npu;
 	struct npu_context *npu_context;
 	struct pci_controller *hose;
@@ -690,13 +689,14 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	 */
 	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
 
-	if (!firmware_has_feature(FW_FEATURE_OPAL))
-		return ERR_PTR(-ENODEV);
-
 	if (!npdev)
 		/* No nvlink associated with this GPU device */
 		return ERR_PTR(-ENODEV);
 
+	/* We only support DR/PR/HV in pnv_npu2_map_lpar_dev() */
+	if (flags & ~(MSR_DR | MSR_PR | MSR_HV))
+		return ERR_PTR(-EINVAL);
+
 	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
 	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
 							&nvlink_index)))
@@ -711,23 +711,10 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	}
 
 	hose = pci_bus_to_host(npdev->bus);
-	nphb = hose->private_data;
 	npu = hose->npu;
 	if (!npu)
 		return ERR_PTR(-ENODEV);
 
-	/*
-	 * Setup the NPU context table for a particular GPU. These need to be
-	 * per-GPU as we need the tables to filter ATSDs when there are no
-	 * active contexts on a particular GPU. It is safe for these to be
-	 * called concurrently with destroy as the OPAL call takes appropriate
-	 * locks and refcounts on init/destroy.
-	 */
-	rc = opal_npu_init_context(nphb->opal_id, mm->context.id, flags,
-				PCI_DEVID(gpdev->bus->number, gpdev->devfn));
-	if (rc < 0)
-		return ERR_PTR(-ENOSPC);
-
 	/*
 	 * We store the npu pci device so we can more easily get at the
 	 * associated npus.
@@ -738,9 +725,6 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 		if (npu_context->release_cb != cb ||
 			npu_context->priv != priv) {
 			spin_unlock(&npu_context_lock);
-			opal_npu_destroy_context(nphb->opal_id, mm->context.id,
-						PCI_DEVID(gpdev->bus->number,
-							gpdev->devfn));
 			return ERR_PTR(-EINVAL);
 		}
 
@@ -766,9 +750,6 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 
 		if (rc) {
 			kfree(npu_context);
-			opal_npu_destroy_context(nphb->opal_id, mm->context.id,
-					PCI_DEVID(gpdev->bus->number,
-						gpdev->devfn));
 			return ERR_PTR(rc);
 		}
 
@@ -821,7 +802,6 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 			struct pci_dev *gpdev)
 {
 	int removed;
-	struct pnv_phb *nphb;
 	struct npu *npu;
 	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
 	struct device_node *nvlink_dn;
@@ -831,11 +811,7 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 	if (WARN_ON(!npdev))
 		return;
 
-	if (!firmware_has_feature(FW_FEATURE_OPAL))
-		return;
-
 	hose = pci_bus_to_host(npdev->bus);
-	nphb = hose->private_data;
 	npu = hose->npu;
 	if (!npu)
 		return;
@@ -844,8 +820,6 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 							&nvlink_index)))
 		return;
 	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], NULL);
-	opal_npu_destroy_context(nphb->opal_id, npu_context->mm->context.id,
-				PCI_DEVID(gpdev->bus->number, gpdev->devfn));
 	spin_lock(&npu_context_lock);
 	removed = kref_put(&npu_context->kref, pnv_npu2_release_context);
 	spin_unlock(&npu_context_lock);
@@ -877,9 +851,6 @@ int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
 	/* mmap_sem should be held so the struct_mm must be present */
 	struct mm_struct *mm = context->mm;
 
-	if (!firmware_has_feature(FW_FEATURE_OPAL))
-		return -ENODEV;
-
 	WARN_ON(!rwsem_is_locked(&mm->mmap_sem));
 
 	for (i = 0; i < count; i++) {
@@ -908,15 +879,11 @@ int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
 }
 EXPORT_SYMBOL(pnv_npu2_handle_fault);
 
-int pnv_npu2_init(struct pnv_phb *phb)
+int pnv_npu2_init(struct pci_controller *hose)
 {
 	unsigned int i;
 	u64 mmio_atsd;
-	struct device_node *dn;
-	struct pci_dev *gpdev;
 	static int npu_index;
-	uint64_t rc = 0;
-	struct pci_controller *hose = phb->hose;
 	struct npu *npu;
 	int ret;
 
@@ -925,18 +892,6 @@ int pnv_npu2_init(struct pnv_phb *phb)
 		return -ENOMEM;
 
 	npu->nmmu_flush = of_property_read_bool(hose->dn, "ibm,nmmu-flush");
-	for_each_child_of_node(phb->hose->dn, dn) {
-		gpdev = pnv_pci_get_gpu_dev(get_pci_dev(dn));
-		if (gpdev) {
-			rc = opal_npu_map_lpar(phb->opal_id,
-				PCI_DEVID(gpdev->bus->number, gpdev->devfn),
-				0, 0);
-			if (rc)
-				dev_err(&gpdev->dev,
-					"Error %lld mapping device to LPAR\n",
-					rc);
-		}
-	}
 
 	for (i = 0; !of_property_read_u64_index(hose->dn, "ibm,mmio-atsd",
 							i, &mmio_atsd); i++)
@@ -964,3 +919,52 @@ int pnv_npu2_init(struct pnv_phb *phb)
 
 	return ret;
 }
+
+int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
+		unsigned long msr)
+{
+	int ret;
+	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
+	struct pci_controller *hose;
+	struct pnv_phb *nphb;
+
+	if (!npdev)
+		return -ENODEV;
+
+	hose = pci_bus_to_host(npdev->bus);
+	nphb = hose->private_data;
+
+	dev_dbg(&gpdev->dev, "Map LPAR opalid=%llu lparid=%u\n",
+			nphb->opal_id, lparid);
+	/*
+	 * Currently we only support radix and non-zero LPCR only makes sense
+	 * for hash tables so skiboot expects the LPCR parameter to be a zero.
+	 */
+	ret = opal_npu_map_lpar(nphb->opal_id,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn), lparid,
+			0 /* LPCR bits */);
+	if (ret) {
+		dev_err(&gpdev->dev, "Error %d mapping device to LPAR\n", ret);
+		return ret;
+	}
+
+	dev_dbg(&gpdev->dev, "init context opalid=%llu msr=%lx\n",
+			nphb->opal_id, msr);
+	ret = opal_npu_init_context(nphb->opal_id, 0/*__unused*/, msr,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn));
+	if (ret < 0)
+		dev_err(&gpdev->dev, "Failed to init context: %d\n", ret);
+	else
+		ret = 0;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(pnv_npu2_map_lpar_dev);
+
+void pnv_npu2_map_lpar(struct pnv_ioda_pe *gpe, unsigned long msr)
+{
+	struct pci_dev *gpdev;
+
+	list_for_each_entry(gpdev, &gpe->pbus->devices, bus_list)
+		pnv_npu2_map_lpar_dev(gpdev, 0, msr);
+}
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 29c6837..2f9eb43 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1271,19 +1271,20 @@ static void pnv_ioda_setup_npu_PEs(struct pci_bus *bus)
 
 static void pnv_pci_ioda_setup_PEs(void)
 {
-	struct pci_controller *hose, *tmp;
+	struct pci_controller *hose;
 	struct pnv_phb *phb;
 	struct pci_bus *bus;
 	struct pci_dev *pdev;
+	struct pnv_ioda_pe *pe;
 
-	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+	list_for_each_entry(hose, &hose_list, list_node) {
 		phb = hose->private_data;
 		if (phb->type == PNV_PHB_NPU_NVLINK) {
 			/* PE#0 is needed for error reporting */
 			pnv_ioda_reserve_pe(phb, 0);
 			pnv_ioda_setup_npu_PEs(hose->bus);
 			if (phb->model == PNV_PHB_MODEL_NPU2)
-				pnv_npu2_init(phb);
+				pnv_npu2_init(hose);
 		}
 		if (phb->type == PNV_PHB_NPU_OCAPI) {
 			bus = hose->bus;
@@ -1291,6 +1292,14 @@ static void pnv_pci_ioda_setup_PEs(void)
 				pnv_ioda_setup_dev_PE(pdev);
 		}
 	}
+	list_for_each_entry(hose, &hose_list, list_node) {
+		phb = hose->private_data;
+		if (phb->type != PNV_PHB_IODA2)
+			continue;
+
+		list_for_each_entry(pe, &phb->ioda.pe_list, list)
+			pnv_npu2_map_lpar(pe, MSR_DR | MSR_PR | MSR_HV);
+	}
 }
 
 #ifdef CONFIG_PCI_IOV
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH kernel v4 06/19] powerpc/pseries/iommu: Use memory@ nodes in max RAM address calculation
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

We might have memory@ nodes with "linux,usable-memory" set to zero
(for example, to replicate powernv's behaviour for GPU coherent memory).
Such memory needs extra initialization before use but can be used
afterwards, so the pseries platform will try mapping it for DMA and
the DMA window needs to cover those memory regions too; if the window
cannot cover new memory regions, memory onlining fails.

This walks through the memory nodes to find the highest RAM address so
that a huge DMA window can cover that too, in case this memory gets
onlined later.
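
For illustration, such a memory@ node might look like the following
device tree fragment (a hypothetical sketch; the unit address, sizes
and cell counts depend on the platform):

```dts
/* Hypothetical node: memory present in "reg" but with a usable size of
 * zero, so it is skipped at boot and may be onlined later. The walk in
 * this patch still counts its start + size towards the maximum address.
 */
memory@100000000 {
	device_type = "memory";
	reg = <0x1 0x00000000 0x0 0x10000000>;		/* 256MB at 4GB */
	linux,usable-memory = <0x1 0x00000000 0x0 0x0>;	/* usable size 0 */
};
```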

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* uses of_read_number directly instead of cut-n-pasted read_n_cells
---
 arch/powerpc/platforms/pseries/iommu.c | 34 +++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 06f0296..7da74b5 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -964,6 +964,38 @@ struct failed_ddw_pdn {
 
 static LIST_HEAD(failed_ddw_pdn_list);
 
+static phys_addr_t ddw_memory_hotplug_max(void)
+{
+	phys_addr_t max_addr = memory_hotplug_max();
+	struct device_node *memory;
+
+	for_each_node_by_type(memory, "memory") {
+		unsigned long start, size;
+		int ranges, n_mem_addr_cells, n_mem_size_cells, len;
+		const __be32 *memcell_buf;
+
+		memcell_buf = of_get_property(memory, "reg", &len);
+		if (!memcell_buf || len <= 0)
+			continue;
+
+		n_mem_addr_cells = of_n_addr_cells(memory);
+		n_mem_size_cells = of_n_size_cells(memory);
+
+		/* ranges in cell */
+		ranges = (len >> 2) / (n_mem_addr_cells + n_mem_size_cells);
+
+		/* these are order-sensitive, and modify the buffer pointer */
+		start = of_read_number(memcell_buf, n_mem_addr_cells);
+		memcell_buf += n_mem_addr_cells;
+		size = of_read_number(memcell_buf, n_mem_size_cells);
+		memcell_buf += n_mem_size_cells;
+
+		max_addr = max_t(phys_addr_t, max_addr, start + size);
+	}
+
+	return max_addr;
+}
+
 /*
  * If the PE supports dynamic dma windows, and there is space for a table
  * that can map all pages in a linear offset, then setup such a table,
@@ -1053,7 +1085,7 @@ static u64 enable_ddw(struct pci_dev *dev, struct device_node *pdn)
 	}
 	/* verify the window * number of ptes will map the partition */
 	/* check largest block * page size > max memory hotplug addr */
-	max_addr = memory_hotplug_max();
+	max_addr = ddw_memory_hotplug_max();
 	if (query.largest_available_block < (max_addr >> page_shift)) {
 		dev_dbg(&dev->dev, "can't map partition max 0x%llx with %u "
 			  "%llu-sized pages\n", max_addr,  query.largest_available_block,
-- 
2.17.1



* [PATCH kernel v4 07/19] powerpc/pseries/npu: Enable platform support
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

We already changed the NPU API for GPUs not to call OPAL, and the
remaining bit is initializing the NPU structures.

This searches for POWER9 NVLinks attached to any device on a PHB and
initializes an NPU structure if any are found.
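
As a rough sketch, the device tree topology this loop looks for is
something like the following (hypothetical labels and node names; only
the compatible strings and the "ibm,nvlink" phandle correspond to what
the code checks):

```dts
/* Hypothetical layout: the loop walks device nodes starting at the PHB,
 * follows each "ibm,nvlink" phandle and checks the link node and its
 * parent for the expected compatible strings.
 */
npu: npu@5011000 {
	compatible = "ibm,power9-npu";

	npu_link0: link@0 {
		compatible = "ibm,npu-link";
	};
};

gpu@0 {
	ibm,nvlink = <&npu_link0>;	/* points at the NVLink above */
};
```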

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* dropped "IBM,npu-vphb" compatible type on PHB and use the type of NVLink
---
 arch/powerpc/platforms/pseries/pci.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/pci.c b/arch/powerpc/platforms/pseries/pci.c
index 41d8a4d..e1b9b45 100644
--- a/arch/powerpc/platforms/pseries/pci.c
+++ b/arch/powerpc/platforms/pseries/pci.c
@@ -29,6 +29,7 @@
 #include <asm/pci-bridge.h>
 #include <asm/prom.h>
 #include <asm/ppc-pci.h>
+#include <asm/pci.h>
 #include "pseries.h"
 
 #if 0
@@ -237,6 +238,8 @@ static void __init pSeries_request_regions(void)
 
 void __init pSeries_final_fixup(void)
 {
+	struct pci_controller *hose;
+
 	pSeries_request_regions();
 
 	eeh_probe_devices();
@@ -246,6 +249,25 @@ void __init pSeries_final_fixup(void)
 	ppc_md.pcibios_sriov_enable = pseries_pcibios_sriov_enable;
 	ppc_md.pcibios_sriov_disable = pseries_pcibios_sriov_disable;
 #endif
+	list_for_each_entry(hose, &hose_list, list_node) {
+		struct device_node *dn = hose->dn, *nvdn;
+
+		while (1) {
+			dn = of_find_all_nodes(dn);
+			if (!dn)
+				break;
+			nvdn = of_parse_phandle(dn, "ibm,nvlink", 0);
+			if (!nvdn)
+				continue;
+			if (!of_device_is_compatible(nvdn, "ibm,npu-link"))
+				continue;
+			if (!of_device_is_compatible(nvdn->parent,
+						"ibm,power9-npu"))
+				continue;
+			pnv_npu2_init(hose);
+			break;
+		}
+	}
 }
 
 /*
-- 
2.17.1



* [PATCH kernel v4 08/19] powerpc/pseries: Remove IOMMU API support for non-LPAR systems
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

The pci_dma_bus_setup_pSeries and pci_dma_dev_setup_pSeries hooks are
registered for the pseries platform when it does not have
FW_FEATURE_LPAR; these would be pre-powernv platforms which we never
supported PCI passthrough for anyway, so remove the IOMMU API support
from that path.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/platforms/pseries/iommu.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 7da74b5..8f9d3be 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -645,7 +645,6 @@ static void pci_dma_bus_setup_pSeries(struct pci_bus *bus)
 	iommu_table_setparms(pci->phb, dn, tbl);
 	tbl->it_ops = &iommu_table_pseries_ops;
 	iommu_init_table(tbl, pci->phb->node);
-	iommu_register_group(pci->table_group, pci_domain_nr(bus), 0);
 
 	/* Divide the rest (1.75GB) among the children */
 	pci->phb->dma_window_size = 0x80000000ul;
@@ -756,10 +755,7 @@ static void pci_dma_dev_setup_pSeries(struct pci_dev *dev)
 		iommu_table_setparms(phb, dn, tbl);
 		tbl->it_ops = &iommu_table_pseries_ops;
 		iommu_init_table(tbl, phb->node);
-		iommu_register_group(PCI_DN(dn)->table_group,
-				pci_domain_nr(phb->bus), 0);
 		set_iommu_table_base(&dev->dev, tbl);
-		iommu_add_device(&dev->dev);
 		return;
 	}
 
@@ -770,11 +766,10 @@ static void pci_dma_dev_setup_pSeries(struct pci_dev *dev)
 	while (dn && PCI_DN(dn) && PCI_DN(dn)->table_group == NULL)
 		dn = dn->parent;
 
-	if (dn && PCI_DN(dn)) {
+	if (dn && PCI_DN(dn))
 		set_iommu_table_base(&dev->dev,
 				PCI_DN(dn)->table_group->tables[0]);
-		iommu_add_device(&dev->dev);
-	} else
+	else
 		printk(KERN_WARNING "iommu: Device %s has no iommu table\n",
 		       pci_name(dev));
 }
-- 
2.17.1



* [PATCH kernel v4 09/19] powerpc/powernv/pseries: Rework device adding to IOMMU groups
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

The powernv platform registers IOMMU groups and adds devices to them
from the pci_controller_ops::setup_bridge() hook except one case when
virtual functions (SRIOV VFs) are added from a bus notifier.

The pseries platform registers IOMMU groups from
the pci_controller_ops::dma_bus_setup() hook and adds devices from
the pci_controller_ops::dma_dev_setup() hook. The very same bus notifier
used for powernv does not add devices for pseries though as
__of_scan_bus() adds devices first, then it does the bus/dev DMA setup.

Both platforms use iommu_add_device() which takes a device and expects
it to have a valid IOMMU table struct with an iommu_table_group pointer
which in turn points to the iommu_group struct (which represents
an IOMMU group). Although the helper seems easy to use, it relies on
some pre-existing device configuration and associated data structures
which it does not really need.

This simplifies iommu_add_device() to take the table_group pointer
directly. Pseries already has a table_group pointer handy and the bus
notifier is not used anyway. For powernv, this copies the existing bus
notifier and makes it work for powernv only, which gives an easy way of
getting to the table_group pointer. This was tested on VFs but should
also support physical PCI hotplug.

Since iommu_add_device() receives the table_group pointer directly, and
pseries neither does TCE cache invalidation (the hypervisor does) nor
allows multiple groups per VFIO container (in other words, sharing an
IOMMU table between partitionable endpoints), this removes
iommu_table_group_link from pseries.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/include/asm/iommu.h          | 12 ++---
 arch/powerpc/kernel/iommu.c               | 58 ++---------------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 10 +---
 arch/powerpc/platforms/powernv/pci.c      | 43 ++++++++++++++++-
 arch/powerpc/platforms/pseries/iommu.c    | 46 +++++++++---------
 5 files changed, 74 insertions(+), 95 deletions(-)

diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h
index a8aeac0..e847ff6 100644
--- a/arch/powerpc/include/asm/iommu.h
+++ b/arch/powerpc/include/asm/iommu.h
@@ -215,9 +215,9 @@ struct iommu_table_group {
 
 extern void iommu_register_group(struct iommu_table_group *table_group,
 				 int pci_domain_number, unsigned long pe_num);
-extern int iommu_add_device(struct device *dev);
+extern int iommu_add_device(struct iommu_table_group *table_group,
+		struct device *dev);
 extern void iommu_del_device(struct device *dev);
-extern int __init tce_iommu_bus_notifier_init(void);
 extern long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl,
 		unsigned long entry, unsigned long *hpa,
 		enum dma_data_direction *direction);
@@ -228,7 +228,8 @@ static inline void iommu_register_group(struct iommu_table_group *table_group,
 {
 }
 
-static inline int iommu_add_device(struct device *dev)
+static inline int iommu_add_device(struct iommu_table_group *table_group,
+		struct device *dev)
 {
 	return 0;
 }
@@ -236,11 +237,6 @@ static inline int iommu_add_device(struct device *dev)
 static inline void iommu_del_device(struct device *dev)
 {
 }
-
-static inline int __init tce_iommu_bus_notifier_init(void)
-{
-        return 0;
-}
 #endif /* !CONFIG_IOMMU_API */
 
 int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr);
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 8ccfdd9..1e85168 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -1076,11 +1076,8 @@ void iommu_release_ownership(struct iommu_table *tbl)
 }
 EXPORT_SYMBOL_GPL(iommu_release_ownership);
 
-int iommu_add_device(struct device *dev)
+int iommu_add_device(struct iommu_table_group *table_group, struct device *dev)
 {
-	struct iommu_table *tbl;
-	struct iommu_table_group_link *tgl;
-
 	/*
 	 * The sysfs entries should be populated before
 	 * binding IOMMU group. If sysfs entries isn't
@@ -1096,32 +1093,10 @@ int iommu_add_device(struct device *dev)
 		return -EBUSY;
 	}
 
-	tbl = get_iommu_table_base(dev);
-	if (!tbl) {
-		pr_debug("%s: Skipping device %s with no tbl\n",
-			 __func__, dev_name(dev));
-		return 0;
-	}
-
-	tgl = list_first_entry_or_null(&tbl->it_group_list,
-			struct iommu_table_group_link, next);
-	if (!tgl) {
-		pr_debug("%s: Skipping device %s with no group\n",
-			 __func__, dev_name(dev));
-		return 0;
-	}
 	pr_debug("%s: Adding %s to iommu group %d\n",
-		 __func__, dev_name(dev),
-		 iommu_group_id(tgl->table_group->group));
+		 __func__, dev_name(dev),  iommu_group_id(table_group->group));
 
-	if (PAGE_SIZE < IOMMU_PAGE_SIZE(tbl)) {
-		pr_err("%s: Invalid IOMMU page size %lx (%lx) on %s\n",
-		       __func__, IOMMU_PAGE_SIZE(tbl),
-		       PAGE_SIZE, dev_name(dev));
-		return -EINVAL;
-	}
-
-	return iommu_group_add_device(tgl->table_group->group, dev);
+	return iommu_group_add_device(table_group->group, dev);
 }
 EXPORT_SYMBOL_GPL(iommu_add_device);
 
@@ -1141,31 +1116,4 @@ void iommu_del_device(struct device *dev)
 	iommu_group_remove_device(dev);
 }
 EXPORT_SYMBOL_GPL(iommu_del_device);
-
-static int tce_iommu_bus_notifier(struct notifier_block *nb,
-                unsigned long action, void *data)
-{
-        struct device *dev = data;
-
-        switch (action) {
-        case BUS_NOTIFY_ADD_DEVICE:
-                return iommu_add_device(dev);
-        case BUS_NOTIFY_DEL_DEVICE:
-                if (dev->iommu_group)
-                        iommu_del_device(dev);
-                return 0;
-        default:
-                return 0;
-        }
-}
-
-static struct notifier_block tce_iommu_bus_nb = {
-        .notifier_call = tce_iommu_bus_notifier,
-};
-
-int __init tce_iommu_bus_notifier_init(void)
-{
-        bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb);
-        return 0;
-}
 #endif /* CONFIG_IOMMU_API */
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 2f9eb43..0cd7146 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1940,7 +1940,7 @@ static void pnv_ioda_setup_bus_dma(struct pnv_ioda_pe *pe,
 		set_iommu_table_base(&dev->dev, pe->table_group.tables[0]);
 		set_dma_offset(&dev->dev, pe->tce_bypass_base);
 		if (add_to_group)
-			iommu_add_device(&dev->dev);
+			iommu_add_device(&pe->table_group, &dev->dev);
 
 		if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate)
 			pnv_ioda_setup_bus_dma(pe, dev->subordinate,
@@ -2526,14 +2526,6 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe)
 	if (!pnv_iommu_bypass_disabled)
 		pnv_pci_ioda2_set_bypass(pe, true);
 
-	/*
-	 * Setting table base here only for carrying iommu_group
-	 * further down to let iommu_add_device() do the job.
-	 * pnv_pci_ioda_dma_dev_setup will override it later anyway.
-	 */
-	if (pe->flags & PNV_IODA_PE_DEV)
-		set_iommu_table_base(&pe->pdev->dev, tbl);
-
 	return 0;
 }
 
diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
index db230a35..5121fb8 100644
--- a/arch/powerpc/platforms/powernv/pci.c
+++ b/arch/powerpc/platforms/powernv/pci.c
@@ -1127,4 +1127,45 @@ void __init pnv_pci_init(void)
 	set_pci_dma_ops(&dma_iommu_ops);
 }
 
-machine_subsys_initcall_sync(powernv, tce_iommu_bus_notifier_init);
+static int pnv_tce_iommu_bus_notifier(struct notifier_block *nb,
+		unsigned long action, void *data)
+{
+	struct device *dev = data;
+	struct pci_dev *pdev;
+	struct pci_dn *pdn;
+	struct pnv_ioda_pe *pe;
+	struct pci_controller *hose;
+	struct pnv_phb *phb;
+
+	switch (action) {
+	case BUS_NOTIFY_ADD_DEVICE:
+		pdev = to_pci_dev(dev);
+		pdn = pci_get_pdn(pdev);
+		hose = pci_bus_to_host(pdev->bus);
+		phb = hose->private_data;
+
+		WARN_ON_ONCE(!phb);
+		if (!pdn || pdn->pe_number == IODA_INVALID_PE || !phb)
+			return 0;
+
+		pe = &phb->ioda.pe_array[pdn->pe_number];
+		iommu_add_device(&pe->table_group, dev);
+		return 0;
+	case BUS_NOTIFY_DEL_DEVICE:
+		iommu_del_device(dev);
+		return 0;
+	default:
+		return 0;
+	}
+}
+
+static struct notifier_block pnv_tce_iommu_bus_nb = {
+	.notifier_call = pnv_tce_iommu_bus_notifier,
+};
+
+static int __init pnv_tce_iommu_bus_notifier_init(void)
+{
+	bus_register_notifier(&pci_bus_type, &pnv_tce_iommu_bus_nb);
+	return 0;
+}
+machine_subsys_initcall_sync(powernv, pnv_tce_iommu_bus_notifier_init);
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 8f9d3be..a6302ab 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -57,7 +57,6 @@ static struct iommu_table_group *iommu_pseries_alloc_group(int node)
 {
 	struct iommu_table_group *table_group;
 	struct iommu_table *tbl;
-	struct iommu_table_group_link *tgl;
 
 	table_group = kzalloc_node(sizeof(struct iommu_table_group), GFP_KERNEL,
 			   node);
@@ -68,22 +67,13 @@ static struct iommu_table_group *iommu_pseries_alloc_group(int node)
 	if (!tbl)
 		goto free_group;
 
-	tgl = kzalloc_node(sizeof(struct iommu_table_group_link), GFP_KERNEL,
-			node);
-	if (!tgl)
-		goto free_table;
-
 	INIT_LIST_HEAD_RCU(&tbl->it_group_list);
 	kref_init(&tbl->it_kref);
-	tgl->table_group = table_group;
-	list_add_rcu(&tgl->next, &tbl->it_group_list);
 
 	table_group->tables[0] = tbl;
 
 	return table_group;
 
-free_table:
-	kfree(tbl);
 free_group:
 	kfree(table_group);
 	return NULL;
@@ -93,23 +83,12 @@ static void iommu_pseries_free_group(struct iommu_table_group *table_group,
 		const char *node_name)
 {
 	struct iommu_table *tbl;
-#ifdef CONFIG_IOMMU_API
-	struct iommu_table_group_link *tgl;
-#endif
 
 	if (!table_group)
 		return;
 
 	tbl = table_group->tables[0];
 #ifdef CONFIG_IOMMU_API
-	tgl = list_first_entry_or_null(&tbl->it_group_list,
-			struct iommu_table_group_link, next);
-
-	WARN_ON_ONCE(!tgl);
-	if (tgl) {
-		list_del_rcu(&tgl->next);
-		kfree(tgl);
-	}
 	if (table_group->group) {
 		iommu_group_put(table_group->group);
 		BUG_ON(table_group->group);
@@ -1217,7 +1196,7 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
 	}
 
 	set_iommu_table_base(&dev->dev, pci->table_group->tables[0]);
-	iommu_add_device(&dev->dev);
+	iommu_add_device(pci->table_group, &dev->dev);
 }
 
 static int dma_set_mask_pSeriesLP(struct device *dev, u64 dma_mask)
@@ -1422,4 +1401,27 @@ static int __init disable_multitce(char *str)
 
 __setup("multitce=", disable_multitce);
 
+static int tce_iommu_bus_notifier(struct notifier_block *nb,
+		unsigned long action, void *data)
+{
+	struct device *dev = data;
+
+	switch (action) {
+	case BUS_NOTIFY_DEL_DEVICE:
+		iommu_del_device(dev);
+		return 0;
+	default:
+		return 0;
+	}
+}
+
+static struct notifier_block tce_iommu_bus_nb = {
+	.notifier_call = tce_iommu_bus_notifier,
+};
+
+static int __init tce_iommu_bus_notifier_init(void)
+{
+	bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb);
+	return 0;
+}
 machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
-- 
2.17.1



 {
-	struct iommu_table *tbl;
-	struct iommu_table_group_link *tgl;
-
 	/*
 	 * The sysfs entries should be populated before
 	 * binding IOMMU group. If sysfs entries isn't
@@ -1096,32 +1093,10 @@ int iommu_add_device(struct device *dev)
 		return -EBUSY;
 	}
 
-	tbl = get_iommu_table_base(dev);
-	if (!tbl) {
-		pr_debug("%s: Skipping device %s with no tbl\n",
-			 __func__, dev_name(dev));
-		return 0;
-	}
-
-	tgl = list_first_entry_or_null(&tbl->it_group_list,
-			struct iommu_table_group_link, next);
-	if (!tgl) {
-		pr_debug("%s: Skipping device %s with no group\n",
-			 __func__, dev_name(dev));
-		return 0;
-	}
 	pr_debug("%s: Adding %s to iommu group %d\n",
-		 __func__, dev_name(dev),
-		 iommu_group_id(tgl->table_group->group));
+		 __func__, dev_name(dev),  iommu_group_id(table_group->group));
 
-	if (PAGE_SIZE < IOMMU_PAGE_SIZE(tbl)) {
-		pr_err("%s: Invalid IOMMU page size %lx (%lx) on %s\n",
-		       __func__, IOMMU_PAGE_SIZE(tbl),
-		       PAGE_SIZE, dev_name(dev));
-		return -EINVAL;
-	}
-
-	return iommu_group_add_device(tgl->table_group->group, dev);
+	return iommu_group_add_device(table_group->group, dev);
 }
 EXPORT_SYMBOL_GPL(iommu_add_device);
 
@@ -1141,31 +1116,4 @@ void iommu_del_device(struct device *dev)
 	iommu_group_remove_device(dev);
 }
 EXPORT_SYMBOL_GPL(iommu_del_device);
-
-static int tce_iommu_bus_notifier(struct notifier_block *nb,
-                unsigned long action, void *data)
-{
-        struct device *dev = data;
-
-        switch (action) {
-        case BUS_NOTIFY_ADD_DEVICE:
-                return iommu_add_device(dev);
-        case BUS_NOTIFY_DEL_DEVICE:
-                if (dev->iommu_group)
-                        iommu_del_device(dev);
-                return 0;
-        default:
-                return 0;
-        }
-}
-
-static struct notifier_block tce_iommu_bus_nb = {
-        .notifier_call = tce_iommu_bus_notifier,
-};
-
-int __init tce_iommu_bus_notifier_init(void)
-{
-        bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb);
-        return 0;
-}
 #endif /* CONFIG_IOMMU_API */
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 2f9eb43..0cd7146 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1940,7 +1940,7 @@ static void pnv_ioda_setup_bus_dma(struct pnv_ioda_pe *pe,
 		set_iommu_table_base(&dev->dev, pe->table_group.tables[0]);
 		set_dma_offset(&dev->dev, pe->tce_bypass_base);
 		if (add_to_group)
-			iommu_add_device(&dev->dev);
+			iommu_add_device(&pe->table_group, &dev->dev);
 
 		if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate)
 			pnv_ioda_setup_bus_dma(pe, dev->subordinate,
@@ -2526,14 +2526,6 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe)
 	if (!pnv_iommu_bypass_disabled)
 		pnv_pci_ioda2_set_bypass(pe, true);
 
-	/*
-	 * Setting table base here only for carrying iommu_group
-	 * further down to let iommu_add_device() do the job.
-	 * pnv_pci_ioda_dma_dev_setup will override it later anyway.
-	 */
-	if (pe->flags & PNV_IODA_PE_DEV)
-		set_iommu_table_base(&pe->pdev->dev, tbl);
-
 	return 0;
 }
 
diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
index db230a35..5121fb8 100644
--- a/arch/powerpc/platforms/powernv/pci.c
+++ b/arch/powerpc/platforms/powernv/pci.c
@@ -1127,4 +1127,45 @@ void __init pnv_pci_init(void)
 	set_pci_dma_ops(&dma_iommu_ops);
 }
 
-machine_subsys_initcall_sync(powernv, tce_iommu_bus_notifier_init);
+static int pnv_tce_iommu_bus_notifier(struct notifier_block *nb,
+		unsigned long action, void *data)
+{
+	struct device *dev = data;
+	struct pci_dev *pdev;
+	struct pci_dn *pdn;
+	struct pnv_ioda_pe *pe;
+	struct pci_controller *hose;
+	struct pnv_phb *phb;
+
+	switch (action) {
+	case BUS_NOTIFY_ADD_DEVICE:
+		pdev = to_pci_dev(dev);
+		pdn = pci_get_pdn(pdev);
+		hose = pci_bus_to_host(pdev->bus);
+		phb = hose->private_data;
+
+		WARN_ON_ONCE(!phb);
+		if (!pdn || pdn->pe_number == IODA_INVALID_PE || !phb)
+			return 0;
+
+		pe = &phb->ioda.pe_array[pdn->pe_number];
+		iommu_add_device(&pe->table_group, dev);
+		return 0;
+	case BUS_NOTIFY_DEL_DEVICE:
+		iommu_del_device(dev);
+		return 0;
+	default:
+		return 0;
+	}
+}
+
+static struct notifier_block pnv_tce_iommu_bus_nb = {
+	.notifier_call = pnv_tce_iommu_bus_notifier,
+};
+
+static int __init pnv_tce_iommu_bus_notifier_init(void)
+{
+	bus_register_notifier(&pci_bus_type, &pnv_tce_iommu_bus_nb);
+	return 0;
+}
+machine_subsys_initcall_sync(powernv, pnv_tce_iommu_bus_notifier_init);
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 8f9d3be..a6302ab 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -57,7 +57,6 @@ static struct iommu_table_group *iommu_pseries_alloc_group(int node)
 {
 	struct iommu_table_group *table_group;
 	struct iommu_table *tbl;
-	struct iommu_table_group_link *tgl;
 
 	table_group = kzalloc_node(sizeof(struct iommu_table_group), GFP_KERNEL,
 			   node);
@@ -68,22 +67,13 @@ static struct iommu_table_group *iommu_pseries_alloc_group(int node)
 	if (!tbl)
 		goto free_group;
 
-	tgl = kzalloc_node(sizeof(struct iommu_table_group_link), GFP_KERNEL,
-			node);
-	if (!tgl)
-		goto free_table;
-
 	INIT_LIST_HEAD_RCU(&tbl->it_group_list);
 	kref_init(&tbl->it_kref);
-	tgl->table_group = table_group;
-	list_add_rcu(&tgl->next, &tbl->it_group_list);
 
 	table_group->tables[0] = tbl;
 
 	return table_group;
 
-free_table:
-	kfree(tbl);
 free_group:
 	kfree(table_group);
 	return NULL;
@@ -93,23 +83,12 @@ static void iommu_pseries_free_group(struct iommu_table_group *table_group,
 		const char *node_name)
 {
 	struct iommu_table *tbl;
-#ifdef CONFIG_IOMMU_API
-	struct iommu_table_group_link *tgl;
-#endif
 
 	if (!table_group)
 		return;
 
 	tbl = table_group->tables[0];
 #ifdef CONFIG_IOMMU_API
-	tgl = list_first_entry_or_null(&tbl->it_group_list,
-			struct iommu_table_group_link, next);
-
-	WARN_ON_ONCE(!tgl);
-	if (tgl) {
-		list_del_rcu(&tgl->next);
-		kfree(tgl);
-	}
 	if (table_group->group) {
 		iommu_group_put(table_group->group);
 		BUG_ON(table_group->group);
@@ -1217,7 +1196,7 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
 	}
 
 	set_iommu_table_base(&dev->dev, pci->table_group->tables[0]);
-	iommu_add_device(&dev->dev);
+	iommu_add_device(pci->table_group, &dev->dev);
 }
 
 static int dma_set_mask_pSeriesLP(struct device *dev, u64 dma_mask)
@@ -1422,4 +1401,27 @@ static int __init disable_multitce(char *str)
 
 __setup("multitce=", disable_multitce);
 
+static int tce_iommu_bus_notifier(struct notifier_block *nb,
+		unsigned long action, void *data)
+{
+	struct device *dev = data;
+
+	switch (action) {
+	case BUS_NOTIFY_DEL_DEVICE:
+		iommu_del_device(dev);
+		return 0;
+	default:
+		return 0;
+	}
+}
+
+static struct notifier_block tce_iommu_bus_nb = {
+	.notifier_call = tce_iommu_bus_notifier,
+};
+
+static int __init tce_iommu_bus_notifier_init(void)
+{
+	bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb);
+	return 0;
+}
 machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH kernel v4 10/19] powerpc/iommu_api: Move IOMMU groups setup to a single place
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

Registering new IOMMU groups and adding devices to them are separated
in the code, and the latter is buried in the DMA setup code, where it
does not really belong.

This moves IOMMU group setup to a separate helper which registers a group
and adds devices as before. This does not make a difference as IOMMU
groups are not used anyway; the only dependency here is that
iommu_add_device() requires a valid pointer to an iommu_table
(set by set_iommu_table_base()).

To keep the old behaviour, this does not add new IOMMU groups for PEs
with no DMA weight and also skips NVLink bridges, which do not have
pci_controller_ops::setup_bridge (the normal way of adding PEs).

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/platforms/powernv/pci-ioda.c | 80 +++++++++++++++++++----
 1 file changed, 66 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 0cd7146..c894c38 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1269,6 +1269,8 @@ static void pnv_ioda_setup_npu_PEs(struct pci_bus *bus)
 		pnv_ioda_setup_npu_PE(pdev);
 }
 
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe);
+
 static void pnv_pci_ioda_setup_PEs(void)
 {
 	struct pci_controller *hose;
@@ -1591,6 +1593,7 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
 		mutex_unlock(&phb->ioda.pe_list_mutex);
 
 		pnv_pci_ioda2_setup_dma_pe(phb, pe);
+		pnv_ioda_setup_bus_iommu_group(pe);
 	}
 }
 
@@ -1930,21 +1933,16 @@ static u64 pnv_pci_ioda_dma_get_required_mask(struct pci_dev *pdev)
 	return mask;
 }
 
-static void pnv_ioda_setup_bus_dma(struct pnv_ioda_pe *pe,
-				   struct pci_bus *bus,
-				   bool add_to_group)
+static void pnv_ioda_setup_bus_dma(struct pnv_ioda_pe *pe, struct pci_bus *bus)
 {
 	struct pci_dev *dev;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		set_iommu_table_base(&dev->dev, pe->table_group.tables[0]);
 		set_dma_offset(&dev->dev, pe->tce_bypass_base);
-		if (add_to_group)
-			iommu_add_device(&pe->table_group, &dev->dev);
 
 		if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate)
-			pnv_ioda_setup_bus_dma(pe, dev->subordinate,
-					add_to_group);
+			pnv_ioda_setup_bus_dma(pe, dev->subordinate);
 	}
 }
 
@@ -2374,7 +2372,7 @@ static void pnv_pci_ioda1_setup_dma_pe(struct pnv_phb *phb,
 	iommu_init_table(tbl, phb->hose->node);
 
 	if (pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))
-		pnv_ioda_setup_bus_dma(pe, pe->pbus, true);
+		pnv_ioda_setup_bus_dma(pe, pe->pbus);
 
 	return;
  fail:
@@ -2607,7 +2605,7 @@ static void pnv_ioda2_take_ownership(struct iommu_table_group *table_group)
 	pnv_pci_ioda2_set_bypass(pe, false);
 	pnv_pci_ioda2_unset_window(&pe->table_group, 0);
 	if (pe->pbus)
-		pnv_ioda_setup_bus_dma(pe, pe->pbus, false);
+		pnv_ioda_setup_bus_dma(pe, pe->pbus);
 	iommu_tce_table_put(tbl);
 }
 
@@ -2618,7 +2616,7 @@ static void pnv_ioda2_release_ownership(struct iommu_table_group *table_group)
 
 	pnv_pci_ioda2_setup_default_config(pe);
 	if (pe->pbus)
-		pnv_ioda_setup_bus_dma(pe, pe->pbus, false);
+		pnv_ioda_setup_bus_dma(pe, pe->pbus);
 }
 
 static struct iommu_table_group_ops pnv_pci_ioda2_ops = {
@@ -2735,12 +2733,68 @@ static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
 	.release_ownership = pnv_ioda2_release_ownership,
 };
 
+static void pnv_ioda_setup_bus_iommu_group_add_devices(struct pnv_ioda_pe *pe,
+		struct pci_bus *bus)
+{
+	struct pci_dev *dev;
+
+	list_for_each_entry(dev, &bus->devices, bus_list) {
+		iommu_add_device(&pe->table_group, &dev->dev);
+
+		if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate)
+			pnv_ioda_setup_bus_iommu_group_add_devices(pe,
+					dev->subordinate);
+	}
+}
+
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe)
+{
+	if (!pnv_pci_ioda_pe_dma_weight(pe))
+		return;
+
+	iommu_register_group(&pe->table_group, pe->phb->hose->global_number,
+			pe->pe_number);
+
+	/*
+	 * set_iommu_table_base(&pe->pdev->dev, tbl) should have been called
+	 * by now
+	 */
+	if (pe->flags & PNV_IODA_PE_DEV)
+		iommu_add_device(&pe->table_group, &pe->pdev->dev);
+	else if (pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))
+		pnv_ioda_setup_bus_iommu_group_add_devices(pe, pe->pbus);
+}
+
 static void pnv_pci_ioda_setup_iommu_api(void)
 {
 	struct pci_controller *hose, *tmp;
 	struct pnv_phb *phb;
 	struct pnv_ioda_pe *pe, *gpe;
 
+	/*
+	 * There are 4 types of PEs:
+	 * - PNV_IODA_PE_BUS: a downstream port with an adapter,
+	 *   created from pnv_pci_setup_bridge();
+	 * - PNV_IODA_PE_BUS_ALL: a PCI-PCIX bridge with devices behind it,
+	 *   created from pnv_pci_setup_bridge();
+	 * - PNV_IODA_PE_VF: a SRIOV virtual function,
+	 *   created from pnv_pcibios_sriov_enable();
+	 * - PNV_IODA_PE_DEV: an NPU or OCAPI device,
+	 *   created from pnv_pci_ioda_fixup().
+	 *
+	 * Normally a PE is represented by an IOMMU group, however for
+	 * devices with side channels the groups need to be more strict.
+	 */
+	list_for_each_entry(hose, &hose_list, list_node) {
+		phb = hose->private_data;
+
+		if (phb->type == PNV_PHB_NPU_NVLINK)
+			continue;
+
+		list_for_each_entry(pe, &phb->ioda.pe_list, list)
+			pnv_ioda_setup_bus_iommu_group(pe);
+	}
+
 	/*
 	 * Now we have all PHBs discovered, time to add NPU devices to
 	 * the corresponding IOMMU groups.
@@ -2759,6 +2813,7 @@ static void pnv_pci_ioda_setup_iommu_api(void)
 	}
 }
 #else /* !CONFIG_IOMMU_API */
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe) { }
 static void pnv_pci_ioda_setup_iommu_api(void) { };
 #endif
 
@@ -2801,9 +2856,6 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
 	/* TVE #1 is selected by PCI address bit 59 */
 	pe->tce_bypass_base = 1ull << 59;
 
-	iommu_register_group(&pe->table_group, phb->hose->global_number,
-			pe->pe_number);
-
 	/* The PE will reserve all possible 32-bits space */
 	pe_info(pe, "Setting up 32-bit TCE table at 0..%08x\n",
 		phb->ioda.m32_pci_base);
@@ -2824,7 +2876,7 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
 		return;
 
 	if (pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))
-		pnv_ioda_setup_bus_dma(pe, pe->pbus, true);
+		pnv_ioda_setup_bus_dma(pe, pe->pbus);
 }
 
 #ifdef CONFIG_PCI_MSI
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH kernel v4 11/19] powerpc/powernv: Reference iommu_table while it is linked to a group
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

The iommu_table pointer stored in iommu_table_group may accidentally
go stale; this adds reference counting and removes a now-redundant
comment about it.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/platforms/powernv/pci-ioda-tce.c | 3 ++-
 arch/powerpc/platforms/powernv/pci-ioda.c     | 4 ----
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
index 7639b21..697449a 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
@@ -368,6 +368,7 @@ void pnv_pci_unlink_table_and_group(struct iommu_table *tbl,
 	found = false;
 	for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i) {
 		if (table_group->tables[i] == tbl) {
+			iommu_tce_table_put(tbl);
 			table_group->tables[i] = NULL;
 			found = true;
 			break;
@@ -393,7 +394,7 @@ long pnv_pci_link_table_and_group(int node, int num,
 	tgl->table_group = table_group;
 	list_add_rcu(&tgl->next, &tbl->it_group_list);
 
-	table_group->tables[num] = tbl;
+	table_group->tables[num] = iommu_tce_table_get(tbl);
 
 	return 0;
 }
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index c894c38..6193f1d 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -2716,10 +2716,6 @@ static long pnv_pci_ioda2_npu_unset_window(
 
 static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
 {
-	/*
-	 * Detach NPU first as pnv_ioda2_take_ownership() will destroy
-	 * the iommu_table if 32bit DMA is enabled.
-	 */
 	pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
 	pnv_ioda2_take_ownership(table_group);
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH kernel v4 12/19] powerpc/powernv: Add purge cache OPAL call
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

Flushing caches using the dcbf instruction takes quite some time when
gigabytes need flushing (16GB takes more than 15s); OPAL has just added
a big hammer to flush all caches.

This adds opal_purge_cache() which will be used later to flush caches
for coherent GPU memory which might suddenly become unavailable if a GPU
is reset and NVLink is not (re)trained.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/include/asm/opal-api.h            | 3 ++-
 arch/powerpc/include/asm/opal.h                | 1 +
 arch/powerpc/platforms/powernv/opal.c          | 1 +
 arch/powerpc/platforms/powernv/opal-wrappers.S | 1 +
 4 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/opal-api.h b/arch/powerpc/include/asm/opal-api.h
index 870fb7b..55bc640 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -210,7 +210,8 @@
 #define OPAL_PCI_GET_PBCQ_TUNNEL_BAR		164
 #define OPAL_PCI_SET_PBCQ_TUNNEL_BAR		165
 #define	OPAL_NX_COPROC_INIT			167
-#define OPAL_LAST				167
+#define OPAL_CLEAR_CACHE			170
+#define OPAL_LAST				170
 
 #define QUIESCE_HOLD			1 /* Spin all calls at entry */
 #define QUIESCE_REJECT			2 /* Fail all calls with OPAL_BUSY */
diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index ff38664..7db576e 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -294,6 +294,7 @@ int opal_set_power_shift_ratio(u32 handle, int token, u32 psr);
 int opal_sensor_group_clear(u32 group_hndl, int token);
 int opal_sensor_group_enable(u32 group_hndl, int token, bool enable);
 int opal_nx_coproc_init(uint32_t chip_id, uint32_t ct);
+int opal_purge_cache(void);
 
 s64 opal_signal_system_reset(s32 cpu);
 s64 opal_quiesce(u64 shutdown_type, s32 cpu);
diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
index beed86f..44ce824 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -1113,3 +1113,4 @@ EXPORT_SYMBOL_GPL(opal_int_eoi);
 EXPORT_SYMBOL_GPL(opal_error_code);
 /* Export the below symbol for NX compression */
 EXPORT_SYMBOL(opal_nx_coproc_init);
+EXPORT_SYMBOL(opal_purge_cache);
diff --git a/arch/powerpc/platforms/powernv/opal-wrappers.S b/arch/powerpc/platforms/powernv/opal-wrappers.S
index 2515282..5b886a6 100644
--- a/arch/powerpc/platforms/powernv/opal-wrappers.S
+++ b/arch/powerpc/platforms/powernv/opal-wrappers.S
@@ -331,3 +331,4 @@ OPAL_CALL(opal_pci_set_pbcq_tunnel_bar,		OPAL_PCI_SET_PBCQ_TUNNEL_BAR);
 OPAL_CALL(opal_sensor_read_u64,			OPAL_SENSOR_READ_U64);
 OPAL_CALL(opal_sensor_group_enable,		OPAL_SENSOR_GROUP_ENABLE);
 OPAL_CALL(opal_nx_coproc_init,			OPAL_NX_COPROC_INIT);
+OPAL_CALL(opal_purge_cache,			OPAL_CLEAR_CACHE);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread


* [PATCH kernel v4 13/19] powerpc/powernv/npu: Move single TVE handling to NPU PE
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

Normal PCI PEs have 2 TVEs, one per DMA window; an NPU PE, however, has only
one, which points to one of the two tables of the corresponding PCI PE.

So whenever a new DMA window is programmed to the PEs, the NPU PE needs to
release the old table in order to use the new one.

Commit d41ce7b1bcc3e ("powerpc/powernv/npu: Do not try invalidating 32bit
table when 64bit table is enabled") did just that, but in pci-ioda.c,
while the code actually belongs in npu-dma.c.

This moves the single-TVE handling to npu-dma.c. It does not implement
restoring the previous window though, as it is highly unlikely that setting
a table succeeds for the PCI PE but fails for the NPU PE; and even then we
could only set the 32bit table on the NPU PE, a configuration that is
neither supported nor wanted.
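
The swap described above (drop the other window before programming the new
one, because the NPU has a single TVE) can be sketched with a minimal
userspace mock; the mock_* types and functions below are simplified
stand-ins for illustration, not the kernel structures:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in: the PE tracks both windows but the hardware
 * (a single TVE) can back only one of them at a time. */
#define MAX_WINDOWS 2

struct mock_table { int id; };

struct mock_npu_pe {
	struct mock_table *tables[MAX_WINDOWS];
};

static void mock_npu_unset_window(struct mock_npu_pe *npe, int num)
{
	npe->tables[num] = NULL;	/* would call OPAL to drop the TVE */
}

/* Mirrors the logic added to pnv_npu_set_window(): before programming
 * window 'num', release the other window if it is set. */
static void mock_npu_set_window(struct mock_npu_pe *npe, int num,
				struct mock_table *tbl)
{
	int num2 = (num == 0) ? 1 : 0;

	if (npe->tables[num2])
		mock_npu_unset_window(npe, num2);

	npe->tables[num] = tbl;	/* would call OPAL to program the TVE */
}
```

Programming window 1 after window 0 then leaves only one window set at any
time, which is the invariant the single TVE imposes.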

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 arch/powerpc/platforms/powernv/npu-dma.c  |  8 +++++++
 arch/powerpc/platforms/powernv/pci-ioda.c | 27 +++--------------------
 2 files changed, 11 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index ef1457f..26063fb 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -130,6 +130,11 @@ long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
 		tbl->it_level_size : tbl->it_size;
 	const __u64 start_addr = tbl->it_offset << tbl->it_page_shift;
 	const __u64 win_size = tbl->it_size << tbl->it_page_shift;
+	int num2 = (num == 0) ? 1 : 0;
+
+	/* NPU has just one TVE so if there is another table, remove it first */
+	if (npe->table_group.tables[num2])
+		pnv_npu_unset_window(npe, num2);
 
 	pe_info(npe, "Setting up window %llx..%llx pg=%lx\n",
 			start_addr, start_addr + win_size - 1,
@@ -160,6 +165,9 @@ long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
 	struct pnv_phb *phb = npe->phb;
 	int64_t rc;
 
+	if (!npe->table_group.tables[num])
+		return 0;
+
 	pe_info(npe, "Removing DMA window\n");
 
 	rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 6193f1d..07f0751 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -2669,23 +2669,14 @@ static struct pnv_ioda_pe *gpe_table_group_to_npe(
 static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group *table_group,
 		int num, struct iommu_table *tbl)
 {
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-	int num2 = (num == 0) ? 1 : 0;
 	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
 
 	if (ret)
 		return ret;
 
-	if (table_group->tables[num2])
-		pnv_npu_unset_window(npe, num2);
-
-	ret = pnv_npu_set_window(npe, num, tbl);
-	if (ret) {
+	ret = pnv_npu_set_window(gpe_table_group_to_npe(table_group), num, tbl);
+	if (ret)
 		pnv_pci_ioda2_unset_window(table_group, num);
-		if (table_group->tables[num2])
-			pnv_npu_set_window(npe, num2,
-					table_group->tables[num2]);
-	}
 
 	return ret;
 }
@@ -2694,24 +2685,12 @@ static long pnv_pci_ioda2_npu_unset_window(
 		struct iommu_table_group *table_group,
 		int num)
 {
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-	int num2 = (num == 0) ? 1 : 0;
 	long ret = pnv_pci_ioda2_unset_window(table_group, num);
 
 	if (ret)
 		return ret;
 
-	if (!npe->table_group.tables[num])
-		return 0;
-
-	ret = pnv_npu_unset_window(npe, num);
-	if (ret)
-		return ret;
-
-	if (table_group->tables[num2])
-		ret = pnv_npu_set_window(npe, num2, table_group->tables[num2]);
-
-	return ret;
+	return pnv_npu_unset_window(gpe_table_group_to_npe(table_group), num);
 }
 
 static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread


* [PATCH kernel v4 14/19] powerpc/powernv/npu: Convert NPU IOMMU helpers to iommu_table_group_ops
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:52   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

At the moment the NPU IOMMU is manipulated directly from the IODA2 PCI
PE code; the PCI PE acts as a master to the NPU PE. Soon we will have
compound IOMMU groups with several PEs from several different PHBs (such
as interconnected GPUs and NPUs), so there will be no single master, just
one big IOMMU group.

As a first step, this converts the NPU PE from a set of extern functions
to a table group.

This should cause no behavioral change. Note that
pnv_npu_release_ownership() has never been implemented.
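
The conversion pattern (an ops table of function pointers whose callbacks
recover their PE via container_of() on the embedded table_group) can be
illustrated with a standalone sketch; all mock_* names here are
hypothetical simplifications, not the kernel types:

```c
#include <assert.h>
#include <stddef.h>

/* container_of() as used in the kernel, reproduced for a standalone build. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct mock_table_group;

/* The ops table replaces the previous direct extern calls. */
struct mock_group_ops {
	long (*set_window)(struct mock_table_group *table_group, int num);
};

struct mock_table_group {
	struct mock_group_ops *ops;
};

/* The PE embeds the table_group, so a callback that receives the group
 * pointer can get back to the PE, as pnv_npu_set_window() does after
 * this patch. */
struct mock_pe {
	int windows_set;
	struct mock_table_group table_group;
};

static long mock_set_window(struct mock_table_group *table_group, int num)
{
	struct mock_pe *pe = container_of(table_group, struct mock_pe,
			table_group);

	(void)num;
	pe->windows_set++;	/* would program the window here */
	return 0;
}

static struct mock_group_ops mock_ops = {
	.set_window = mock_set_window,
};
```

Callers then go through `pe->table_group.ops->set_window(...)` instead of
calling an extern function, which is what lets several different PE types
share one interface later in the series.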

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/platforms/powernv/pci.h      |  5 ----
 arch/powerpc/platforms/powernv/npu-dma.c  | 34 ++++++++++++++++++-----
 arch/powerpc/platforms/powernv/pci-ioda.c | 10 +++++--
 3 files changed, 34 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index ddb4f02..cf9f748 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -216,11 +216,6 @@ extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe);
-extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
-		struct iommu_table *tbl);
-extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
-extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
-extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
 
 /* pci-ioda-tce.c */
 #define POWERNV_IOMMU_DEFAULT_LEVELS	1
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 26063fb..dc629ee 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -121,9 +121,14 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
 	return pe;
 }
 
-long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
+static long pnv_npu_unset_window(struct iommu_table_group *table_group,
+		int num);
+
+static long pnv_npu_set_window(struct iommu_table_group *table_group, int num,
 		struct iommu_table *tbl)
 {
+	struct pnv_ioda_pe *npe = container_of(table_group, struct pnv_ioda_pe,
+			table_group);
 	struct pnv_phb *phb = npe->phb;
 	int64_t rc;
 	const unsigned long size = tbl->it_indirect_levels ?
@@ -134,7 +139,7 @@ long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
 
 	/* NPU has just one TVE so if there is another table, remove it first */
 	if (npe->table_group.tables[num2])
-		pnv_npu_unset_window(npe, num2);
+		pnv_npu_unset_window(&npe->table_group, num2);
 
 	pe_info(npe, "Setting up window %llx..%llx pg=%lx\n",
 			start_addr, start_addr + win_size - 1,
@@ -160,8 +165,10 @@ long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
 	return 0;
 }
 
-long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
+static long pnv_npu_unset_window(struct iommu_table_group *table_group, int num)
 {
+	struct pnv_ioda_pe *npe = container_of(table_group, struct pnv_ioda_pe,
+			table_group);
 	struct pnv_phb *phb = npe->phb;
 	int64_t rc;
 
@@ -206,7 +213,8 @@ static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
 	if (!gpe)
 		return;
 
-	rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]);
+	rc = pnv_npu_set_window(&npe->table_group, 0,
+			gpe->table_group.tables[0]);
 
 	/*
 	 * NVLink devices use the same TCE table configuration as
@@ -231,7 +239,7 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe)
 	if (phb->type != PNV_PHB_NPU_NVLINK || !npe->pdev)
 		return -EINVAL;
 
-	rc = pnv_npu_unset_window(npe, 0);
+	rc = pnv_npu_unset_window(&npe->table_group, 0);
 	if (rc != OPAL_SUCCESS)
 		return rc;
 
@@ -284,9 +292,12 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
 	}
 }
 
+#ifdef CONFIG_IOMMU_API
 /* Switch ownership from platform code to external user (e.g. VFIO) */
-void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
+static void pnv_npu_take_ownership(struct iommu_table_group *table_group)
 {
+	struct pnv_ioda_pe *npe = container_of(table_group, struct pnv_ioda_pe,
+			table_group);
 	struct pnv_phb *phb = npe->phb;
 	int64_t rc;
 
@@ -297,7 +308,7 @@ void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
 	 * if it was enabled at the moment of ownership change.
 	 */
 	if (npe->table_group.tables[0]) {
-		pnv_npu_unset_window(npe, 0);
+		pnv_npu_unset_window(&npe->table_group, 0);
 		return;
 	}
 
@@ -312,6 +323,12 @@ void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
 	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
 }
 
+static struct iommu_table_group_ops pnv_pci_npu_ops = {
+	.set_window = pnv_npu_set_window,
+	.unset_window = pnv_npu_unset_window,
+	.take_ownership = pnv_npu_take_ownership,
+};
+
 struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
 {
 	struct pnv_phb *phb = npe->phb;
@@ -322,6 +339,8 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
 	if (!gpe || !gpdev)
 		return NULL;
 
+	npe->table_group.ops = &pnv_pci_npu_ops;
+
 	list_for_each_entry(npdev, &pbus->devices, bus_list) {
 		gptmp = pnv_pci_get_gpu_dev(npdev);
 
@@ -334,6 +353,7 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
 
 	return gpe;
 }
+#endif /* !CONFIG_IOMMU_API */
 
 /*
  * NPU2 ATS
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 07f0751..c9e7bcb 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -2669,12 +2669,13 @@ static struct pnv_ioda_pe *gpe_table_group_to_npe(
 static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group *table_group,
 		int num, struct iommu_table *tbl)
 {
+	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
 	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
 
 	if (ret)
 		return ret;
 
-	ret = pnv_npu_set_window(gpe_table_group_to_npe(table_group), num, tbl);
+	ret = npe->table_group.ops->set_window(&npe->table_group, num, tbl);
 	if (ret)
 		pnv_pci_ioda2_unset_window(table_group, num);
 
@@ -2685,17 +2686,20 @@ static long pnv_pci_ioda2_npu_unset_window(
 		struct iommu_table_group *table_group,
 		int num)
 {
+	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
 	long ret = pnv_pci_ioda2_unset_window(table_group, num);
 
 	if (ret)
 		return ret;
 
-	return pnv_npu_unset_window(gpe_table_group_to_npe(table_group), num);
+	return npe->table_group.ops->unset_window(&npe->table_group, num);
 }
 
 static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
 {
-	pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
+	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
+
+	npe->table_group.ops->take_ownership(&npe->table_group);
 	pnv_ioda2_take_ownership(table_group);
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread


* [PATCH kernel v4 15/19] powerpc/powernv/npu: Add compound IOMMU groups
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:53   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:53 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

At the moment the powernv platform registers an IOMMU group for each PE.
There is an exception though: an NVLink bridge is attached to
the corresponding GPU's IOMMU group, making the GPU a master.

Now we have POWER9 systems with GPUs connected to each other directly,
bypassing PCI. At the moment we do not control the state of these links, so
we have to put such interconnected GPUs into one IOMMU group, which
means that the old scheme with one GPU as a master won't work: there will
be up to 3 GPUs in such a group.

This introduces an npu_comp struct which represents a compound IOMMU
group made of multiple PEs: PCI PEs (for GPUs) and NPU PEs (for NVLink
bridges). This converts the existing NVLink1 code to use the new scheme.
From now on, each PE must have a valid iommu_table_group_ops, which will
either be called directly (for a single-PE group) or indirectly from
the compound group handlers.

This moves IOMMU group registration for NVLink-connected GPUs to npu-dma.c.
For POWER8, the new compound group pointer is stored in the PE (so a GPU
is still a master); for POWER9 it is stored in the NPU
(which is allocated per PCI host controller).
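
The compound-group dispatch this patch introduces (apply an operation to
every member PE, unwinding the already-programmed PEs on failure, as
pnv_npu_peers_set_window() does) can be sketched standalone; the mock_*
types below are simplified assumptions, not the kernel code:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PE 3

struct mock_pe {
	int window_set;
	int fail;	/* force an error to exercise the rollback path */
};

/* Minimal model of npu_comp: a fixed array of member PEs. */
struct mock_comp {
	int pe_num;
	struct mock_pe *pe[MAX_PE];
};

static long mock_pe_set_window(struct mock_pe *pe)
{
	if (pe->fail)
		return -1;
	pe->window_set = 1;
	return 0;
}

static void mock_pe_unset_window(struct mock_pe *pe)
{
	pe->window_set = 0;
}

/* Set the window on every member PE; on the first failure, unset it on
 * the PEs that were already programmed so the group stays consistent. */
static long mock_comp_set_window(struct mock_comp *comp)
{
	int i, j;
	long ret = 0;

	for (i = 0; i < comp->pe_num; ++i) {
		ret = mock_pe_set_window(comp->pe[i]);
		if (ret)
			break;
	}
	if (ret)
		for (j = 0; j < i; ++j)
			mock_pe_unset_window(comp->pe[j]);
	return ret;
}
```

Either every member PE ends up with the window set, or none does, which is
the property the group-level handler has to guarantee.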

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 arch/powerpc/include/asm/pci.h            |   1 +
 arch/powerpc/platforms/powernv/pci.h      |   7 +
 arch/powerpc/platforms/powernv/npu-dma.c  | 285 ++++++++++++++++++++--
 arch/powerpc/platforms/powernv/pci-ioda.c | 155 ++++--------
 4 files changed, 309 insertions(+), 139 deletions(-)

diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h
index baf2886..0c72f18 100644
--- a/arch/powerpc/include/asm/pci.h
+++ b/arch/powerpc/include/asm/pci.h
@@ -132,5 +132,6 @@ extern struct pci_dev *pnv_pci_get_npu_dev(struct pci_dev *gpdev, int index);
 extern int pnv_npu2_init(struct pci_controller *hose);
 extern int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
 		unsigned long msr);
+extern int pnv_npu2_unmap_lpar_dev(struct pci_dev *gpdev);
 
 #endif /* __ASM_POWERPC_PCI_H */
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index cf9f748..aef4bb5 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -62,6 +62,7 @@ struct pnv_ioda_pe {
 
 	/* "Base" iommu table, ie, 4K TCEs, 32-bit DMA */
 	struct iommu_table_group table_group;
+	struct npu_comp		*npucomp;
 
 	/* 64-bit TCE bypass region */
 	bool			tce_bypass_enabled;
@@ -201,6 +202,8 @@ extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 extern struct pnv_ioda_pe *pnv_ioda_get_pe(struct pci_dev *dev);
 extern void pnv_set_msi_irq_chip(struct pnv_phb *phb, unsigned int virq);
 extern void pnv_pci_ioda2_set_bypass(struct pnv_ioda_pe *pe, bool enable);
+extern unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
+		__u64 window_size, __u32 levels);
 extern int pnv_eeh_post_init(void);
 
 extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
@@ -216,6 +219,10 @@ extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe);
+extern struct iommu_table_group *pnv_try_setup_npu_table_group(
+		struct pnv_ioda_pe *pe);
+extern struct iommu_table_group *pnv_npu_compound_attach(
+		struct pnv_ioda_pe *pe);
 
 /* pci-ioda-tce.c */
 #define POWERNV_IOMMU_DEFAULT_LEVELS	1
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index dc629ee..09edda6 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -328,31 +328,6 @@ static struct iommu_table_group_ops pnv_pci_npu_ops = {
 	.unset_window = pnv_npu_unset_window,
 	.take_ownership = pnv_npu_take_ownership,
 };
-
-struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
-{
-	struct pnv_phb *phb = npe->phb;
-	struct pci_bus *pbus = phb->hose->bus;
-	struct pci_dev *npdev, *gpdev = NULL, *gptmp;
-	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
-
-	if (!gpe || !gpdev)
-		return NULL;
-
-	npe->table_group.ops = &pnv_pci_npu_ops;
-
-	list_for_each_entry(npdev, &pbus->devices, bus_list) {
-		gptmp = pnv_pci_get_gpu_dev(npdev);
-
-		if (gptmp != gpdev)
-			continue;
-
-		pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev));
-		iommu_group_add_device(gpe->table_group.group, &npdev->dev);
-	}
-
-	return gpe;
-}
 #endif /* !CONFIG_IOMMU_API */
 
 /*
@@ -360,6 +335,17 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
  */
 /* Maximum possible number of ATSD MMIO registers per NPU */
 #define NV_NMMU_ATSD_REGS 8
+#define NV_NPU_MAX_PE_NUM	16
+
+/*
+ * A compound NPU IOMMU group which might consist of 1 GPU + 2xNPUs (POWER8) or
+ * up to 3 x (GPU + 2xNPUs) (POWER9).
+ */
+struct npu_comp {
+	struct iommu_table_group table_group;
+	int pe_num;
+	struct pnv_ioda_pe *pe[NV_NPU_MAX_PE_NUM];
+};
 
 /* An NPU descriptor, valid for POWER9 only */
 struct npu {
@@ -372,8 +358,257 @@ struct npu {
 
 	/* Do we need to explicitly flush the nest mmu? */
 	bool nmmu_flush;
+
+	struct npu_comp npucomp;
 };
 
+#ifdef CONFIG_IOMMU_API
+static long pnv_npu_peers_create_table_userspace(
+		struct iommu_table_group *table_group,
+		int num, __u32 page_shift, __u64 window_size, __u32 levels,
+		struct iommu_table **ptbl)
+{
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	if (!npucomp->pe_num || !npucomp->pe[0] ||
+			!npucomp->pe[0]->table_group.ops ||
+			!npucomp->pe[0]->table_group.ops->create_table)
+		return -EFAULT;
+
+	return npucomp->pe[0]->table_group.ops->create_table(
+			&npucomp->pe[0]->table_group, num, page_shift,
+			window_size, levels, ptbl);
+}
+
+static long pnv_npu_peers_set_window(struct iommu_table_group *table_group,
+		int num, struct iommu_table *tbl)
+{
+	int i, j;
+	long ret = 0;
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	for (i = 0; i < npucomp->pe_num; ++i) {
+		struct pnv_ioda_pe *pe = npucomp->pe[i];
+
+		if (!pe->table_group.ops->set_window)
+			continue;
+
+		ret = pe->table_group.ops->set_window(&pe->table_group,
+				num, tbl);
+		if (ret)
+			break;
+	}
+
+	if (ret) {
+		for (j = 0; j < i; ++j) {
+			struct pnv_ioda_pe *pe = npucomp->pe[j];
+
+			if (!pe->table_group.ops->unset_window)
+				continue;
+
+			ret = pe->table_group.ops->unset_window(
+					&pe->table_group, num);
+			if (ret)
+				break;
+		}
+	} else {
+		table_group->tables[num] = iommu_tce_table_get(tbl);
+	}
+
+	return ret;
+}
+
+static long pnv_npu_peers_unset_window(struct iommu_table_group *table_group,
+		int num)
+{
+	int i, j;
+	long ret = 0;
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	for (i = 0; i < npucomp->pe_num; ++i) {
+		struct pnv_ioda_pe *pe = npucomp->pe[i];
+
+		WARN_ON(npucomp->table_group.tables[num] !=
+				table_group->tables[num]);
+		if (!npucomp->table_group.tables[num])
+			continue;
+
+		if (!pe->table_group.ops->unset_window)
+			continue;
+
+		ret = pe->table_group.ops->unset_window(&pe->table_group, num);
+		if (ret)
+			break;
+	}
+
+	if (ret) {
+		for (j = 0; j < i; ++j) {
+			struct pnv_ioda_pe *pe = npucomp->pe[j];
+
+			if (!npucomp->table_group.tables[num])
+				continue;
+
+			if (!pe->table_group.ops->set_window)
+				continue;
+
+			ret = pe->table_group.ops->set_window(&pe->table_group,
+					num, table_group->tables[num]);
+			if (ret)
+				break;
+		}
+	} else if (table_group->tables[num]) {
+		iommu_tce_table_put(table_group->tables[num]);
+		table_group->tables[num] = NULL;
+	}
+
+	return ret;
+}
+
+static void pnv_npu_peers_take_ownership(struct iommu_table_group *table_group)
+{
+	int i;
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	for (i = 0; i < npucomp->pe_num; ++i) {
+		struct pnv_ioda_pe *pe = npucomp->pe[i];
+
+		if (!pe->table_group.ops->take_ownership)
+			continue;
+		pe->table_group.ops->take_ownership(&pe->table_group);
+	}
+}
+
+static void pnv_npu_peers_release_ownership(
+		struct iommu_table_group *table_group)
+{
+	int i;
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	for (i = 0; i < npucomp->pe_num; ++i) {
+		struct pnv_ioda_pe *pe = npucomp->pe[i];
+
+		if (!pe->table_group.ops->release_ownership)
+			continue;
+		pe->table_group.ops->release_ownership(&pe->table_group);
+	}
+}
+
+static struct iommu_table_group_ops pnv_npu_peers_ops = {
+	.get_table_size = pnv_pci_ioda2_get_table_size,
+	.create_table = pnv_npu_peers_create_table_userspace,
+	.set_window = pnv_npu_peers_set_window,
+	.unset_window = pnv_npu_peers_unset_window,
+	.take_ownership = pnv_npu_peers_take_ownership,
+	.release_ownership = pnv_npu_peers_release_ownership,
+};
+
+static void pnv_comp_attach_table_group(struct npu_comp *npucomp,
+		struct pnv_ioda_pe *pe)
+{
+	if (WARN_ON(npucomp->pe_num == NV_NPU_MAX_PE_NUM))
+		return;
+
+	npucomp->pe[npucomp->pe_num] = pe;
+	++npucomp->pe_num;
+}
+
+struct iommu_table_group *pnv_try_setup_npu_table_group(struct pnv_ioda_pe *pe)
+{
+	struct iommu_table_group *table_group;
+	struct npu_comp *npucomp;
+	struct pci_dev *gpdev = NULL;
+	struct pci_controller *hose;
+	struct pci_dev *npdev;
+
+	list_for_each_entry(gpdev, &pe->pbus->devices, bus_list) {
+		npdev = pnv_pci_get_npu_dev(gpdev, 0);
+		if (npdev)
+			break;
+	}
+
+	if (!npdev)
+		/* It is not an NPU attached device, skip */
+		return NULL;
+
+	hose = pci_bus_to_host(npdev->bus);
+
+	if (hose->npu) {
+		table_group = &hose->npu->npucomp.table_group;
+
+		if (!table_group->group) {
+			table_group->ops = &pnv_npu_peers_ops;
+			iommu_register_group(table_group,
+					hose->global_number,
+					pe->pe_number);
+		}
+	} else {
+		/* Create a group for 1 GPU and attached NPUs for POWER8 */
+		pe->npucomp = kzalloc(sizeof(*pe->npucomp), GFP_KERNEL);
+		table_group = &pe->npucomp->table_group;
+		table_group->ops = &pnv_npu_peers_ops;
+		iommu_register_group(table_group, hose->global_number,
+				pe->pe_number);
+	}
+
+	/* Steal capabilities from a GPU PE */
+	table_group->max_dynamic_windows_supported =
+		pe->table_group.max_dynamic_windows_supported;
+	table_group->tce32_start = pe->table_group.tce32_start;
+	table_group->tce32_size = pe->table_group.tce32_size;
+	table_group->max_levels = pe->table_group.max_levels;
+	table_group->pgsizes = pe->table_group.pgsizes;
+
+	npucomp = container_of(table_group, struct npu_comp, table_group);
+	pnv_comp_attach_table_group(npucomp, pe);
+
+	return table_group;
+}
+
+struct iommu_table_group *pnv_npu_compound_attach(struct pnv_ioda_pe *pe)
+{
+	struct iommu_table_group *table_group;
+	struct npu_comp *npucomp;
+	struct pci_dev *gpdev = NULL;
+	struct pci_dev *npdev;
+	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(pe, &gpdev);
+
+	WARN_ON(!(pe->flags & PNV_IODA_PE_DEV));
+	if (!gpe)
+		return NULL;
+
+	/*
+	 * IODA2 bridges get this set up from
+	 * pci_controller_ops::setup_bridge but NPU bridges do not
+	 * have this hook defined so we do it here.
+	 */
+	pe->table_group.max_dynamic_windows_supported =
+		IOMMU_TABLE_GROUP_MAX_TABLES;
+	pe->table_group.ops = &pnv_pci_npu_ops;
+
+	table_group = iommu_group_get_iommudata(
+			iommu_group_get(&gpdev->dev));
+
+	npucomp = container_of(table_group, struct npu_comp, table_group);
+	pnv_comp_attach_table_group(npucomp, pe);
+
+	list_for_each_entry(npdev, &pe->phb->hose->bus->devices, bus_list) {
+		struct pci_dev *gpdevtmp = pnv_pci_get_gpu_dev(npdev);
+
+		if (gpdevtmp != gpdev)
+			continue;
+
+		iommu_add_device(table_group, &npdev->dev);
+	}
+
+	return table_group;
+}
+#endif /* CONFIG_IOMMU_API */
+
 /* Maximum number of nvlinks per npu */
 #define NV_MAX_LINKS 6
 
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index c9e7bcb..5b435d3 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -190,7 +190,8 @@ static void pnv_ioda_free_pe(struct pnv_ioda_pe *pe)
 	unsigned int pe_num = pe->pe_number;
 
 	WARN_ON(pe->pdev);
-
+	WARN_ON(pe->npucomp); /* NPUs are not supposed to be freed */
+	kfree(pe->npucomp);
 	memset(pe, 0, sizeof(struct pnv_ioda_pe));
 	clear_bit(pe_num, phb->ioda.pe_alloc);
 }
@@ -1269,7 +1270,8 @@ static void pnv_ioda_setup_npu_PEs(struct pci_bus *bus)
 		pnv_ioda_setup_npu_PE(pdev);
 }
 
-static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe);
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group, struct pci_bus *bus);
 
 static void pnv_pci_ioda_setup_PEs(void)
 {
@@ -1593,7 +1595,7 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
 		mutex_unlock(&phb->ioda.pe_list_mutex);
 
 		pnv_pci_ioda2_setup_dma_pe(phb, pe);
-		pnv_ioda_setup_bus_iommu_group(pe);
+		pnv_ioda_setup_bus_iommu_group(pe, &pe->table_group, NULL);
 	}
 }
 
@@ -2554,7 +2556,7 @@ static long pnv_pci_ioda2_unset_window(struct iommu_table_group *table_group,
 #endif
 
 #ifdef CONFIG_IOMMU_API
-static unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
+unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
 		__u64 window_size, __u32 levels)
 {
 	unsigned long bytes = 0;
@@ -2628,127 +2630,38 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops = {
 	.release_ownership = pnv_ioda2_release_ownership,
 };
 
-static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
-{
-	struct pci_controller *hose;
-	struct pnv_phb *phb;
-	struct pnv_ioda_pe **ptmppe = opaque;
-	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
-	struct pci_dn *pdn = pci_get_pdn(pdev);
-
-	if (!pdn || pdn->pe_number == IODA_INVALID_PE)
-		return 0;
-
-	hose = pci_bus_to_host(pdev->bus);
-	phb = hose->private_data;
-	if (phb->type != PNV_PHB_NPU_NVLINK)
-		return 0;
-
-	*ptmppe = &phb->ioda.pe_array[pdn->pe_number];
-
-	return 1;
-}
-
-/*
- * This returns PE of associated NPU.
- * This assumes that NPU is in the same IOMMU group with GPU and there is
- * no other PEs.
- */
-static struct pnv_ioda_pe *gpe_table_group_to_npe(
-		struct iommu_table_group *table_group)
-{
-	struct pnv_ioda_pe *npe = NULL;
-	int ret = iommu_group_for_each_dev(table_group->group, &npe,
-			gpe_table_group_to_npe_cb);
-
-	BUG_ON(!ret || !npe);
-
-	return npe;
-}
-
-static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group *table_group,
-		int num, struct iommu_table *tbl)
-{
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
-
-	if (ret)
-		return ret;
-
-	ret = npe->table_group.ops->set_window(&npe->table_group, num, tbl);
-	if (ret)
-		pnv_pci_ioda2_unset_window(table_group, num);
-
-	return ret;
-}
-
-static long pnv_pci_ioda2_npu_unset_window(
-		struct iommu_table_group *table_group,
-		int num)
-{
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-	long ret = pnv_pci_ioda2_unset_window(table_group, num);
-
-	if (ret)
-		return ret;
-
-	return npe->table_group.ops->unset_window(&npe->table_group, num);
-}
-
-static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
-{
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-
-	npe->table_group.ops->take_ownership(&npe->table_group);
-	pnv_ioda2_take_ownership(table_group);
-}
-
-static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
-	.get_table_size = pnv_pci_ioda2_get_table_size,
-	.create_table = pnv_pci_ioda2_create_table_userspace,
-	.set_window = pnv_pci_ioda2_npu_set_window,
-	.unset_window = pnv_pci_ioda2_npu_unset_window,
-	.take_ownership = pnv_ioda2_npu_take_ownership,
-	.release_ownership = pnv_ioda2_release_ownership,
-};
-
 static void pnv_ioda_setup_bus_iommu_group_add_devices(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group,
 		struct pci_bus *bus)
 {
 	struct pci_dev *dev;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
-		iommu_add_device(&pe->table_group, &dev->dev);
+		iommu_add_device(table_group, &dev->dev);
 
 		if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate)
 			pnv_ioda_setup_bus_iommu_group_add_devices(pe,
-					dev->subordinate);
+					table_group, dev->subordinate);
 	}
 }
 
-static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe)
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group, struct pci_bus *bus)
 {
-	if (!pnv_pci_ioda_pe_dma_weight(pe))
-		return;
 
-	iommu_register_group(&pe->table_group, pe->phb->hose->global_number,
-			pe->pe_number);
-
-	/*
-	 * set_iommu_table_base(&pe->pdev->dev, tbl) should have been called
-	 * by now
-	 */
 	if (pe->flags & PNV_IODA_PE_DEV)
-		iommu_add_device(&pe->table_group, &pe->pdev->dev);
-	else if (pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))
-		pnv_ioda_setup_bus_iommu_group_add_devices(pe, pe->pbus);
+		iommu_add_device(table_group, &pe->pdev->dev);
+
+	if ((pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL)) || bus)
+		pnv_ioda_setup_bus_iommu_group_add_devices(pe, table_group,
+				bus);
 }
 
 static void pnv_pci_ioda_setup_iommu_api(void)
 {
-	struct pci_controller *hose, *tmp;
+	struct pci_controller *hose;
 	struct pnv_phb *phb;
-	struct pnv_ioda_pe *pe, *gpe;
+	struct pnv_ioda_pe *pe;
 
 	/*
 	 * There are 4 types of PEs:
@@ -2770,29 +2683,43 @@ static void pnv_pci_ioda_setup_iommu_api(void)
 		if (phb->type == PNV_PHB_NPU_NVLINK)
 			continue;
 
-		list_for_each_entry(pe, &phb->ioda.pe_list, list)
-			pnv_ioda_setup_bus_iommu_group(pe);
+		list_for_each_entry(pe, &phb->ioda.pe_list, list) {
+			struct iommu_table_group *table_group;
+
+			table_group = pnv_try_setup_npu_table_group(pe);
+			if (!table_group) {
+				if (!pnv_pci_ioda_pe_dma_weight(pe))
+					continue;
+
+				table_group = &pe->table_group;
+				iommu_register_group(&pe->table_group,
+						pe->phb->hose->global_number,
+						pe->pe_number);
+			}
+			pnv_ioda_setup_bus_iommu_group(pe, table_group,
+					pe->pbus);
+		}
 	}
 
 	/*
 	 * Now we have all PHBs discovered, time to add NPU devices to
 	 * the corresponding IOMMU groups.
 	 */
-	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+	list_for_each_entry(hose, &hose_list, list_node) {
 		phb = hose->private_data;
 
 		if (phb->type != PNV_PHB_NPU_NVLINK)
 			continue;
 
-		list_for_each_entry(pe, &phb->ioda.pe_list, list) {
-			gpe = pnv_pci_npu_setup_iommu(pe);
-			if (gpe)
-				gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
-		}
+		list_for_each_entry(pe, &phb->ioda.pe_list, list)
+			pnv_npu_compound_attach(pe);
 	}
 }
 #else /* !CONFIG_IOMMU_API */
-static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe) { }
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group, struct pci_bus *bus)
+{
+}
 static void pnv_pci_ioda_setup_iommu_api(void) { };
 #endif
 
-- 
2.17.1



* [PATCH kernel v4 15/19] powerpc/powernv/npu: Add compound IOMMU groups
@ 2018-11-23  5:53   ` Alexey Kardashevskiy
  0 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:53 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

At the moment the powernv platform registers an IOMMU group for each PE.
There is an exception though: an NVLink bridge is attached to
the corresponding GPU's IOMMU group, making the GPU a master.

Now we have POWER9 systems with GPUs connected to each other directly,
bypassing PCI. At the moment we do not control the state of these links
so we have to put such interconnected GPUs into one IOMMU group, which
means that the old scheme with one GPU as a master won't work - there
will be up to 3 GPUs in such a group.

This introduces an npu_comp struct which represents a compound IOMMU
group made of multiple PEs - PCI PEs (for GPUs) and NPU PEs (for NVLink
bridges). This converts the existing NVLink1 code to use the new scheme.
From now on, each PE must have a valid iommu_table_group_ops which will
either be called directly (for a single-PE group) or indirectly from
the compound group's handlers.

This moves IOMMU group registration for NVLink-connected GPUs to npu-dma.c.
For POWER8, this stores a new compound group pointer in the PE (so a GPU
is still a master); for POWER9 the new group pointer is stored in an NPU
(which is allocated per PCI host controller).

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 arch/powerpc/include/asm/pci.h            |   1 +
 arch/powerpc/platforms/powernv/pci.h      |   7 +
 arch/powerpc/platforms/powernv/npu-dma.c  | 285 ++++++++++++++++++++--
 arch/powerpc/platforms/powernv/pci-ioda.c | 155 ++++--------
 4 files changed, 309 insertions(+), 139 deletions(-)

diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h
index baf2886..0c72f18 100644
--- a/arch/powerpc/include/asm/pci.h
+++ b/arch/powerpc/include/asm/pci.h
@@ -132,5 +132,6 @@ extern struct pci_dev *pnv_pci_get_npu_dev(struct pci_dev *gpdev, int index);
 extern int pnv_npu2_init(struct pci_controller *hose);
 extern int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
 		unsigned long msr);
+extern int pnv_npu2_unmap_lpar_dev(struct pci_dev *gpdev);
 
 #endif /* __ASM_POWERPC_PCI_H */
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index cf9f748..aef4bb5 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -62,6 +62,7 @@ struct pnv_ioda_pe {
 
 	/* "Base" iommu table, ie, 4K TCEs, 32-bit DMA */
 	struct iommu_table_group table_group;
+	struct npu_comp		*npucomp;
 
 	/* 64-bit TCE bypass region */
 	bool			tce_bypass_enabled;
@@ -201,6 +202,8 @@ extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 extern struct pnv_ioda_pe *pnv_ioda_get_pe(struct pci_dev *dev);
 extern void pnv_set_msi_irq_chip(struct pnv_phb *phb, unsigned int virq);
 extern void pnv_pci_ioda2_set_bypass(struct pnv_ioda_pe *pe, bool enable);
+extern unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
+		__u64 window_size, __u32 levels);
 extern int pnv_eeh_post_init(void);
 
 extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
@@ -216,6 +219,10 @@ extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe);
+extern struct iommu_table_group *pnv_try_setup_npu_table_group(
+		struct pnv_ioda_pe *pe);
+extern struct iommu_table_group *pnv_npu_compound_attach(
+		struct pnv_ioda_pe *pe);
 
 /* pci-ioda-tce.c */
 #define POWERNV_IOMMU_DEFAULT_LEVELS	1
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index dc629ee..09edda6 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -328,31 +328,6 @@ static struct iommu_table_group_ops pnv_pci_npu_ops = {
 	.unset_window = pnv_npu_unset_window,
 	.take_ownership = pnv_npu_take_ownership,
 };
-
-struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
-{
-	struct pnv_phb *phb = npe->phb;
-	struct pci_bus *pbus = phb->hose->bus;
-	struct pci_dev *npdev, *gpdev = NULL, *gptmp;
-	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
-
-	if (!gpe || !gpdev)
-		return NULL;
-
-	npe->table_group.ops = &pnv_pci_npu_ops;
-
-	list_for_each_entry(npdev, &pbus->devices, bus_list) {
-		gptmp = pnv_pci_get_gpu_dev(npdev);
-
-		if (gptmp != gpdev)
-			continue;
-
-		pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev));
-		iommu_group_add_device(gpe->table_group.group, &npdev->dev);
-	}
-
-	return gpe;
-}
 #endif /* !CONFIG_IOMMU_API */
 
 /*
@@ -360,6 +335,17 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
  */
 /* Maximum possible number of ATSD MMIO registers per NPU */
 #define NV_NMMU_ATSD_REGS 8
+#define NV_NPU_MAX_PE_NUM	16
+
+/*
+ * A compound NPU IOMMU group which might consist of 1 GPU + 2xNPUs (POWER8) or
+ * up to 3 x (GPU + 2xNPUs) (POWER9).
+ */
+struct npu_comp {
+	struct iommu_table_group table_group;
+	int pe_num;
+	struct pnv_ioda_pe *pe[NV_NPU_MAX_PE_NUM];
+};
 
 /* An NPU descriptor, valid for POWER9 only */
 struct npu {
@@ -372,8 +358,257 @@ struct npu {
 
 	/* Do we need to explicitly flush the nest mmu? */
 	bool nmmu_flush;
+
+	struct npu_comp npucomp;
 };
 
+#ifdef CONFIG_IOMMU_API
+static long pnv_npu_peers_create_table_userspace(
+		struct iommu_table_group *table_group,
+		int num, __u32 page_shift, __u64 window_size, __u32 levels,
+		struct iommu_table **ptbl)
+{
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	if (!npucomp->pe_num || !npucomp->pe[0] ||
+			!npucomp->pe[0]->table_group.ops ||
+			!npucomp->pe[0]->table_group.ops->create_table)
+		return -EFAULT;
+
+	return npucomp->pe[0]->table_group.ops->create_table(
+			&npucomp->pe[0]->table_group, num, page_shift,
+			window_size, levels, ptbl);
+}
+
+static long pnv_npu_peers_set_window(struct iommu_table_group *table_group,
+		int num, struct iommu_table *tbl)
+{
+	int i, j;
+	long ret = 0;
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	for (i = 0; i < npucomp->pe_num; ++i) {
+		struct pnv_ioda_pe *pe = npucomp->pe[i];
+
+		if (!pe->table_group.ops->set_window)
+			continue;
+
+		ret = pe->table_group.ops->set_window(&pe->table_group,
+				num, tbl);
+		if (ret)
+			break;
+	}
+
+	if (ret) {
+		for (j = 0; j < i; ++j) {
+			struct pnv_ioda_pe *pe = npucomp->pe[j];
+
+			if (!pe->table_group.ops->unset_window)
+				continue;
+
+			ret = pe->table_group.ops->unset_window(
+					&pe->table_group, num);
+			if (ret)
+				break;
+		}
+	} else {
+		table_group->tables[num] = iommu_tce_table_get(tbl);
+	}
+
+	return ret;
+}
+
+static long pnv_npu_peers_unset_window(struct iommu_table_group *table_group,
+		int num)
+{
+	int i, j;
+	long ret = 0;
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	for (i = 0; i < npucomp->pe_num; ++i) {
+		struct pnv_ioda_pe *pe = npucomp->pe[i];
+
+		WARN_ON(npucomp->table_group.tables[num] !=
+				table_group->tables[num]);
+		if (!npucomp->table_group.tables[num])
+			continue;
+
+		if (!pe->table_group.ops->unset_window)
+			continue;
+
+		ret = pe->table_group.ops->unset_window(&pe->table_group, num);
+		if (ret)
+			break;
+	}
+
+	if (ret) {
+		for (j = 0; j < i; ++j) {
+			struct pnv_ioda_pe *pe = npucomp->pe[j];
+
+			if (!npucomp->table_group.tables[num])
+				continue;
+
+			if (!pe->table_group.ops->set_window)
+				continue;
+
+			ret = pe->table_group.ops->set_window(&pe->table_group,
+					num, table_group->tables[num]);
+			if (ret)
+				break;
+		}
+	} else if (table_group->tables[num]) {
+		iommu_tce_table_put(table_group->tables[num]);
+		table_group->tables[num] = NULL;
+	}
+
+	return ret;
+}
+
+static void pnv_npu_peers_take_ownership(struct iommu_table_group *table_group)
+{
+	int i;
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	for (i = 0; i < npucomp->pe_num; ++i) {
+		struct pnv_ioda_pe *pe = npucomp->pe[i];
+
+		if (!pe->table_group.ops->take_ownership)
+			continue;
+		pe->table_group.ops->take_ownership(&pe->table_group);
+	}
+}
+
+static void pnv_npu_peers_release_ownership(
+		struct iommu_table_group *table_group)
+{
+	int i;
+	struct npu_comp *npucomp = container_of(table_group, struct npu_comp,
+			table_group);
+
+	for (i = 0; i < npucomp->pe_num; ++i) {
+		struct pnv_ioda_pe *pe = npucomp->pe[i];
+
+		if (!pe->table_group.ops->release_ownership)
+			continue;
+		pe->table_group.ops->release_ownership(&pe->table_group);
+	}
+}
+
+static struct iommu_table_group_ops pnv_npu_peers_ops = {
+	.get_table_size = pnv_pci_ioda2_get_table_size,
+	.create_table = pnv_npu_peers_create_table_userspace,
+	.set_window = pnv_npu_peers_set_window,
+	.unset_window = pnv_npu_peers_unset_window,
+	.take_ownership = pnv_npu_peers_take_ownership,
+	.release_ownership = pnv_npu_peers_release_ownership,
+};
+
+static void pnv_comp_attach_table_group(struct npu_comp *npucomp,
+		struct pnv_ioda_pe *pe)
+{
+	if (WARN_ON(npucomp->pe_num == NV_NPU_MAX_PE_NUM))
+		return;
+
+	npucomp->pe[npucomp->pe_num] = pe;
+	++npucomp->pe_num;
+}
+
+struct iommu_table_group *pnv_try_setup_npu_table_group(struct pnv_ioda_pe *pe)
+{
+	struct iommu_table_group *table_group;
+	struct npu_comp *npucomp;
+	struct pci_dev *gpdev = NULL;
+	struct pci_controller *hose;
+	struct pci_dev *npdev;
+
+	list_for_each_entry(gpdev, &pe->pbus->devices, bus_list) {
+		npdev = pnv_pci_get_npu_dev(gpdev, 0);
+		if (npdev)
+			break;
+	}
+
+	if (!npdev)
+		/* It is not an NPU attached device, skip */
+		return NULL;
+
+	hose = pci_bus_to_host(npdev->bus);
+
+	if (hose->npu) {
+		table_group = &hose->npu->npucomp.table_group;
+
+		if (!table_group->group) {
+			table_group->ops = &pnv_npu_peers_ops;
+			iommu_register_group(table_group,
+					hose->global_number,
+					pe->pe_number);
+		}
+	} else {
+		/* Create a group for 1 GPU and attached NPUs for POWER8 */
+		pe->npucomp = kzalloc(sizeof(*pe->npucomp), GFP_KERNEL);
+		table_group = &pe->npucomp->table_group;
+		table_group->ops = &pnv_npu_peers_ops;
+		iommu_register_group(table_group, hose->global_number,
+				pe->pe_number);
+	}
+
+	/* Steal capabilities from a GPU PE */
+	table_group->max_dynamic_windows_supported =
+		pe->table_group.max_dynamic_windows_supported;
+	table_group->tce32_start = pe->table_group.tce32_start;
+	table_group->tce32_size = pe->table_group.tce32_size;
+	table_group->max_levels = pe->table_group.max_levels;
+	table_group->pgsizes = pe->table_group.pgsizes;
+
+	npucomp = container_of(table_group, struct npu_comp, table_group);
+	pnv_comp_attach_table_group(npucomp, pe);
+
+	return table_group;
+}
+
+struct iommu_table_group *pnv_npu_compound_attach(struct pnv_ioda_pe *pe)
+{
+	struct iommu_table_group *table_group;
+	struct npu_comp *npucomp;
+	struct pci_dev *gpdev = NULL;
+	struct pci_dev *npdev;
+	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(pe, &gpdev);
+
+	WARN_ON(!(pe->flags & PNV_IODA_PE_DEV));
+	if (!gpe)
+		return NULL;
+
+	/*
+	 * IODA2 bridges get this set up from
+	 * pci_controller_ops::setup_bridge but NPU bridges do not
+	 * have this hook defined so we do it here.
+	 */
+	pe->table_group.max_dynamic_windows_supported =
+		IOMMU_TABLE_GROUP_MAX_TABLES;
+	pe->table_group.ops = &pnv_pci_npu_ops;
+
+	table_group = iommu_group_get_iommudata(
+			iommu_group_get(&gpdev->dev));
+
+	npucomp = container_of(table_group, struct npu_comp, table_group);
+	pnv_comp_attach_table_group(npucomp, pe);
+
+	list_for_each_entry(npdev, &pe->phb->hose->bus->devices, bus_list) {
+		struct pci_dev *gpdevtmp = pnv_pci_get_gpu_dev(npdev);
+
+		if (gpdevtmp != gpdev)
+			continue;
+
+		iommu_add_device(table_group, &npdev->dev);
+	}
+
+	return table_group;
+}
+#endif /* CONFIG_IOMMU_API */
+
 /* Maximum number of nvlinks per npu */
 #define NV_MAX_LINKS 6
 
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index c9e7bcb..5b435d3 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -190,7 +190,8 @@ static void pnv_ioda_free_pe(struct pnv_ioda_pe *pe)
 	unsigned int pe_num = pe->pe_number;
 
 	WARN_ON(pe->pdev);
-
+	WARN_ON(pe->npucomp); /* NPUs are not supposed to be freed */
+	kfree(pe->npucomp);
 	memset(pe, 0, sizeof(struct pnv_ioda_pe));
 	clear_bit(pe_num, phb->ioda.pe_alloc);
 }
@@ -1269,7 +1270,8 @@ static void pnv_ioda_setup_npu_PEs(struct pci_bus *bus)
 		pnv_ioda_setup_npu_PE(pdev);
 }
 
-static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe);
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group, struct pci_bus *bus);
 
 static void pnv_pci_ioda_setup_PEs(void)
 {
@@ -1593,7 +1595,7 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
 		mutex_unlock(&phb->ioda.pe_list_mutex);
 
 		pnv_pci_ioda2_setup_dma_pe(phb, pe);
-		pnv_ioda_setup_bus_iommu_group(pe);
+		pnv_ioda_setup_bus_iommu_group(pe, &pe->table_group, NULL);
 	}
 }
 
@@ -2554,7 +2556,7 @@ static long pnv_pci_ioda2_unset_window(struct iommu_table_group *table_group,
 #endif
 
 #ifdef CONFIG_IOMMU_API
-static unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
+unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
 		__u64 window_size, __u32 levels)
 {
 	unsigned long bytes = 0;
@@ -2628,127 +2630,38 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops = {
 	.release_ownership = pnv_ioda2_release_ownership,
 };
 
-static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
-{
-	struct pci_controller *hose;
-	struct pnv_phb *phb;
-	struct pnv_ioda_pe **ptmppe = opaque;
-	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
-	struct pci_dn *pdn = pci_get_pdn(pdev);
-
-	if (!pdn || pdn->pe_number == IODA_INVALID_PE)
-		return 0;
-
-	hose = pci_bus_to_host(pdev->bus);
-	phb = hose->private_data;
-	if (phb->type != PNV_PHB_NPU_NVLINK)
-		return 0;
-
-	*ptmppe = &phb->ioda.pe_array[pdn->pe_number];
-
-	return 1;
-}
-
-/*
- * This returns PE of associated NPU.
- * This assumes that NPU is in the same IOMMU group with GPU and there is
- * no other PEs.
- */
-static struct pnv_ioda_pe *gpe_table_group_to_npe(
-		struct iommu_table_group *table_group)
-{
-	struct pnv_ioda_pe *npe = NULL;
-	int ret = iommu_group_for_each_dev(table_group->group, &npe,
-			gpe_table_group_to_npe_cb);
-
-	BUG_ON(!ret || !npe);
-
-	return npe;
-}
-
-static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group *table_group,
-		int num, struct iommu_table *tbl)
-{
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
-
-	if (ret)
-		return ret;
-
-	ret = npe->table_group.ops->set_window(&npe->table_group, num, tbl);
-	if (ret)
-		pnv_pci_ioda2_unset_window(table_group, num);
-
-	return ret;
-}
-
-static long pnv_pci_ioda2_npu_unset_window(
-		struct iommu_table_group *table_group,
-		int num)
-{
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-	long ret = pnv_pci_ioda2_unset_window(table_group, num);
-
-	if (ret)
-		return ret;
-
-	return npe->table_group.ops->unset_window(&npe->table_group, num);
-}
-
-static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
-{
-	struct pnv_ioda_pe *npe = gpe_table_group_to_npe(table_group);
-
-	npe->table_group.ops->take_ownership(&npe->table_group);
-	pnv_ioda2_take_ownership(table_group);
-}
-
-static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
-	.get_table_size = pnv_pci_ioda2_get_table_size,
-	.create_table = pnv_pci_ioda2_create_table_userspace,
-	.set_window = pnv_pci_ioda2_npu_set_window,
-	.unset_window = pnv_pci_ioda2_npu_unset_window,
-	.take_ownership = pnv_ioda2_npu_take_ownership,
-	.release_ownership = pnv_ioda2_release_ownership,
-};
-
 static void pnv_ioda_setup_bus_iommu_group_add_devices(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group,
 		struct pci_bus *bus)
 {
 	struct pci_dev *dev;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
-		iommu_add_device(&pe->table_group, &dev->dev);
+		iommu_add_device(table_group, &dev->dev);
 
 		if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate)
 			pnv_ioda_setup_bus_iommu_group_add_devices(pe,
-					dev->subordinate);
+					table_group, dev->subordinate);
 	}
 }
 
-static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe)
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group, struct pci_bus *bus)
 {
-	if (!pnv_pci_ioda_pe_dma_weight(pe))
-		return;
 
-	iommu_register_group(&pe->table_group, pe->phb->hose->global_number,
-			pe->pe_number);
-
-	/*
-	 * set_iommu_table_base(&pe->pdev->dev, tbl) should have been called
-	 * by now
-	 */
 	if (pe->flags & PNV_IODA_PE_DEV)
-		iommu_add_device(&pe->table_group, &pe->pdev->dev);
-	else if (pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))
-		pnv_ioda_setup_bus_iommu_group_add_devices(pe, pe->pbus);
+		iommu_add_device(table_group, &pe->pdev->dev);
+
+	if ((pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL)) || bus)
+		pnv_ioda_setup_bus_iommu_group_add_devices(pe, table_group,
+				bus);
 }
 
 static void pnv_pci_ioda_setup_iommu_api(void)
 {
-	struct pci_controller *hose, *tmp;
+	struct pci_controller *hose;
 	struct pnv_phb *phb;
-	struct pnv_ioda_pe *pe, *gpe;
+	struct pnv_ioda_pe *pe;
 
 	/*
 	 * There are 4 types of PEs:
@@ -2770,29 +2683,43 @@ static void pnv_pci_ioda_setup_iommu_api(void)
 		if (phb->type == PNV_PHB_NPU_NVLINK)
 			continue;
 
-		list_for_each_entry(pe, &phb->ioda.pe_list, list)
-			pnv_ioda_setup_bus_iommu_group(pe);
+		list_for_each_entry(pe, &phb->ioda.pe_list, list) {
+			struct iommu_table_group *table_group;
+
+			table_group = pnv_try_setup_npu_table_group(pe);
+			if (!table_group) {
+				if (!pnv_pci_ioda_pe_dma_weight(pe))
+					continue;
+
+				table_group = &pe->table_group;
+				iommu_register_group(&pe->table_group,
+						pe->phb->hose->global_number,
+						pe->pe_number);
+			}
+			pnv_ioda_setup_bus_iommu_group(pe, table_group,
+					pe->pbus);
+		}
 	}
 
 	/*
 	 * Now we have all PHBs discovered, time to add NPU devices to
 	 * the corresponding IOMMU groups.
 	 */
-	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+	list_for_each_entry(hose, &hose_list, list_node) {
 		phb = hose->private_data;
 
 		if (phb->type != PNV_PHB_NPU_NVLINK)
 			continue;
 
-		list_for_each_entry(pe, &phb->ioda.pe_list, list) {
-			gpe = pnv_pci_npu_setup_iommu(pe);
-			if (gpe)
-				gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
-		}
+		list_for_each_entry(pe, &phb->ioda.pe_list, list)
+			pnv_npu_compound_attach(pe);
 	}
 }
 #else /* !CONFIG_IOMMU_API */
-static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe) { }
+static void pnv_ioda_setup_bus_iommu_group(struct pnv_ioda_pe *pe,
+		struct iommu_table_group *table_group, struct pci_bus *bus)
+{
+}
 static void pnv_pci_ioda_setup_iommu_api(void) { };
 #endif
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 70+ messages in thread
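The reworked pnv_ioda_setup_bus_iommu_group_add_devices() above walks a PCI bus recursively: every device on the bus is added to the given IOMMU table group, and for PEs spanning a whole bus tree (PNV_IODA_PE_BUS_ALL) the walk descends into subordinate buses. A minimal user-space sketch of the same traversal pattern, using hypothetical toy structs (toy_bus/toy_dev, add_devices) in place of the kernel's struct pci_bus/struct pci_dev:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for struct pci_dev / struct pci_bus. */
struct toy_bus;
struct toy_dev {
	struct toy_dev *next;        /* next device on the same bus */
	struct toy_bus *subordinate; /* non-NULL when the device is a bridge */
};
struct toy_bus {
	struct toy_dev *devices;     /* singly linked list of devices */
};

/*
 * Mirrors the shape of pnv_ioda_setup_bus_iommu_group_add_devices():
 * visit every device on the bus, and when the PE covers the whole tree
 * (the PNV_IODA_PE_BUS_ALL case), recurse into subordinate buses.
 * Returns how many devices would have been passed to iommu_add_device().
 */
static size_t add_devices(const struct toy_bus *bus, int bus_all)
{
	size_t added = 0;
	const struct toy_dev *dev;

	for (dev = bus->devices; dev; dev = dev->next) {
		added++; /* kernel calls iommu_add_device(table_group, &dev->dev) */
		if (bus_all && dev->subordinate)
			added += add_devices(dev->subordinate, bus_all);
	}
	return added;
}
```

This is only a sketch of the traversal order, not the kernel code itself; the real function also threads the iommu_table_group through the recursion.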

* [PATCH kernel v4 16/19] powerpc/powernv/npu: Add release_ownership hook
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:53   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:53 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

In order to make ATS work and translate addresses for an arbitrary
LPID and PID, we need to program the NPU with the LPID and allow PID
wildcard matching with a specific MSR mask.

This implements a helper to assign a GPU to an LPAR and program the NPU
with a PID wildcard, plus a helper to do the clean-up. The mapping
helper takes an MSR mask (only the DR/HV/PR/SF bits are allowed) to
program into the NPU2 for ATS checkout request support.

This exports pnv_npu2_unmap_lpar_dev() as following patches will use it
from the VFIO driver.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 arch/powerpc/platforms/powernv/npu-dma.c | 53 ++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 09edda6..41cc3c4 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -300,6 +300,7 @@ static void pnv_npu_take_ownership(struct iommu_table_group *table_group)
 			table_group);
 	struct pnv_phb *phb = npe->phb;
 	int64_t rc;
+	struct pci_dev *gpdev = NULL;
 
 	/*
 	 * Note: NPU has just a single TVE in the hardware which means that
@@ -321,12 +322,28 @@ static void pnv_npu_take_ownership(struct iommu_table_group *table_group)
 		return;
 	}
 	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+
+	get_gpu_pci_dev_and_pe(npe, &gpdev);
+	if (gpdev)
+		pnv_npu2_unmap_lpar_dev(gpdev);
+}
+
+static void pnv_npu_release_ownership(struct iommu_table_group *table_group)
+{
+	struct pnv_ioda_pe *npe = container_of(table_group, struct pnv_ioda_pe,
+			table_group);
+	struct pci_dev *gpdev = NULL;
+
+	get_gpu_pci_dev_and_pe(npe, &gpdev);
+	if (gpdev)
+		pnv_npu2_map_lpar_dev(gpdev, 0, MSR_DR | MSR_PR | MSR_HV);
 }
 
 static struct iommu_table_group_ops pnv_pci_npu_ops = {
 	.set_window = pnv_npu_set_window,
 	.unset_window = pnv_npu_unset_window,
 	.take_ownership = pnv_npu_take_ownership,
+	.release_ownership = pnv_npu_release_ownership,
 };
 #endif /* !CONFIG_IOMMU_API */
 
@@ -1231,3 +1248,39 @@ void pnv_npu2_map_lpar(struct pnv_ioda_pe *gpe, unsigned long msr)
 	list_for_each_entry(gpdev, &gpe->pbus->devices, bus_list)
 		pnv_npu2_map_lpar_dev(gpdev, 0, msr);
 }
+
+int pnv_npu2_unmap_lpar_dev(struct pci_dev *gpdev)
+{
+	int ret;
+	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
+	struct pci_controller *hose;
+	struct pnv_phb *nphb;
+
+	if (!npdev)
+		return -ENODEV;
+
+	hose = pci_bus_to_host(npdev->bus);
+	nphb = hose->private_data;
+
+	dev_dbg(&gpdev->dev, "destroy context opalid=%llu\n",
+			nphb->opal_id);
+	ret = opal_npu_destroy_context(nphb->opal_id, 0/*__unused*/,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn));
+	if (ret < 0) {
+		dev_err(&gpdev->dev, "Failed to destroy context: %d\n", ret);
+		return ret;
+	}
+
+	/* Set LPID to 0 anyway, just to be safe */
+	dev_dbg(&gpdev->dev, "Map LPAR opalid=%llu lparid=0\n", nphb->opal_id);
+	ret = opal_npu_map_lpar(nphb->opal_id,
+			PCI_DEVID(gpdev->bus->number, gpdev->devfn), 0 /*LPID*/,
+			0 /* LPCR bits */);
+	if (ret)
+		dev_err(&gpdev->dev, "Error %d mapping device to LPAR\n", ret);
+
+	opal_purge_cache();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pnv_npu2_unmap_lpar_dev);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
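The OPAL calls above (opal_npu_destroy_context(), opal_npu_map_lpar()) identify the GPU by PCI_DEVID(gpdev->bus->number, gpdev->devfn), a 16-bit bus/device/function value. For reference, here is a user-space sketch of that encoding; the macros are copied from include/linux/pci.h so the snippet builds standalone, and the npu2_bdfn() helper name is invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Copies of the kernel's PCI addressing macros (include/linux/pci.h). */
#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_SLOT(devfn)		(((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)		((devfn) & 0x07)
#define PCI_DEVID(bus, devfn)	(((uint16_t)(bus) << 8) | (devfn))

/* Hypothetical helper: compose the bus/devfn ID passed to OPAL. */
static uint16_t npu2_bdfn(uint8_t bus, uint8_t slot, uint8_t func)
{
	return PCI_DEVID(bus, PCI_DEVFN(slot, func));
}
```

So a GPU at 0035:03:00.0, for example, is identified to OPAL by the low 16 bits of its bus/devfn, independent of the PCI domain.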


* [PATCH kernel v4 17/19] vfio_pci: Allow mapping extra regions
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:53   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:53 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

So far we have only allowed mapping of MMIO BARs to userspace. However
there are GPUs with on-board coherent RAM accessible via side channels
which we also want to map to userspace. The first client for this is
the NVIDIA V100 GPU with NVLink2 direct links to a POWER9 NPU-enabled
CPU; such GPUs have 16GB of RAM which is coherently mapped to the
system address space, and we are going to export it as an extra
PCI region.

We already support extra PCI regions and this adds support for mapping
them to the userspace.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
Changes:
v2:
* reverted one of mistakenly removed error checks
---
 drivers/vfio/pci/vfio_pci_private.h | 3 +++
 drivers/vfio/pci/vfio_pci.c         | 9 +++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index cde3b5d..86aab05 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -59,6 +59,9 @@ struct vfio_pci_regops {
 		      size_t count, loff_t *ppos, bool iswrite);
 	void	(*release)(struct vfio_pci_device *vdev,
 			   struct vfio_pci_region *region);
+	int	(*mmap)(struct vfio_pci_device *vdev,
+			struct vfio_pci_region *region,
+			struct vm_area_struct *vma);
 };
 
 struct vfio_pci_region {
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index fef5002..4a6f7c0 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -1130,6 +1130,15 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
 		return -EINVAL;
 	if ((vma->vm_flags & VM_SHARED) == 0)
 		return -EINVAL;
+	if (index >= VFIO_PCI_NUM_REGIONS) {
+		int regnum = index - VFIO_PCI_NUM_REGIONS;
+		struct vfio_pci_region *region = vdev->region + regnum;
+
+		if (region && region->ops && region->ops->mmap &&
+		    (region->flags & VFIO_REGION_INFO_FLAG_MMAP))
+			return region->ops->mmap(vdev, region, vma);
+		return -EINVAL;
+	}
 	if (index >= VFIO_PCI_ROM_REGION_INDEX)
 		return -EINVAL;
 	if (!vdev->bar_mmap_supported[index])
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
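The new branch in vfio_pci_mmap() above keys off the region index encoded in the high bits of the mmap offset: indexes past the fixed set (BARs, ROM, config, VGA) select a device-specific "extra" region. A user-space sketch of that index math, with constants copied from drivers/vfio/pci/vfio_pci_private.h as of this series (VFIO_PCI_OFFSET_SHIFT is 40, the fixed region count is 9) and a hypothetical extra_region_num() helper:

```c
#include <assert.h>
#include <stdint.h>

/* Constants as in vfio_pci_private.h at the time of this series. */
#define VFIO_PCI_OFFSET_SHIFT	40
#define VFIO_PCI_OFFSET_TO_INDEX(off) ((off) >> VFIO_PCI_OFFSET_SHIFT)
#define VFIO_PCI_INDEX_TO_OFFSET(idx) ((uint64_t)(idx) << VFIO_PCI_OFFSET_SHIFT)
#define VFIO_PCI_NUM_REGIONS	9 /* BAR0..5, ROM, config, VGA */

/*
 * Mirrors the new check in vfio_pci_mmap(): an index at or past the
 * fixed regions picks a device-specific region (returns its number),
 * otherwise -1 for a standard BAR/ROM region.
 */
static int extra_region_num(uint64_t offset)
{
	unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(offset);

	if (index >= VFIO_PCI_NUM_REGIONS)
		return (int)(index - VFIO_PCI_NUM_REGIONS);
	return -1;
}
```

In the kernel the extra region is then only mmapable if the subdriver installed an ops->mmap hook and set VFIO_REGION_INFO_FLAG_MMAP on the region.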


* [PATCH kernel v4 18/19] vfio_pci: Allow regions to add own capabilities
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:53   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:53 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

VFIO regions already support region capabilities with a limited set of
fields. However the subdriver might have to report to the userspace
additional bits.

This adds an add_capability() hook to vfio_pci_regops.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v3:
* removed confusing rationale for the patch, the next patch makes
use of it anyway
---
 drivers/vfio/pci/vfio_pci_private.h | 3 +++
 drivers/vfio/pci/vfio_pci.c         | 6 ++++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 86aab05..93c1738 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -62,6 +62,9 @@ struct vfio_pci_regops {
 	int	(*mmap)(struct vfio_pci_device *vdev,
 			struct vfio_pci_region *region,
 			struct vm_area_struct *vma);
+	int	(*add_capability)(struct vfio_pci_device *vdev,
+				  struct vfio_pci_region *region,
+				  struct vfio_info_cap *caps);
 };
 
 struct vfio_pci_region {
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 4a6f7c0..6cb70cf 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -763,6 +763,12 @@ static long vfio_pci_ioctl(void *device_data,
 			if (ret)
 				return ret;
 
+			if (vdev->region[i].ops->add_capability) {
+				ret = vdev->region[i].ops->add_capability(vdev,
+						&vdev->region[i], &caps);
+				if (ret)
+					return ret;
+			}
 		}
 		}
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread
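The capabilities that the new add_capability() hook appends are reported to userspace as a chain of struct vfio_info_cap_header entries, each holding an offset (from the start of the info buffer) to the next capability, with 0 terminating the chain. A small user-space sketch of walking such a chain; the struct layout matches include/uapi/linux/vfio.h, while the buffer layout and count_caps() helper are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Layout as in struct vfio_info_cap_header (include/uapi/linux/vfio.h). */
struct vfio_info_cap_header {
	uint16_t id;
	uint16_t version;
	uint32_t next;	/* offset of the next capability, 0 == end */
};

/*
 * Walk the capability chain starting at cap_offset within the info
 * buffer, returning how many capability headers userspace would see.
 */
static int count_caps(const uint8_t *buf, uint32_t cap_offset)
{
	int n = 0;

	while (cap_offset) {
		struct vfio_info_cap_header hdr;

		memcpy(&hdr, buf + cap_offset, sizeof(hdr));
		n++;
		cap_offset = hdr.next;
	}
	return n;
}
```

This is how, in the next patch, the NPU2 "tgt" capability becomes visible to QEMU alongside any other region capabilities.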


* [PATCH kernel v4 19/19] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver
  2018-11-23  5:52 ` Alexey Kardashevskiy
@ 2018-11-23  5:53   ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:53 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
pluggable PCIe devices but still have PCIe links used for config space
and MMIO. In addition, the GPUs have 6 NVLinks which are connected to
other GPUs and to the POWER9 CPU. POWER9 chips have a special on-die
unit called an NPU which is an NVLink2 host bus adapter with
peer-to-peer connections to 2 or 3 GPUs, with 3 or 2 NVLinks to each.
These systems also support ATS (address translation services), which is
part of the NVLink2 protocol. Such GPUs also share their on-board RAM
(16GB or 32GB) with the system via the same NVLink2, so the CPU has
cache-coherent access to GPU RAM.

This exports the GPU RAM to userspace as a new VFIO device region. It
preregisters the new memory as device memory as it might be used for
DMA. Pfns are inserted from the fault handler because the GPU memory is
not onlined until the vendor driver has loaded and trained the NVLinks;
touching it earlier causes low-level errors, which we fence in the
firmware so they do not hurt the host system but are still better
avoided.

This also exports an ATSD (Address Translation Shootdown) register of
the NPU, which allows the operating system to perform TLB invalidations
inside the GPU. The register conveniently occupies a single 64k page and
is likewise presented to userspace as a new VFIO device region.

In order to provide the userspace with the information about GPU-to-NVLink
connections, this exports an additional capability called "tgt"
(which is an abbreviated host system bus address). The "tgt" property
tells the GPU its own system address and allows the guest driver to
conglomerate the routing information so each GPU knows how to get directly
to the other GPUs.

For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
know the LPID (a logical partition ID, or, in other words, a KVM guest
hardware ID) and the PID (a memory context ID of a userspace process,
not to be confused with a Linux pid). This assigns a GPU to an LPID in
the NPU, which is why this adds a listener for KVM on an IOMMU group. A
PID comes via NVLink from a GPU, and the NPU uses a PID wildcard to pass
it through.

This requires coherent memory and ATSD to be available on the host as
the GPU vendor only supports configurations with both features enabled
and other configurations are known not to work. Because of this and
because of the ways the features are advertised to the host system
(which is a device tree with very platform specific properties),
this requires the POWERNV platform to be enabled.

The V100 GPUs do not advertise any of these capabilities via the config
space, and there is more than one device ID, so this relies on the
platform to tell whether these GPUs have special abilities such as
NVLinks.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* added nvlink-speed to the NPU bridge capability as this turned out to
be not a constant value
* instead of looking at the exact device ID (which also changes from system
to system), now this (indirectly) looks at the device tree to know
if GPU and NPU support NVLink

v3:
* reworded the commit log about tgt
* added tracepoints (do we want them enabled for entire vfio-pci?)
* added code comments
* added write|mmap flags to the new regions
* auto enabled VFIO_PCI_NVLINK2 config option
* added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
references; these are required by the NVIDIA driver
* keep notifier registered only for short time
---
 drivers/vfio/pci/Makefile           |   1 +
 drivers/vfio/pci/trace.h            | 102 +++++++
 drivers/vfio/pci/vfio_pci_private.h |   2 +
 include/uapi/linux/vfio.h           |  27 ++
 drivers/vfio/pci/vfio_pci.c         |  37 ++-
 drivers/vfio/pci/vfio_pci_nvlink2.c | 448 ++++++++++++++++++++++++++++
 drivers/vfio/pci/Kconfig            |   6 +
 7 files changed, 621 insertions(+), 2 deletions(-)
 create mode 100644 drivers/vfio/pci/trace.h
 create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c

diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index 76d8ec0..9662c06 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -1,5 +1,6 @@
 
 vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
 vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
+vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
 
 obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
diff --git a/drivers/vfio/pci/trace.h b/drivers/vfio/pci/trace.h
new file mode 100644
index 0000000..b80d2d3
--- /dev/null
+++ b/drivers/vfio/pci/trace.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * VFIO PCI mmap/mmap_fault tracepoints
+ *
+ * Copyright (C) 2018 IBM Corp.  All rights reserved.
+ *     Author: Alexey Kardashevskiy <aik@ozlabs.ru>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM vfio_pci
+
+#if !defined(_TRACE_VFIO_PCI_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_VFIO_PCI_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(vfio_pci_nvgpu_mmap_fault,
+	TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua,
+			vm_fault_t ret),
+	TP_ARGS(pdev, hpa, ua, ret),
+
+	TP_STRUCT__entry(
+		__field(const char *, name)
+		__field(unsigned long, hpa)
+		__field(unsigned long, ua)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->name = dev_name(&pdev->dev);
+		__entry->hpa = hpa;
+		__entry->ua = ua;
+		__entry->ret = ret;
+	),
+
+	TP_printk("%s: %lx -> %lx ret=%d", __entry->name, __entry->hpa,
+			__entry->ua, __entry->ret)
+);
+
+TRACE_EVENT(vfio_pci_nvgpu_mmap,
+	TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua,
+			unsigned long size, int ret),
+	TP_ARGS(pdev, hpa, ua, size, ret),
+
+	TP_STRUCT__entry(
+		__field(const char *, name)
+		__field(unsigned long, hpa)
+		__field(unsigned long, ua)
+		__field(unsigned long, size)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->name = dev_name(&pdev->dev);
+		__entry->hpa = hpa;
+		__entry->ua = ua;
+		__entry->size = size;
+		__entry->ret = ret;
+	),
+
+	TP_printk("%s: %lx -> %lx size=%lx ret=%d", __entry->name, __entry->hpa,
+			__entry->ua, __entry->size, __entry->ret)
+);
+
+TRACE_EVENT(vfio_pci_npu2_mmap,
+	TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua,
+			unsigned long size, int ret),
+	TP_ARGS(pdev, hpa, ua, size, ret),
+
+	TP_STRUCT__entry(
+		__field(const char *, name)
+		__field(unsigned long, hpa)
+		__field(unsigned long, ua)
+		__field(unsigned long, size)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->name = dev_name(&pdev->dev);
+		__entry->hpa = hpa;
+		__entry->ua = ua;
+		__entry->size = size;
+		__entry->ret = ret;
+	),
+
+	TP_printk("%s: %lx -> %lx size=%lx ret=%d", __entry->name, __entry->hpa,
+			__entry->ua, __entry->size, __entry->ret)
+);
+
+#endif /* _TRACE_VFIO_PCI_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 93c1738..7639241 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -163,4 +163,6 @@ static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev)
 	return -ENODEV;
 }
 #endif
+extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
+extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
 #endif /* VFIO_PCI_PRIVATE_H */
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 8131028..547e71e 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -353,6 +353,20 @@ struct vfio_region_gfx_edid {
 #define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
 };
 
+/* 10de vendor sub-type
+ *
+ * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
+ */
+#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
+
+/*
+ * 1014 vendor sub-type
+ *
+ * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
+ * to do TLB invalidation on a GPU.
+ */
+#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
+
 /*
  * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
  * which allows direct access to non-MSIX registers which happened to be within
@@ -363,6 +377,19 @@ struct vfio_region_gfx_edid {
  */
 #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
 
+/*
+ * Capability with compressed real address (aka SSA - small system address)
+ * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
+ */
+#define VFIO_REGION_INFO_CAP_NPU2		4
+
+struct vfio_region_info_cap_npu2 {
+	struct vfio_info_cap_header header;
+	__u64 tgt;
+	__u32 link_speed;
+	__u32 __pad;
+};
+
 /**
  * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
  *				    struct vfio_irq_info)
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 6cb70cf..b8a53f9 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -224,6 +224,16 @@ static bool vfio_pci_nointx(struct pci_dev *pdev)
 	return false;
 }
 
+int __weak vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
+{
+	return -ENODEV;
+}
+
+int __weak vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
+{
+	return -ENODEV;
+}
+
 static int vfio_pci_enable(struct vfio_pci_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
@@ -302,14 +312,37 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
 		if (ret) {
 			dev_warn(&vdev->pdev->dev,
 				 "Failed to setup Intel IGD regions\n");
-			vfio_pci_disable(vdev);
-			return ret;
+			goto disable_exit;
+		}
+	}
+
+	if (pdev->vendor == PCI_VENDOR_ID_NVIDIA &&
+	    IS_ENABLED(CONFIG_VFIO_PCI_NVLINK2)) {
+		ret = vfio_pci_nvdia_v100_nvlink2_init(vdev);
+		if (ret && ret != -ENODEV) {
+			dev_warn(&vdev->pdev->dev,
+				 "Failed to setup NVIDIA NV2 RAM region\n");
+			goto disable_exit;
+		}
+	}
+
+	if (pdev->vendor == PCI_VENDOR_ID_IBM &&
+	    IS_ENABLED(CONFIG_VFIO_PCI_NVLINK2)) {
+		ret = vfio_pci_ibm_npu2_init(vdev);
+		if (ret && ret != -ENODEV) {
+			dev_warn(&vdev->pdev->dev,
+					"Failed to setup NVIDIA NV2 ATSD region\n");
+			goto disable_exit;
 		}
 	}
 
 	vfio_pci_probe_mmaps(vdev);
 
 	return 0;
+
+disable_exit:
+	vfio_pci_disable(vdev);
+	return ret;
 }
 
 static void vfio_pci_disable(struct vfio_pci_device *vdev)
diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c
new file mode 100644
index 0000000..e8e06c3
--- /dev/null
+++ b/drivers/vfio/pci/vfio_pci_nvlink2.c
@@ -0,0 +1,448 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * VFIO PCI NVIDIA Witherspoon GPU support a.k.a. NVLink2.
+ *
+ * Copyright (C) 2018 IBM Corp.  All rights reserved.
+ *     Author: Alexey Kardashevskiy <aik@ozlabs.ru>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Register an on-GPU RAM region for cacheable access.
+ *
+ * Derived from original vfio_pci_igd.c:
+ * Copyright (C) 2016 Red Hat, Inc.  All rights reserved.
+ *	Author: Alex Williamson <alex.williamson@redhat.com>
+ */
+
+#include <linux/io.h>
+#include <linux/pci.h>
+#include <linux/uaccess.h>
+#include <linux/vfio.h>
+#include <linux/sched/mm.h>
+#include <linux/mmu_context.h>
+#include <asm/kvm_ppc.h>
+#include "vfio_pci_private.h"
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+EXPORT_TRACEPOINT_SYMBOL_GPL(vfio_pci_nvgpu_mmap_fault);
+EXPORT_TRACEPOINT_SYMBOL_GPL(vfio_pci_nvgpu_mmap);
+EXPORT_TRACEPOINT_SYMBOL_GPL(vfio_pci_npu2_mmap);
+
+struct vfio_pci_nvgpu_data {
+	unsigned long gpu_hpa; /* GPU RAM physical address */
+	unsigned long gpu_tgt; /* TGT address of corresponding GPU RAM */
+	unsigned long useraddr; /* GPU RAM userspace address */
+	unsigned long size; /* Size of the GPU RAM window (usually 128GB) */
+	void *base; /* GPU RAM virtual address, for emulated access */
+	struct mm_struct *mm;
+	struct mm_iommu_table_group_mem_t *mem; /* Pre-registered RAM descr. */
+	struct pci_dev *gpdev;
+	struct notifier_block group_notifier;
+};
+
+static size_t vfio_pci_nvgpu_rw(struct vfio_pci_device *vdev,
+		char __user *buf, size_t count, loff_t *ppos, bool iswrite)
+{
+	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
+	struct vfio_pci_nvgpu_data *data = vdev->region[i].data;
+	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+
+	if (pos >= vdev->region[i].size)
+		return -EINVAL;
+
+	count = min(count, (size_t)(vdev->region[i].size - pos));
+
+	if (iswrite) {
+		if (copy_from_user(data->base + pos, buf, count))
+			return -EFAULT;
+	} else {
+		if (copy_to_user(buf, data->base + pos, count))
+			return -EFAULT;
+	}
+	*ppos += count;
+
+	return count;
+}
+
+static void vfio_pci_nvgpu_release(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region)
+{
+	struct vfio_pci_nvgpu_data *data = region->data;
+	long ret;
+
+	/* If there were any mappings at all... */
+	if (data->mm) {
+		ret = mm_iommu_put(data->mm, data->mem);
+		WARN_ON(ret);
+
+		mmdrop(data->mm);
+	}
+
+	vfio_unregister_notifier(&data->gpdev->dev, VFIO_GROUP_NOTIFY,
+			&data->group_notifier);
+
+	pnv_npu2_unmap_lpar_dev(data->gpdev);
+
+	memunmap(data->base);
+	kfree(data);
+}
+
+static vm_fault_t vfio_pci_nvgpu_mmap_fault(struct vm_fault *vmf)
+{
+	vm_fault_t ret;
+	struct vm_area_struct *vma = vmf->vma;
+	struct vfio_pci_region *region = vma->vm_private_data;
+	struct vfio_pci_nvgpu_data *data = region->data;
+	unsigned long vmf_off = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
+	unsigned long nv2pg = data->gpu_hpa >> PAGE_SHIFT;
+	unsigned long vm_pgoff = vma->vm_pgoff &
+		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+	unsigned long pfn = nv2pg + vm_pgoff + vmf_off;
+
+	ret = vmf_insert_pfn(vma, vmf->address, pfn);
+	trace_vfio_pci_nvgpu_mmap_fault(data->gpdev, pfn << PAGE_SHIFT,
+			vmf->address, ret);
+
+	return ret;
+}
+
+static const struct vm_operations_struct vfio_pci_nvgpu_mmap_vmops = {
+	.fault = vfio_pci_nvgpu_mmap_fault,
+};
+
+static int vfio_pci_nvgpu_mmap(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region, struct vm_area_struct *vma)
+{
+	long ret;
+	struct vfio_pci_nvgpu_data *data = region->data;
+
+	if (data->useraddr)
+		return -EPERM;
+
+	if (vma->vm_end - vma->vm_start > data->size)
+		return -EINVAL;
+
+	vma->vm_private_data = region;
+	vma->vm_flags |= VM_PFNMAP;
+	vma->vm_ops = &vfio_pci_nvgpu_mmap_vmops;
+
+	/*
+	 * mm_iommu_newdev() is called here just once, while the region
+	 * is not registered yet, so the actual initialization happens now.
+	 * Later callers use mm_iommu_find() which returns the already
+	 * registered @mem and does not gup() again.
+	 */
+	data->useraddr = vma->vm_start;
+	data->mm = current->mm;
+
+	atomic_inc(&data->mm->mm_count);
+	ret = mm_iommu_newdev(data->mm, data->useraddr,
+			(vma->vm_end - vma->vm_start) >> PAGE_SHIFT,
+			data->gpu_hpa, &data->mem);
+
+	trace_vfio_pci_nvgpu_mmap(vdev->pdev, data->gpu_hpa, data->useraddr,
+			vma->vm_end - vma->vm_start, ret);
+
+	return ret;
+}
+
+static int vfio_pci_nvgpu_add_capability(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region, struct vfio_info_cap *caps)
+{
+	struct vfio_pci_nvgpu_data *data = region->data;
+	struct vfio_region_info_cap_npu2 cap;
+
+	cap.header.id = VFIO_REGION_INFO_CAP_NPU2;
+	cap.header.version = 1;
+	cap.tgt = data->gpu_tgt;
+
+	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+}
+
+static const struct vfio_pci_regops vfio_pci_nvgpu_regops = {
+	.rw = vfio_pci_nvgpu_rw,
+	.release = vfio_pci_nvgpu_release,
+	.mmap = vfio_pci_nvgpu_mmap,
+	.add_capability = vfio_pci_nvgpu_add_capability,
+};
+
+static int vfio_pci_nvgpu_group_notifier(struct notifier_block *nb,
+		unsigned long action, void *opaque)
+{
+	struct kvm *kvm = opaque;
+	struct vfio_pci_nvgpu_data *data = container_of(nb,
+			struct vfio_pci_nvgpu_data,
+			group_notifier);
+
+	if (action == VFIO_GROUP_NOTIFY_SET_KVM && kvm &&
+			pnv_npu2_map_lpar_dev(data->gpdev,
+				kvm->arch.lpid, MSR_DR | MSR_PR))
+		return NOTIFY_BAD;
+
+	return NOTIFY_OK;
+}
+
+int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
+{
+	int ret;
+	u64 reg[2];
+	u64 tgt = 0;
+	struct device_node *npu_node, *mem_node;
+	struct pci_dev *npu_dev;
+	struct vfio_pci_nvgpu_data *data;
+	uint32_t mem_phandle = 0;
+	unsigned long events = VFIO_GROUP_NOTIFY_SET_KVM;
+
+	/*
+	 * PCI config space does not tell us about NVLink presence but
+	 * the platform does, so use that.
+	 */
+	npu_dev = pnv_pci_get_npu_dev(vdev->pdev, 0);
+	if (!npu_dev)
+		return -ENODEV;
+
+	npu_node = pci_device_to_OF_node(npu_dev);
+	if (!npu_node)
+		return -EINVAL;
+
+	if (of_property_read_u32(npu_node, "memory-region", &mem_phandle))
+		return -EINVAL;
+
+	mem_node = of_find_node_by_phandle(mem_phandle);
+	if (!mem_node)
+		return -EINVAL;
+
+	if (of_property_read_variable_u64_array(mem_node, "reg", reg,
+				ARRAY_SIZE(reg), ARRAY_SIZE(reg)) !=
+			ARRAY_SIZE(reg))
+		return -EINVAL;
+
+	if (of_property_read_u64(npu_node, "ibm,device-tgt-addr", &tgt)) {
+		dev_warn(&vdev->pdev->dev, "No ibm,device-tgt-addr found\n");
+		return -EFAULT;
+	}
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->gpu_hpa = reg[0];
+	data->gpu_tgt = tgt;
+	data->size = reg[1];
+	data->base = memremap(data->gpu_hpa, data->size, MEMREMAP_WB);
+	if (!data->base) {
+		ret = -ENOMEM;
+		goto free_exit;
+	}
+
+	dev_dbg(&vdev->pdev->dev, "%lx..%lx\n", data->gpu_hpa,
+			data->gpu_hpa + data->size - 1);
+
+	data->gpdev = vdev->pdev;
+	data->group_notifier.notifier_call = vfio_pci_nvgpu_group_notifier;
+
+	ret = vfio_register_notifier(&data->gpdev->dev, VFIO_GROUP_NOTIFY,
+			&events, &data->group_notifier);
+	if (ret)
+		goto free_exit;
+
+	/*
+	 * We have just set KVM, we do not need the listener anymore.
+	 * Also, keeping it registered means that if more than one GPU is
+	 * assigned, we will get several similar notifiers notifying about
+	 * the same device again which does not help with anything.
+	 */
+	vfio_unregister_notifier(&data->gpdev->dev, VFIO_GROUP_NOTIFY,
+			&data->group_notifier);
+
+	ret = vfio_pci_register_dev_region(vdev,
+			PCI_VENDOR_ID_NVIDIA | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
+			VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM,
+			&vfio_pci_nvgpu_regops,
+			data->size,
+			VFIO_REGION_INFO_FLAG_READ |
+			VFIO_REGION_INFO_FLAG_WRITE |
+			VFIO_REGION_INFO_FLAG_MMAP,
+			data);
+	if (ret)
+		goto free_exit;
+
+	return 0;
+free_exit:
+	kfree(data);
+
+	return ret;
+}
+
+/*
+ * IBM NPU2 bridge
+ */
+struct vfio_pci_npu2_data {
+	void *base; /* ATSD register virtual address, for emulated access */
+	unsigned long mmio_atsd; /* ATSD physical address */
+	unsigned long gpu_tgt; /* TGT address of corresponding GPU RAM */
+	unsigned int link_speed; /* The link speed from DT's ibm,nvlink-speed */
+};
+
+static size_t vfio_pci_npu2_rw(struct vfio_pci_device *vdev,
+		char __user *buf, size_t count, loff_t *ppos, bool iswrite)
+{
+	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
+	struct vfio_pci_npu2_data *data = vdev->region[i].data;
+	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+
+	if (pos >= vdev->region[i].size)
+		return -EINVAL;
+
+	count = min(count, (size_t)(vdev->region[i].size - pos));
+
+	if (iswrite) {
+		if (copy_from_user(data->base + pos, buf, count))
+			return -EFAULT;
+	} else {
+		if (copy_to_user(buf, data->base + pos, count))
+			return -EFAULT;
+	}
+	*ppos += count;
+
+	return count;
+}
+
+static int vfio_pci_npu2_mmap(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region, struct vm_area_struct *vma)
+{
+	int ret;
+	struct vfio_pci_npu2_data *data = region->data;
+	unsigned long req_len = vma->vm_end - vma->vm_start;
+
+	if (req_len != PAGE_SIZE)
+		return -EINVAL;
+
+	vma->vm_flags |= VM_PFNMAP;
+	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	ret = remap_pfn_range(vma, vma->vm_start, data->mmio_atsd >> PAGE_SHIFT,
+			req_len, vma->vm_page_prot);
+	trace_vfio_pci_npu2_mmap(vdev->pdev, data->mmio_atsd, vma->vm_start,
+			vma->vm_end - vma->vm_start, ret);
+
+	return ret;
+}
+
+static void vfio_pci_npu2_release(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region)
+{
+	struct vfio_pci_npu2_data *data = region->data;
+
+	memunmap(data->base);
+	kfree(data);
+}
+
+static int vfio_pci_npu2_add_capability(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region, struct vfio_info_cap *caps)
+{
+	struct vfio_pci_npu2_data *data = region->data;
+	struct vfio_region_info_cap_npu2 cap;
+
+	cap.header.id = VFIO_REGION_INFO_CAP_NPU2;
+	cap.header.version = 1;
+	cap.tgt = data->gpu_tgt;
+	cap.link_speed = data->link_speed;
+
+	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+}
+
+static const struct vfio_pci_regops vfio_pci_npu2_regops = {
+	.rw = vfio_pci_npu2_rw,
+	.mmap = vfio_pci_npu2_mmap,
+	.release = vfio_pci_npu2_release,
+	.add_capability = vfio_pci_npu2_add_capability,
+};
+
+int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
+{
+	int ret;
+	struct vfio_pci_npu2_data *data;
+	struct device_node *nvlink_dn;
+	u32 nvlink_index = 0;
+	struct pci_dev *npdev = vdev->pdev;
+	struct device_node *npu_node = pci_device_to_OF_node(npdev);
+	struct pci_controller *hose = pci_bus_to_host(npdev->bus);
+	u64 mmio_atsd = 0;
+	u64 tgt = 0;
+	u32 link_speed = 0xff;
+
+	/*
+	 * PCI config space does not tell us about NVLink presence but
+	 * the platform does, so use that.
+	 */
+	if (!pnv_pci_get_gpu_dev(vdev->pdev))
+		return -ENODEV;
+
+	/*
+	 * NPU2 normally has 8 ATSD registers (for concurrency) and 6 links
+	 * so we can allocate one register per link.
+	 * Since skiboot only exposes one (a bug), use this as a fallback
+	 * which is safe as we do not split GPUs attached to the same NPU.
+	 */
+	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
+	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
+			&nvlink_index)))
+		return -ENODEV;
+
+	if (of_property_read_u64_index(hose->dn, "ibm,mmio-atsd", nvlink_index,
+			&mmio_atsd)) {
+		if (of_property_read_u64_index(hose->dn, "ibm,mmio-atsd", 0,
+					&mmio_atsd)) {
+			dev_warn(&vdev->pdev->dev, "No ATSD found\n");
+			return -EFAULT;
+		}
+		dev_warn(&vdev->pdev->dev, "Fallback to ATSD#0\n");
+	}
+
+	if (of_property_read_u64(npu_node, "ibm,device-tgt-addr", &tgt)) {
+		dev_warn(&vdev->pdev->dev, "No ibm,device-tgt-addr found\n");
+		return -EFAULT;
+	}
+
+	if (of_property_read_u32(npu_node, "ibm,nvlink-speed", &link_speed)) {
+		dev_warn(&vdev->pdev->dev, "No ibm,nvlink-speed found\n");
+		return -EFAULT;
+	}
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->mmio_atsd = mmio_atsd;
+	data->gpu_tgt = tgt;
+	data->link_speed = link_speed;
+	data->base = memremap(data->mmio_atsd, SZ_64K, MEMREMAP_WT);
+	if (!data->base) {
+		ret = -ENOMEM;
+		goto free_exit;
+	}
+
+	ret = vfio_pci_register_dev_region(vdev,
+			PCI_VENDOR_ID_IBM | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
+			VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD,
+			&vfio_pci_npu2_regops,
+			PAGE_SIZE,
+			VFIO_REGION_INFO_FLAG_READ |
+			VFIO_REGION_INFO_FLAG_WRITE |
+			VFIO_REGION_INFO_FLAG_MMAP,
+			data);
+	if (ret)
+		goto free_exit;
+
+	return 0;
+
+free_exit:
+	kfree(data);
+
+	return ret;
+}
diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
index 42dc1d3..d0f8e4f 100644
--- a/drivers/vfio/pci/Kconfig
+++ b/drivers/vfio/pci/Kconfig
@@ -38,3 +38,9 @@ config VFIO_PCI_IGD
 	  and LPC bridge config space.
 
 	  To enable Intel IGD assignment through vfio-pci, say Y.
+
+config VFIO_PCI_NVLINK2
+	def_bool y
+	depends on VFIO_PCI && PPC_POWERNV
+	help
+	  VFIO PCI support for P9 Witherspoon machine with NVIDIA V100 GPUs
-- 
2.17.1



* [PATCH kernel v4 19/19] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver
@ 2018-11-23  5:53   ` Alexey Kardashevskiy
  0 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-11-23  5:53 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
	Daniel Henrique Barboza, Alex Williamson, kvm-ppc, Sam Bobroff,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
pluggable PCIe devices but still have PCIe links which are used
for config space and MMIO. In addition to that the GPUs have 6 NVLinks
which are connected to other GPUs and the POWER9 CPU. POWER9 chips
have a special unit on a die called an NPU which is an NVLink2 host bus
adapter with p2p connections to 2 to 3 GPUs, 3 or 2 NVLinks to each.
These systems also support ATS (address translation services) which is
a part of the NVLink2 protocol. The GPUs also expose their on-board RAM
(16GB or 32GB) to the system via the same NVLink2 so the CPU has
cache-coherent access to the GPU RAM.

This exports GPU RAM to userspace as a new VFIO device region. The new
memory is preregistered as device memory as it might be used for DMA.
The pfns are inserted from the fault handler because the GPU memory is
not onlined until the vendor driver has loaded and trained the NVLinks;
doing it earlier causes low level errors which the firmware fences off
so they do not hurt the host system, but they are still better avoided.
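
The pfn insertion mentioned above is plain page arithmetic. A minimal
sketch of it in standalone C (VFIO_PCI_OFFSET_SHIFT and the 64k
PAGE_SHIFT are assumptions mirroring vfio-pci internals and the POWER9
host configuration, not part of this patch's uapi):

```c
#include <assert.h>

/* Assumptions, not uapi: vfio-pci encodes the region index in the top
 * bits of the file offset (shift 40), and these hosts run 64k pages. */
#define VFIO_PCI_OFFSET_SHIFT	40
#define PAGE_SHIFT		16

/* Same arithmetic as the nvgpu fault handler: base GPU RAM pfn,
 * plus the mmap page offset with the region-index bits masked off,
 * plus the faulting page's offset within the VMA. */
static unsigned long nvgpu_fault_pfn(unsigned long gpu_hpa,
				     unsigned long vm_pgoff,
				     unsigned long vm_start,
				     unsigned long fault_addr)
{
	unsigned long vmf_off = (fault_addr - vm_start) >> PAGE_SHIFT;
	unsigned long nv2pg = gpu_hpa >> PAGE_SHIFT;
	unsigned long pgoff = vm_pgoff &
		((1UL << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);

	return nv2pg + pgoff + vmf_off;
}
```

So a fault one 64k page into a VMA backed by GPU RAM at 4GB resolves to
the pfn one page past the GPU RAM base.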

This also exports an ATSD (Address Translation Shootdown) register of the
NPU which allows the operating system to invalidate TLB entries inside
the GPU. The register conveniently occupies a single 64k page and is
presented to userspace as another new VFIO device region.
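
Both new regions live past the fixed vfio-pci region indexes, and
userspace turns a region index into a file offset for read()/write()/
mmap(). A sketch of that encoding (the 40-bit VFIO_PCI_OFFSET_SHIFT is
an assumption mirroring vfio-pci's private header, not uapi):

```c
#include <assert.h>

/* Assumed from vfio-pci internals: each region's file offset is its
 * index shifted left by 40 bits, leaving 40 bits of in-region offset. */
#define VFIO_PCI_OFFSET_SHIFT	40

static unsigned long long region_index_to_offset(unsigned int index)
{
	return (unsigned long long)index << VFIO_PCI_OFFSET_SHIFT;
}

static unsigned int region_offset_to_index(unsigned long long offset)
{
	return (unsigned int)(offset >> VFIO_PCI_OFFSET_SHIFT);
}
```

The kernel side does the reverse of this in the rw handlers below, via
VFIO_PCI_OFFSET_TO_INDEX and VFIO_PCI_OFFSET_MASK.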

In order to provide the userspace with the information about GPU-to-NVLink
connections, this exports an additional capability called "tgt"
(which is an abbreviated host system bus address). The "tgt" property
tells the GPU its own system address and allows the guest driver to
conglomerate the routing information so each GPU knows how to get directly
to the other GPUs.
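
Userspace (e.g. QEMU) would read this capability by walking the
capability chain that VFIO_DEVICE_GET_REGION_INFO returns for the
region. A self-contained sketch of such a walk, with struct layouts
mirroring the vfio.h additions in this patch; the offset convention
(relative to the buffer start) and the values are illustrative
assumptions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirrors of the uapi structures added by this patch. */
struct vfio_info_cap_header {
	uint16_t id;
	uint16_t version;
	uint32_t next;	/* offset of the next capability, 0 ends the chain */
};

#define VFIO_REGION_INFO_CAP_NPU2	4

struct vfio_region_info_cap_npu2 {
	struct vfio_info_cap_header header;
	uint64_t tgt;
	uint32_t link_speed;
	uint32_t __pad;
};

/* Walk the capability chain in @buf starting at @first_cap_off and
 * return the "tgt" system bus address, or 0 if the cap is absent. */
static uint64_t find_npu2_tgt(const void *buf, uint32_t first_cap_off)
{
	uint32_t off = first_cap_off;

	while (off) {
		struct vfio_info_cap_header hdr;

		memcpy(&hdr, (const char *)buf + off, sizeof(hdr));
		if (hdr.id == VFIO_REGION_INFO_CAP_NPU2) {
			struct vfio_region_info_cap_npu2 cap;

			memcpy(&cap, (const char *)buf + off, sizeof(cap));
			return cap.tgt;
		}
		off = hdr.next;
	}
	return 0;
}
```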

For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
know LPID (a logical partition ID or a KVM guest hardware ID in other
words) and PID (a memory context ID of a userspace process, not to be
confused with a linux pid). This assigns the GPU to an LPID in the NPU,
which is why this adds a KVM listener on the IOMMU group. The PID comes
via NVLink from the GPU, and the NPU uses a PID wildcard to pass it through.

This requires coherent memory and ATSD to be available on the host as
the GPU vendor only supports configurations with both features enabled;
other configurations are known not to work. Because of this, and
because of the way the features are advertised to the host system
(a device tree with very platform specific properties),
this requires the POWERNV platform to be enabled.

The V100 GPUs do not advertise any of these capabilities via the config
space, and there is more than one device ID, so this relies on
the platform to tell whether these GPUs have special abilities such as
NVLinks.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* added nvlink-speed to the NPU bridge capability as this turned out
not to be a constant value
* instead of looking at the exact device ID (which also changes from system
to system), now this (indirectly) looks at the device tree to know
if GPU and NPU support NVLink

v3:
* reworded the commit log about tgt
* added tracepoints (do we want them enabled for entire vfio-pci?)
* added code comments
* added write|mmap flags to the new regions
* auto enabled VFIO_PCI_NVLINK2 config option
* added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
references; these are required by the NVIDIA driver
* keep notifier registered only for short time
---
 drivers/vfio/pci/Makefile           |   1 +
 drivers/vfio/pci/trace.h            | 102 +++++++
 drivers/vfio/pci/vfio_pci_private.h |   2 +
 include/uapi/linux/vfio.h           |  27 ++
 drivers/vfio/pci/vfio_pci.c         |  37 ++-
 drivers/vfio/pci/vfio_pci_nvlink2.c | 448 ++++++++++++++++++++++++++++
 drivers/vfio/pci/Kconfig            |   6 +
 7 files changed, 621 insertions(+), 2 deletions(-)
 create mode 100644 drivers/vfio/pci/trace.h
 create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c

diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index 76d8ec0..9662c06 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -1,5 +1,6 @@
 
 vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
 vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
+vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
 
 obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
diff --git a/drivers/vfio/pci/trace.h b/drivers/vfio/pci/trace.h
new file mode 100644
index 0000000..b80d2d3
--- /dev/null
+++ b/drivers/vfio/pci/trace.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * VFIO PCI mmap/mmap_fault tracepoints
+ *
+ * Copyright (C) 2018 IBM Corp.  All rights reserved.
+ *     Author: Alexey Kardashevskiy <aik@ozlabs.ru>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM vfio_pci
+
+#if !defined(_TRACE_VFIO_PCI_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_VFIO_PCI_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(vfio_pci_nvgpu_mmap_fault,
+	TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua,
+			vm_fault_t ret),
+	TP_ARGS(pdev, hpa, ua, ret),
+
+	TP_STRUCT__entry(
+		__field(const char *, name)
+		__field(unsigned long, hpa)
+		__field(unsigned long, ua)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->name = dev_name(&pdev->dev);
+		__entry->hpa = hpa;
+		__entry->ua = ua;
+		__entry->ret = ret;
+	),
+
+	TP_printk("%s: %lx -> %lx ret=%d", __entry->name, __entry->hpa,
+			__entry->ua, __entry->ret)
+);
+
+TRACE_EVENT(vfio_pci_nvgpu_mmap,
+	TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua,
+			unsigned long size, int ret),
+	TP_ARGS(pdev, hpa, ua, size, ret),
+
+	TP_STRUCT__entry(
+		__field(const char *, name)
+		__field(unsigned long, hpa)
+		__field(unsigned long, ua)
+		__field(unsigned long, size)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->name = dev_name(&pdev->dev);
+		__entry->hpa = hpa;
+		__entry->ua = ua;
+		__entry->size = size;
+		__entry->ret = ret;
+	),
+
+	TP_printk("%s: %lx -> %lx size=%lx ret=%d", __entry->name, __entry->hpa,
+			__entry->ua, __entry->size, __entry->ret)
+);
+
+TRACE_EVENT(vfio_pci_npu2_mmap,
+	TP_PROTO(struct pci_dev *pdev, unsigned long hpa, unsigned long ua,
+			unsigned long size, int ret),
+	TP_ARGS(pdev, hpa, ua, size, ret),
+
+	TP_STRUCT__entry(
+		__field(const char *, name)
+		__field(unsigned long, hpa)
+		__field(unsigned long, ua)
+		__field(unsigned long, size)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->name = dev_name(&pdev->dev);
+		__entry->hpa = hpa;
+		__entry->ua = ua;
+		__entry->size = size;
+		__entry->ret = ret;
+	),
+
+	TP_printk("%s: %lx -> %lx size=%lx ret=%d", __entry->name, __entry->hpa,
+			__entry->ua, __entry->size, __entry->ret)
+);
+
+#endif /* _TRACE_VFIO_PCI_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 93c1738..7639241 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -163,4 +163,6 @@ static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev)
 	return -ENODEV;
 }
 #endif
+extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
+extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
 #endif /* VFIO_PCI_PRIVATE_H */
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 8131028..547e71e 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -353,6 +353,20 @@ struct vfio_region_gfx_edid {
 #define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
 };
 
+/* 10de vendor sub-type
+ *
+ * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
+ */
+#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
+
+/*
+ * 1014 vendor sub-type
+ *
+ * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
+ * to do TLB invalidation on a GPU.
+ */
+#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
+
 /*
  * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
  * which allows direct access to non-MSIX registers which happened to be within
@@ -363,6 +377,19 @@ struct vfio_region_gfx_edid {
  */
 #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
 
+/*
+ * Capability with compressed real address (aka SSA - small system address)
+ * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
+ */
+#define VFIO_REGION_INFO_CAP_NPU2		4
+
+struct vfio_region_info_cap_npu2 {
+	struct vfio_info_cap_header header;
+	__u64 tgt;
+	__u32 link_speed;
+	__u32 __pad;
+};
+
 /**
  * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
  *				    struct vfio_irq_info)
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 6cb70cf..b8a53f9 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -224,6 +224,16 @@ static bool vfio_pci_nointx(struct pci_dev *pdev)
 	return false;
 }
 
+int __weak vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
+{
+	return -ENODEV;
+}
+
+int __weak vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
+{
+	return -ENODEV;
+}
+
 static int vfio_pci_enable(struct vfio_pci_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
@@ -302,14 +312,37 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
 		if (ret) {
 			dev_warn(&vdev->pdev->dev,
 				 "Failed to setup Intel IGD regions\n");
-			vfio_pci_disable(vdev);
-			return ret;
+			goto disable_exit;
+		}
+	}
+
+	if (pdev->vendor == PCI_VENDOR_ID_NVIDIA &&
+	    IS_ENABLED(CONFIG_VFIO_PCI_NVLINK2)) {
+		ret = vfio_pci_nvdia_v100_nvlink2_init(vdev);
+		if (ret && ret != -ENODEV) {
+			dev_warn(&vdev->pdev->dev,
+				 "Failed to setup NVIDIA NV2 RAM region\n");
+			goto disable_exit;
+		}
+	}
+
+	if (pdev->vendor == PCI_VENDOR_ID_IBM &&
+	    IS_ENABLED(CONFIG_VFIO_PCI_NVLINK2)) {
+		ret = vfio_pci_ibm_npu2_init(vdev);
+		if (ret && ret != -ENODEV) {
+			dev_warn(&vdev->pdev->dev,
+					"Failed to setup NVIDIA NV2 ATSD region\n");
+			goto disable_exit;
 		}
 	}
 
 	vfio_pci_probe_mmaps(vdev);
 
 	return 0;
+
+disable_exit:
+	vfio_pci_disable(vdev);
+	return ret;
 }
 
 static void vfio_pci_disable(struct vfio_pci_device *vdev)
diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c
new file mode 100644
index 0000000..e8e06c3
--- /dev/null
+++ b/drivers/vfio/pci/vfio_pci_nvlink2.c
@@ -0,0 +1,448 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * VFIO PCI NVIDIA Witherspoon GPU support a.k.a. NVLink2.
+ *
+ * Copyright (C) 2018 IBM Corp.  All rights reserved.
+ *     Author: Alexey Kardashevskiy <aik@ozlabs.ru>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Register an on-GPU RAM region for cacheable access.
+ *
+ * Derived from original vfio_pci_igd.c:
+ * Copyright (C) 2016 Red Hat, Inc.  All rights reserved.
+ *	Author: Alex Williamson <alex.williamson@redhat.com>
+ */
+
+#include <linux/io.h>
+#include <linux/pci.h>
+#include <linux/uaccess.h>
+#include <linux/vfio.h>
+#include <linux/sched/mm.h>
+#include <linux/mmu_context.h>
+#include <asm/kvm_ppc.h>
+#include "vfio_pci_private.h"
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+EXPORT_TRACEPOINT_SYMBOL_GPL(vfio_pci_nvgpu_mmap_fault);
+EXPORT_TRACEPOINT_SYMBOL_GPL(vfio_pci_nvgpu_mmap);
+EXPORT_TRACEPOINT_SYMBOL_GPL(vfio_pci_npu2_mmap);
+
+struct vfio_pci_nvgpu_data {
+	unsigned long gpu_hpa; /* GPU RAM physical address */
+	unsigned long gpu_tgt; /* TGT address of corresponding GPU RAM */
+	unsigned long useraddr; /* GPU RAM userspace address */
+	unsigned long size; /* Size of the GPU RAM window (usually 128GB) */
+	void *base; /* GPU RAM virtual address, for emulated access */
+	struct mm_struct *mm;
+	struct mm_iommu_table_group_mem_t *mem; /* Pre-registered RAM descr. */
+	struct pci_dev *gpdev;
+	struct notifier_block group_notifier;
+};
+
+static size_t vfio_pci_nvgpu_rw(struct vfio_pci_device *vdev,
+		char __user *buf, size_t count, loff_t *ppos, bool iswrite)
+{
+	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
+	struct vfio_pci_nvgpu_data *data = vdev->region[i].data;
+	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+
+	if (pos >= vdev->region[i].size)
+		return -EINVAL;
+
+	count = min(count, (size_t)(vdev->region[i].size - pos));
+
+	if (iswrite) {
+		if (copy_from_user(data->base + pos, buf, count))
+			return -EFAULT;
+	} else {
+		if (copy_to_user(buf, data->base + pos, count))
+			return -EFAULT;
+	}
+	*ppos += count;
+
+	return count;
+}
+
+static void vfio_pci_nvgpu_release(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region)
+{
+	struct vfio_pci_nvgpu_data *data = region->data;
+	long ret;
+
+	/* If there were any mappings at all... */
+	if (data->mm) {
+		ret = mm_iommu_put(data->mm, data->mem);
+		WARN_ON(ret);
+
+		mmdrop(data->mm);
+	}
+
+	vfio_unregister_notifier(&data->gpdev->dev, VFIO_GROUP_NOTIFY,
+			&data->group_notifier);
+
+	pnv_npu2_unmap_lpar_dev(data->gpdev);
+
+	memunmap(data->base);
+	kfree(data);
+}
+
+static vm_fault_t vfio_pci_nvgpu_mmap_fault(struct vm_fault *vmf)
+{
+	vm_fault_t ret;
+	struct vm_area_struct *vma = vmf->vma;
+	struct vfio_pci_region *region = vma->vm_private_data;
+	struct vfio_pci_nvgpu_data *data = region->data;
+	unsigned long vmf_off = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
+	unsigned long nv2pg = data->gpu_hpa >> PAGE_SHIFT;
+	unsigned long vm_pgoff = vma->vm_pgoff &
+		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+	unsigned long pfn = nv2pg + vm_pgoff + vmf_off;
+
+	ret = vmf_insert_pfn(vma, vmf->address, pfn);
+	trace_vfio_pci_nvgpu_mmap_fault(data->gpdev, pfn << PAGE_SHIFT,
+			vmf->address, ret);
+
+	return ret;
+}
+
+static const struct vm_operations_struct vfio_pci_nvgpu_mmap_vmops = {
+	.fault = vfio_pci_nvgpu_mmap_fault,
+};
+
+static int vfio_pci_nvgpu_mmap(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region, struct vm_area_struct *vma)
+{
+	long ret;
+	struct vfio_pci_nvgpu_data *data = region->data;
+
+	if (data->useraddr)
+		return -EPERM;
+
+	if (vma->vm_end - vma->vm_start > data->size)
+		return -EINVAL;
+
+	vma->vm_private_data = region;
+	vma->vm_flags |= VM_PFNMAP;
+	vma->vm_ops = &vfio_pci_nvgpu_mmap_vmops;
+
+	/*
+	 * mm_iommu_newdev() is called here just once, while the region
+	 * is not registered yet, so the actual initialization happens now.
+	 * Later callers use mm_iommu_find() which returns the already
+	 * registered @mem and does not gup() again.
+	 */
+	data->useraddr = vma->vm_start;
+	data->mm = current->mm;
+
+	atomic_inc(&data->mm->mm_count);
+	ret = mm_iommu_newdev(data->mm, data->useraddr,
+			(vma->vm_end - vma->vm_start) >> PAGE_SHIFT,
+			data->gpu_hpa, &data->mem);
+
+	trace_vfio_pci_nvgpu_mmap(vdev->pdev, data->gpu_hpa, data->useraddr,
+			vma->vm_end - vma->vm_start, ret);
+
+	return ret;
+}
+
+static int vfio_pci_nvgpu_add_capability(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region, struct vfio_info_cap *caps)
+{
+	struct vfio_pci_nvgpu_data *data = region->data;
+	struct vfio_region_info_cap_npu2 cap;
+
+	cap.header.id = VFIO_REGION_INFO_CAP_NPU2;
+	cap.header.version = 1;
+	cap.tgt = data->gpu_tgt;
+
+	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+}
+
+static const struct vfio_pci_regops vfio_pci_nvgpu_regops = {
+	.rw = vfio_pci_nvgpu_rw,
+	.release = vfio_pci_nvgpu_release,
+	.mmap = vfio_pci_nvgpu_mmap,
+	.add_capability = vfio_pci_nvgpu_add_capability,
+};
+
+static int vfio_pci_nvgpu_group_notifier(struct notifier_block *nb,
+		unsigned long action, void *opaque)
+{
+	struct kvm *kvm = opaque;
+	struct vfio_pci_nvgpu_data *data = container_of(nb,
+			struct vfio_pci_nvgpu_data,
+			group_notifier);
+
+	if (action == VFIO_GROUP_NOTIFY_SET_KVM && kvm &&
+			pnv_npu2_map_lpar_dev(data->gpdev,
+				kvm->arch.lpid, MSR_DR | MSR_PR))
+		return NOTIFY_BAD;
+
+	return NOTIFY_OK;
+}
+
+int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
+{
+	int ret;
+	u64 reg[2];
+	u64 tgt = 0;
+	struct device_node *npu_node, *mem_node;
+	struct pci_dev *npu_dev;
+	struct vfio_pci_nvgpu_data *data;
+	uint32_t mem_phandle = 0;
+	unsigned long events = VFIO_GROUP_NOTIFY_SET_KVM;
+
+	/*
+	 * PCI config space does not tell us about NVLink presence but
+	 * the platform does, so use that.
+	 */
+	npu_dev = pnv_pci_get_npu_dev(vdev->pdev, 0);
+	if (!npu_dev)
+		return -ENODEV;
+
+	npu_node = pci_device_to_OF_node(npu_dev);
+	if (!npu_node)
+		return -EINVAL;
+
+	if (of_property_read_u32(npu_node, "memory-region", &mem_phandle))
+		return -EINVAL;
+
+	mem_node = of_find_node_by_phandle(mem_phandle);
+	if (!mem_node)
+		return -EINVAL;
+
+	if (of_property_read_variable_u64_array(mem_node, "reg", reg,
+				ARRAY_SIZE(reg), ARRAY_SIZE(reg)) !=
+			ARRAY_SIZE(reg))
+		return -EINVAL;
+
+	if (of_property_read_u64(npu_node, "ibm,device-tgt-addr", &tgt)) {
+		dev_warn(&vdev->pdev->dev, "No ibm,device-tgt-addr found\n");
+		return -EFAULT;
+	}
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->gpu_hpa = reg[0];
+	data->gpu_tgt = tgt;
+	data->size = reg[1];
+	data->base = memremap(data->gpu_hpa, data->size, MEMREMAP_WB);
+	if (!data->base) {
+		ret = -ENOMEM;
+		goto free_exit;
+	}
+
+	dev_dbg(&vdev->pdev->dev, "%lx..%lx\n", data->gpu_hpa,
+			data->gpu_hpa + data->size - 1);
+
+	data->gpdev = vdev->pdev;
+	data->group_notifier.notifier_call = vfio_pci_nvgpu_group_notifier;
+
+	ret = vfio_register_notifier(&data->gpdev->dev, VFIO_GROUP_NOTIFY,
+			&events, &data->group_notifier);
+	if (ret)
+		goto free_exit;
+
+	/*
+	 * We have just set KVM, we do not need the listener anymore.
+	 * Also, keeping it registered means that if more than one GPU is
+	 * assigned, we will get several similar notifiers notifying about
+	 * the same device again which does not help with anything.
+	 */
+	vfio_unregister_notifier(&data->gpdev->dev, VFIO_GROUP_NOTIFY,
+			&data->group_notifier);
+
+	ret = vfio_pci_register_dev_region(vdev,
+			PCI_VENDOR_ID_NVIDIA | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
+			VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM,
+			&vfio_pci_nvgpu_regops,
+			data->size,
+			VFIO_REGION_INFO_FLAG_READ |
+			VFIO_REGION_INFO_FLAG_WRITE |
+			VFIO_REGION_INFO_FLAG_MMAP,
+			data);
+	if (ret)
+		goto free_exit;
+
+	return 0;
+free_exit:
+	kfree(data);
+
+	return ret;
+}
+
+/*
+ * IBM NPU2 bridge
+ */
+struct vfio_pci_npu2_data {
+	void *base; /* ATSD register virtual address, for emulated access */
+	unsigned long mmio_atsd; /* ATSD physical address */
+	unsigned long gpu_tgt; /* TGT address of corresponding GPU RAM */
+	unsigned int link_speed; /* The link speed from DT's ibm,nvlink-speed */
+};
+
+static size_t vfio_pci_npu2_rw(struct vfio_pci_device *vdev,
+		char __user *buf, size_t count, loff_t *ppos, bool iswrite)
+{
+	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
+	struct vfio_pci_npu2_data *data = vdev->region[i].data;
+	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+
+	if (pos >= vdev->region[i].size)
+		return -EINVAL;
+
+	count = min(count, (size_t)(vdev->region[i].size - pos));
+
+	if (iswrite) {
+		if (copy_from_user(data->base + pos, buf, count))
+			return -EFAULT;
+	} else {
+		if (copy_to_user(buf, data->base + pos, count))
+			return -EFAULT;
+	}
+	*ppos += count;
+
+	return count;
+}
+
+static int vfio_pci_npu2_mmap(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region, struct vm_area_struct *vma)
+{
+	int ret;
+	struct vfio_pci_npu2_data *data = region->data;
+	unsigned long req_len = vma->vm_end - vma->vm_start;
+
+	if (req_len != PAGE_SIZE)
+		return -EINVAL;
+
+	vma->vm_flags |= VM_PFNMAP;
+	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	ret = remap_pfn_range(vma, vma->vm_start, data->mmio_atsd >> PAGE_SHIFT,
+			req_len, vma->vm_page_prot);
+	trace_vfio_pci_npu2_mmap(vdev->pdev, data->mmio_atsd, vma->vm_start,
+			vma->vm_end - vma->vm_start, ret);
+
+	return ret;
+}
+
+static void vfio_pci_npu2_release(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region)
+{
+	struct vfio_pci_npu2_data *data = region->data;
+
+	memunmap(data->base);
+	kfree(data);
+}
+
+static int vfio_pci_npu2_add_capability(struct vfio_pci_device *vdev,
+		struct vfio_pci_region *region, struct vfio_info_cap *caps)
+{
+	struct vfio_pci_npu2_data *data = region->data;
+	struct vfio_region_info_cap_npu2 cap;
+
+	cap.header.id = VFIO_REGION_INFO_CAP_NPU2;
+	cap.header.version = 1;
+	cap.tgt = data->gpu_tgt;
+	cap.link_speed = data->link_speed;
+
+	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+}
+
+static const struct vfio_pci_regops vfio_pci_npu2_regops = {
+	.rw = vfio_pci_npu2_rw,
+	.mmap = vfio_pci_npu2_mmap,
+	.release = vfio_pci_npu2_release,
+	.add_capability = vfio_pci_npu2_add_capability,
+};
+
+int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
+{
+	int ret;
+	struct vfio_pci_npu2_data *data;
+	struct device_node *nvlink_dn;
+	u32 nvlink_index = 0;
+	struct pci_dev *npdev = vdev->pdev;
+	struct device_node *npu_node = pci_device_to_OF_node(npdev);
+	struct pci_controller *hose = pci_bus_to_host(npdev->bus);
+	u64 mmio_atsd = 0;
+	u64 tgt = 0;
+	u32 link_speed = 0xff;
+
+	/*
+	 * PCI config space does not tell us about NVLink presence but
+	 * the platform does, so use that.
+	 */
+	if (!pnv_pci_get_gpu_dev(vdev->pdev))
+		return -ENODEV;
+
+	/*
+	 * NPU2 normally has 8 ATSD registers (for concurrency) and 6 links,
+	 * so we can allocate one register per link.
+	 * Since skiboot only exposes one (a bug), use it as a fallback,
+	 * which is safe as we do not split GPUs attached to the same NPU.
+	 */
+	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
+	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
+			&nvlink_index)))
+		return -ENODEV;
+
+	if (of_property_read_u64_index(hose->dn, "ibm,mmio-atsd", nvlink_index,
+			&mmio_atsd)) {
+		if (of_property_read_u64_index(hose->dn, "ibm,mmio-atsd", 0,
+					&mmio_atsd)) {
+			dev_warn(&vdev->pdev->dev, "No ATSD found\n");
+			return -EFAULT;
+		}
+		dev_warn(&vdev->pdev->dev, "Fallback to ATSD#0\n");
+	}
+
+	if (of_property_read_u64(npu_node, "ibm,device-tgt-addr", &tgt)) {
+		dev_warn(&vdev->pdev->dev, "No ibm,device-tgt-addr found\n");
+		return -EFAULT;
+	}
+
+	if (of_property_read_u32(npu_node, "ibm,nvlink-speed", &link_speed)) {
+		dev_warn(&vdev->pdev->dev, "No ibm,nvlink-speed found\n");
+		return -EFAULT;
+	}
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->mmio_atsd = mmio_atsd;
+	data->gpu_tgt = tgt;
+	data->link_speed = link_speed;
+	data->base = memremap(data->mmio_atsd, SZ_64K, MEMREMAP_WT);
+	if (!data->base) {
+		ret = -ENOMEM;
+		goto free_exit;
+	}
+
+	ret = vfio_pci_register_dev_region(vdev,
+			PCI_VENDOR_ID_IBM | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
+			VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD,
+			&vfio_pci_npu2_regops,
+			PAGE_SIZE,
+			VFIO_REGION_INFO_FLAG_READ |
+			VFIO_REGION_INFO_FLAG_WRITE |
+			VFIO_REGION_INFO_FLAG_MMAP,
+			data);
+	if (ret)
+		goto free_exit;
+
+	return 0;
+
+free_exit:
+	kfree(data);
+
+	return ret;
+}
diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
index 42dc1d3..d0f8e4f 100644
--- a/drivers/vfio/pci/Kconfig
+++ b/drivers/vfio/pci/Kconfig
@@ -38,3 +38,9 @@ config VFIO_PCI_IGD
 	  and LPC bridge config space.
 
 	  To enable Intel IGD assignment through vfio-pci, say Y.
+
+config VFIO_PCI_NVLINK2
+	def_bool y
+	depends on VFIO_PCI && PPC_POWERNV
+	help
+	  VFIO PCI support for the POWER9 Witherspoon machine with NVIDIA
+	  V100 GPUs
-- 
2.17.1
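In vfio_pci_ibm_npu2_init() above, the ATSD register for a link is read from the device tree by link index, with a fallback to entry 0 when the firmware exposes only one register. A minimal sketch of that selection logic in plain C (hypothetical names, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Pick the ATSD register for a given NVLink index, falling back to
 * entry 0 when the table is shorter than expected (e.g. skiboot
 * exposing a single register). Returns 0 on success, -1 when the
 * table is empty. Hypothetical helper mirroring the lookup in
 * vfio_pci_ibm_npu2_init().
 */
static int select_atsd(const unsigned long long *table, size_t count,
		       unsigned int link_index, unsigned long long *out)
{
	if (count == 0)
		return -1;		  /* no ATSD at all: hard failure */
	if (link_index < count)
		*out = table[link_index]; /* one register per link */
	else
		*out = table[0];	  /* fallback; safe as GPUs on one
					     NPU are not split */
	return 0;
}
```

The fallback matters only on firmware that exposes fewer entries than links; with a full table every link gets its own register.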

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* Re: [PATCH kernel v4 01/19] powerpc/ioda/npu: Call skiboot's hot reset hook when disabling NPU2
  2018-11-23  5:52   ` Alexey Kardashevskiy
@ 2018-12-05  4:21     ` David Gibson
  -1 siblings, 0 replies; 70+ messages in thread
From: David Gibson @ 2018-12-05  4:21 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab

[-- Attachment #1: Type: text/plain, Size: 2454 bytes --]

On Fri, Nov 23, 2018 at 04:52:46PM +1100, Alexey Kardashevskiy wrote:
> The skiboot firmware has a hot reset handler which fences the NVIDIA V100
> GPU RAM on Witherspoons and makes accesses no-op instead of throwing HMIs:
> https://github.com/open-power/skiboot/commit/fca2b2b839a67
> 
> Now we are going to pass the V100 via VFIO, which most certainly involves
> KVM guests; these are often terminated without getting a chance to offline
> GPU RAM, so we end up with a running machine with misconfigured memory.
> Accessing that memory produces hardware management interrupts (HMIs)
> which bring the host down.
> 
> To suppress HMIs, this wires the hot reset hook up to vfio_pci_disable()
> via pci_disable_device(), which switches the NPU2 to a safe mode and
> prevents HMIs.
> 
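The chain described above (vfio_pci_disable() calling pci_disable_device(), which reaches a platform disable_device hook) only works when every link is present, which is why the hook in the diff below NULL-checks each pointer. A hedged userspace sketch of that guard pattern, with hypothetical types standing in for eeh_dev/eeh_ops:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel's eeh_dev and eeh_ops. */
struct dev_state { int fenced; };
struct reset_ops { void (*reset)(struct dev_state *s); };

static void hot_reset(struct dev_state *s) { s->fenced = 1; }

/*
 * Mirror of pnv_npu_disable_device(): every pointer on the path may
 * be missing (no EEH device, no ops table, no reset hook), so each
 * is checked before the call; a missing piece makes the disable a
 * silent no-op rather than a crash.
 */
static void disable_device(struct dev_state *s, const struct reset_ops *ops)
{
	if (s && ops && ops->reset)
		ops->reset(s);
}
```

The same pattern lets platforms that have no such hook share the generic disable path unchanged.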
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Acked-by: Alistair Popple <alistair@popple.id.au>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
> Changes:
> v2:
> * updated the commit log
> ---
>  arch/powerpc/platforms/powernv/pci-ioda.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
> index 9ee7a30..29c6837 100644
> --- a/arch/powerpc/platforms/powernv/pci-ioda.c
> +++ b/arch/powerpc/platforms/powernv/pci-ioda.c
> @@ -3676,6 +3676,15 @@ static void pnv_pci_release_device(struct pci_dev *pdev)
>  		pnv_ioda_release_pe(pe);
>  }
>  
> +static void pnv_npu_disable_device(struct pci_dev *pdev)
> +{
> +	struct eeh_dev *edev = pci_dev_to_eeh_dev(pdev);
> +	struct eeh_pe *eehpe = edev ? edev->pe : NULL;
> +
> +	if (eehpe && eeh_ops && eeh_ops->reset)
> +		eeh_ops->reset(eehpe, EEH_RESET_HOT);
> +}
> +
>  static void pnv_pci_ioda_shutdown(struct pci_controller *hose)
>  {
>  	struct pnv_phb *phb = hose->private_data;
> @@ -3720,6 +3729,7 @@ static const struct pci_controller_ops pnv_npu_ioda_controller_ops = {
>  	.reset_secondary_bus	= pnv_pci_reset_secondary_bus,
>  	.dma_set_mask		= pnv_npu_dma_set_mask,
>  	.shutdown		= pnv_pci_ioda_shutdown,
> +	.disable_device		= pnv_npu_disable_device,
>  };
>  
>  static const struct pci_controller_ops pnv_npu_ocapi_ioda_controller_ops = {

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

* Re: [PATCH kernel v4 02/19] powerpc/mm/iommu/vfio_spapr_tce: Change mm_iommu_get to reference a region
  2018-11-23  5:52   ` Alexey Kardashevskiy
@ 2018-12-05  4:25     ` David Gibson
  -1 siblings, 0 replies; 70+ messages in thread
From: David Gibson @ 2018-12-05  4:25 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab

On Fri, Nov 23, 2018 at 04:52:47PM +1100, Alexey Kardashevskiy wrote:
> Normally mm_iommu_get() is supposed to add a reference and
> mm_iommu_put() to remove it. However, historically mm_iommu_find() did
> the referencing while mm_iommu_get() did both allocation and referencing.
> 
> We are going to add another helper to preregister device memory so
> instead of having mm_iommu_new() which pre-registers the normal memory
> and references the region, we need separate helpers for pre-registering
> and referencing.
> 
> This renames:
> - mm_iommu_get to mm_iommu_new;
> - mm_iommu_find to mm_iommu_get.
> 
> To make the mm_iommu_get name reflect what it is supposed to do, this
> changes mm_iommu_get() to reference the region, so from now on every
> mm_iommu_get() needs a matching mm_iommu_put().
> 
> This removes the check for exact match as the check for overlap is
> enough now.
> 
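The contract the rename establishes, where every successful get takes a reference that a matching put must drop and where creation rejects overlapping regions, can be sketched outside the kernel as follows (hypothetical structures, not the mm_iommu API; the real list is RCU-protected and guarded by a mutex):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical region list entry; ua and entries are both in page
 * units here for simplicity (the kernel shifts by PAGE_SHIFT).
 */
struct region {
	unsigned long ua;	/* userspace address (pages) */
	unsigned long entries;	/* size in pages */
	unsigned long used;	/* reference count */
};

/* Creation must reject [ua, ua+entries) overlapping an existing region. */
static int overlaps(const struct region *r, unsigned long ua,
		    unsigned long entries)
{
	return r->ua < ua + entries && ua < r->ua + r->entries;
}

/* "get": an exact match takes a reference; caller must later "put". */
static struct region *region_get(struct region *list, size_t n,
				 unsigned long ua, unsigned long entries)
{
	for (size_t i = 0; i < n; i++)
		if (list[i].ua == ua && list[i].entries == entries) {
			list[i].used++;
			return &list[i];
		}
	return NULL;
}

static void region_put(struct region *r)
{
	if (r && r->used)
		r->used--;
}
```

Because creation refuses any overlap, the separate exact-match check the patch removes becomes redundant: an overlapping-but-not-identical region can never have been registered in the first place.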
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
> Changes:
> v4:
> * squashed "powerpc/mm/iommu: Make mm_iommu_new() fail on existing regions" into this
> 
> v2:
> * merged 2 patches into one
> ---
>  arch/powerpc/include/asm/mmu_context.h |  4 +--
>  arch/powerpc/mm/mmu_context_iommu.c    | 19 +++++++------
>  drivers/vfio/vfio_iommu_spapr_tce.c    | 37 +++++++++++++++++---------
>  3 files changed, 35 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
> index 0381394..2d6b00d 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -21,7 +21,7 @@ struct mm_iommu_table_group_mem_t;
>  
>  extern int isolate_lru_page(struct page *page);	/* from internal.h */
>  extern bool mm_iommu_preregistered(struct mm_struct *mm);
> -extern long mm_iommu_get(struct mm_struct *mm,
> +extern long mm_iommu_new(struct mm_struct *mm,
>  		unsigned long ua, unsigned long entries,
>  		struct mm_iommu_table_group_mem_t **pmem);
>  extern long mm_iommu_put(struct mm_struct *mm,
> @@ -32,7 +32,7 @@ extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup(struct mm_struct *mm,
>  		unsigned long ua, unsigned long size);
>  extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm(
>  		struct mm_struct *mm, unsigned long ua, unsigned long size);
> -extern struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm,
> +extern struct mm_iommu_table_group_mem_t *mm_iommu_get(struct mm_struct *mm,
>  		unsigned long ua, unsigned long entries);
>  extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
>  		unsigned long ua, unsigned int pageshift, unsigned long *hpa);
> diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
> index 1d5161f..580d89e 100644
> --- a/arch/powerpc/mm/mmu_context_iommu.c
> +++ b/arch/powerpc/mm/mmu_context_iommu.c
> @@ -89,7 +89,7 @@ bool mm_iommu_preregistered(struct mm_struct *mm)
>  }
>  EXPORT_SYMBOL_GPL(mm_iommu_preregistered);
>  
> -long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
> +long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>  		struct mm_iommu_table_group_mem_t **pmem)
>  {
>  	struct mm_iommu_table_group_mem_t *mem;
> @@ -102,12 +102,6 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>  
>  	list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list,
>  			next) {
> -		if ((mem->ua == ua) && (mem->entries == entries)) {
> -			++mem->used;
> -			*pmem = mem;
> -			goto unlock_exit;
> -		}
> -
>  		/* Overlap? */
>  		if ((mem->ua < (ua + (entries << PAGE_SHIFT))) &&
>  				(ua < (mem->ua +
> @@ -202,7 +196,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>  
>  	return ret;
>  }
> -EXPORT_SYMBOL_GPL(mm_iommu_get);
> +EXPORT_SYMBOL_GPL(mm_iommu_new);
>  
>  static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>  {
> @@ -318,21 +312,26 @@ struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm(struct mm_struct *mm,
>  	return ret;
>  }
>  
> -struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm,
> +struct mm_iommu_table_group_mem_t *mm_iommu_get(struct mm_struct *mm,
>  		unsigned long ua, unsigned long entries)
>  {
>  	struct mm_iommu_table_group_mem_t *mem, *ret = NULL;
>  
> +	mutex_lock(&mem_list_mutex);
> +
>  	list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list, next) {
>  		if ((mem->ua == ua) && (mem->entries == entries)) {
>  			ret = mem;
> +			++mem->used;
>  			break;
>  		}
>  	}
>  
> +	mutex_unlock(&mem_list_mutex);
> +
>  	return ret;
>  }
> -EXPORT_SYMBOL_GPL(mm_iommu_find);
> +EXPORT_SYMBOL_GPL(mm_iommu_get);
>  
>  long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
>  		unsigned long ua, unsigned int pageshift, unsigned long *hpa)
> diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
> index ad63725..56db071 100644
> --- a/drivers/vfio/vfio_iommu_spapr_tce.c
> +++ b/drivers/vfio/vfio_iommu_spapr_tce.c
> @@ -151,12 +151,13 @@ static long tce_iommu_unregister_pages(struct tce_container *container,
>  {
>  	struct mm_iommu_table_group_mem_t *mem;
>  	struct tce_iommu_prereg *tcemem;
> -	bool found = false;
> +	bool found;
> +	long ret;
>  
>  	if ((vaddr & ~PAGE_MASK) || (size & ~PAGE_MASK))
>  		return -EINVAL;
>  
> -	mem = mm_iommu_find(container->mm, vaddr, size >> PAGE_SHIFT);
> +	mem = mm_iommu_get(container->mm, vaddr, size >> PAGE_SHIFT);
>  	if (!mem)
>  		return -ENOENT;
>  
> @@ -168,9 +169,13 @@ static long tce_iommu_unregister_pages(struct tce_container *container,
>  	}
>  
>  	if (!found)
> -		return -ENOENT;
> +		ret = -ENOENT;
> +	else
> +		ret = tce_iommu_prereg_free(container, tcemem);
>  
> -	return tce_iommu_prereg_free(container, tcemem);
> +	mm_iommu_put(container->mm, mem);
> +
> +	return ret;
>  }
>  
>  static long tce_iommu_register_pages(struct tce_container *container,
> @@ -185,22 +190,24 @@ static long tce_iommu_register_pages(struct tce_container *container,
>  			((vaddr + size) < vaddr))
>  		return -EINVAL;
>  
> -	mem = mm_iommu_find(container->mm, vaddr, entries);
> +	mem = mm_iommu_get(container->mm, vaddr, entries);
>  	if (mem) {
>  		list_for_each_entry(tcemem, &container->prereg_list, next) {
> -			if (tcemem->mem == mem)
> -				return -EBUSY;
> +			if (tcemem->mem == mem) {
> +				ret = -EBUSY;
> +				goto put_exit;
> +			}
>  		}
> +	} else {
> +		ret = mm_iommu_new(container->mm, vaddr, entries, &mem);
> +		if (ret)
> +			return ret;
>  	}
>  
> -	ret = mm_iommu_get(container->mm, vaddr, entries, &mem);
> -	if (ret)
> -		return ret;
> -
>  	tcemem = kzalloc(sizeof(*tcemem), GFP_KERNEL);
>  	if (!tcemem) {
> -		mm_iommu_put(container->mm, mem);
> -		return -ENOMEM;
> +		ret = -ENOMEM;
> +		goto put_exit;
>  	}
>  
>  	tcemem->mem = mem;
> @@ -209,6 +216,10 @@ static long tce_iommu_register_pages(struct tce_container *container,
>  	container->enabled = true;
>  
>  	return 0;
> +
> +put_exit:
> +	mm_iommu_put(container->mm, mem);
> +	return ret;
>  }
>  
>  static bool tce_page_is_contained(struct page *page, unsigned page_shift)

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

* Re: [PATCH kernel v4 03/19] powerpc/vfio/iommu/kvm: Do not pin device memory
  2018-11-23  5:52   ` Alexey Kardashevskiy
@ 2018-12-05  4:35     ` David Gibson
  -1 siblings, 0 replies; 70+ messages in thread
From: David Gibson @ 2018-12-05  4:35 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab

On Fri, Nov 23, 2018 at 04:52:48PM +1100, Alexey Kardashevskiy wrote:
> This new memory does not have page structs as it is not plugged into
> the host, so gup() will fail anyway.
> 
> This adds 2 helpers:
> - mm_iommu_newdev() to preregister the "memory device" memory so
> the rest of the API can still be used;
> - mm_iommu_is_devmem() to know if the physical address is one of these
> new regions, which we must avoid unpinning.
> 
> This adds @mm to tce_page_is_contained() and iommu_tce_xchg() to test
> if the memory is device memory to avoid pfn_to_page().
> 
> This adds a check for device memory in mm_iommu_ua_mark_dirty_rm() which
> does delayed pages dirtying.
> 
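The is_devmem test described above is essentially a range check: the host physical address, at the given page size, must fall entirely inside a preregistered device-memory region, in which case the caller skips pfn_to_page()/unpinning. A hedged sketch with hypothetical fields, not the kernel helper:

```c
#include <assert.h>

/* Hypothetical preregistered device-memory region. */
struct devmem_region {
	unsigned long long dev_hpa;	/* start of device memory */
	unsigned long long size;	/* length in bytes */
};

/*
 * Return 1 when [hpa, hpa + (1 << pageshift)) lies entirely inside
 * the region; a page straddling the boundary must not be treated as
 * device memory, since its tail could be ordinary host RAM.
 */
static int is_devmem(const struct devmem_region *r,
		     unsigned long long hpa, unsigned int pageshift)
{
	unsigned long long len = 1ULL << pageshift;

	return hpa >= r->dev_hpa && hpa + len <= r->dev_hpa + r->size;
}
```

In the kernel the same check is performed against each region in the mm's preregistered list rather than a single struct.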
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
> Changes:
> v4:
> * added device memory check in the real mode path
> ---
>  arch/powerpc/include/asm/iommu.h       |  5 +-
>  arch/powerpc/include/asm/mmu_context.h |  5 ++
>  arch/powerpc/kernel/iommu.c            |  9 ++-
>  arch/powerpc/kvm/book3s_64_vio.c       | 18 +++---
>  arch/powerpc/mm/mmu_context_iommu.c    | 86 +++++++++++++++++++++++---
>  drivers/vfio/vfio_iommu_spapr_tce.c    | 28 ++++++---
>  6 files changed, 119 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h
> index 35db0cb..a8aeac0 100644
> --- a/arch/powerpc/include/asm/iommu.h
> +++ b/arch/powerpc/include/asm/iommu.h
> @@ -218,8 +218,9 @@ extern void iommu_register_group(struct iommu_table_group *table_group,
>  extern int iommu_add_device(struct device *dev);
>  extern void iommu_del_device(struct device *dev);
>  extern int __init tce_iommu_bus_notifier_init(void);
> -extern long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry,
> -		unsigned long *hpa, enum dma_data_direction *direction);
> +extern long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl,
> +		unsigned long entry, unsigned long *hpa,
> +		enum dma_data_direction *direction);
>  #else
>  static inline void iommu_register_group(struct iommu_table_group *table_group,
>  					int pci_domain_number,
> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
> index 2d6b00d..f0f9f3d 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -24,6 +24,9 @@ extern bool mm_iommu_preregistered(struct mm_struct *mm);
>  extern long mm_iommu_new(struct mm_struct *mm,
>  		unsigned long ua, unsigned long entries,
>  		struct mm_iommu_table_group_mem_t **pmem);
> +extern long mm_iommu_newdev(struct mm_struct *mm, unsigned long ua,
> +		unsigned long entries, unsigned long dev_hpa,
> +		struct mm_iommu_table_group_mem_t **pmem);
>  extern long mm_iommu_put(struct mm_struct *mm,
>  		struct mm_iommu_table_group_mem_t *mem);
>  extern void mm_iommu_init(struct mm_struct *mm);
> @@ -39,6 +42,8 @@ extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
>  extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
>  		unsigned long ua, unsigned int pageshift, unsigned long *hpa);
>  extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua);
> +extern bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa,
> +		unsigned int pageshift);
>  extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
>  extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem);
>  #endif
> diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
> index f0dc680..8ccfdd9 100644
> --- a/arch/powerpc/kernel/iommu.c
> +++ b/arch/powerpc/kernel/iommu.c
> @@ -47,6 +47,7 @@
>  #include <asm/fadump.h>
>  #include <asm/vio.h>
>  #include <asm/tce.h>
> +#include <asm/mmu_context.h>
>  
>  #define DBG(...)
>  
> @@ -993,15 +994,17 @@ int iommu_tce_check_gpa(unsigned long page_shift, unsigned long gpa)
>  }
>  EXPORT_SYMBOL_GPL(iommu_tce_check_gpa);
>  
> -long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry,
> -		unsigned long *hpa, enum dma_data_direction *direction)
> +long iommu_tce_xchg(struct mm_struct *mm, struct iommu_table *tbl,
> +		unsigned long entry, unsigned long *hpa,
> +		enum dma_data_direction *direction)
>  {
>  	long ret;
>  
>  	ret = tbl->it_ops->exchange(tbl, entry, hpa, direction);
>  
>  	if (!ret && ((*direction == DMA_FROM_DEVICE) ||
> -			(*direction == DMA_BIDIRECTIONAL)))
> +			(*direction == DMA_BIDIRECTIONAL)) &&
> +			!mm_iommu_is_devmem(mm, *hpa, tbl->it_page_shift))
>  		SetPageDirty(pfn_to_page(*hpa >> PAGE_SHIFT));
>  
>  	/* if (unlikely(ret))
> diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
> index 62a8d03..532ab797 100644
> --- a/arch/powerpc/kvm/book3s_64_vio.c
> +++ b/arch/powerpc/kvm/book3s_64_vio.c
> @@ -397,12 +397,13 @@ static long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt,
>  	return H_SUCCESS;
>  }
>  
> -static void kvmppc_clear_tce(struct iommu_table *tbl, unsigned long entry)
> +static void kvmppc_clear_tce(struct mm_struct *mm, struct iommu_table *tbl,
> +		unsigned long entry)
>  {
>  	unsigned long hpa = 0;
>  	enum dma_data_direction dir = DMA_NONE;
>  
> -	iommu_tce_xchg(tbl, entry, &hpa, &dir);
> +	iommu_tce_xchg(mm, tbl, entry, &hpa, &dir);
>  }
>  
>  static long kvmppc_tce_iommu_mapped_dec(struct kvm *kvm,
> @@ -433,7 +434,7 @@ static long kvmppc_tce_iommu_do_unmap(struct kvm *kvm,
>  	unsigned long hpa = 0;
>  	long ret;
>  
> -	if (WARN_ON_ONCE(iommu_tce_xchg(tbl, entry, &hpa, &dir)))
> +	if (WARN_ON_ONCE(iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir)))
>  		return H_TOO_HARD;
>  
>  	if (dir == DMA_NONE)
> @@ -441,7 +442,7 @@ static long kvmppc_tce_iommu_do_unmap(struct kvm *kvm,
>  
>  	ret = kvmppc_tce_iommu_mapped_dec(kvm, tbl, entry);
>  	if (ret != H_SUCCESS)
> -		iommu_tce_xchg(tbl, entry, &hpa, &dir);
> +		iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir);
>  
>  	return ret;
>  }
> @@ -487,7 +488,7 @@ long kvmppc_tce_iommu_do_map(struct kvm *kvm, struct iommu_table *tbl,
>  	if (mm_iommu_mapped_inc(mem))
>  		return H_TOO_HARD;
>  
> -	ret = iommu_tce_xchg(tbl, entry, &hpa, &dir);
> +	ret = iommu_tce_xchg(kvm->mm, tbl, entry, &hpa, &dir);
>  	if (WARN_ON_ONCE(ret)) {
>  		mm_iommu_mapped_dec(mem);
>  		return H_TOO_HARD;
> @@ -566,7 +567,7 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
>  					entry, ua, dir);
>  
>  		if (ret != H_SUCCESS) {
> -			kvmppc_clear_tce(stit->tbl, entry);
> +			kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
>  			goto unlock_exit;
>  		}
>  	}
> @@ -655,7 +656,8 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
>  					iommu_tce_direction(tce));
>  
>  			if (ret != H_SUCCESS) {
> -				kvmppc_clear_tce(stit->tbl, entry);
> +				kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl,
> +						entry);
>  				goto unlock_exit;
>  			}
>  		}
> @@ -704,7 +706,7 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
>  				return ret;
>  
>  			WARN_ON_ONCE(1);
> -			kvmppc_clear_tce(stit->tbl, entry);
> +			kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
>  		}
>  	}
>  
> diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
> index 580d89e..663feb0 100644
> --- a/arch/powerpc/mm/mmu_context_iommu.c
> +++ b/arch/powerpc/mm/mmu_context_iommu.c
> @@ -47,6 +47,8 @@ struct mm_iommu_table_group_mem_t {
>  		struct page **hpages;	/* vmalloc'ed */
>  		phys_addr_t *hpas;
>  	};
> +#define MM_IOMMU_TABLE_INVALID_HPA	((uint64_t)-1)
> +	u64 dev_hpa;		/* Device memory base address */
>  };
>  
>  static long mm_iommu_adjust_locked_vm(struct mm_struct *mm,
> @@ -89,7 +91,8 @@ bool mm_iommu_preregistered(struct mm_struct *mm)
>  }
>  EXPORT_SYMBOL_GPL(mm_iommu_preregistered);
>  
> -long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
> +static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
> +		unsigned long entries, unsigned long dev_hpa,
>  		struct mm_iommu_table_group_mem_t **pmem)
>  {
>  	struct mm_iommu_table_group_mem_t *mem;
> @@ -112,11 +115,13 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>  
>  	}
>  
> -	ret = mm_iommu_adjust_locked_vm(mm, entries, true);
> -	if (ret)
> -		goto unlock_exit;
> +	if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) {
> +		ret = mm_iommu_adjust_locked_vm(mm, entries, true);
> +		if (ret)
> +			goto unlock_exit;
>  
> -	locked_entries = entries;
> +		locked_entries = entries;
> +	}
>  
>  	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
>  	if (!mem) {
> @@ -124,6 +129,13 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>  		goto unlock_exit;
>  	}
>  
> +	if (dev_hpa != MM_IOMMU_TABLE_INVALID_HPA) {
> +		mem->pageshift = __ffs(dev_hpa | (entries << PAGE_SHIFT));
> +		mem->dev_hpa = dev_hpa;
> +		goto good_exit;
> +	}
> +	mem->dev_hpa = MM_IOMMU_TABLE_INVALID_HPA;
> +
>  	/*
>  	 * For a starting point for a maximum page size calculation
>  	 * we use @ua and @entries natural alignment to allow IOMMU pages
> @@ -180,6 +192,7 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>  
>  	}
>  
> +good_exit:
>  	atomic64_set(&mem->mapped, 1);
>  	mem->used = 1;
>  	mem->ua = ua;
> @@ -196,13 +209,31 @@ long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
>  
>  	return ret;
>  }
> +
> +long mm_iommu_new(struct mm_struct *mm, unsigned long ua, unsigned long entries,
> +		struct mm_iommu_table_group_mem_t **pmem)
> +{
> +	return mm_iommu_do_alloc(mm, ua, entries, MM_IOMMU_TABLE_INVALID_HPA,
> +			pmem);
> +}
>  EXPORT_SYMBOL_GPL(mm_iommu_new);
>  
> +long mm_iommu_newdev(struct mm_struct *mm, unsigned long ua,
> +		unsigned long entries, unsigned long dev_hpa,
> +		struct mm_iommu_table_group_mem_t **pmem)
> +{
> +	return mm_iommu_do_alloc(mm, ua, entries, dev_hpa, pmem);
> +}
> +EXPORT_SYMBOL_GPL(mm_iommu_newdev);
> +
>  static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>  {
>  	long i;
>  	struct page *page = NULL;
>  
> +	if (!mem->hpas)
> +		return;
> +
>  	for (i = 0; i < mem->entries; ++i) {
>  		if (!mem->hpas[i])
>  			continue;
> @@ -244,6 +275,7 @@ static void mm_iommu_release(struct mm_iommu_table_group_mem_t *mem)
>  long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem)
>  {
>  	long ret = 0;
> +	unsigned long entries, dev_hpa;
>  
>  	mutex_lock(&mem_list_mutex);
>  
> @@ -265,9 +297,12 @@ long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem)
>  	}
>  
>  	/* @mapped became 0 so now mappings are disabled, release the region */
> +	entries = mem->entries;
> +	dev_hpa = mem->dev_hpa;
>  	mm_iommu_release(mem);
>  
> -	mm_iommu_adjust_locked_vm(mm, mem->entries, false);
> +	if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA)
> +		mm_iommu_adjust_locked_vm(mm, entries, false);
>  
>  unlock_exit:
>  	mutex_unlock(&mem_list_mutex);
> @@ -337,7 +372,7 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
>  		unsigned long ua, unsigned int pageshift, unsigned long *hpa)
>  {
>  	const long entry = (ua - mem->ua) >> PAGE_SHIFT;
> -	u64 *va = &mem->hpas[entry];
> +	u64 *va;
>  
>  	if (entry >= mem->entries)
>  		return -EFAULT;
> @@ -345,6 +380,12 @@ long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
>  	if (pageshift > mem->pageshift)
>  		return -EFAULT;
>  
> +	if (!mem->hpas) {
> +		*hpa = mem->dev_hpa + (ua - mem->ua);
> +		return 0;
> +	}
> +
> +	va = &mem->hpas[entry];
>  	*hpa = (*va & MM_IOMMU_TABLE_GROUP_PAGE_MASK) | (ua & ~PAGE_MASK);
>  
>  	return 0;
> @@ -355,7 +396,6 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
>  		unsigned long ua, unsigned int pageshift, unsigned long *hpa)
>  {
>  	const long entry = (ua - mem->ua) >> PAGE_SHIFT;
> -	void *va = &mem->hpas[entry];
>  	unsigned long *pa;
>  
>  	if (entry >= mem->entries)
> @@ -364,7 +404,12 @@ long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
>  	if (pageshift > mem->pageshift)
>  		return -EFAULT;
>  
> -	pa = (void *) vmalloc_to_phys(va);
> +	if (!mem->hpas) {
> +		*hpa = mem->dev_hpa + (ua - mem->ua);
> +		return 0;
> +	}
> +
> +	pa = (void *) vmalloc_to_phys(&mem->hpas[entry]);
>  	if (!pa)
>  		return -EFAULT;
>  
> @@ -384,6 +429,9 @@ extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua)
>  	if (!mem)
>  		return;
>  
> +	if (mem->dev_hpa != MM_IOMMU_TABLE_INVALID_HPA)
> +		return;
> +
>  	entry = (ua - mem->ua) >> PAGE_SHIFT;
>  	va = &mem->hpas[entry];
>  
> @@ -394,6 +442,26 @@ extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua)
>  	*pa |= MM_IOMMU_TABLE_GROUP_PAGE_DIRTY;
>  }
>  
> +extern bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa,
> +		unsigned int pageshift)
> +{
> +	struct mm_iommu_table_group_mem_t *mem;
> +	const unsigned long pagesize = 1UL << pageshift;
> +
> +	list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list, next) {
> +		if (mem->dev_hpa == MM_IOMMU_TABLE_INVALID_HPA)
> +			continue;
> +
> +		if ((mem->dev_hpa <= hpa) &&
> +				(hpa + pagesize <= mem->dev_hpa +
> +				 (mem->entries << PAGE_SHIFT)))
> +			return true;
> +	}
> +
> +	return false;
> +}
> +EXPORT_SYMBOL_GPL(mm_iommu_is_devmem);
> +
>  long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem)
>  {
>  	if (atomic64_inc_not_zero(&mem->mapped))
> diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
> index 56db071..ed89137 100644
> --- a/drivers/vfio/vfio_iommu_spapr_tce.c
> +++ b/drivers/vfio/vfio_iommu_spapr_tce.c
> @@ -222,8 +222,15 @@ static long tce_iommu_register_pages(struct tce_container *container,
>  	return ret;
>  }
>  
> -static bool tce_page_is_contained(struct page *page, unsigned page_shift)
> +static bool tce_page_is_contained(struct mm_struct *mm, unsigned long hpa,
> +		unsigned int page_shift)
>  {
> +	struct page *page;
> +
> +	if (mm_iommu_is_devmem(mm, hpa, page_shift))
> +		return true;
> +
> +	page = pfn_to_page(hpa >> PAGE_SHIFT);
>  	/*
>  	 * Check that the TCE table granularity is not bigger than the size of
>  	 * a page we just found. Otherwise the hardware can get access to
> @@ -499,7 +506,8 @@ static int tce_iommu_clear(struct tce_container *container,
>  
>  		direction = DMA_NONE;
>  		oldhpa = 0;
> -		ret = iommu_tce_xchg(tbl, entry, &oldhpa, &direction);
> +		ret = iommu_tce_xchg(container->mm, tbl, entry, &oldhpa,
> +				&direction);
>  		if (ret)
>  			continue;
>  
> @@ -537,7 +545,6 @@ static long tce_iommu_build(struct tce_container *container,
>  		enum dma_data_direction direction)
>  {
>  	long i, ret = 0;
> -	struct page *page;
>  	unsigned long hpa;
>  	enum dma_data_direction dirtmp;
>  
> @@ -548,15 +555,16 @@ static long tce_iommu_build(struct tce_container *container,
>  		if (ret)
>  			break;
>  
> -		page = pfn_to_page(hpa >> PAGE_SHIFT);
> -		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
> +		if (!tce_page_is_contained(container->mm, hpa,
> +				tbl->it_page_shift)) {
>  			ret = -EPERM;
>  			break;
>  		}
>  
>  		hpa |= offset;
>  		dirtmp = direction;
> -		ret = iommu_tce_xchg(tbl, entry + i, &hpa, &dirtmp);
> +		ret = iommu_tce_xchg(container->mm, tbl, entry + i, &hpa,
> +				&dirtmp);
>  		if (ret) {
>  			tce_iommu_unuse_page(container, hpa);
>  			pr_err("iommu_tce: %s failed ioba=%lx, tce=%lx, ret=%ld\n",
> @@ -583,7 +591,6 @@ static long tce_iommu_build_v2(struct tce_container *container,
>  		enum dma_data_direction direction)
>  {
>  	long i, ret = 0;
> -	struct page *page;
>  	unsigned long hpa;
>  	enum dma_data_direction dirtmp;
>  
> @@ -596,8 +603,8 @@ static long tce_iommu_build_v2(struct tce_container *container,
>  		if (ret)
>  			break;
>  
> -		page = pfn_to_page(hpa >> PAGE_SHIFT);
> -		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
> +		if (!tce_page_is_contained(container->mm, hpa,
> +				tbl->it_page_shift)) {
>  			ret = -EPERM;
>  			break;
>  		}
> @@ -610,7 +617,8 @@ static long tce_iommu_build_v2(struct tce_container *container,
>  		if (mm_iommu_mapped_inc(mem))
>  			break;
>  
> -		ret = iommu_tce_xchg(tbl, entry + i, &hpa, &dirtmp);
> +		ret = iommu_tce_xchg(container->mm, tbl, entry + i, &hpa,
> +				&dirtmp);
>  		if (ret) {
>  			/* dirtmp cannot be DMA_NONE here */
>  			tce_iommu_unuse_page_v2(container, tbl, entry + i);

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 70+ messages in thread
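
[Editorial note: the `mm_iommu_is_devmem()` helper added in the patch above boils down to a range-containment test — the whole IOMMU page must fall inside one preregistered device-memory region. The following is a hedged, standalone model of that test; the `devmem_region` struct, the `MODEL_PAGE_SHIFT` constant, and the function name are illustrative simplifications, not the kernel's actual types.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for mm_iommu_table_group_mem_t's devmem fields. */
#define MODEL_PAGE_SHIFT 16	/* 64K host pages, as on powerpc64 (assumption) */

struct devmem_region {
	uint64_t dev_hpa;	/* device memory base address */
	unsigned long entries;	/* number of MODEL_PAGE_SHIFT-sized pages */
};

/*
 * Sketch of the containment check done by mm_iommu_is_devmem(): the
 * IOMMU page [hpa, hpa + (1 << pageshift)) must lie entirely within
 * [dev_hpa, dev_hpa + (entries << PAGE_SHIFT)).
 */
static bool devmem_contains(const struct devmem_region *mem,
		uint64_t hpa, unsigned int pageshift)
{
	const uint64_t pagesize = 1ULL << pageshift;
	const uint64_t end = mem->dev_hpa +
		((uint64_t)mem->entries << MODEL_PAGE_SHIFT);

	return mem->dev_hpa <= hpa && hpa + pagesize <= end;
}
```

Note the check takes the IOMMU page shift rather than assuming the host page size: with 64K host pages, a 16-entry region spans only 1MB, so a 2MB IOMMU page can never be contained in it even when its start address is.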

* Re: [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
  2018-11-23  5:52   ` Alexey Kardashevskiy
@ 2018-12-05  5:14     ` David Gibson
  -1 siblings, 0 replies; 70+ messages in thread
From: David Gibson @ 2018-12-05  5:14 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab

[-- Attachment #1: Type: text/plain, Size: 8066 bytes --]

On Fri, Nov 23, 2018 at 04:52:49PM +1100, Alexey Kardashevskiy wrote:
> The powernv PCI code stores NPU data in the pnv_phb struct. The latter
> is referenced by pci_controller::private_data. We are going to have NPU2
> support in the pseries platform as well but it does not store any
> private_data in the pci_controller struct; and even if it did,
> it would be a different data structure.
> 
> This makes npu a pointer and stores it one level higher in
> the pci_controller struct.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v4:
> * changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
> * got rid of global list of npus - store them now in pci_controller
> * got rid of npdev_to_npu() helper
> ---
>  arch/powerpc/include/asm/pci-bridge.h    |  1 +
>  arch/powerpc/platforms/powernv/pci.h     | 16 -----
>  arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
>  3 files changed, 64 insertions(+), 34 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
> index 94d4490..aee4fcc 100644
> --- a/arch/powerpc/include/asm/pci-bridge.h
> +++ b/arch/powerpc/include/asm/pci-bridge.h
> @@ -129,6 +129,7 @@ struct pci_controller {
>  #endif	/* CONFIG_PPC64 */
>  
>  	void *private_data;
> +	struct npu *npu;
>  };
>  
>  /* These are used for config access before all the PCI probing
> diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
> index 2131373..f2d50974 100644
> --- a/arch/powerpc/platforms/powernv/pci.h
> +++ b/arch/powerpc/platforms/powernv/pci.h
> @@ -8,9 +8,6 @@
>  
>  struct pci_dn;
>  
> -/* Maximum possible number of ATSD MMIO registers per NPU */
> -#define NV_NMMU_ATSD_REGS 8
> -
>  enum pnv_phb_type {
>  	PNV_PHB_IODA1		= 0,
>  	PNV_PHB_IODA2		= 1,
> @@ -176,19 +173,6 @@ struct pnv_phb {
>  	unsigned int		diag_data_size;
>  	u8			*diag_data;
>  
> -	/* Nvlink2 data */
> -	struct npu {
> -		int index;
> -		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
> -		unsigned int mmio_atsd_count;
> -
> -		/* Bitmask for MMIO register usage */
> -		unsigned long mmio_atsd_usage;
> -
> -		/* Do we need to explicitly flush the nest mmu? */
> -		bool nmmu_flush;
> -	} npu;
> -
>  	int p2p_target_count;
>  };
>  
> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
> index 91d488f..7dd5c0e5 100644
> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> @@ -327,6 +327,25 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
>  	return gpe;
>  }
>  
> +/*
> + * NPU2 ATS
> + */
> +/* Maximum possible number of ATSD MMIO registers per NPU */
> +#define NV_NMMU_ATSD_REGS 8
> +
> +/* An NPU descriptor, valid for POWER9 only */
> +struct npu {
> +	int index;
> +	__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
> +	unsigned int mmio_atsd_count;
> +
> +	/* Bitmask for MMIO register usage */
> +	unsigned long mmio_atsd_usage;
> +
> +	/* Do we need to explicitly flush the nest mmu? */
> +	bool nmmu_flush;
> +};
> +
>  /* Maximum number of nvlinks per npu */
>  #define NV_MAX_LINKS 6
>  
> @@ -478,7 +497,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>  	int i, j;
>  	struct npu *npu;
>  	struct pci_dev *npdev;
> -	struct pnv_phb *nphb;
>  
>  	for (i = 0; i <= max_npu2_index; i++) {
>  		mmio_atsd_reg[i].reg = -1;
> @@ -493,8 +511,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>  			if (!npdev)
>  				continue;
>  
> -			nphb = pci_bus_to_host(npdev->bus)->private_data;
> -			npu = &nphb->npu;
> +			npu = pci_bus_to_host(npdev->bus)->npu;
> +			if (!npu)
> +				continue;

This patch changes a bunch of places that used to unconditionally
locate an NPU now have a failure path.

Given that this used to always have an NPU, doesn't that mean that if
the NPU is not present something has already gone wrong, and we should
WARN_ON() or something?

>  			mmio_atsd_reg[i].npu = npu;
>  			mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu);
>  			while (mmio_atsd_reg[i].reg < 0) {
> @@ -662,6 +682,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
>  	struct pnv_phb *nphb;
>  	struct npu *npu;
>  	struct npu_context *npu_context;
> +	struct pci_controller *hose;
>  
>  	/*
>  	 * At present we don't support GPUs connected to multiple NPUs and I'm
> @@ -689,8 +710,11 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
>  		return ERR_PTR(-EINVAL);
>  	}
>  
> -	nphb = pci_bus_to_host(npdev->bus)->private_data;
> -	npu = &nphb->npu;
> +	hose = pci_bus_to_host(npdev->bus);
> +	nphb = hose->private_data;
> +	npu = hose->npu;
> +	if (!npu)
> +		return ERR_PTR(-ENODEV);
>  
>  	/*
>  	 * Setup the NPU context table for a particular GPU. These need to be
> @@ -764,7 +788,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
>  	 */
>  	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], npdev);
>  
> -	if (!nphb->npu.nmmu_flush) {
> +	if (!npu->nmmu_flush) {
>  		/*
>  		 * If we're not explicitly flushing ourselves we need to mark
>  		 * the thread for global flushes
> @@ -802,6 +826,7 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
>  	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
>  	struct device_node *nvlink_dn;
>  	u32 nvlink_index;
> +	struct pci_controller *hose;
>  
>  	if (WARN_ON(!npdev))
>  		return;
> @@ -809,8 +834,11 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
>  	if (!firmware_has_feature(FW_FEATURE_OPAL))
>  		return;
>  
> -	nphb = pci_bus_to_host(npdev->bus)->private_data;
> -	npu = &nphb->npu;
> +	hose = pci_bus_to_host(npdev->bus);
> +	nphb = hose->private_data;
> +	npu = hose->npu;
> +	if (!npu)
> +		return;
>  	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
>  	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
>  							&nvlink_index)))
> @@ -888,9 +916,15 @@ int pnv_npu2_init(struct pnv_phb *phb)
>  	struct pci_dev *gpdev;
>  	static int npu_index;
>  	uint64_t rc = 0;
> +	struct pci_controller *hose = phb->hose;
> +	struct npu *npu;
> +	int ret;
>  
> -	phb->npu.nmmu_flush =
> -		of_property_read_bool(phb->hose->dn, "ibm,nmmu-flush");
> +	npu = kzalloc(sizeof(*npu), GFP_KERNEL);
> +	if (!npu)
> +		return -ENOMEM;
> +
> +	npu->nmmu_flush = of_property_read_bool(hose->dn, "ibm,nmmu-flush");
>  	for_each_child_of_node(phb->hose->dn, dn) {
>  		gpdev = pnv_pci_get_gpu_dev(get_pci_dev(dn));
>  		if (gpdev) {
> @@ -904,18 +938,29 @@ int pnv_npu2_init(struct pnv_phb *phb)
>  		}
>  	}
>  
> -	for (i = 0; !of_property_read_u64_index(phb->hose->dn, "ibm,mmio-atsd",
> +	for (i = 0; !of_property_read_u64_index(hose->dn, "ibm,mmio-atsd",
>  							i, &mmio_atsd); i++)
> -		phb->npu.mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
> +		npu->mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
>  
> -	pr_info("NPU%lld: Found %d MMIO ATSD registers", phb->opal_id, i);
> -	phb->npu.mmio_atsd_count = i;
> -	phb->npu.mmio_atsd_usage = 0;
> +	pr_info("NPU%d: Found %d MMIO ATSD registers", hose->global_number, i);
> +	npu->mmio_atsd_count = i;
> +	npu->mmio_atsd_usage = 0;
>  	npu_index++;
> -	if (WARN_ON(npu_index >= NV_MAX_NPUS))
> -		return -ENOSPC;
> +	if (WARN_ON(npu_index >= NV_MAX_NPUS)) {
> +		ret = -ENOSPC;
> +		goto fail_exit;
> +	}
>  	max_npu2_index = npu_index;
> -	phb->npu.index = npu_index;
> +	npu->index = npu_index;
> +	hose->npu = npu;
>  
>  	return 0;
> +
> +fail_exit:
> +	for (i = 0; i < npu->mmio_atsd_count; ++i)
> +		iounmap(npu->mmio_atsd_regs[i]);
> +
> +	kfree(npu);
> +
> +	return ret;
>  }

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
@ 2018-12-05  5:14     ` David Gibson
  0 siblings, 0 replies; 70+ messages in thread
From: David Gibson @ 2018-12-05  5:14 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab

[-- Attachment #1: Type: text/plain, Size: 8066 bytes --]

On Fri, Nov 23, 2018 at 04:52:49PM +1100, Alexey Kardashevskiy wrote:
> The powernv PCI code stores NPU data in the pnv_phb struct. The latter
> is referenced by pci_controller::private_data. We are going to have NPU2
> support in the pseries platform as well but it does not store any
> private_data in the pci_controller struct; and even if it did,
> it would be a different data structure.
> 
> This makes npu a pointer and stores it one level higher in
> the pci_controller struct.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v4:
> * changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
> * got rid of global list of npus - store them now in pci_controller
> * got rid of npdev_to_npu() helper
> ---
>  arch/powerpc/include/asm/pci-bridge.h    |  1 +
>  arch/powerpc/platforms/powernv/pci.h     | 16 -----
>  arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
>  3 files changed, 64 insertions(+), 34 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
> index 94d4490..aee4fcc 100644
> --- a/arch/powerpc/include/asm/pci-bridge.h
> +++ b/arch/powerpc/include/asm/pci-bridge.h
> @@ -129,6 +129,7 @@ struct pci_controller {
>  #endif	/* CONFIG_PPC64 */
>  
>  	void *private_data;
> +	struct npu *npu;
>  };
>  
>  /* These are used for config access before all the PCI probing
> diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
> index 2131373..f2d50974 100644
> --- a/arch/powerpc/platforms/powernv/pci.h
> +++ b/arch/powerpc/platforms/powernv/pci.h
> @@ -8,9 +8,6 @@
>  
>  struct pci_dn;
>  
> -/* Maximum possible number of ATSD MMIO registers per NPU */
> -#define NV_NMMU_ATSD_REGS 8
> -
>  enum pnv_phb_type {
>  	PNV_PHB_IODA1		= 0,
>  	PNV_PHB_IODA2		= 1,
> @@ -176,19 +173,6 @@ struct pnv_phb {
>  	unsigned int		diag_data_size;
>  	u8			*diag_data;
>  
> -	/* Nvlink2 data */
> -	struct npu {
> -		int index;
> -		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
> -		unsigned int mmio_atsd_count;
> -
> -		/* Bitmask for MMIO register usage */
> -		unsigned long mmio_atsd_usage;
> -
> -		/* Do we need to explicitly flush the nest mmu? */
> -		bool nmmu_flush;
> -	} npu;
> -
>  	int p2p_target_count;
>  };
>  
> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
> index 91d488f..7dd5c0e5 100644
> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> @@ -327,6 +327,25 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
>  	return gpe;
>  }
>  
> +/*
> + * NPU2 ATS
> + */
> +/* Maximum possible number of ATSD MMIO registers per NPU */
> +#define NV_NMMU_ATSD_REGS 8
> +
> +/* An NPU descriptor, valid for POWER9 only */
> +struct npu {
> +	int index;
> +	__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
> +	unsigned int mmio_atsd_count;
> +
> +	/* Bitmask for MMIO register usage */
> +	unsigned long mmio_atsd_usage;
> +
> +	/* Do we need to explicitly flush the nest mmu? */
> +	bool nmmu_flush;
> +};
> +
>  /* Maximum number of nvlinks per npu */
>  #define NV_MAX_LINKS 6
>  
> @@ -478,7 +497,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>  	int i, j;
>  	struct npu *npu;
>  	struct pci_dev *npdev;
> -	struct pnv_phb *nphb;
>  
>  	for (i = 0; i <= max_npu2_index; i++) {
>  		mmio_atsd_reg[i].reg = -1;
> @@ -493,8 +511,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>  			if (!npdev)
>  				continue;
>  
> -			nphb = pci_bus_to_host(npdev->bus)->private_data;
> -			npu = &nphb->npu;
> +			npu = pci_bus_to_host(npdev->bus)->npu;
> +			if (!npu)
> +				continue;

With this patch, a bunch of places that used to unconditionally
locate an NPU now have a failure path.

Given that this used to always have an NPU, doesn't that mean that if
the NPU is not present something has already gone wrong, and we should
WARN_ON() or something?

>  			mmio_atsd_reg[i].npu = npu;
>  			mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu);
>  			while (mmio_atsd_reg[i].reg < 0) {
> @@ -662,6 +682,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
>  	struct pnv_phb *nphb;
>  	struct npu *npu;
>  	struct npu_context *npu_context;
> +	struct pci_controller *hose;
>  
>  	/*
>  	 * At present we don't support GPUs connected to multiple NPUs and I'm
> @@ -689,8 +710,11 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
>  		return ERR_PTR(-EINVAL);
>  	}
>  
> -	nphb = pci_bus_to_host(npdev->bus)->private_data;
> -	npu = &nphb->npu;
> +	hose = pci_bus_to_host(npdev->bus);
> +	nphb = hose->private_data;
> +	npu = hose->npu;
> +	if (!npu)
> +		return ERR_PTR(-ENODEV);
>  
>  	/*
>  	 * Setup the NPU context table for a particular GPU. These need to be
> @@ -764,7 +788,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
>  	 */
>  	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], npdev);
>  
> -	if (!nphb->npu.nmmu_flush) {
> +	if (!npu->nmmu_flush) {
>  		/*
>  		 * If we're not explicitly flushing ourselves we need to mark
>  		 * the thread for global flushes
> @@ -802,6 +826,7 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
>  	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
>  	struct device_node *nvlink_dn;
>  	u32 nvlink_index;
> +	struct pci_controller *hose;
>  
>  	if (WARN_ON(!npdev))
>  		return;
> @@ -809,8 +834,11 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
>  	if (!firmware_has_feature(FW_FEATURE_OPAL))
>  		return;
>  
> -	nphb = pci_bus_to_host(npdev->bus)->private_data;
> -	npu = &nphb->npu;
> +	hose = pci_bus_to_host(npdev->bus);
> +	nphb = hose->private_data;
> +	npu = hose->npu;
> +	if (!npu)
> +		return;
>  	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
>  	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
>  							&nvlink_index)))
> @@ -888,9 +916,15 @@ int pnv_npu2_init(struct pnv_phb *phb)
>  	struct pci_dev *gpdev;
>  	static int npu_index;
>  	uint64_t rc = 0;
> +	struct pci_controller *hose = phb->hose;
> +	struct npu *npu;
> +	int ret;
>  
> -	phb->npu.nmmu_flush =
> -		of_property_read_bool(phb->hose->dn, "ibm,nmmu-flush");
> +	npu = kzalloc(sizeof(*npu), GFP_KERNEL);
> +	if (!npu)
> +		return -ENOMEM;
> +
> +	npu->nmmu_flush = of_property_read_bool(hose->dn, "ibm,nmmu-flush");
>  	for_each_child_of_node(phb->hose->dn, dn) {
>  		gpdev = pnv_pci_get_gpu_dev(get_pci_dev(dn));
>  		if (gpdev) {
> @@ -904,18 +938,29 @@ int pnv_npu2_init(struct pnv_phb *phb)
>  		}
>  	}
>  
> -	for (i = 0; !of_property_read_u64_index(phb->hose->dn, "ibm,mmio-atsd",
> +	for (i = 0; !of_property_read_u64_index(hose->dn, "ibm,mmio-atsd",
>  							i, &mmio_atsd); i++)
> -		phb->npu.mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
> +		npu->mmio_atsd_regs[i] = ioremap(mmio_atsd, 32);
>  
> -	pr_info("NPU%lld: Found %d MMIO ATSD registers", phb->opal_id, i);
> -	phb->npu.mmio_atsd_count = i;
> -	phb->npu.mmio_atsd_usage = 0;
> +	pr_info("NPU%d: Found %d MMIO ATSD registers", hose->global_number, i);
> +	npu->mmio_atsd_count = i;
> +	npu->mmio_atsd_usage = 0;
>  	npu_index++;
> -	if (WARN_ON(npu_index >= NV_MAX_NPUS))
> -		return -ENOSPC;
> +	if (WARN_ON(npu_index >= NV_MAX_NPUS)) {
> +		ret = -ENOSPC;
> +		goto fail_exit;
> +	}
>  	max_npu2_index = npu_index;
> -	phb->npu.index = npu_index;
> +	npu->index = npu_index;
> +	hose->npu = npu;
>  
>  	return 0;
> +
> +fail_exit:
> +	for (i = 0; i < npu->mmio_atsd_count; ++i)
> +		iounmap(npu->mmio_atsd_regs[i]);
> +
> +	kfree(npu);
> +
> +	return ret;
>  }

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
  2018-12-05  5:14     ` David Gibson
@ 2018-12-05  5:47       ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-12-05  5:47 UTC (permalink / raw)
  To: David Gibson
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab



On 05/12/2018 16:14, David Gibson wrote:
> On Fri, Nov 23, 2018 at 04:52:49PM +1100, Alexey Kardashevskiy wrote:
>> The powernv PCI code stores NPU data in the pnv_phb struct. The latter
>> is referenced by pci_controller::private_data. We are going to have NPU2
>> support in the pseries platform as well but it does not store any
>> private_data in the pci_controller struct; and even if it did,
>> it would be a different data structure.
>>
>> This makes npu a pointer and stores it one level higher in
>> the pci_controller struct.
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> ---
>> Changes:
>> v4:
>> * changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
>> * got rid of global list of npus - store them now in pci_controller
>> * got rid of npdev_to_npu() helper
>> ---
>>  arch/powerpc/include/asm/pci-bridge.h    |  1 +
>>  arch/powerpc/platforms/powernv/pci.h     | 16 -----
>>  arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
>>  3 files changed, 64 insertions(+), 34 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
>> index 94d4490..aee4fcc 100644
>> --- a/arch/powerpc/include/asm/pci-bridge.h
>> +++ b/arch/powerpc/include/asm/pci-bridge.h
>> @@ -129,6 +129,7 @@ struct pci_controller {
>>  #endif	/* CONFIG_PPC64 */
>>  
>>  	void *private_data;
>> +	struct npu *npu;
>>  };
>>  
>>  /* These are used for config access before all the PCI probing
>> diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
>> index 2131373..f2d50974 100644
>> --- a/arch/powerpc/platforms/powernv/pci.h
>> +++ b/arch/powerpc/platforms/powernv/pci.h
>> @@ -8,9 +8,6 @@
>>  
>>  struct pci_dn;
>>  
>> -/* Maximum possible number of ATSD MMIO registers per NPU */
>> -#define NV_NMMU_ATSD_REGS 8
>> -
>>  enum pnv_phb_type {
>>  	PNV_PHB_IODA1		= 0,
>>  	PNV_PHB_IODA2		= 1,
>> @@ -176,19 +173,6 @@ struct pnv_phb {
>>  	unsigned int		diag_data_size;
>>  	u8			*diag_data;
>>  
>> -	/* Nvlink2 data */
>> -	struct npu {
>> -		int index;
>> -		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
>> -		unsigned int mmio_atsd_count;
>> -
>> -		/* Bitmask for MMIO register usage */
>> -		unsigned long mmio_atsd_usage;
>> -
>> -		/* Do we need to explicitly flush the nest mmu? */
>> -		bool nmmu_flush;
>> -	} npu;
>> -
>>  	int p2p_target_count;
>>  };
>>  
>> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
>> index 91d488f..7dd5c0e5 100644
>> --- a/arch/powerpc/platforms/powernv/npu-dma.c
>> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
>> @@ -327,6 +327,25 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
>>  	return gpe;
>>  }
>>  
>> +/*
>> + * NPU2 ATS
>> + */
>> +/* Maximum possible number of ATSD MMIO registers per NPU */
>> +#define NV_NMMU_ATSD_REGS 8
>> +
>> +/* An NPU descriptor, valid for POWER9 only */
>> +struct npu {
>> +	int index;
>> +	__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
>> +	unsigned int mmio_atsd_count;
>> +
>> +	/* Bitmask for MMIO register usage */
>> +	unsigned long mmio_atsd_usage;
>> +
>> +	/* Do we need to explicitly flush the nest mmu? */
>> +	bool nmmu_flush;
>> +};
>> +
>>  /* Maximum number of nvlinks per npu */
>>  #define NV_MAX_LINKS 6
>>  
>> @@ -478,7 +497,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>>  	int i, j;
>>  	struct npu *npu;
>>  	struct pci_dev *npdev;
>> -	struct pnv_phb *nphb;
>>  
>>  	for (i = 0; i <= max_npu2_index; i++) {
>>  		mmio_atsd_reg[i].reg = -1;
>> @@ -493,8 +511,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>>  			if (!npdev)
>>  				continue;
>>  
>> -			nphb = pci_bus_to_host(npdev->bus)->private_data;
>> -			npu = &nphb->npu;
>> +			npu = pci_bus_to_host(npdev->bus)->npu;
>> +			if (!npu)
>> +				continue;
> 
> With this patch, a bunch of places that used to unconditionally
> locate an NPU now have a failure path.
> 
> Given that this used to always have an NPU, doesn't that mean that if
> the NPU is not present something has already gone wrong, and we should
> WARN_ON() or something?



This check is a leftover from when I dropped the npdev_to_npu() helper,
which could have helped here, but there was no real value in it. I'll
remove the check here in the next respin.

I'll probably add checks for npu!=NULL where we used to have
firmware_has_feature(FW_FEATURE_OPAL) in 05/19.



-- 
Alexey

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
  2018-12-05  5:47       ` Alexey Kardashevskiy
@ 2018-12-05  6:17         ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-12-05  6:17 UTC (permalink / raw)
  To: David Gibson
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab



On 05/12/2018 16:47, Alexey Kardashevskiy wrote:
> 
> 
> On 05/12/2018 16:14, David Gibson wrote:
>> On Fri, Nov 23, 2018 at 04:52:49PM +1100, Alexey Kardashevskiy wrote:
>>> The powernv PCI code stores NPU data in the pnv_phb struct. The latter
>>> is referenced by pci_controller::private_data. We are going to have NPU2
>>> support in the pseries platform as well but it does not store any
>>> private_data in the pci_controller struct; and even if it did,
>>> it would be a different data structure.
>>>
>>> This makes npu a pointer and stores it one level higher in
>>> the pci_controller struct.
>>>
>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>> ---
>>> Changes:
>>> v4:
>>> * changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
>>> * got rid of global list of npus - store them now in pci_controller
>>> * got rid of npdev_to_npu() helper
>>> ---
>>>  arch/powerpc/include/asm/pci-bridge.h    |  1 +
>>>  arch/powerpc/platforms/powernv/pci.h     | 16 -----
>>>  arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
>>>  3 files changed, 64 insertions(+), 34 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
>>> index 94d4490..aee4fcc 100644
>>> --- a/arch/powerpc/include/asm/pci-bridge.h
>>> +++ b/arch/powerpc/include/asm/pci-bridge.h
>>> @@ -129,6 +129,7 @@ struct pci_controller {
>>>  #endif	/* CONFIG_PPC64 */
>>>  
>>>  	void *private_data;
>>> +	struct npu *npu;
>>>  };
>>>  
>>>  /* These are used for config access before all the PCI probing
>>> diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
>>> index 2131373..f2d50974 100644
>>> --- a/arch/powerpc/platforms/powernv/pci.h
>>> +++ b/arch/powerpc/platforms/powernv/pci.h
>>> @@ -8,9 +8,6 @@
>>>  
>>>  struct pci_dn;
>>>  
>>> -/* Maximum possible number of ATSD MMIO registers per NPU */
>>> -#define NV_NMMU_ATSD_REGS 8
>>> -
>>>  enum pnv_phb_type {
>>>  	PNV_PHB_IODA1		= 0,
>>>  	PNV_PHB_IODA2		= 1,
>>> @@ -176,19 +173,6 @@ struct pnv_phb {
>>>  	unsigned int		diag_data_size;
>>>  	u8			*diag_data;
>>>  
>>> -	/* Nvlink2 data */
>>> -	struct npu {
>>> -		int index;
>>> -		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
>>> -		unsigned int mmio_atsd_count;
>>> -
>>> -		/* Bitmask for MMIO register usage */
>>> -		unsigned long mmio_atsd_usage;
>>> -
>>> -		/* Do we need to explicitly flush the nest mmu? */
>>> -		bool nmmu_flush;
>>> -	} npu;
>>> -
>>>  	int p2p_target_count;
>>>  };
>>>  
>>> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
>>> index 91d488f..7dd5c0e5 100644
>>> --- a/arch/powerpc/platforms/powernv/npu-dma.c
>>> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
>>> @@ -327,6 +327,25 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
>>>  	return gpe;
>>>  }
>>>  
>>> +/*
>>> + * NPU2 ATS
>>> + */
>>> +/* Maximum possible number of ATSD MMIO registers per NPU */
>>> +#define NV_NMMU_ATSD_REGS 8
>>> +
>>> +/* An NPU descriptor, valid for POWER9 only */
>>> +struct npu {
>>> +	int index;
>>> +	__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
>>> +	unsigned int mmio_atsd_count;
>>> +
>>> +	/* Bitmask for MMIO register usage */
>>> +	unsigned long mmio_atsd_usage;
>>> +
>>> +	/* Do we need to explicitly flush the nest mmu? */
>>> +	bool nmmu_flush;
>>> +};
>>> +
>>>  /* Maximum number of nvlinks per npu */
>>>  #define NV_MAX_LINKS 6
>>>  
>>> @@ -478,7 +497,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>>>  	int i, j;
>>>  	struct npu *npu;
>>>  	struct pci_dev *npdev;
>>> -	struct pnv_phb *nphb;
>>>  
>>>  	for (i = 0; i <= max_npu2_index; i++) {
>>>  		mmio_atsd_reg[i].reg = -1;
>>> @@ -493,8 +511,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>>>  			if (!npdev)
>>>  				continue;
>>>  
>>> -			nphb = pci_bus_to_host(npdev->bus)->private_data;
>>> -			npu = &nphb->npu;
>>> +			npu = pci_bus_to_host(npdev->bus)->npu;
>>> +			if (!npu)
>>> +				continue;
>>
>> With this patch, a bunch of places that used to unconditionally
>> locate an NPU now have a failure path.
>>
>> Given that this used to always have an NPU, doesn't that mean that if
>> the NPU is not present something has already gone wrong, and we should
>> WARN_ON() or something?
> 
> 
> 
> This check is a leftover from when I dropped the npdev_to_npu() helper,
> which could have helped here, but there was no real value in it. I'll
> remove the check here in the next respin.


Well, technically kzalloc() can fail in pnv_npu2_init() (but not later),
so we can (in theory) end up with an NPU PHB and npu == NULL, but it is
sooo unlikely...



> 
> I'll probably add checks for npu!=NULL where we used to have
> firmware_has_feature(FW_FEATURE_OPAL) in 05/19.



-- 
Alexey

* Re: [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
  2018-12-05  6:17         ` Alexey Kardashevskiy
@ 2018-12-05 22:40           ` David Gibson
  -1 siblings, 0 replies; 70+ messages in thread
From: David Gibson @ 2018-12-05 22:40 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab


On Wed, Dec 05, 2018 at 05:17:57PM +1100, Alexey Kardashevskiy wrote:
> 
> 
> On 05/12/2018 16:47, Alexey Kardashevskiy wrote:
> > 
> > 
> > On 05/12/2018 16:14, David Gibson wrote:
> >> On Fri, Nov 23, 2018 at 04:52:49PM +1100, Alexey Kardashevskiy wrote:
> >>> The powernv PCI code stores NPU data in the pnv_phb struct. The latter
> >>> is referenced by pci_controller::private_data. We are going to have NPU2
> >>> support in the pseries platform as well but it does not store any
> >>> private_data in the pci_controller struct; and even if it did,
> >>> it would be a different data structure.
> >>>
> >>> This makes npu a pointer and stores it one level higher in
> >>> the pci_controller struct.
> >>>
> >>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >>> ---
> >>> Changes:
> >>> v4:
> >>> * changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
> >>> * got rid of global list of npus - store them now in pci_controller
> >>> * got rid of npdev_to_npu() helper
> >>> ---
> >>>  arch/powerpc/include/asm/pci-bridge.h    |  1 +
> >>>  arch/powerpc/platforms/powernv/pci.h     | 16 -----
> >>>  arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
> >>>  3 files changed, 64 insertions(+), 34 deletions(-)
> >>>
> >>> diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
> >>> index 94d4490..aee4fcc 100644
> >>> --- a/arch/powerpc/include/asm/pci-bridge.h
> >>> +++ b/arch/powerpc/include/asm/pci-bridge.h
> >>> @@ -129,6 +129,7 @@ struct pci_controller {
> >>>  #endif	/* CONFIG_PPC64 */
> >>>  
> >>>  	void *private_data;
> >>> +	struct npu *npu;
> >>>  };
> >>>  
> >>>  /* These are used for config access before all the PCI probing
> >>> diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
> >>> index 2131373..f2d50974 100644
> >>> --- a/arch/powerpc/platforms/powernv/pci.h
> >>> +++ b/arch/powerpc/platforms/powernv/pci.h
> >>> @@ -8,9 +8,6 @@
> >>>  
> >>>  struct pci_dn;
> >>>  
> >>> -/* Maximum possible number of ATSD MMIO registers per NPU */
> >>> -#define NV_NMMU_ATSD_REGS 8
> >>> -
> >>>  enum pnv_phb_type {
> >>>  	PNV_PHB_IODA1		= 0,
> >>>  	PNV_PHB_IODA2		= 1,
> >>> @@ -176,19 +173,6 @@ struct pnv_phb {
> >>>  	unsigned int		diag_data_size;
> >>>  	u8			*diag_data;
> >>>  
> >>> -	/* Nvlink2 data */
> >>> -	struct npu {
> >>> -		int index;
> >>> -		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
> >>> -		unsigned int mmio_atsd_count;
> >>> -
> >>> -		/* Bitmask for MMIO register usage */
> >>> -		unsigned long mmio_atsd_usage;
> >>> -
> >>> -		/* Do we need to explicitly flush the nest mmu? */
> >>> -		bool nmmu_flush;
> >>> -	} npu;
> >>> -
> >>>  	int p2p_target_count;
> >>>  };
> >>>  
> >>> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
> >>> index 91d488f..7dd5c0e5 100644
> >>> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> >>> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> >>> @@ -327,6 +327,25 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
> >>>  	return gpe;
> >>>  }
> >>>  
> >>> +/*
> >>> + * NPU2 ATS
> >>> + */
> >>> +/* Maximum possible number of ATSD MMIO registers per NPU */
> >>> +#define NV_NMMU_ATSD_REGS 8
> >>> +
> >>> +/* An NPU descriptor, valid for POWER9 only */
> >>> +struct npu {
> >>> +	int index;
> >>> +	__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
> >>> +	unsigned int mmio_atsd_count;
> >>> +
> >>> +	/* Bitmask for MMIO register usage */
> >>> +	unsigned long mmio_atsd_usage;
> >>> +
> >>> +	/* Do we need to explicitly flush the nest mmu? */
> >>> +	bool nmmu_flush;
> >>> +};
> >>> +
> >>>  /* Maximum number of nvlinks per npu */
> >>>  #define NV_MAX_LINKS 6
> >>>  
> >>> @@ -478,7 +497,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
> >>>  	int i, j;
> >>>  	struct npu *npu;
> >>>  	struct pci_dev *npdev;
> >>> -	struct pnv_phb *nphb;
> >>>  
> >>>  	for (i = 0; i <= max_npu2_index; i++) {
> >>>  		mmio_atsd_reg[i].reg = -1;
> >>> @@ -493,8 +511,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
> >>>  			if (!npdev)
> >>>  				continue;
> >>>  
> >>> -			nphb = pci_bus_to_host(npdev->bus)->private_data;
> >>> -			npu = &nphb->npu;
> >>> +			npu = pci_bus_to_host(npdev->bus)->npu;
> >>> +			if (!npu)
> >>> +				continue;
> >>
> >> This patch changes a bunch of places that used to unconditionally
> >> locate an NPU now have a failure path.
> >>
> >> Given that this used to always have an NPU, doesn't that mean that if
> >> the NPU is not present something has already gone wrong, and we should
> >> WARN_ON() or something?
> > 
> > 
> > 
> > That means this is a leftover since I dropped that npdev_to_npu helper
> > which could help but there was no real value in it. I'll remove the
> > check here in the next respin.
> 
> 
> Well, technically kmalloc() can fail in pnv_npu2_init() (but not later)
> so can (in theory) end up with an NPU PHB and npu==NULL but it is sooo
> unlikely...

More to the point, shouldn't you then fail immediately, rather than
leaving the NULL floating around for later code?

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
  2018-12-05 22:40           ` David Gibson
@ 2018-12-10  2:50             ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-12-10  2:50 UTC (permalink / raw)
  To: David Gibson
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab



On 06/12/2018 09:40, David Gibson wrote:
> On Wed, Dec 05, 2018 at 05:17:57PM +1100, Alexey Kardashevskiy wrote:
>>
>>
>> On 05/12/2018 16:47, Alexey Kardashevskiy wrote:
>>>
>>>
>>> On 05/12/2018 16:14, David Gibson wrote:
>>>> On Fri, Nov 23, 2018 at 04:52:49PM +1100, Alexey Kardashevskiy wrote:
>>>>> The powernv PCI code stores NPU data in the pnv_phb struct. The latter
>>>>> is referenced by pci_controller::private_data. We are going to have NPU2
>>>>> support in the pseries platform as well but it does not store any
>>>>> private_data in the pci_controller struct; and even if it did,
>>>>> it would be a different data structure.
>>>>>
>>>>> This makes npu a pointer and stores it one level higher in
>>>>> the pci_controller struct.
>>>>>
>>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>>> ---
>>>>> Changes:
>>>>> v4:
>>>>> * changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
>>>>> * got rid of global list of npus - store them now in pci_controller
>>>>> * got rid of npdev_to_npu() helper
>>>>> ---
>>>>>  arch/powerpc/include/asm/pci-bridge.h    |  1 +
>>>>>  arch/powerpc/platforms/powernv/pci.h     | 16 -----
>>>>>  arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
>>>>>  3 files changed, 64 insertions(+), 34 deletions(-)
>>>>>
>>>>> diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
>>>>> index 94d4490..aee4fcc 100644
>>>>> --- a/arch/powerpc/include/asm/pci-bridge.h
>>>>> +++ b/arch/powerpc/include/asm/pci-bridge.h
>>>>> @@ -129,6 +129,7 @@ struct pci_controller {
>>>>>  #endif	/* CONFIG_PPC64 */
>>>>>  
>>>>>  	void *private_data;
>>>>> +	struct npu *npu;
>>>>>  };
>>>>>  
>>>>>  /* These are used for config access before all the PCI probing
>>>>> diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
>>>>> index 2131373..f2d50974 100644
>>>>> --- a/arch/powerpc/platforms/powernv/pci.h
>>>>> +++ b/arch/powerpc/platforms/powernv/pci.h
>>>>> @@ -8,9 +8,6 @@
>>>>>  
>>>>>  struct pci_dn;
>>>>>  
>>>>> -/* Maximum possible number of ATSD MMIO registers per NPU */
>>>>> -#define NV_NMMU_ATSD_REGS 8
>>>>> -
>>>>>  enum pnv_phb_type {
>>>>>  	PNV_PHB_IODA1		= 0,
>>>>>  	PNV_PHB_IODA2		= 1,
>>>>> @@ -176,19 +173,6 @@ struct pnv_phb {
>>>>>  	unsigned int		diag_data_size;
>>>>>  	u8			*diag_data;
>>>>>  
>>>>> -	/* Nvlink2 data */
>>>>> -	struct npu {
>>>>> -		int index;
>>>>> -		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
>>>>> -		unsigned int mmio_atsd_count;
>>>>> -
>>>>> -		/* Bitmask for MMIO register usage */
>>>>> -		unsigned long mmio_atsd_usage;
>>>>> -
>>>>> -		/* Do we need to explicitly flush the nest mmu? */
>>>>> -		bool nmmu_flush;
>>>>> -	} npu;
>>>>> -
>>>>>  	int p2p_target_count;
>>>>>  };
>>>>>  
>>>>> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
>>>>> index 91d488f..7dd5c0e5 100644
>>>>> --- a/arch/powerpc/platforms/powernv/npu-dma.c
>>>>> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
>>>>> @@ -327,6 +327,25 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
>>>>>  	return gpe;
>>>>>  }
>>>>>  
>>>>> +/*
>>>>> + * NPU2 ATS
>>>>> + */
>>>>> +/* Maximum possible number of ATSD MMIO registers per NPU */
>>>>> +#define NV_NMMU_ATSD_REGS 8
>>>>> +
>>>>> +/* An NPU descriptor, valid for POWER9 only */
>>>>> +struct npu {
>>>>> +	int index;
>>>>> +	__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
>>>>> +	unsigned int mmio_atsd_count;
>>>>> +
>>>>> +	/* Bitmask for MMIO register usage */
>>>>> +	unsigned long mmio_atsd_usage;
>>>>> +
>>>>> +	/* Do we need to explicitly flush the nest mmu? */
>>>>> +	bool nmmu_flush;
>>>>> +};
>>>>> +
>>>>>  /* Maximum number of nvlinks per npu */
>>>>>  #define NV_MAX_LINKS 6
>>>>>  
>>>>> @@ -478,7 +497,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>>>>>  	int i, j;
>>>>>  	struct npu *npu;
>>>>>  	struct pci_dev *npdev;
>>>>> -	struct pnv_phb *nphb;
>>>>>  
>>>>>  	for (i = 0; i <= max_npu2_index; i++) {
>>>>>  		mmio_atsd_reg[i].reg = -1;
>>>>> @@ -493,8 +511,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
>>>>>  			if (!npdev)
>>>>>  				continue;
>>>>>  
>>>>> -			nphb = pci_bus_to_host(npdev->bus)->private_data;
>>>>> -			npu = &nphb->npu;
>>>>> +			npu = pci_bus_to_host(npdev->bus)->npu;
>>>>> +			if (!npu)
>>>>> +				continue;
>>>>
>>>> This patch changes a bunch of places that used to unconditionally
>>>> locate an NPU now have a failure path.
>>>>
>>>> Given that this used to always have an NPU, doesn't that mean that if
>>>> the NPU is not present something has already gone wrong, and we should
>>>> WARN_ON() or something?
>>>
>>>
>>>
>>> That means this is a leftover since I dropped that npdev_to_npu helper
>>> which could help but there was no real value in it. I'll remove the
>>> check here in the next respin.
>>
>>
>> Well, technically kmalloc() can fail in pnv_npu2_init() (but not later)
>> so can (in theory) end up with an NPU PHB and npu==NULL but it is sooo
>> unlikely...
> 
> More to the point, shouldn't you then fail immediately, rather than
> leaving the NULL floating around for later code?

Not sure I am following. pnv_npu2_init() is called at boot time so
failing-and-not-leaving-null-pointer here means panic and I definitely
do not want that. I am adding !npu checks in the next respin though,
pretty much replacing firmware_has_feature(FW_FEATURE_OPAL), do I miss
anything here?




-- 
Alexey


* Re: [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
  2018-12-10  2:50             ` Alexey Kardashevskiy
@ 2018-12-10  3:42               ` David Gibson
  -1 siblings, 0 replies; 70+ messages in thread
From: David Gibson @ 2018-12-10  3:42 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab


On Mon, Dec 10, 2018 at 01:50:35PM +1100, Alexey Kardashevskiy wrote:
> 
> 
> On 06/12/2018 09:40, David Gibson wrote:
> > On Wed, Dec 05, 2018 at 05:17:57PM +1100, Alexey Kardashevskiy wrote:
> >>
> >>
> >> On 05/12/2018 16:47, Alexey Kardashevskiy wrote:
> >>>
> >>>
> >>> On 05/12/2018 16:14, David Gibson wrote:
> >>>> On Fri, Nov 23, 2018 at 04:52:49PM +1100, Alexey Kardashevskiy wrote:
> >>>>> The powernv PCI code stores NPU data in the pnv_phb struct. The latter
> >>>>> is referenced by pci_controller::private_data. We are going to have NPU2
> >>>>> support in the pseries platform as well but it does not store any
> >>>>> private_data in the pci_controller struct; and even if it did,
> >>>>> it would be a different data structure.
> >>>>>
> >>>>> This makes npu a pointer and stores it one level higher in
> >>>>> the pci_controller struct.
> >>>>>
> >>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >>>>> ---
> >>>>> Changes:
> >>>>> v4:
> >>>>> * changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
> >>>>> * got rid of global list of npus - store them now in pci_controller
> >>>>> * got rid of npdev_to_npu() helper
> >>>>> ---
> >>>>>  arch/powerpc/include/asm/pci-bridge.h    |  1 +
> >>>>>  arch/powerpc/platforms/powernv/pci.h     | 16 -----
> >>>>>  arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
> >>>>>  3 files changed, 64 insertions(+), 34 deletions(-)
> >>>>>

* Re: [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller
@ 2018-12-10  3:42               ` David Gibson
  0 siblings, 0 replies; 70+ messages in thread
From: David Gibson @ 2018-12-10  3:42 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab

[-- Attachment #1: Type: text/plain, Size: 6329 bytes --]

On Mon, Dec 10, 2018 at 01:50:35PM +1100, Alexey Kardashevskiy wrote:
> 
> 
> On 06/12/2018 09:40, David Gibson wrote:
> > On Wed, Dec 05, 2018 at 05:17:57PM +1100, Alexey Kardashevskiy wrote:
> >>
> >>
> >> On 05/12/2018 16:47, Alexey Kardashevskiy wrote:
> >>>
> >>>
> >>> On 05/12/2018 16:14, David Gibson wrote:
> >>>> On Fri, Nov 23, 2018 at 04:52:49PM +1100, Alexey Kardashevskiy wrote:
> >>>>> The powernv PCI code stores NPU data in the pnv_phb struct. The latter
> >>>>> is referenced by pci_controller::private_data. We are going to have NPU2
> >>>>> support in the pseries platform as well but it does not store any
> >>>>> private_data in the pci_controller struct; and even if it did,
> >>>>> it would be a different data structure.
> >>>>>
> >>>>> This makes npu a pointer and stores it one level higher in
> >>>>> the pci_controller struct.
> >>>>>
> >>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >>>>> ---
> >>>>> Changes:
> >>>>> v4:
> >>>>> * changed subj from "powerpc/powernv: Detach npu struct from pnv_phb"
> >>>>> * got rid of global list of npus - store them now in pci_controller
> >>>>> * got rid of npdev_to_npu() helper
> >>>>> ---
> >>>>>  arch/powerpc/include/asm/pci-bridge.h    |  1 +
> >>>>>  arch/powerpc/platforms/powernv/pci.h     | 16 -----
> >>>>>  arch/powerpc/platforms/powernv/npu-dma.c | 81 ++++++++++++++++++------
> >>>>>  3 files changed, 64 insertions(+), 34 deletions(-)
> >>>>>
> >>>>> diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
> >>>>> index 94d4490..aee4fcc 100644
> >>>>> --- a/arch/powerpc/include/asm/pci-bridge.h
> >>>>> +++ b/arch/powerpc/include/asm/pci-bridge.h
> >>>>> @@ -129,6 +129,7 @@ struct pci_controller {
> >>>>>  #endif	/* CONFIG_PPC64 */
> >>>>>  
> >>>>>  	void *private_data;
> >>>>> +	struct npu *npu;
> >>>>>  };
> >>>>>  
> >>>>>  /* These are used for config access before all the PCI probing
> >>>>> diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
> >>>>> index 2131373..f2d50974 100644
> >>>>> --- a/arch/powerpc/platforms/powernv/pci.h
> >>>>> +++ b/arch/powerpc/platforms/powernv/pci.h
> >>>>> @@ -8,9 +8,6 @@
> >>>>>  
> >>>>>  struct pci_dn;
> >>>>>  
> >>>>> -/* Maximum possible number of ATSD MMIO registers per NPU */
> >>>>> -#define NV_NMMU_ATSD_REGS 8
> >>>>> -
> >>>>>  enum pnv_phb_type {
> >>>>>  	PNV_PHB_IODA1		= 0,
> >>>>>  	PNV_PHB_IODA2		= 1,
> >>>>> @@ -176,19 +173,6 @@ struct pnv_phb {
> >>>>>  	unsigned int		diag_data_size;
> >>>>>  	u8			*diag_data;
> >>>>>  
> >>>>> -	/* Nvlink2 data */
> >>>>> -	struct npu {
> >>>>> -		int index;
> >>>>> -		__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
> >>>>> -		unsigned int mmio_atsd_count;
> >>>>> -
> >>>>> -		/* Bitmask for MMIO register usage */
> >>>>> -		unsigned long mmio_atsd_usage;
> >>>>> -
> >>>>> -		/* Do we need to explicitly flush the nest mmu? */
> >>>>> -		bool nmmu_flush;
> >>>>> -	} npu;
> >>>>> -
> >>>>>  	int p2p_target_count;
> >>>>>  };
> >>>>>  
> >>>>> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
> >>>>> index 91d488f..7dd5c0e5 100644
> >>>>> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> >>>>> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> >>>>> @@ -327,6 +327,25 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
> >>>>>  	return gpe;
> >>>>>  }
> >>>>>  
> >>>>> +/*
> >>>>> + * NPU2 ATS
> >>>>> + */
> >>>>> +/* Maximum possible number of ATSD MMIO registers per NPU */
> >>>>> +#define NV_NMMU_ATSD_REGS 8
> >>>>> +
> >>>>> +/* An NPU descriptor, valid for POWER9 only */
> >>>>> +struct npu {
> >>>>> +	int index;
> >>>>> +	__be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS];
> >>>>> +	unsigned int mmio_atsd_count;
> >>>>> +
> >>>>> +	/* Bitmask for MMIO register usage */
> >>>>> +	unsigned long mmio_atsd_usage;
> >>>>> +
> >>>>> +	/* Do we need to explicitly flush the nest mmu? */
> >>>>> +	bool nmmu_flush;
> >>>>> +};
> >>>>> +
> >>>>>  /* Maximum number of nvlinks per npu */
> >>>>>  #define NV_MAX_LINKS 6
> >>>>>  
> >>>>> @@ -478,7 +497,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
> >>>>>  	int i, j;
> >>>>>  	struct npu *npu;
> >>>>>  	struct pci_dev *npdev;
> >>>>> -	struct pnv_phb *nphb;
> >>>>>  
> >>>>>  	for (i = 0; i <= max_npu2_index; i++) {
> >>>>>  		mmio_atsd_reg[i].reg = -1;
> >>>>> @@ -493,8 +511,10 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
> >>>>>  			if (!npdev)
> >>>>>  				continue;
> >>>>>  
> >>>>> -			nphb = pci_bus_to_host(npdev->bus)->private_data;
> >>>>> -			npu = &nphb->npu;
> >>>>> +			npu = pci_bus_to_host(npdev->bus)->npu;
> >>>>> +			if (!npu)
> >>>>> +				continue;
> >>>>
> >>>> This patch changes a bunch of places that used to unconditionally
> >>>> locate an NPU now have a failure path.
> >>>>
> >>>> Given that this used to always have an NPU, doesn't that mean that if
> >>>> the NPU is not present something has already gone wrong, and we should
> >>>> WARN_ON() or something?
> >>>
> >>>
> >>>
> >>> That means this is a leftover since I dropped that npdev_to_npu helper
> >>> which could help but there was no real value in it. I'll remove the
> >>> check here in the next respin.
> >>
> >>
> >> Well, technically kmalloc() can fail in pnv_npu2_init() (but not later)
> >> so can (in theory) end up with an NPU PHB and npu==NULL but it is sooo
> >> unlikely...
> > 
> > More to the point, shouldn't you then fail immediately, rather than
> > leaving the NULL floating around for later code?
> 
> Not sure I am following. pnv_npu2_init() is called at boot time so
> failing-and-not-leaving-null-pointer here means panic and I definitely
> do not want that.

Well, if it's a choice between panic then and panic later, I'm not so
sure that's true.

> I am adding !npu checks in the next respin though,
> pretty much replacing firmware_has_feature(FW_FEATURE_OPAL), do I miss
> anything here?

That seems reasonable.
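[Editor's note] The guard pattern being agreed on above can be modeled outside the kernel. Below is a minimal standalone sketch; the structure and function names are simplified stand-ins for illustration, not the real kernel definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures discussed above. */
struct npu {
	int index;
};

struct pci_controller {
	void *private_data;
	struct npu *npu;	/* may be NULL, e.g. if pnv_npu2_init() failed */
};

/*
 * The pattern replacing firmware_has_feature(FW_FEATURE_OPAL): callers
 * bail out early when the controller carries no NPU descriptor.
 */
static int npu_index_or_error(const struct pci_controller *hose)
{
	const struct npu *npu = hose->npu;

	if (!npu)
		return -19;	/* -ENODEV */
	return npu->index;
}
```

The point of the sketch is that a NULL `npu` becomes an ordinary error path at each call site rather than a panic at boot.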

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH kernel v4 19/19] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver
  2018-11-23  5:53   ` Alexey Kardashevskiy
@ 2018-12-11  0:08     ` Alex Williamson
  -1 siblings, 0 replies; 70+ messages in thread
From: Alex Williamson @ 2018-12-11  0:08 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Jose Ricardo Ziviani, Sam Bobroff, Alistair Popple,
	Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

On Fri, 23 Nov 2018 16:53:04 +1100
Alexey Kardashevskiy <aik@ozlabs.ru> wrote:

> POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
> pluggable PCIe devices but still have PCIe links which are used
> for config space and MMIO. In addition to that the GPUs have 6 NVLinks
> which are connected to other GPUs and the POWER9 CPU. POWER9 chips
> have a special unit on a die called an NPU which is an NVLink2 host bus
> adapter with p2p connections to 2 to 3 GPUs, 3 or 2 NVLinks to each.
> These systems also support ATS (address translation services) which is
> a part of the NVLink2 protocol. Such GPUs also share on-board RAM
> (16GB or 32GB) to the system via the same NVLink2 so a CPU has
> cache-coherent access to a GPU RAM.
> 
> This exports GPU RAM to the userspace as a new VFIO device region. This
> preregisters the new memory as device memory as it might be used for DMA.
> This inserts pfns from the fault handler as the GPU memory is not onlined
> until the vendor driver is loaded and trained the NVLinks so doing this
> earlier causes low level errors which we fence in the firmware so
> it does not hurt the host system but still better be avoided.
> 
> This exports an ATSD (Address Translation Shootdown) register of NPU which
> allows TLB invalidations inside GPU for an operating system. The register
> conveniently occupies a single 64k page. It is also presented to
> the userspace as a new VFIO device region.
> 
> In order to provide the userspace with the information about GPU-to-NVLink
> connections, this exports an additional capability called "tgt"
> (which is an abbreviated host system bus address). The "tgt" property
> tells the GPU its own system address and allows the guest driver to
> conglomerate the routing information so each GPU knows how to get directly
> to the other GPUs.
> 
> For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
> know LPID (a logical partition ID or a KVM guest hardware ID in other
> words) and PID (a memory context ID of a userspace process, not to be
> confused with a linux pid). This assigns a GPU to LPID in the NPU and
> this is why this adds a listener for KVM on an IOMMU group. A PID comes
> via NVLink from a GPU and NPU uses a PID wildcard to pass it through.
> 
> This requires coherent memory and ATSD to be available on the host as
> the GPU vendor only supports configurations with both features enabled
> and other configurations are known not to work. Because of this and
> because of the ways the features are advertised to the host system
> (which is a device tree with very platform specific properties),
> this requires enabled POWERNV platform.
> 
> The V100 GPUs do not advertise none of these capabilities via the config

s/none/any/

> space and there are more than just one device ID so this relies on
> the platform to tell whether these GPUs have special abilities such as
> NVLinks.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v4:
> * added nvlink-speed to the NPU bridge capability as this turned out to
> be not a constant value
> * instead of looking at the exact device ID (which also changes from system
> to system), now this (indirectly) looks at the device tree to know
> if GPU and NPU support NVLink
> 
> v3:
> * reworded the commit log about tgt
> * added tracepoints (do we want them enabled for entire vfio-pci?)
> * added code comments
> * added write|mmap flags to the new regions
> * auto enabled VFIO_PCI_NVLINK2 config option
> * added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
> references; these are required by the NVIDIA driver
> * keep notifier registered only for short time
> ---
>  drivers/vfio/pci/Makefile           |   1 +
>  drivers/vfio/pci/trace.h            | 102 +++++++
>  drivers/vfio/pci/vfio_pci_private.h |   2 +
>  include/uapi/linux/vfio.h           |  27 ++
>  drivers/vfio/pci/vfio_pci.c         |  37 ++-
>  drivers/vfio/pci/vfio_pci_nvlink2.c | 448 ++++++++++++++++++++++++++++
>  drivers/vfio/pci/Kconfig            |   6 +
>  7 files changed, 621 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/vfio/pci/trace.h
>  create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c
> 
> diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
> index 76d8ec0..9662c06 100644
> --- a/drivers/vfio/pci/Makefile
> +++ b/drivers/vfio/pci/Makefile
> @@ -1,5 +1,6 @@
>  
>  vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
>  vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
> +vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
>  
>  obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
...
> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
> index 93c1738..7639241 100644
> --- a/drivers/vfio/pci/vfio_pci_private.h
> +++ b/drivers/vfio/pci/vfio_pci_private.h
> @@ -163,4 +163,6 @@ static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev)
>  	return -ENODEV;
>  }
>  #endif
> +extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
> +extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
>  #endif /* VFIO_PCI_PRIVATE_H */
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 8131028..547e71e 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -353,6 +353,20 @@ struct vfio_region_gfx_edid {
>  #define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
>  };
>  
> +/* 10de vendor sub-type
> + *
> + * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
> + */

nit, prefer the comment style below leaving the first line of a
multi-line comment empty, coding style.

> +#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
> +
> +/*
> + * 1014 vendor sub-type
> + *
> + * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
> + * to do TLB invalidation on a GPU.
> + */
> +#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
> +
>  /*
>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
>   * which allows direct access to non-MSIX registers which happened to be within
> @@ -363,6 +377,19 @@ struct vfio_region_gfx_edid {
>   */
>  #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
>  
> +/*
> + * Capability with compressed real address (aka SSA - small system address)
> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
> + */
> +#define VFIO_REGION_INFO_CAP_NPU2		4
> +
> +struct vfio_region_info_cap_npu2 {
> +	struct vfio_info_cap_header header;
> +	__u64 tgt;
> +	__u32 link_speed;
> +	__u32 __pad;
> +};
> +
>  /**
>   * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
>   *				    struct vfio_irq_info)
> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> index 6cb70cf..b8a53f9 100644
> --- a/drivers/vfio/pci/vfio_pci.c
> +++ b/drivers/vfio/pci/vfio_pci.c
> @@ -224,6 +224,16 @@ static bool vfio_pci_nointx(struct pci_dev *pdev)
>  	return false;
>  }
>  
> +int __weak vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
> +{
> +	return -ENODEV;
> +}
> +
> +int __weak vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
> +{
> +	return -ENODEV;
> +}
> +

Why not static inlines in vfio_pci_private.h like we do for igd hooks?
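[Editor's note] The igd-style alternative Alex refers to would look roughly like this; a hypothetical header fragment, not the actual patch (the CONFIG_VFIO_PCI_NVLINK2 gating is an assumption):

```c
#include <assert.h>
#include <stddef.h>

#define ENODEV 19		/* matches <asm-generic/errno-base.h> */

struct vfio_pci_device;		/* opaque for this sketch */

/*
 * Hypothetical vfio_pci_private.h fragment: when the nvlink2 code is
 * compiled out, static inline stubs replace the __weak definitions in
 * vfio_pci.c, mirroring the existing vfio_pci_igd_init() pattern.
 */
#ifdef CONFIG_VFIO_PCI_NVLINK2
extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
#else
static inline int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
{
	return -ENODEV;
}

static inline int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
{
	return -ENODEV;
}
#endif
```

With this shape the stubs cost nothing at runtime and the linker never has to resolve a weak symbol.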

...
>  static void vfio_pci_disable(struct vfio_pci_device *vdev)
> diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c
> new file mode 100644
> index 0000000..e8e06c3
> --- /dev/null
> +++ b/drivers/vfio/pci/vfio_pci_nvlink2.c
...
> +static int vfio_pci_nvgpu_mmap(struct vfio_pci_device *vdev,
> +		struct vfio_pci_region *region, struct vm_area_struct *vma)
> +{
> +	long ret;
> +	struct vfio_pci_nvgpu_data *data = region->data;
> +
> +	if (data->useraddr)
> +		return -EPERM;
> +
> +	if (vma->vm_end - vma->vm_start > data->size)
> +		return -EINVAL;
> +
> +	vma->vm_private_data = region;
> +	vma->vm_flags |= VM_PFNMAP;
> +	vma->vm_ops = &vfio_pci_nvgpu_mmap_vmops;
> +
> +	/*
> +	 * Calling mm_iommu_newdev() here once as the region is not
> +	 * registered yet and therefore right initialization will happen now.
> +	 * Other places will use mm_iommu_find() which returns
> +	 * registered @mem and does not go gup().
> +	 */
> +	data->useraddr = vma->vm_start;
> +	data->mm = current->mm;
> +
> +	atomic_inc(&data->mm->mm_count);
> +	ret = mm_iommu_newdev(data->mm, data->useraddr,
> +			(vma->vm_end - vma->vm_start) >> PAGE_SHIFT,
> +			data->gpu_hpa, &data->mem);
> +
> +	trace_vfio_pci_nvgpu_mmap(vdev->pdev, data->gpu_hpa, data->useraddr,
> +			vma->vm_end - vma->vm_start, ret);
> +
> +	return ret;

It's unfortunate that all these mm_iommu_foo functions return long while this
function returns int, which made me go down the rabbit hole to see what
mm_iommu_newdev() and therefore mmio_iommu_do_alloc() can return.  Can
you do a translation somewhere so this doesn't look like a possible
overflow?  Thanks,

Alex
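[Editor's note] One way to satisfy the request above is an explicit narrowing helper at the mmap boundary. A standalone sketch, with the helper name invented for illustration:

```c
#include <assert.h>
#include <limits.h>

/*
 * Hypothetical helper: mm_iommu_* calls return long, but an mmap hook
 * must return int. Collapse anything outside the expected 0/-errno
 * range to a well-defined error instead of silently truncating.
 */
static int vfio_ret_to_int(long ret)
{
	if (ret > 0 || ret < INT_MIN)
		return -22;	/* -EINVAL: unexpected value, refuse it */
	return (int)ret;
}
```

The mmap path would then end with `return vfio_ret_to_int(ret);`, making the narrowing deliberate and visible.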

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH kernel v4 17/19] vfio_pci: Allow mapping extra regions
  2018-11-23  5:53   ` Alexey Kardashevskiy
@ 2018-12-11  0:09     ` Alex Williamson
  -1 siblings, 0 replies; 70+ messages in thread
From: Alex Williamson @ 2018-12-11  0:09 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Jose Ricardo Ziviani, Sam Bobroff, Alistair Popple,
	Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

On Fri, 23 Nov 2018 16:53:02 +1100
Alexey Kardashevskiy <aik@ozlabs.ru> wrote:

> So far we only allowed mapping of MMIO BARs to the userspace. However
> there there are GPUs with on-board coherent RAM accessible via side

s/there there/there/

Otherwise:

Acked-by: Alex Williamson <alex.williamson@redhat.com>

> channels which we also want to map to the userspace. The first client
> for this is NVIDIA V100 GPU with NVLink2 direct links to a POWER9
> NPU-enabled CPU; such GPUs have 16GB RAM which is coherently mapped
> to the system address space, we are going to export these as an extra
> PCI region.
> 
> We already support extra PCI regions and this adds support for mapping
> them to the userspace.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
> Changes:
> v2:
> * reverted one of mistakenly removed error checks
> ---
>  drivers/vfio/pci/vfio_pci_private.h | 3 +++
>  drivers/vfio/pci/vfio_pci.c         | 9 +++++++++
>  2 files changed, 12 insertions(+)
> 
> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
> index cde3b5d..86aab05 100644
> --- a/drivers/vfio/pci/vfio_pci_private.h
> +++ b/drivers/vfio/pci/vfio_pci_private.h
> @@ -59,6 +59,9 @@ struct vfio_pci_regops {
>  		      size_t count, loff_t *ppos, bool iswrite);
>  	void	(*release)(struct vfio_pci_device *vdev,
>  			   struct vfio_pci_region *region);
> +	int	(*mmap)(struct vfio_pci_device *vdev,
> +			struct vfio_pci_region *region,
> +			struct vm_area_struct *vma);
>  };
>  
>  struct vfio_pci_region {
> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> index fef5002..4a6f7c0 100644
> --- a/drivers/vfio/pci/vfio_pci.c
> +++ b/drivers/vfio/pci/vfio_pci.c
> @@ -1130,6 +1130,15 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
>  		return -EINVAL;
>  	if ((vma->vm_flags & VM_SHARED) == 0)
>  		return -EINVAL;
> +	if (index >= VFIO_PCI_NUM_REGIONS) {
> +		int regnum = index - VFIO_PCI_NUM_REGIONS;
> +		struct vfio_pci_region *region = vdev->region + regnum;
> +
> +		if (region && region->ops && region->ops->mmap &&
> +		    (region->flags & VFIO_REGION_INFO_FLAG_MMAP))
> +			return region->ops->mmap(vdev, region, vma);
> +		return -EINVAL;
> +	}
>  	if (index >= VFIO_PCI_ROM_REGION_INDEX)
>  		return -EINVAL;
>  	if (!vdev->bar_mmap_supported[index])

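[Editor's note] The dispatch added in the hunk above can be modeled in isolation. A minimal sketch with simplified structures; the constants mirror the uapi values but should be treated as assumptions here:

```c
#include <assert.h>

#define VFIO_PCI_NUM_REGIONS		9	/* fixed BAR/ROM/config/VGA indices */
#define VFIO_REGION_INFO_FLAG_MMAP	(1 << 2)

/* Simplified stand-in for struct vfio_pci_region. */
struct region {
	unsigned int flags;
	int (*mmap)(void);	/* stands in for regops->mmap */
};

/* An example device-specific handler for the usage check below. */
static int example_mmap(void)
{
	return 42;
}

/*
 * Indices below VFIO_PCI_NUM_REGIONS take the normal BAR path (0 here);
 * higher indices select a device-specific extra region, which must both
 * advertise FLAG_MMAP and provide a handler to be mappable.
 */
static int do_mmap(struct region *regions, int nregions, int index)
{
	if (index < VFIO_PCI_NUM_REGIONS)
		return 0;

	int regnum = index - VFIO_PCI_NUM_REGIONS;

	if (regnum >= nregions)
		return -22;	/* -EINVAL */
	if (regions[regnum].mmap &&
	    (regions[regnum].flags & VFIO_REGION_INFO_FLAG_MMAP))
		return regions[regnum].mmap();
	return -22;
}
```

This keeps the existing BAR handling untouched while giving extra regions a single, well-guarded escape hatch.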

^ permalink raw reply	[flat|nested] 70+ messages in thread


* Re: [PATCH kernel v4 18/19] vfio_pci: Allow regions to add own capabilities
  2018-11-23  5:53   ` Alexey Kardashevskiy
@ 2018-12-11  0:10     ` Alex Williamson
  -1 siblings, 0 replies; 70+ messages in thread
From: Alex Williamson @ 2018-12-11  0:10 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Jose Ricardo Ziviani, Sam Bobroff, Alistair Popple,
	Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

On Fri, 23 Nov 2018 16:53:03 +1100
Alexey Kardashevskiy <aik@ozlabs.ru> wrote:

> VFIO regions already support region capabilities with a limited set of
> fields. However the subdriver might have to report to the userspace
> additional bits.
> 
> This adds an add_capability() hook to vfio_pci_regops.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v3:
> * removed confusing rationale for the patch, the next patch makes
> use of it anyway
> ---
>  drivers/vfio/pci/vfio_pci_private.h | 3 +++
>  drivers/vfio/pci/vfio_pci.c         | 6 ++++++
>  2 files changed, 9 insertions(+)

Acked-by: Alex Williamson <alex.williamson@redhat.com>

> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
> index 86aab05..93c1738 100644
> --- a/drivers/vfio/pci/vfio_pci_private.h
> +++ b/drivers/vfio/pci/vfio_pci_private.h
> @@ -62,6 +62,9 @@ struct vfio_pci_regops {
>  	int	(*mmap)(struct vfio_pci_device *vdev,
>  			struct vfio_pci_region *region,
>  			struct vm_area_struct *vma);
> +	int	(*add_capability)(struct vfio_pci_device *vdev,
> +				  struct vfio_pci_region *region,
> +				  struct vfio_info_cap *caps);
>  };
>  
>  struct vfio_pci_region {
> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> index 4a6f7c0..6cb70cf 100644
> --- a/drivers/vfio/pci/vfio_pci.c
> +++ b/drivers/vfio/pci/vfio_pci.c
> @@ -763,6 +763,12 @@ static long vfio_pci_ioctl(void *device_data,
>  			if (ret)
>  				return ret;
>  
> +			if (vdev->region[i].ops->add_capability) {
> +				ret = vdev->region[i].ops->add_capability(vdev,
> +						&vdev->region[i], &caps);
> +				if (ret)
> +					return ret;
> +			}
>  		}
>  		}
>  




* Re: [PATCH kernel v4 19/19] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver
  2018-12-11  0:08     ` Alex Williamson
@ 2018-12-11  0:57       ` Alexey Kardashevskiy
  -1 siblings, 0 replies; 70+ messages in thread
From: Alexey Kardashevskiy @ 2018-12-11  0:57 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Jose Ricardo Ziviani, Sam Bobroff, Alistair Popple,
	Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson



On 11/12/2018 11:08, Alex Williamson wrote:
> On Fri, 23 Nov 2018 16:53:04 +1100
> Alexey Kardashevskiy <aik@ozlabs.ru> wrote:
> 
>> POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
>> pluggable PCIe devices but still have PCIe links which are used
>> for config space and MMIO. In addition to that the GPUs have 6 NVLinks
>> which are connected to other GPUs and the POWER9 CPU. POWER9 chips
>> have a special unit on a die called an NPU which is an NVLink2 host bus
>> adapter with p2p connections to 2 to 3 GPUs, 3 or 2 NVLinks to each.
>> These systems also support ATS (address translation services) which is
>> a part of the NVLink2 protocol. Such GPUs also share on-board RAM
>> (16GB or 32GB) to the system via the same NVLink2 so a CPU has
>> cache-coherent access to a GPU RAM.
>>
>> This exports GPU RAM to the userspace as a new VFIO device region. This
>> preregisters the new memory as device memory as it might be used for DMA.
>> This inserts pfns from the fault handler as the GPU memory is not onlined
>> until the vendor driver is loaded and trained the NVLinks so doing this
>> earlier causes low level errors which we fence in the firmware so
>> it does not hurt the host system but still better be avoided.
>>
>> This exports an ATSD (Address Translation Shootdown) register of NPU which
>> allows TLB invalidations inside GPU for an operating system. The register
>> conveniently occupies a single 64k page. It is also presented to
>> the userspace as a new VFIO device region.
>>
>> In order to provide the userspace with the information about GPU-to-NVLink
>> connections, this exports an additional capability called "tgt"
>> (which is an abbreviated host system bus address). The "tgt" property
>> tells the GPU its own system address and allows the guest driver to
>> conglomerate the routing information so each GPU knows how to get directly
>> to the other GPUs.
>>
>> For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
>> know LPID (a logical partition ID or a KVM guest hardware ID in other
>> words) and PID (a memory context ID of a userspace process, not to be
>> confused with a linux pid). This assigns a GPU to LPID in the NPU and
>> this is why this adds a listener for KVM on an IOMMU group. A PID comes
>> via NVLink from a GPU and NPU uses a PID wildcard to pass it through.
>>
>> This requires coherent memory and ATSD to be available on the host as
>> the GPU vendor only supports configurations with both features enabled
>> and other configurations are known not to work. Because of this and
>> because of the ways the features are advertised to the host system
>> (which is a device tree with very platform specific properties),
>> this requires enabled POWERNV platform.
>>
>> The V100 GPUs do not advertise none of these capabilities via the config
> 
> s/none/any/
> 
>> space and there are more than just one device ID so this relies on
>> the platform to tell whether these GPUs have special abilities such as
>> NVLinks.
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> ---
>> Changes:
>> v4:
>> * added nvlink-speed to the NPU bridge capability as this turned out to
>> be not a constant value
>> * instead of looking at the exact device ID (which also changes from system
>> to system), now this (indirectly) looks at the device tree to know
>> if GPU and NPU support NVLink
>>
>> v3:
>> * reworded the commit log about tgt
>> * added tracepoints (do we want them enabled for entire vfio-pci?)
>> * added code comments
>> * added write|mmap flags to the new regions
>> * auto enabled VFIO_PCI_NVLINK2 config option
>> * added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
>> references; there are required by the NVIDIA driver
>> * keep notifier registered only for short time
>> ---
>>  drivers/vfio/pci/Makefile           |   1 +
>>  drivers/vfio/pci/trace.h            | 102 +++++++
>>  drivers/vfio/pci/vfio_pci_private.h |   2 +
>>  include/uapi/linux/vfio.h           |  27 ++
>>  drivers/vfio/pci/vfio_pci.c         |  37 ++-
>>  drivers/vfio/pci/vfio_pci_nvlink2.c | 448 ++++++++++++++++++++++++++++
>>  drivers/vfio/pci/Kconfig            |   6 +
>>  7 files changed, 621 insertions(+), 2 deletions(-)
>>  create mode 100644 drivers/vfio/pci/trace.h
>>  create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c
>>
>> diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
>> index 76d8ec0..9662c06 100644
>> --- a/drivers/vfio/pci/Makefile
>> +++ b/drivers/vfio/pci/Makefile
>> @@ -1,5 +1,6 @@
>>  
>>  vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
>>  vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
>> +vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
>>  
>>  obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
> ...
>> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
>> index 93c1738..7639241 100644
>> --- a/drivers/vfio/pci/vfio_pci_private.h
>> +++ b/drivers/vfio/pci/vfio_pci_private.h
>> @@ -163,4 +163,6 @@ static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev)
>>  	return -ENODEV;
>>  }
>>  #endif
>> +extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
>> +extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
>>  #endif /* VFIO_PCI_PRIVATE_H */
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 8131028..547e71e 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -353,6 +353,20 @@ struct vfio_region_gfx_edid {
>>  #define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
>>  };
>>  
>> +/* 10de vendor sub-type
>> + *
>> + * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
>> + */
> 
> nit, prefer the comment style below leaving the first line of a
> multi-line comment empty, coding style.
> 
>> +#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
>> +
>> +/*
>> + * 1014 vendor sub-type
>> + *
>> + * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
>> + * to do TLB invalidation on a GPU.
>> + */
>> +#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
>> +
>>  /*
>>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
>>   * which allows direct access to non-MSIX registers which happened to be within
>> @@ -363,6 +377,19 @@ struct vfio_region_gfx_edid {
>>   */
>>  #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
>>  
>> +/*
>> + * Capability with compressed real address (aka SSA - small system address)
>> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
>> + */
>> +#define VFIO_REGION_INFO_CAP_NPU2		4
>> +
>> +struct vfio_region_info_cap_npu2 {
>> +	struct vfio_info_cap_header header;
>> +	__u64 tgt;
>> +	__u32 link_speed;
>> +	__u32 __pad;
>> +};
>> +
>>  /**
>>   * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
>>   *				    struct vfio_irq_info)
>> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
>> index 6cb70cf..b8a53f9 100644
>> --- a/drivers/vfio/pci/vfio_pci.c
>> +++ b/drivers/vfio/pci/vfio_pci.c
>> @@ -224,6 +224,16 @@ static bool vfio_pci_nointx(struct pci_dev *pdev)
>>  	return false;
>>  }
>>  
>> +int __weak vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
>> +{
>> +	return -ENODEV;
>> +}
>> +
>> +int __weak vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
>> +{
>> +	return -ENODEV;
>> +}
>> +
> 
> Why not static inlines in vfio_pci_private.h like we do for igd hooks?
> 
> ...


Because the earlier review suggested to do "weak definition" and I took
it literally :) I'll make it inline.



>>  static void vfio_pci_disable(struct vfio_pci_device *vdev)
>> diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c
>> new file mode 100644
>> index 0000000..e8e06c3
>> --- /dev/null
>> +++ b/drivers/vfio/pci/vfio_pci_nvlink2.c
> ...
>> +static int vfio_pci_nvgpu_mmap(struct vfio_pci_device *vdev,
>> +		struct vfio_pci_region *region, struct vm_area_struct *vma)
>> +{
>> +	long ret;
>> +	struct vfio_pci_nvgpu_data *data = region->data;
>> +
>> +	if (data->useraddr)
>> +		return -EPERM;
>> +
>> +	if (vma->vm_end - vma->vm_start > data->size)
>> +		return -EINVAL;
>> +
>> +	vma->vm_private_data = region;
>> +	vma->vm_flags |= VM_PFNMAP;
>> +	vma->vm_ops = &vfio_pci_nvgpu_mmap_vmops;
>> +
>> +	/*
>> +	 * Calling mm_iommu_newdev() here once as the region is not
>> +	 * registered yet and therefore right initialization will happen now.
>> +	 * Other places will use mm_iommu_find() which returns
>> +	 * registered @mem and does not go gup().
>> +	 */
>> +	data->useraddr = vma->vm_start;
>> +	data->mm = current->mm;
>> +
>> +	atomic_inc(&data->mm->mm_count);
>> +	ret = mm_iommu_newdev(data->mm, data->useraddr,
>> +			(vma->vm_end - vma->vm_start) >> PAGE_SHIFT,
>> +			data->gpu_hpa, &data->mem);
>> +
>> +	trace_vfio_pci_nvgpu_mmap(vdev->pdev, data->gpu_hpa, data->useraddr,
>> +			vma->vm_end - vma->vm_start, ret);
>> +
>> +	return ret;
> 
> It's unfortunate that all these mm_iommu_foo function return long, this
> function returns int, which made me go down the rabbit hole to see what
> mm_iommu_newdev() and therefore mmio_iommu_do_alloc() can return.  Can
> you do a translation somewhere so this doesn't look like a possible
> overflow?  Thanks,


This is not a new thing - gcc produces less assembly for ppc64 if long
is used and this is why I stick to longs. So I have 2 options here:
change all mm_iommu_xxxx to return int (I'd rather not) or change the
@ret type here from long to int, will the latter be ok?



-- 
Alexey



* Re: [PATCH kernel v4 19/19] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver
  2018-12-11  0:57       ` Alexey Kardashevskiy
@ 2018-12-11  1:27         ` Alex Williamson
  -1 siblings, 0 replies; 70+ messages in thread
From: Alex Williamson @ 2018-12-11  1:27 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Jose Ricardo Ziviani, Sam Bobroff, Alistair Popple,
	Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

On Tue, 11 Dec 2018 11:57:20 +1100
Alexey Kardashevskiy <aik@ozlabs.ru> wrote:

> On 11/12/2018 11:08, Alex Williamson wrote:
> > On Fri, 23 Nov 2018 16:53:04 +1100
> > Alexey Kardashevskiy <aik@ozlabs.ru> wrote:
> >   
> >> POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
> >> pluggable PCIe devices but still have PCIe links which are used
> >> for config space and MMIO. In addition to that the GPUs have 6 NVLinks
> >> which are connected to other GPUs and the POWER9 CPU. POWER9 chips
> >> have a special unit on a die called an NPU which is an NVLink2 host bus
> >> adapter with p2p connections to 2 to 3 GPUs, 3 or 2 NVLinks to each.
> >> These systems also support ATS (address translation services) which is
> >> a part of the NVLink2 protocol. Such GPUs also share on-board RAM
> >> (16GB or 32GB) to the system via the same NVLink2 so a CPU has
> >> cache-coherent access to a GPU RAM.
> >>
> >> This exports GPU RAM to the userspace as a new VFIO device region. This
> >> preregisters the new memory as device memory as it might be used for DMA.
> >> This inserts pfns from the fault handler as the GPU memory is not onlined
> >> until the vendor driver is loaded and trained the NVLinks so doing this
> >> earlier causes low level errors which we fence in the firmware so
> >> it does not hurt the host system but still better be avoided.
> >>
> >> This exports an ATSD (Address Translation Shootdown) register of NPU which
> >> allows TLB invalidations inside GPU for an operating system. The register
> >> conveniently occupies a single 64k page. It is also presented to
> >> the userspace as a new VFIO device region.
> >>
> >> In order to provide the userspace with the information about GPU-to-NVLink
> >> connections, this exports an additional capability called "tgt"
> >> (which is an abbreviated host system bus address). The "tgt" property
> >> tells the GPU its own system address and allows the guest driver to
> >> conglomerate the routing information so each GPU knows how to get directly
> >> to the other GPUs.
> >>
* Re: [PATCH kernel v4 19/19] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver
@ 2018-12-11  1:27         ` Alex Williamson
  0 siblings, 0 replies; 70+ messages in thread
From: Alex Williamson @ 2018-12-11  1:27 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Jose Ricardo Ziviani, Sam Bobroff, Alistair Popple,
	Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

On Tue, 11 Dec 2018 11:57:20 +1100
Alexey Kardashevskiy <aik@ozlabs.ru> wrote:

> On 11/12/2018 11:08, Alex Williamson wrote:
> > On Fri, 23 Nov 2018 16:53:04 +1100
> > Alexey Kardashevskiy <aik@ozlabs.ru> wrote:
> >   
> >> POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
> >> pluggable PCIe devices but still have PCIe links which are used
> >> for config space and MMIO. In addition to that the GPUs have 6 NVLinks
> >> which are connected to other GPUs and the POWER9 CPU. POWER9 chips
> >> have a special unit on a die called an NPU which is an NVLink2 host bus
> >> adapter with p2p connections to 2 to 3 GPUs, 3 or 2 NVLinks to each.
> >> These systems also support ATS (address translation services) which is
> >> a part of the NVLink2 protocol. Such GPUs also share on-board RAM
> >> (16GB or 32GB) to the system via the same NVLink2 so a CPU has
> >> cache-coherent access to a GPU RAM.
> >>
> >> This exports GPU RAM to the userspace as a new VFIO device region. This
> >> preregisters the new memory as device memory as it might be used for DMA.
> >> This inserts pfns from the fault handler, as the GPU memory is not onlined
> >> until the vendor driver is loaded and has trained the NVLinks; doing this
> >> earlier causes low-level errors, which we fence in the firmware so that
> >> it does not hurt the host system, but it is still better avoided.
> >>
> >> This exports an ATSD (Address Translation Shootdown) register of NPU which
> >> allows TLB invalidations inside GPU for an operating system. The register
> >> conveniently occupies a single 64k page. It is also presented to
> >> the userspace as a new VFIO device region.
> >>
> >> In order to provide the userspace with the information about GPU-to-NVLink
> >> connections, this exports an additional capability called "tgt"
> >> (which is an abbreviated host system bus address). The "tgt" property
> >> tells the GPU its own system address and allows the guest driver to
> >> conglomerate the routing information so each GPU knows how to get directly
> >> to the other GPUs.
> >>
> >> For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
> >> know LPID (a logical partition ID or a KVM guest hardware ID in other
> >> words) and PID (a memory context ID of a userspace process, not to be
> >> confused with a linux pid). This assigns a GPU to LPID in the NPU and
> >> this is why this adds a listener for KVM on an IOMMU group. A PID comes
> >> via NVLink from a GPU and NPU uses a PID wildcard to pass it through.
> >>
> >> This requires coherent memory and ATSD to be available on the host as
> >> the GPU vendor only supports configurations with both features enabled
> >> and other configurations are known not to work. Because of this and
> >> because of the ways the features are advertised to the host system
> >> (which is a device tree with very platform specific properties),
> >> this requires an enabled POWERNV platform.
> >>
> >> The V100 GPUs do not advertise none of these capabilities via the config  
> > 
> > s/none/any/
> >   
> >> space and there are more than just one device ID so this relies on
> >> the platform to tell whether these GPUs have special abilities such as
> >> NVLinks.
> >>
> >> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >> ---
> >> Changes:
> >> v4:
> >> * added nvlink-speed to the NPU bridge capability as this turned out to
> >> be not a constant value
> >> * instead of looking at the exact device ID (which also changes from system
> >> to system), now this (indirectly) looks at the device tree to know
> >> if GPU and NPU support NVLink
> >>
> >> v3:
> >> * reworded the commit log about tgt
> >> * added tracepoints (do we want them enabled for entire vfio-pci?)
> >> * added code comments
> >> * added write|mmap flags to the new regions
> >> * auto enabled VFIO_PCI_NVLINK2 config option
> >> * added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
> >> references; these are required by the NVIDIA driver
> >> * keep notifier registered only for short time
> >> ---
> >>  drivers/vfio/pci/Makefile           |   1 +
> >>  drivers/vfio/pci/trace.h            | 102 +++++++
> >>  drivers/vfio/pci/vfio_pci_private.h |   2 +
> >>  include/uapi/linux/vfio.h           |  27 ++
> >>  drivers/vfio/pci/vfio_pci.c         |  37 ++-
> >>  drivers/vfio/pci/vfio_pci_nvlink2.c | 448 ++++++++++++++++++++++++++++
> >>  drivers/vfio/pci/Kconfig            |   6 +
> >>  7 files changed, 621 insertions(+), 2 deletions(-)
> >>  create mode 100644 drivers/vfio/pci/trace.h
> >>  create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c
> >>
> >> diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
> >> index 76d8ec0..9662c06 100644
> >> --- a/drivers/vfio/pci/Makefile
> >> +++ b/drivers/vfio/pci/Makefile
> >> @@ -1,5 +1,6 @@
> >>  
> >>  vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
> >>  vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
> >> +vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
> >>  
> >>  obj-$(CONFIG_VFIO_PCI) += vfio-pci.o  
> > ...  
> >> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
> >> index 93c1738..7639241 100644
> >> --- a/drivers/vfio/pci/vfio_pci_private.h
> >> +++ b/drivers/vfio/pci/vfio_pci_private.h
> >> @@ -163,4 +163,6 @@ static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev)
> >>  	return -ENODEV;
> >>  }
> >>  #endif
> >> +extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
> >> +extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
> >>  #endif /* VFIO_PCI_PRIVATE_H */
> >> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> >> index 8131028..547e71e 100644
> >> --- a/include/uapi/linux/vfio.h
> >> +++ b/include/uapi/linux/vfio.h
> >> @@ -353,6 +353,20 @@ struct vfio_region_gfx_edid {
> >>  #define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
> >>  };
> >>  
> >> +/* 10de vendor sub-type
> >> + *
> >> + * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
> >> + */  
> > 
> > nit, prefer the comment style below leaving the first line of a
> > multi-line comment empty, coding style.
> >   
> >> +#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
> >> +
> >> +/*
> >> + * 1014 vendor sub-type
> >> + *
> >> + * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
> >> + * to do TLB invalidation on a GPU.
> >> + */
> >> +#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
> >> +
> >>  /*
> >>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> >>   * which allows direct access to non-MSIX registers which happened to be within
> >> @@ -363,6 +377,19 @@ struct vfio_region_gfx_edid {
> >>   */
> >>  #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
> >>  
> >> +/*
> >> + * Capability with compressed real address (aka SSA - small system address)
> >> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
> >> + */
> >> +#define VFIO_REGION_INFO_CAP_NPU2		4
> >> +
> >> +struct vfio_region_info_cap_npu2 {
> >> +	struct vfio_info_cap_header header;
> >> +	__u64 tgt;
> >> +	__u32 link_speed;
> >> +	__u32 __pad;
> >> +};
> >> +
> >>  /**
> >>   * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
> >>   *				    struct vfio_irq_info)
> >> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> >> index 6cb70cf..b8a53f9 100644
> >> --- a/drivers/vfio/pci/vfio_pci.c
> >> +++ b/drivers/vfio/pci/vfio_pci.c
> >> @@ -224,6 +224,16 @@ static bool vfio_pci_nointx(struct pci_dev *pdev)
> >>  	return false;
> >>  }
> >>  
> >> +int __weak vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
> >> +{
> >> +	return -ENODEV;
> >> +}
> >> +
> >> +int __weak vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
> >> +{
> >> +	return -ENODEV;
> >> +}
> >> +  
> > 
> > Why not static inlines in vfio_pci_private.h like we do for igd hooks?
> > 
> > ...  
> 
> 
> Because the earlier review suggested to do "weak definition" and I took
> it literally :) I'll make it inline.

Oops, that was from me, huh.  Functionally equivalent, but we know
deterministically that we only need this code on ppc, it's not like
some module might provide it externally, and it's more consistent with
igd.  Sorry for the runaround.

> >>  static void vfio_pci_disable(struct vfio_pci_device *vdev)
> >> diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c
> >> new file mode 100644
> >> index 0000000..e8e06c3
> >> --- /dev/null
> >> +++ b/drivers/vfio/pci/vfio_pci_nvlink2.c  
> > ...  
> >> +static int vfio_pci_nvgpu_mmap(struct vfio_pci_device *vdev,
> >> +		struct vfio_pci_region *region, struct vm_area_struct *vma)
> >> +{
> >> +	long ret;
> >> +	struct vfio_pci_nvgpu_data *data = region->data;
> >> +
> >> +	if (data->useraddr)
> >> +		return -EPERM;
> >> +
> >> +	if (vma->vm_end - vma->vm_start > data->size)
> >> +		return -EINVAL;
> >> +
> >> +	vma->vm_private_data = region;
> >> +	vma->vm_flags |= VM_PFNMAP;
> >> +	vma->vm_ops = &vfio_pci_nvgpu_mmap_vmops;
> >> +
> >> +	/*
> >> +	 * Calling mm_iommu_newdev() here once as the region is not
> >> +	 * registered yet and therefore right initialization will happen now.
> >> +	 * Other places will use mm_iommu_find() which returns
> >> +	 * registered @mem and does not go gup().
> >> +	 */
> >> +	data->useraddr = vma->vm_start;
> >> +	data->mm = current->mm;
> >> +
> >> +	atomic_inc(&data->mm->mm_count);
> >> +	ret = mm_iommu_newdev(data->mm, data->useraddr,
> >> +			(vma->vm_end - vma->vm_start) >> PAGE_SHIFT,
> >> +			data->gpu_hpa, &data->mem);
> >> +
> >> +	trace_vfio_pci_nvgpu_mmap(vdev->pdev, data->gpu_hpa, data->useraddr,
> >> +			vma->vm_end - vma->vm_start, ret);
> >> +
> >> +	return ret;  
> > 
> > It's unfortunate that all these mm_iommu_foo function return long, this
> > function returns int, which made me go down the rabbit hole to see what
> > mm_iommu_newdev() and therefore mmio_iommu_do_alloc() can return.  Can
> > you do a translation somewhere so this doesn't look like a possible
> > overflow?  Thanks,  
> 
> 
> This is not a new thing - gcc produces less assembly for ppc64 if long
> is used and this is why I stick to longs. So I have 2 options here:
> change all mm_iommu_xxxx to return int (I'd rather not) or change the
> @ret type here from long to int, will the latter be ok?

I guess I'd do the latter, use int for ret and cast the return of
mm_iommu_newdev().  Thanks,

Alex

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH kernel v4 03/19] powerpc/vfio/iommu/kvm: Do not pin device memory
  2018-11-23  5:52   ` Alexey Kardashevskiy
@ 2018-12-13  3:25     ` Paul Mackerras
  -1 siblings, 0 replies; 70+ messages in thread
From: Paul Mackerras @ 2018-12-13  3:25 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, Jose Ricardo Ziviani, Sam Bobroff,
	Alistair Popple, Daniel Henrique Barboza, linuxppc-dev, kvm-ppc,
	Piotr Jaroszynski, Oliver O'Halloran, Andrew Donnellan,
	Leonardo Augusto Guimarães Garcia, Reza Arbab, David Gibson

On Fri, Nov 23, 2018 at 04:52:48PM +1100, Alexey Kardashevskiy wrote:
> This new memory does not have page structs as it is not plugged to
> the host so gup() will fail anyway.
> 
> This adds 2 helpers:
> - mm_iommu_newdev() to preregister the "memory device" memory so
> the rest of API can still be used;
> - mm_iommu_is_devmem() to know if the physical address is one of these
> new regions which we must avoid unpinning.
> 
> This adds @mm to tce_page_is_contained() and iommu_tce_xchg() to test
> if the memory is device memory to avoid pfn_to_page().
> 
> This adds a check for device memory in mm_iommu_ua_mark_dirty_rm() which
> does delayed pages dirtying.

This mostly looks good, but I have one concern:

> -static bool tce_page_is_contained(struct page *page, unsigned page_shift)
> +static bool tce_page_is_contained(struct mm_struct *mm, unsigned long hpa,
> +		unsigned int page_shift)
>  {
> +	struct page *page;
> +
> +	if (mm_iommu_is_devmem(mm, hpa, page_shift))
> +		return true;
> +
> +	page = pfn_to_page(hpa >> PAGE_SHIFT);

Is it possible for userspace or a guest to cause us to get here with
hpa value that is bogus?  If so what does pfn_to_page do with that
pfn, and do we handle that correctly?

(I realize that if there is a problem here, it's a problem that
already exists in the code without this patch.)

Paul.

^ permalink raw reply	[flat|nested] 70+ messages in thread

end of thread, other threads:[~2018-12-13  3:28 UTC | newest]

Thread overview: 70+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-23  5:52 [PATCH kernel v4 00/19] powerpc/powernv/npu, vfio: NVIDIA V100 + P9 passthrough Alexey Kardashevskiy
2018-11-23  5:52 ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 01/19] powerpc/ioda/npu: Call skiboot's hot reset hook when disabling NPU2 Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-12-05  4:21   ` David Gibson
2018-12-05  4:21     ` David Gibson
2018-11-23  5:52 ` [PATCH kernel v4 02/19] powerpc/mm/iommu/vfio_spapr_tce: Change mm_iommu_get to reference a region Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-12-05  4:25   ` David Gibson
2018-12-05  4:25     ` [PATCH kernel v4 02/19] powerpc/mm/iommu/vfio_spapr_tce: Change mm_iommu_get to reference a region David Gibson
2018-11-23  5:52 ` [PATCH kernel v4 03/19] powerpc/vfio/iommu/kvm: Do not pin device memory Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-12-05  4:35   ` David Gibson
2018-12-05  4:35     ` David Gibson
2018-12-13  3:25   ` Paul Mackerras
2018-12-13  3:25     ` Paul Mackerras
2018-11-23  5:52 ` [PATCH kernel v4 04/19] powerpc/powernv: Move npu struct from pnv_phb to pci_controller Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-12-05  5:14   ` David Gibson
2018-12-05  5:14     ` David Gibson
2018-12-05  5:47     ` Alexey Kardashevskiy
2018-12-05  5:47       ` Alexey Kardashevskiy
2018-12-05  6:17       ` Alexey Kardashevskiy
2018-12-05  6:17         ` Alexey Kardashevskiy
2018-12-05 22:40         ` David Gibson
2018-12-05 22:40           ` David Gibson
2018-12-10  2:50           ` Alexey Kardashevskiy
2018-12-10  2:50             ` Alexey Kardashevskiy
2018-12-10  3:42             ` David Gibson
2018-12-10  3:42               ` David Gibson
2018-11-23  5:52 ` [PATCH kernel v4 05/19] powerpc/powernv/npu: Move OPAL calls away from context manipulation Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 06/19] powerpc/pseries/iommu: Use memory@ nodes in max RAM address calculation Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 07/19] powerpc/pseries/npu: Enable platform support Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 08/19] powerpc/pseries: Remove IOMMU API support for non-LPAR systems Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 09/19] powerpc/powernv/pseries: Rework device adding to IOMMU groups Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 10/19] powerpc/iommu_api: Move IOMMU groups setup to a single place Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 11/19] powerpc/powernv: Reference iommu_table while it is linked to a group Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 12/19] powerpc/powernv: Add purge cache OPAL call Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 13/19] powerpc/powernv/npu: Move single TVE handling to NPU PE Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:52 ` [PATCH kernel v4 14/19] powerpc/powernv/npu: Convert NPU IOMMU helpers to iommu_table_group_ops Alexey Kardashevskiy
2018-11-23  5:52   ` Alexey Kardashevskiy
2018-11-23  5:53 ` [PATCH kernel v4 15/19] powerpc/powernv/npu: Add compound IOMMU groups Alexey Kardashevskiy
2018-11-23  5:53   ` Alexey Kardashevskiy
2018-11-23  5:53 ` [PATCH kernel v4 16/19] powerpc/powernv/npu: Add release_ownership hook Alexey Kardashevskiy
2018-11-23  5:53   ` Alexey Kardashevskiy
2018-11-23  5:53 ` [PATCH kernel v4 17/19] vfio_pci: Allow mapping extra regions Alexey Kardashevskiy
2018-11-23  5:53   ` Alexey Kardashevskiy
2018-12-11  0:09   ` Alex Williamson
2018-12-11  0:09     ` Alex Williamson
2018-11-23  5:53 ` [PATCH kernel v4 18/19] vfio_pci: Allow regions to add own capabilities Alexey Kardashevskiy
2018-11-23  5:53   ` Alexey Kardashevskiy
2018-12-11  0:10   ` Alex Williamson
2018-12-11  0:10     ` Alex Williamson
2018-11-23  5:53 ` [PATCH kernel v4 19/19] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver Alexey Kardashevskiy
2018-11-23  5:53   ` Alexey Kardashevskiy
2018-12-11  0:08   ` Alex Williamson
2018-12-11  0:08     ` Alex Williamson
2018-12-11  0:57     ` Alexey Kardashevskiy
2018-12-11  0:57       ` Alexey Kardashevskiy
2018-12-11  1:27       ` Alex Williamson
2018-12-11  1:27         ` Alex Williamson
