linux-pci.vger.kernel.org archive mirror
* [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks
@ 2020-04-16 16:57 Max Gurtovoy
  2020-04-16 16:57 ` [PATCH 2/2] powerpc/powernv: Enable and setup PCI P2P Max Gurtovoy
  2020-04-17  6:55 ` [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks Christoph Hellwig
  0 siblings, 2 replies; 8+ messages in thread
From: Max Gurtovoy @ 2020-04-16 16:57 UTC (permalink / raw)
  To: linux-pci, hch; +Cc: fbarrat, clsoto, idanw, maxg, aneela, shlomin

Define the map_resource/unmap_resource callbacks for the dma_iommu_ops
used by several powerpc platforms. The map_resource callback is called
when trying to map an mmio resource through the dma_map_resource()
driver API.

For now, the callback returns an invalid address for devices using
translations, but will "direct" map the resource when in bypass
mode. Previous behavior for dma_map_resource() was to always return an
invalid address.

We also call an optional platform-specific controller op in
case some setup is needed for the platform.
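
As a rough illustration, the dispatch described above boils down to the
following self-contained sketch. Everything here (the ops struct, the
stubbed direct-map helper, the wrapper) is a simplified stand-in for the
kernel code in the patch below, kept only to show the bypass-only flow:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define DMA_MAPPING_ERROR ((uint64_t)-1)

/* Hypothetical stand-ins for the kernel types used by the patch. */
typedef uint64_t dma_addr_t;
typedef uint64_t phys_addr_t;

struct pci_controller_ops {
	/* Returns 0 on success, non-zero if the platform rejects the mapping. */
	int (*dma_map_resource)(phys_addr_t phys_addr, size_t size);
};

/* Mocked direct mapping: identity map, as in bypass mode. */
static dma_addr_t dma_direct_map_resource(phys_addr_t phys_addr)
{
	return phys_addr;
}

/*
 * Core of the logic added by the patch: only devices in bypass mode may
 * map an mmio resource, and an optional platform hook can veto it.
 */
static dma_addr_t map_resource(int bypass, struct pci_controller_ops *ops,
			       phys_addr_t phys_addr, size_t size)
{
	if (!bypass)
		return DMA_MAPPING_ERROR;
	if (ops->dma_map_resource && ops->dma_map_resource(phys_addr, size))
		return DMA_MAPPING_ERROR;
	return dma_direct_map_resource(phys_addr);
}

/* A platform hook that always refuses, to exercise the veto path. */
static int reject_all(phys_addr_t phys_addr, size_t size)
{
	(void)phys_addr; (void)size;
	return -1;
}
```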

Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 arch/powerpc/include/asm/pci-bridge.h |  6 ++++++
 arch/powerpc/kernel/dma-iommu.c       | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
index 69f4cb3..d14b4ee 100644
--- a/arch/powerpc/include/asm/pci-bridge.h
+++ b/arch/powerpc/include/asm/pci-bridge.h
@@ -44,6 +44,12 @@ struct pci_controller_ops {
 #endif
 
 	void		(*shutdown)(struct pci_controller *hose);
+	int		(*dma_map_resource)(struct pci_dev *pdev,
+					    phys_addr_t phys_addr, size_t size,
+					    enum dma_data_direction dir);
+	void		(*dma_unmap_resource)(struct pci_dev *pdev,
+					      dma_addr_t addr, size_t size,
+					      enum dma_data_direction dir);
 };
 
 /*
diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
index e486d1d..dc53acc 100644
--- a/arch/powerpc/kernel/dma-iommu.c
+++ b/arch/powerpc/kernel/dma-iommu.c
@@ -108,6 +108,39 @@ static void dma_iommu_unmap_sg(struct device *dev, struct scatterlist *sglist,
 		dma_direct_unmap_sg(dev, sglist, nelems, direction, attrs);
 }
 
+static dma_addr_t dma_iommu_map_resource(struct device *dev,
+					 phys_addr_t phys_addr, size_t size,
+					 enum dma_data_direction dir,
+					 unsigned long attrs)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct pci_controller *phb = pci_bus_to_host(pdev->bus);
+
+	if (dma_iommu_map_bypass(dev, attrs)) {
+		if (phb->controller_ops.dma_map_resource &&
+		    phb->controller_ops.dma_map_resource(pdev, phys_addr, size,
+							 dir))
+			return DMA_MAPPING_ERROR;
+		return dma_direct_map_resource(dev, phys_addr, size,
+					       dir, attrs);
+	}
+	return DMA_MAPPING_ERROR;
+}
+
+static void dma_iommu_unmap_resource(struct device *dev, dma_addr_t dma_handle,
+				     size_t size, enum dma_data_direction dir,
+				     unsigned long attrs)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct pci_controller *phb = pci_bus_to_host(pdev->bus);
+
+	if (dma_iommu_map_bypass(dev, attrs)) {
+		if (phb->controller_ops.dma_unmap_resource)
+			phb->controller_ops.dma_unmap_resource(pdev, dma_handle,
+							size, dir);
+	}
+}
+
 static bool dma_iommu_bypass_supported(struct device *dev, u64 mask)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
@@ -199,6 +232,8 @@ extern void dma_iommu_sync_sg_for_device(struct device *dev,
 	.free			= dma_iommu_free_coherent,
 	.map_sg			= dma_iommu_map_sg,
 	.unmap_sg		= dma_iommu_unmap_sg,
+	.map_resource		= dma_iommu_map_resource,
+	.unmap_resource		= dma_iommu_unmap_resource,
 	.dma_supported		= dma_iommu_dma_supported,
 	.map_page		= dma_iommu_map_page,
 	.unmap_page		= dma_iommu_unmap_page,
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 2/2] powerpc/powernv: Enable and setup PCI P2P
  2020-04-16 16:57 [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks Max Gurtovoy
@ 2020-04-16 16:57 ` Max Gurtovoy
  2020-04-17  7:02   ` Christoph Hellwig
  2020-04-17  6:55 ` [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks Christoph Hellwig
  1 sibling, 1 reply; 8+ messages in thread
From: Max Gurtovoy @ 2020-04-16 16:57 UTC (permalink / raw)
  To: linux-pci, hch; +Cc: fbarrat, clsoto, idanw, maxg, aneela, shlomin

Implement the generic dma_map_resource callback on the PCI controller
for powernv. This enables PCI P2P on the POWER9 architecture. It allows
catching a cross-PHB mmio mapping, which needs to be set up in
hardware by calling opal. Both the initiator and target PHBs need to be
configured, so we look for which PHB owns the mmio address being mapped.
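
The translation from DMA direction to OPAL P2P descriptor that the
powernv callback performs (see the pci-ioda.c hunk below) can be sketched
on its own. The enum values mirror the OPAL_PCI_P2P_* descriptor bits
used by the patch; the rest is a stand-alone illustration, not kernel
code:

```c
#include <assert.h>
#include <stdint.h>

/* Values mirror the OPAL_PCI_P2P_* descriptor bits used by the patch. */
enum {
	OPAL_PCI_P2P_ENABLE = 1 << 0,
	OPAL_PCI_P2P_LOAD   = 1 << 1,
	OPAL_PCI_P2P_STORE  = 1 << 2,
};

/* Same values as the kernel's enum dma_data_direction. */
enum dma_data_direction {
	DMA_BIDIRECTIONAL = 0,
	DMA_TO_DEVICE = 1,
	DMA_FROM_DEVICE = 2,
	DMA_NONE = 3,
};

/* Build the OPAL descriptor enabling P2P for a given DMA direction. */
static uint64_t p2p_desc(enum dma_data_direction dir)
{
	uint64_t desc = OPAL_PCI_P2P_ENABLE;

	if (dir == DMA_TO_DEVICE)
		desc |= OPAL_PCI_P2P_STORE;
	else if (dir == DMA_FROM_DEVICE)
		desc |= OPAL_PCI_P2P_LOAD;
	else if (dir == DMA_BIDIRECTIONAL)
		desc |= OPAL_PCI_P2P_LOAD | OPAL_PCI_P2P_STORE;
	/* DMA_NONE adds no load/store permission. */
	return desc;
}
```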

Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
[maxg: added CONFIG_PCI_P2PDMA wrappers]
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 arch/powerpc/include/asm/opal.h            |   5 +-
 arch/powerpc/platforms/powernv/opal-call.c |   1 +
 arch/powerpc/platforms/powernv/pci-ioda.c  | 164 +++++++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/pci.h       |   9 ++
 4 files changed, 178 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index 9986ac3..d1b35c4 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -284,7 +284,10 @@ int64_t opal_xive_set_queue_state(uint64_t vp, uint32_t prio,
 				  uint32_t qtoggle,
 				  uint32_t qindex);
 int64_t opal_xive_get_vp_state(uint64_t vp, __be64 *out_w01);
-
+#ifdef CONFIG_PCI_P2PDMA
+int64_t opal_pci_set_p2p(uint64_t phb_init, uint64_t phb_target,
+			 uint64_t desc, uint16_t pe_number);
+#endif
 int64_t opal_imc_counters_init(uint32_t type, uint64_t address,
 							uint64_t cpu_pir);
 int64_t opal_imc_counters_start(uint32_t type, uint64_t cpu_pir);
diff --git a/arch/powerpc/platforms/powernv/opal-call.c b/arch/powerpc/platforms/powernv/opal-call.c
index 5cd0f52..442d5445c 100644
--- a/arch/powerpc/platforms/powernv/opal-call.c
+++ b/arch/powerpc/platforms/powernv/opal-call.c
@@ -273,6 +273,7 @@ int64_t name(int64_t a0, int64_t a1, int64_t a2, int64_t a3,	\
 OPAL_CALL(opal_imc_counters_init,		OPAL_IMC_COUNTERS_INIT);
 OPAL_CALL(opal_imc_counters_start,		OPAL_IMC_COUNTERS_START);
 OPAL_CALL(opal_imc_counters_stop,		OPAL_IMC_COUNTERS_STOP);
+OPAL_CALL(opal_pci_set_p2p,			OPAL_PCI_SET_P2P);
 OPAL_CALL(opal_get_powercap,			OPAL_GET_POWERCAP);
 OPAL_CALL(opal_set_powercap,			OPAL_SET_POWERCAP);
 OPAL_CALL(opal_get_power_shift_ratio,		OPAL_GET_POWER_SHIFT_RATIO);
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 57d3a6a..00a1dfc 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -3706,6 +3706,166 @@ static void pnv_pci_ioda_dma_bus_setup(struct pci_bus *bus)
 	}
 }
 
+#ifdef CONFIG_PCI_P2PDMA
+static DEFINE_MUTEX(p2p_mutex);
+
+static int pnv_pci_ioda_set_p2p(struct pci_dev *initiator,
+				struct pnv_phb *phb_target,
+				u64 desc)
+{
+	struct pci_controller *hose;
+	struct pnv_phb *phb_init;
+	struct pnv_ioda_pe *pe_init;
+	int rc;
+
+	if (!opal_check_token(OPAL_PCI_SET_P2P))
+		return -ENXIO;
+
+	hose = pci_bus_to_host(initiator->bus);
+	phb_init = hose->private_data;
+
+	pe_init = pnv_ioda_get_pe(initiator);
+	if (!pe_init)
+		return -ENODEV;
+
+	/*
+	 * Configuring the initiator's PHB requires to adjust its
+	 * TVE#1 setting. Since the same device can be an initiator
+	 * several times for different target devices, we need to keep
+	 * a reference count to know when we can restore the default
+	 * bypass setting on its TVE#1 when disabling. Opal is not
+	 * tracking PE states, so we add a reference count on the PE
+	 * in linux.
+	 *
+	 * For the target, the configuration is per PHB, so we keep a
+	 * target reference count on the PHB.
+	 */
+	mutex_lock(&p2p_mutex);
+
+	if (desc & OPAL_PCI_P2P_ENABLE) {
+		/* always go to opal to validate the configuration */
+		rc = opal_pci_set_p2p(phb_init->opal_id, phb_target->opal_id,
+				      desc, pe_init->pe_number);
+
+		if (rc != OPAL_SUCCESS) {
+			rc = -EIO;
+			goto out;
+		}
+
+		pe_init->p2p_initiator_count++;
+		phb_target->p2p_target_count++;
+	} else {
+		if (!pe_init->p2p_initiator_count ||
+		    !phb_target->p2p_target_count) {
+			rc = -EINVAL;
+			goto out;
+		}
+
+		if (--pe_init->p2p_initiator_count == 0)
+			pnv_pci_ioda2_set_bypass(pe_init, true);
+
+		if (--phb_target->p2p_target_count == 0) {
+			rc = opal_pci_set_p2p(phb_init->opal_id,
+					      phb_target->opal_id, desc,
+					      pe_init->pe_number);
+			if (rc != OPAL_SUCCESS) {
+				rc = -EIO;
+				goto out;
+			}
+		}
+	}
+	rc = 0;
+out:
+	mutex_unlock(&p2p_mutex);
+	return rc;
+}
+
+static bool pnv_pci_controller_owns_addr(struct pci_controller *hose,
+					 phys_addr_t addr, size_t size)
+{
+	struct resource *r;
+	int i;
+
+	/*
+	 * it seems safe to assume the full range is under the same
+	 * PHB, so we can ignore the size
+	 */
+	for (i = 0; i < 3; i++) {
+		r = &hose->mem_resources[i];
+		if (r->flags && (addr >= r->start) && (addr < r->end))
+			return true;
+	}
+	return false;
+}
+
+/*
+ * find the phb owning a mmio address if not owned locally
+ */
+static struct pnv_phb *pnv_pci_find_owning_phb(struct pci_dev *pdev,
+					       phys_addr_t addr, size_t size)
+{
+	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
+	struct pnv_phb *phb;
+
+	/* fast path */
+	if (pnv_pci_controller_owns_addr(hose, addr, size))
+		return NULL;
+
+	list_for_each_entry(hose, &hose_list, list_node) {
+		phb = hose->private_data;
+		if (phb->type != PNV_PHB_NPU_NVLINK &&
+		    phb->type != PNV_PHB_NPU_OCAPI) {
+			if (pnv_pci_controller_owns_addr(hose, addr, size))
+				return phb;
+		}
+	}
+	return NULL;
+}
+
+static int pnv_pci_dma_map_resource(struct pci_dev *pdev,
+				    phys_addr_t phys_addr, size_t size,
+				    enum dma_data_direction dir)
+{
+	struct pnv_phb *target_phb;
+	int rc;
+	u64 desc;
+
+	target_phb = pnv_pci_find_owning_phb(pdev, phys_addr, size);
+	if (target_phb) {
+		desc = OPAL_PCI_P2P_ENABLE;
+		if (dir == DMA_TO_DEVICE)
+			desc |= OPAL_PCI_P2P_STORE;
+		else if (dir == DMA_FROM_DEVICE)
+			desc |= OPAL_PCI_P2P_LOAD;
+		else if (dir == DMA_BIDIRECTIONAL)
+			desc |= OPAL_PCI_P2P_LOAD | OPAL_PCI_P2P_STORE;
+		rc = pnv_pci_ioda_set_p2p(pdev, target_phb, desc);
+		if (rc) {
+			dev_err(&pdev->dev, "Failed to setup PCI peer-to-peer for address %llx: %d\n",
+				phys_addr, rc);
+			return rc;
+		}
+	}
+	return 0;
+}
+
+static void pnv_pci_dma_unmap_resource(struct pci_dev *pdev,
+				       dma_addr_t addr, size_t size,
+				       enum dma_data_direction dir)
+{
+	struct pnv_phb *target_phb;
+	int rc;
+
+	target_phb = pnv_pci_find_owning_phb(pdev, addr, size);
+	if (target_phb) {
+		rc = pnv_pci_ioda_set_p2p(pdev, target_phb, 0);
+		if (rc)
+			dev_err(&pdev->dev, "Failed to undo PCI peer-to-peer setup for address %llx: %d\n",
+				addr, rc);
+	}
+}
+#endif
+
 static const struct pci_controller_ops pnv_pci_ioda_controller_ops = {
 	.dma_dev_setup		= pnv_pci_ioda_dma_dev_setup,
 	.dma_bus_setup		= pnv_pci_ioda_dma_bus_setup,
@@ -3718,6 +3878,10 @@ static void pnv_pci_ioda_dma_bus_setup(struct pci_bus *bus)
 	.setup_bridge		= pnv_pci_setup_bridge,
 	.reset_secondary_bus	= pnv_pci_reset_secondary_bus,
 	.shutdown		= pnv_pci_ioda_shutdown,
+#ifdef CONFIG_PCI_P2PDMA
+	.dma_map_resource	= pnv_pci_dma_map_resource,
+	.dma_unmap_resource	= pnv_pci_dma_unmap_resource,
+#endif
 };
 
 static const struct pci_controller_ops pnv_npu_ioda_controller_ops = {
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index d3bbdea..5f85d9c 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -79,6 +79,10 @@ struct pnv_ioda_pe {
 	struct pnv_ioda_pe	*master;
 	struct list_head	slaves;
 
+#ifdef CONFIG_PCI_P2PDMA
+	/* PCI peer-to-peer */
+	int			p2p_initiator_count;
+#endif
 	/* Link in list of PE#s */
 	struct list_head	list;
 };
@@ -168,6 +172,11 @@ struct pnv_phb {
 	/* PHB and hub diagnostics */
 	unsigned int		diag_data_size;
 	u8			*diag_data;
+
+#ifdef CONFIG_PCI_P2PDMA
+	/* PCI peer-to-peer */
+	int			p2p_target_count;
+#endif
 };
 
 extern struct pci_ops pnv_pci_ops;
-- 
1.8.3.1



* Re: [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks
  2020-04-16 16:57 [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks Max Gurtovoy
  2020-04-16 16:57 ` [PATCH 2/2] powerpc/powernv: Enable and setup PCI P2P Max Gurtovoy
@ 2020-04-17  6:55 ` Christoph Hellwig
  2020-04-19 10:14   ` Max Gurtovoy
  1 sibling, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2020-04-17  6:55 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: linux-pci, hch, fbarrat, clsoto, idanw, aneela, shlomin

>  		dma_direct_unmap_sg(dev, sglist, nelems, direction, attrs);
>  }
>  
> +static dma_addr_t dma_iommu_map_resource(struct device *dev,
> +					 phys_addr_t phys_addr, size_t size,
> +					 enum dma_data_direction dir,
> +					 unsigned long attrs)
> +{
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +	struct pci_controller *phb = pci_bus_to_host(pdev->bus);
> +
> +	if (dma_iommu_map_bypass(dev, attrs)) {
> +		if (phb->controller_ops.dma_map_resource &&
> +		    phb->controller_ops.dma_map_resource(pdev, phys_addr, size,
> +							 dir))
> +			return DMA_MAPPING_ERROR;
> +		return dma_direct_map_resource(dev, phys_addr, size,
> +					       dir, attrs);
> +	}
> +	return DMA_MAPPING_ERROR;

Just a nitpick, but returning the errors early would be a much more
logical flow.  Also, if the callback just decides whether the mapping is
possible in bypass mode, it should have that in the name:

	struct pci_controller_ops *ops = &phb->controller_ops;

	if (!dma_iommu_map_bypass(dev, attrs) ||
	    !ops->dma_can_direct_map_resource ||
	    !ops->dma_can_direct_map_resource(pdev, phys_addr, size, dir))
		return DMA_MAPPING_ERROR;
	return dma_direct_map_resource(dev, phys_addr, size, dir, attrs);

> +	if (dma_iommu_map_bypass(dev, attrs)) {
> +		if (phb->controller_ops.dma_unmap_resource)
> +			phb->controller_ops.dma_unmap_resource(pdev, dma_handle,
> +							size, dir);
> +	}

This can do a little early return as well, coupled with a WARN_ON_ONCE
as we should never end up in the unmap path for a setup where the map didn't
work.
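
A self-contained sketch of that suggested shape, with WARN_ON_ONCE and
the controller op mocked by hypothetical stand-ins (this is not the
kernel implementation, only the early-return structure being proposed):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;
enum dma_data_direction { DMA_BIDIRECTIONAL, DMA_TO_DEVICE,
			  DMA_FROM_DEVICE, DMA_NONE };

/* Stand-in for the kernel's WARN_ON_ONCE(): records only the first hit. */
static int warn_count;
static int warn_on_once(int cond)
{
	if (cond && warn_count == 0)
		warn_count++;
	return cond;
}

struct pci_controller_ops {
	void (*dma_unmap_resource)(dma_addr_t addr, size_t size,
				   enum dma_data_direction dir);
};

/* Test hook counting how often the platform unmap op is invoked. */
static int unmap_calls;
static void record_unmap(dma_addr_t addr, size_t size,
			 enum dma_data_direction dir)
{
	(void)addr; (void)size; (void)dir;
	unmap_calls++;
}

/*
 * Early-return shape suggested in the review: reaching the unmap path
 * without bypass means the corresponding map could never have succeeded,
 * so warn once and bail out.
 */
static void unmap_resource(int bypass, struct pci_controller_ops *ops,
			   dma_addr_t handle, size_t size,
			   enum dma_data_direction dir)
{
	if (warn_on_once(!bypass))
		return;
	if (ops->dma_unmap_resource)
		ops->dma_unmap_resource(handle, size, dir);
}
```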


* Re: [PATCH 2/2] powerpc/powernv: Enable and setup PCI P2P
  2020-04-16 16:57 ` [PATCH 2/2] powerpc/powernv: Enable and setup PCI P2P Max Gurtovoy
@ 2020-04-17  7:02   ` Christoph Hellwig
  2020-04-17 11:16     ` Max Gurtovoy
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2020-04-17  7:02 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: linux-pci, hch, fbarrat, clsoto, idanw, aneela, shlomin

> +#ifdef CONFIG_PCI_P2PDMA
> +int64_t opal_pci_set_p2p(uint64_t phb_init, uint64_t phb_target,
> +			 uint64_t desc, uint16_t pe_number);
> +#endif

There should be no need for the ifdef here.

> +	/*
> +	 * Configuring the initiator's PHB requires to adjust its
> +	 * TVE#1 setting. Since the same device can be an initiator
> +	 * several times for different target devices, we need to keep
> +	 * a reference count to know when we can restore the default
> +	 * bypass setting on its TVE#1 when disabling. Opal is not
> +	 * tracking PE states, so we add a reference count on the PE
> +	 * in linux.
> +	 *
> +	 * For the target, the configuration is per PHB, so we keep a
> +	 * target reference count on the PHB.
> +	 */

This could be shortened a bit by using up the whole 80 columns available
in source files.

> +	mutex_lock(&p2p_mutex);
> +
> +	if (desc & OPAL_PCI_P2P_ENABLE) {
> +		/* always go to opal to validate the configuration */
> +		rc = opal_pci_set_p2p(phb_init->opal_id, phb_target->opal_id,
> +				      desc, pe_init->pe_number);
> +
> +		if (rc != OPAL_SUCCESS) {
> +			rc = -EIO;
> +			goto out;
> +		}
> +
> +		pe_init->p2p_initiator_count++;
> +		phb_target->p2p_target_count++;
> +	} else {
> +		if (!pe_init->p2p_initiator_count ||
> +		    !phb_target->p2p_target_count) {
> +			rc = -EINVAL;
> +			goto out;
> +		}
> +
> +		if (--pe_init->p2p_initiator_count == 0)
> +			pnv_pci_ioda2_set_bypass(pe_init, true);
> +
> +		if (--phb_target->p2p_target_count == 0) {
> +			rc = opal_pci_set_p2p(phb_init->opal_id,
> +					      phb_target->opal_id, desc,
> +					      pe_init->pe_number);
> +			if (rc != OPAL_SUCCESS) {
> +				rc = -EIO;
> +				goto out;
> +			}
> +		}
> +	}
> +	rc = 0;
> +out:
> +	mutex_unlock(&p2p_mutex);
> +	return rc;
> +}

The enable and disable paths share almost no code; why not split it into
two functions?

> +static bool pnv_pci_controller_owns_addr(struct pci_controller *hose,
> +					 phys_addr_t addr, size_t size)
> +{
> +	struct resource *r;
> +	int i;
> +
> +	/*
> +	 * it seems safe to assume the full range is under the same
> +	 * PHB, so we can ignore the size

Capitalize the first character in a multi-line comment, and use up the
whole 80 chars.

> +	 */
> +	for (i = 0; i < 3; i++) {

Replace the magic 3 with ARRAY_SIZE(hose->mem_resources) ?


> +		r = &hose->mem_resources[i];

Move the r declaration here and initialize it on the same line.

> +		if (r->flags && (addr >= r->start) && (addr < r->end))

No need for the inner braces.

> +			return true;
> +	}
> +	return false;
> +}
> +
> +/*
> + * find the phb owning a mmio address if not owned locally
> + */
> +static struct pnv_phb *pnv_pci_find_owning_phb(struct pci_dev *pdev,
> +					       phys_addr_t addr, size_t size)
> +{
> +	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
> +	struct pnv_phb *phb;
> +
> +	/* fast path */
> +	if (pnv_pci_controller_owns_addr(hose, addr, size))
> +		return NULL;

Maybe open code the pci_bus_to_host(pdev->bus) call here, as we just
overwrite hose in the list iteration?

> +
> +	list_for_each_entry(hose, &hose_list, list_node) {
> +		phb = hose->private_data;

Move the phb declaration here and initialize it on the same line.

> +		if (phb->type != PNV_PHB_NPU_NVLINK &&
> +		    phb->type != PNV_PHB_NPU_OCAPI) {
> +			if (pnv_pci_controller_owns_addr(hose, addr, size))
> +				return phb;
> +		}
> +	}


> +	return NULL;
> +}
> +
> +static int pnv_pci_dma_map_resource(struct pci_dev *pdev,
> +				    phys_addr_t phys_addr, size_t size,
> +				    enum dma_data_direction dir)
> +{
> +	struct pnv_phb *target_phb;
> +	int rc;
> +	u64 desc;
> +
> +	target_phb = pnv_pci_find_owning_phb(pdev, phys_addr, size);
> +	if (target_phb) {

Return early here for the !target_phb case?

> +		desc = OPAL_PCI_P2P_ENABLE;
> +		if (dir == DMA_TO_DEVICE)
> +			desc |= OPAL_PCI_P2P_STORE;
> +		else if (dir == DMA_FROM_DEVICE)
> +			desc |= OPAL_PCI_P2P_LOAD;
> +		else if (dir == DMA_BIDIRECTIONAL)
> +			desc |= OPAL_PCI_P2P_LOAD | OPAL_PCI_P2P_STORE;

Seems like this could be split into a little helper.

> +		rc = pnv_pci_ioda_set_p2p(pdev, target_phb, desc);
> +		if (rc) {
> +			dev_err(&pdev->dev, "Failed to setup PCI peer-to-peer for address %llx: %d\n",
> +				phys_addr, rc);
> +			return rc;
> +		}

No need for this printk, the callers should already deal with mapping
failures.



* Re: [PATCH 2/2] powerpc/powernv: Enable and setup PCI P2P
  2020-04-17  7:02   ` Christoph Hellwig
@ 2020-04-17 11:16     ` Max Gurtovoy
  2020-04-17 14:04       ` Oliver O'Halloran
  0 siblings, 1 reply; 8+ messages in thread
From: Max Gurtovoy @ 2020-04-17 11:16 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-pci, fbarrat, clsoto, idanw, aneela, shlomin


On 4/17/2020 10:02 AM, Christoph Hellwig wrote:
>> +#ifdef CONFIG_PCI_P2PDMA
>> +int64_t opal_pci_set_p2p(uint64_t phb_init, uint64_t phb_target,
>> +			 uint64_t desc, uint16_t pe_number);
>> +#endif
> There should be no need for the ifdef here.
>
>> +	/*
>> +	 * Configuring the initiator's PHB requires to adjust its
>> +	 * TVE#1 setting. Since the same device can be an initiator
>> +	 * several times for different target devices, we need to keep
>> +	 * a reference count to know when we can restore the default
>> +	 * bypass setting on its TVE#1 when disabling. Opal is not
>> +	 * tracking PE states, so we add a reference count on the PE
>> +	 * in linux.
>> +	 *
>> +	 * For the target, the configuration is per PHB, so we keep a
>> +	 * target reference count on the PHB.
>> +	 */
> This could be shortened a bit by using up the whole 80 columns available
> in source files.
>
>> +	mutex_lock(&p2p_mutex);
>> +
>> +	if (desc & OPAL_PCI_P2P_ENABLE) {
>> +		/* always go to opal to validate the configuration */
>> +		rc = opal_pci_set_p2p(phb_init->opal_id, phb_target->opal_id,
>> +				      desc, pe_init->pe_number);
>> +
>> +		if (rc != OPAL_SUCCESS) {
>> +			rc = -EIO;
>> +			goto out;
>> +		}
>> +
>> +		pe_init->p2p_initiator_count++;
>> +		phb_target->p2p_target_count++;
>> +	} else {
>> +		if (!pe_init->p2p_initiator_count ||
>> +		    !phb_target->p2p_target_count) {
>> +			rc = -EINVAL;
>> +			goto out;
>> +		}
>> +
>> +		if (--pe_init->p2p_initiator_count == 0)
>> +			pnv_pci_ioda2_set_bypass(pe_init, true);
>> +
>> +		if (--phb_target->p2p_target_count == 0) {
>> +			rc = opal_pci_set_p2p(phb_init->opal_id,
>> +					      phb_target->opal_id, desc,
>> +					      pe_init->pe_number);
>> +			if (rc != OPAL_SUCCESS) {
>> +				rc = -EIO;
>> +				goto out;
>> +			}
>> +		}
>> +	}
>> +	rc = 0;
>> +out:
>> +	mutex_unlock(&p2p_mutex);
>> +	return rc;
>> +}
> The enable and disable paths share almost no code; why not split it into
> two functions?

how about also changing the OPAL_PCI_P2P_* defines to an enum?

/* PCI p2p operation descriptors */
enum opal_pci_p2p {

         OPAL_PCI_P2P_DISABLE    = 0,

         OPAL_PCI_P2P_ENABLE     = (1 << 0),
         OPAL_PCI_P2P_LOAD       = (1 << 1),
         OPAL_PCI_P2P_STORE      = (1 << 2),
};

Fred ?


>> +static bool pnv_pci_controller_owns_addr(struct pci_controller *hose,
>> +					 phys_addr_t addr, size_t size)
>> +{
>> +	struct resource *r;
>> +	int i;
>> +
>> +	/*
>> +	 * it seems safe to assume the full range is under the same
>> +	 * PHB, so we can ignore the size
> Capitalize the first character in a multi-line comment, and use up the
> whole 80 chars.
>
>> +	 */
>> +	for (i = 0; i < 3; i++) {
> Replace the magic 3 with ARRAY_SIZE(hose->mem_resources) ?
>
>
>> +		r = &hose->mem_resources[i];
> Move the r declaration here and initialize it on the same line.
>
>> +		if (r->flags && (addr >= r->start) && (addr < r->end))
> No need for the inner braces.
>
>> +			return true;
>> +	}
>> +	return false;
>> +}
>> +
>> +/*
>> + * find the phb owning a mmio address if not owned locally
>> + */
>> +static struct pnv_phb *pnv_pci_find_owning_phb(struct pci_dev *pdev,
>> +					       phys_addr_t addr, size_t size)
>> +{
>> +	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
>> +	struct pnv_phb *phb;
>> +
>> +	/* fast path */
>> +	if (pnv_pci_controller_owns_addr(hose, addr, size))
>> +		return NULL;
> Maybe open code the pci_bus_to_host(pdev->bus) call here, as we just
> overwrite hose in the list iteration?

if this is more readable, then no problem:

if (pnv_pci_controller_owns_addr(pdev->bus->sysdata, addr, size))


>
>> +
>> +	list_for_each_entry(hose, &hose_list, list_node) {
>> +		phb = hose->private_data;
> Move the phb declaration here and initialize it on the same line.
>
>> +		if (phb->type != PNV_PHB_NPU_NVLINK &&
>> +		    phb->type != PNV_PHB_NPU_OCAPI) {
>> +			if (pnv_pci_controller_owns_addr(hose, addr, size))
>> +				return phb;
>> +		}
>> +	}
>
>> +	return NULL;
>> +}
>> +
>> +static int pnv_pci_dma_map_resource(struct pci_dev *pdev,
>> +				    phys_addr_t phys_addr, size_t size,
>> +				    enum dma_data_direction dir)
>> +{
>> +	struct pnv_phb *target_phb;
>> +	int rc;
>> +	u64 desc;
>> +
>> +	target_phb = pnv_pci_find_owning_phb(pdev, phys_addr, size);
>> +	if (target_phb) {
> Return early here for the !target_phb case?
>
>> +		desc = OPAL_PCI_P2P_ENABLE;
>> +		if (dir == DMA_TO_DEVICE)
>> +			desc |= OPAL_PCI_P2P_STORE;
>> +		else if (dir == DMA_FROM_DEVICE)
>> +			desc |= OPAL_PCI_P2P_LOAD;
>> +		else if (dir == DMA_BIDIRECTIONAL)
>> +			desc |= OPAL_PCI_P2P_LOAD | OPAL_PCI_P2P_STORE;
> Seems like this could be split into a little helper.

sure.


>
>> +		rc = pnv_pci_ioda_set_p2p(pdev, target_phb, desc);
>> +		if (rc) {
>> +			dev_err(&pdev->dev, "Failed to setup PCI peer-to-peer for address %llx: %d\n",
>> +				phys_addr, rc);
>> +			return rc;
>> +		}
> No need for this printk, the callers should already deal with mapping
> failures.
>


* Re: [PATCH 2/2] powerpc/powernv: Enable and setup PCI P2P
  2020-04-17 11:16     ` Max Gurtovoy
@ 2020-04-17 14:04       ` Oliver O'Halloran
  2020-04-19 11:46         ` Max Gurtovoy
  0 siblings, 1 reply; 8+ messages in thread
From: Oliver O'Halloran @ 2020-04-17 14:04 UTC (permalink / raw)
  To: Max Gurtovoy
  Cc: Christoph Hellwig, linux-pci, Frederic Barrat, Carol L Soto,
	idanw, aneela, shlomin

On Fri, Apr 17, 2020 at 9:17 PM Max Gurtovoy <maxg@mellanox.com> wrote:
>
> > The enable and disable paths share almost no code; why not split it into
> > two functions?
>
> how about also changing the defines OPAL_PCI_P2P_* to an enum ?
>
> /* PCI p2p operation descriptors */
> enum opal_pci_p2p {
>
>          OPAL_PCI_P2P_DISABLE    = 0,
>
>          OPAL_PCI_P2P_ENABLE     = (1 << 0),
>          OPAL_PCI_P2P_LOAD       = (1 << 1),
>          OPAL_PCI_P2P_STORE      = (1 << 2),
> };
>
> Fred ?

I'd rather you didn't. We try to keep the definitions in opal-api.h
the same as skiboot's opal-api.h, since the skiboot version is
canonical.

Also, generally patches to anything PowerNV related go through the
powerpc tree rather than the pci tree even if they're PCI related. Can
you make sure you have linuxppc-dev CCed when you post v2?

Oliver


* Re: [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks
  2020-04-17  6:55 ` [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks Christoph Hellwig
@ 2020-04-19 10:14   ` Max Gurtovoy
  0 siblings, 0 replies; 8+ messages in thread
From: Max Gurtovoy @ 2020-04-19 10:14 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-pci, fbarrat, clsoto, idanw, aneela, shlomin


On 4/17/2020 9:55 AM, Christoph Hellwig wrote:
>>   		dma_direct_unmap_sg(dev, sglist, nelems, direction, attrs);
>>   }
>>   
>> +static dma_addr_t dma_iommu_map_resource(struct device *dev,
>> +					 phys_addr_t phys_addr, size_t size,
>> +					 enum dma_data_direction dir,
>> +					 unsigned long attrs)
>> +{
>> +	struct pci_dev *pdev = to_pci_dev(dev);
>> +	struct pci_controller *phb = pci_bus_to_host(pdev->bus);
>> +
>> +	if (dma_iommu_map_bypass(dev, attrs)) {
>> +		if (phb->controller_ops.dma_map_resource &&
>> +		    phb->controller_ops.dma_map_resource(pdev, phys_addr, size,
>> +							 dir))
>> +			return DMA_MAPPING_ERROR;
>> +		return dma_direct_map_resource(dev, phys_addr, size,
>> +					       dir, attrs);
>> +	}
>> +	return DMA_MAPPING_ERROR;
> Just a nitpick, but returning the errors early would be a much more
> logical flow.  Also, if the callback just decides whether the mapping is
> possible in bypass mode, it should have that in the name:
>
> 	struct pci_controller_ops *ops = &phb->controller_ops;
>
> 	if (!dma_iommu_map_bypass(dev, attrs) ||
> 	    !ops->dma_can_direct_map_resource ||
> 	    !ops->dma_can_direct_map_resource(pdev, phys_addr, size, dir))
> 		return DMA_MAPPING_ERROR;
> 	return dma_direct_map_resource(dev, phys_addr, size, dir, attrs);

maybe call it dma_direct_map_resource/dma_direct_unmap_resource for symmetry?

do you mean (small logic change):

         if (!dma_iommu_map_bypass(dev, attrs) ||
             !ops->dma_direct_map_resource ||
             ops->dma_direct_map_resource(pdev, phys_addr, size, dir))
                 return DMA_MAPPING_ERROR;

         return dma_direct_map_resource(dev, phys_addr, size, dir, attrs);


>
>> +	if (dma_iommu_map_bypass(dev, attrs)) {
>> +		if (phb->controller_ops.dma_unmap_resource)
>> +			phb->controller_ops.dma_unmap_resource(pdev, dma_handle,
>> +							size, dir);
>> +	}
> This can do a little early return as well, coupled with a WARN_ON_ONCE
> as we should never end up in the unmap path for a setup where the map didn't
> work.

how about:

	struct pci_controller_ops *ops = &phb->controller_ops;

	if (dma_iommu_map_bypass(dev, attrs) && ops->dma_direct_unmap_resource)
		ops->dma_direct_unmap_resource(pdev, dma_handle, size, dir);




* Re: [PATCH 2/2] powerpc/powernv: Enable and setup PCI P2P
  2020-04-17 14:04       ` Oliver O'Halloran
@ 2020-04-19 11:46         ` Max Gurtovoy
  0 siblings, 0 replies; 8+ messages in thread
From: Max Gurtovoy @ 2020-04-19 11:46 UTC (permalink / raw)
  To: Oliver O'Halloran
  Cc: Christoph Hellwig, linux-pci, Frederic Barrat, Carol L Soto,
	idanw, aneela, shlomin


On 4/17/2020 5:04 PM, Oliver O'Halloran wrote:
> On Fri, Apr 17, 2020 at 9:17 PM Max Gurtovoy <maxg@mellanox.com> wrote:
>>> The enable and disable paths share almost no code; why not split it into
>>> two functions?
>> how about also changing the defines OPAL_PCI_P2P_* to an enum ?
>>
>> /* PCI p2p operation descriptors */
>> enum opal_pci_p2p {
>>
>>           OPAL_PCI_P2P_DISABLE    = 0,
>>
>>           OPAL_PCI_P2P_ENABLE     = (1 << 0),
>>           OPAL_PCI_P2P_LOAD       = (1 << 1),
>>           OPAL_PCI_P2P_STORE      = (1 << 2),
>> };
>>
>> Fred ?
> I'd rather you didn't. We try to keep the definitions in opal-api.h
> the same as the skiboot's opal-api.h since the skiboot version is
> canonical.
>
> Also, generally patches to anything PowerNV related go through the
> powerpc tree rather than the pci tree even if they're PCI related. Can
> you make sure you have linuxppc-dev CCed when you post v2.

Sure, no problem.


>
> Oliver


Thread overview: 8+ messages
-- links below jump to the message on this page --
2020-04-16 16:57 [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks Max Gurtovoy
2020-04-16 16:57 ` [PATCH 2/2] powerpc/powernv: Enable and setup PCI P2P Max Gurtovoy
2020-04-17  7:02   ` Christoph Hellwig
2020-04-17 11:16     ` Max Gurtovoy
2020-04-17 14:04       ` Oliver O'Halloran
2020-04-19 11:46         ` Max Gurtovoy
2020-04-17  6:55 ` [PATCH 1/2] powerpc/dma: Define map/unmap mmio resource callbacks Christoph Hellwig
2020-04-19 10:14   ` Max Gurtovoy
