* [PATCH v4 0/5] DMA mapping changes for SCSI core
From: John Garry @ 2022-06-27 15:25 UTC
  To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, John Garry

As reported in [0], DMA mappings whose size exceeds the IOMMU IOVA caching
limit may see a big performance hit.

This series introduces a new DMA mapping API, dma_opt_mapping_size(), so
that drivers may know this limit when performance is a factor in the
mapping.

Only the SCSI SAS transport code is modified to use this limit. For now I
did not want to touch other hosts, as I am concerned that this change
could cause a performance regression for them.

I also added a patch for libata-scsi as it does not currently honour the
shost max_sectors limit.

[0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
[1] https://lore.kernel.org/linux-iommu/f5b78c9c-312e-70ab-ecbb-f14623a4b6e3@arm.com/
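
As an illustration of the intended usage, here is a minimal sketch of how a
host driver could consume the new API to cap its per-request DMA length.
This is hypothetical driver code, not part of the series;
dma_opt_mapping_size() is the only interface introduced here, and
cap_max_sectors() is an invented helper name:

	static unsigned int cap_max_sectors(struct device *dma_dev,
					    unsigned int max_sectors)
	{
		size_t opt = dma_opt_mapping_size(dma_dev);

		/* 0 (no DMA) and SIZE_MAX (no limit) both mean "don't cap" */
		if (opt && opt != SIZE_MAX)
			max_sectors = min_t(unsigned int, max_sectors,
					    opt >> SECTOR_SHIFT);
		return max_sectors;
	}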

Changes since v3:
- Apply the max DMA optimal limit to SAS hosts only
  Note: Even though "scsi: core: Cap shost max_sectors only once when
  adding" is a subset of a previous patch, I did not carry over the RB tags
- Rebase on v5.19-rc4

Changes since v2:
- Rebase on v5.19-rc1
- Add Damien's tag to 2/4 (thanks)

Changes since v1:
- Relocate scsi_add_host_with_dma() dma_dev check (Reported by Dan)
- Add tags from Damien and Martin (thanks)
  - note: I only added Martin's tag to the SCSI patch

John Garry (5):
  dma-mapping: Add dma_opt_mapping_size()
  dma-iommu: Add iommu_dma_opt_mapping_size()
  scsi: core: Cap shost max_sectors according to DMA mapping limits only
    once
  scsi: scsi_transport_sas: Cap shost max_sectors according to DMA
    optimal mapping limit
  libata-scsi: Cap ata_device->max_sectors according to
    shost->max_sectors

 Documentation/core-api/dma-api.rst |  9 +++++++++
 drivers/ata/libata-scsi.c          |  1 +
 drivers/iommu/dma-iommu.c          |  6 ++++++
 drivers/iommu/iova.c               |  5 +++++
 drivers/scsi/hosts.c               |  5 +++++
 drivers/scsi/scsi_lib.c            |  4 ----
 drivers/scsi/scsi_transport_sas.c  |  6 ++++++
 include/linux/dma-map-ops.h        |  1 +
 include/linux/dma-mapping.h        |  5 +++++
 include/linux/iova.h               |  2 ++
 kernel/dma/mapping.c               | 12 ++++++++++++
 11 files changed, 52 insertions(+), 4 deletions(-)

-- 
2.35.3



* [PATCH v4 1/5] dma-mapping: Add dma_opt_mapping_size()
From: John Garry @ 2022-06-27 15:25 UTC
  To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, John Garry

Streaming DMA mappings involving an IOMMU may be much slower for larger
total mapping sizes. This is because every IOMMU DMA mapping requires an
IOVA to be allocated and freed. IOVA sizes above a certain limit are not
cached, which can have a big impact on DMA mapping performance.

Provide an API for device drivers to know this "optimal" limit, such that
they may try to produce mappings which don't exceed it.

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
---
 Documentation/core-api/dma-api.rst |  9 +++++++++
 include/linux/dma-map-ops.h        |  1 +
 include/linux/dma-mapping.h        |  5 +++++
 kernel/dma/mapping.c               | 12 ++++++++++++
 4 files changed, 27 insertions(+)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 6d6d0edd2d27..b3cd9763d28b 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -204,6 +204,15 @@ Returns the maximum size of a mapping for the device. The size parameter
 of the mapping functions like dma_map_single(), dma_map_page() and
 others should not be larger than the returned value.
 
+::
+
+	size_t
+	dma_opt_mapping_size(struct device *dev);
+
+Returns the maximum optimal size of a mapping for the device. Mapping large
+buffers may take longer so device drivers are advised to limit total DMA
+streaming mappings length to the returned value.
+
 ::
 
 	bool
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 0d5b06b3a4a6..98ceba6fa848 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -69,6 +69,7 @@ struct dma_map_ops {
 	int (*dma_supported)(struct device *dev, u64 mask);
 	u64 (*get_required_mask)(struct device *dev);
 	size_t (*max_mapping_size)(struct device *dev);
+	size_t (*opt_mapping_size)(void);
 	unsigned long (*get_merge_boundary)(struct device *dev);
 };
 
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index dca2b1355bb1..fe3849434b2a 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -144,6 +144,7 @@ int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
 u64 dma_get_required_mask(struct device *dev);
 size_t dma_max_mapping_size(struct device *dev);
+size_t dma_opt_mapping_size(struct device *dev);
 bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
 unsigned long dma_get_merge_boundary(struct device *dev);
 struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
@@ -266,6 +267,10 @@ static inline size_t dma_max_mapping_size(struct device *dev)
 {
 	return 0;
 }
+static inline size_t dma_opt_mapping_size(struct device *dev)
+{
+	return 0;
+}
 static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return false;
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index db7244291b74..1bfe11b1edb6 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -773,6 +773,18 @@ size_t dma_max_mapping_size(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(dma_max_mapping_size);
 
+size_t dma_opt_mapping_size(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+	size_t size = SIZE_MAX;
+
+	if (ops && ops->opt_mapping_size)
+		size = ops->opt_mapping_size();
+
+	return min(dma_max_mapping_size(dev), size);
+}
+EXPORT_SYMBOL_GPL(dma_opt_mapping_size);
+
 bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
-- 
2.35.3
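
For other DMA backends, exposing a limit only requires filling in the new
dma_map_ops callback, mirroring what the next patch does for iommu_dma_ops.
A sketch under that assumption (my_opt_mapping_size and my_dma_ops are
hypothetical names; SZ_64K is just an example value):

	static size_t my_opt_mapping_size(void)
	{
		/* boundary below which this backend's allocations stay fast */
		return SZ_64K;
	}

	static const struct dma_map_ops my_dma_ops = {
		/* ...the usual .map_page/.map_sg etc. callbacks... */
		.opt_mapping_size	= my_opt_mapping_size,
	};

dma_opt_mapping_size() then returns this value clamped to
dma_max_mapping_size() for any device using these ops.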



* [PATCH v4 2/5] dma-iommu: Add iommu_dma_opt_mapping_size()
From: John Garry @ 2022-06-27 15:25 UTC
  To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, John Garry

Add the IOMMU callback for DMA mapping API dma_opt_mapping_size(), which
allows the drivers to know the optimal mapping limit and thus limit the
requested IOVA lengths.

This value is based on the IOVA rcache range limit, as IOVAs allocated
above this limit must always be newly allocated, which may be quite slow.

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
---
 drivers/iommu/dma-iommu.c | 6 ++++++
 drivers/iommu/iova.c      | 5 +++++
 include/linux/iova.h      | 2 ++
 3 files changed, 13 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index f90251572a5d..9e1586447ee8 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1459,6 +1459,11 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
 	return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
 }
 
+static size_t iommu_dma_opt_mapping_size(void)
+{
+	return iova_rcache_range();
+}
+
 static const struct dma_map_ops iommu_dma_ops = {
 	.alloc			= iommu_dma_alloc,
 	.free			= iommu_dma_free,
@@ -1479,6 +1484,7 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.map_resource		= iommu_dma_map_resource,
 	.unmap_resource		= iommu_dma_unmap_resource,
 	.get_merge_boundary	= iommu_dma_get_merge_boundary,
+	.opt_mapping_size	= iommu_dma_opt_mapping_size,
 };
 
 /*
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index db77aa675145..9f00b58d546e 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -26,6 +26,11 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
 static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
 static void free_iova_rcaches(struct iova_domain *iovad);
 
+unsigned long iova_rcache_range(void)
+{
+	return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
+}
+
 static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
 {
 	struct iova_domain *iovad;
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 320a70e40233..c6ba6d95d79c 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -79,6 +79,8 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
 int iova_cache_get(void);
 void iova_cache_put(void);
 
+unsigned long iova_rcache_range(void);
+
 void free_iova(struct iova_domain *iovad, unsigned long pfn);
 void __free_iova(struct iova_domain *iovad, struct iova *iova);
 struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
-- 
2.35.3
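
To put a number on the value returned here: IOVA_RANGE_CACHE_MAX_SIZE is 6
in this kernel, so with 4 KiB pages:

	iova_rcache_range()
		= PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1)
		= 4096 << 5
		= 128 KiB (32 pages)

That is, IOVAs for mappings of up to 128 KiB can be recycled through the
rcaches, while anything larger always falls back to the slower tree-based
allocation.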



* [PATCH v4 3/5] scsi: core: Cap shost max_sectors according to DMA mapping limits only once
From: John Garry @ 2022-06-27 15:25 UTC
  To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, John Garry

The shost->max_sectors value is repeatedly capped according to the host DMA
mapping limit, once for each sdev, in __scsi_init_queue(). This is
unnecessary, so set it only once, when adding the host.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/scsi/hosts.c    | 5 +++++
 drivers/scsi/scsi_lib.c | 4 ----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
index 8352f90d997d..d04bd2c7c9f1 100644
--- a/drivers/scsi/hosts.c
+++ b/drivers/scsi/hosts.c
@@ -236,6 +236,11 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
 
 	shost->dma_dev = dma_dev;
 
+	if (dma_dev->dma_mask) {
+		shost->max_sectors = min_t(unsigned int, shost->max_sectors,
+				dma_max_mapping_size(dma_dev) >> SECTOR_SHIFT);
+	}
+
 	error = scsi_mq_setup_tags(shost);
 	if (error)
 		goto fail;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 6ffc9e4258a8..6ce8acea322a 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1884,10 +1884,6 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
 		blk_queue_max_integrity_segments(q, shost->sg_prot_tablesize);
 	}
 
-	if (dev->dma_mask) {
-		shost->max_sectors = min_t(unsigned int, shost->max_sectors,
-				dma_max_mapping_size(dev) >> SECTOR_SHIFT);
-	}
 	blk_queue_max_hw_sectors(q, shost->max_sectors);
 	blk_queue_segment_boundary(q, shost->dma_boundary);
 	dma_set_seg_boundary(dev, shost->dma_boundary);
-- 
2.35.3
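
For reference, the resulting call flow, as a simplified sketch distilled
from the two hunks above (no new code):

	scsi_add_host_with_dma(shost, dev, dma_dev)
		shost->max_sectors = min_t(unsigned int, shost->max_sectors,
				dma_max_mapping_size(dma_dev) >> SECTOR_SHIFT);	/* once per host */
		scsi_mq_setup_tags(shost)

	__scsi_init_queue(shost, q)		/* once per sdev */
		blk_queue_max_hw_sectors(q, shost->max_sectors)	/* no longer recapped here */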



* [PATCH v4 4/5] scsi: scsi_transport_sas: Cap shost max_sectors according to DMA optimal mapping limit
From: John Garry @ 2022-06-27 15:25 UTC
  To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, John Garry

Streaming DMA mappings may be considerably slower when they go through an
IOMMU and the total mapping length is relatively long. This is because the
IOMMU IOVA code allocates and frees an IOVA for each mapping, which can
noticeably affect performance.

For performance reasons, set the request queue max_sectors from
dma_opt_mapping_size(), which knows this mapping limit.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/scsi/scsi_transport_sas.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
index 12bff64dade6..1b45248748e0 100644
--- a/drivers/scsi/scsi_transport_sas.c
+++ b/drivers/scsi/scsi_transport_sas.c
@@ -225,6 +225,7 @@ static int sas_host_setup(struct transport_container *tc, struct device *dev,
 {
 	struct Scsi_Host *shost = dev_to_shost(dev);
 	struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
+	struct device *dma_dev = shost->dma_dev;
 
 	INIT_LIST_HEAD(&sas_host->rphy_list);
 	mutex_init(&sas_host->lock);
@@ -236,6 +237,11 @@ static int sas_host_setup(struct transport_container *tc, struct device *dev,
 		dev_printk(KERN_ERR, dev, "fail to a bsg device %d\n",
 			   shost->host_no);
 
+	if (dma_dev) {
+		shost->max_sectors = min_t(unsigned int, shost->max_sectors,
+				dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
+	}
+
 	return 0;
 }
 
-- 
2.35.3
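
Combined with the previous patches, the practical effect for a SAS host
behind an IOMMU - assuming 4 KiB pages and the 128 KiB rcache limit worked
out earlier - would be:

	shost->max_sectors
		= min(shost->max_sectors, 128 KiB >> SECTOR_SHIFT)
		= min(shost->max_sectors, 256 sectors)

so requests are built no larger than 128 KiB and each mapping's IOVA can be
satisfied from the rcaches.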



* [PATCH v4 5/5] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
From: John Garry @ 2022-06-27 15:25 UTC
  To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, John Garry

ATA devices (struct ata_device) have a max_sectors field which is
configured internally in libata. This is then used to (re)configure the
associated sdev request queue max_sectors value from how it is earlier set
in __scsi_init_queue(). In __scsi_init_queue() the max_sectors value is set
according to shost limits, which includes host DMA mapping limits.

Cap the ata_device max_sectors according to shost->max_sectors to respect
this shost limit.

Signed-off-by: John Garry <john.garry@huawei.com>
Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
---
 drivers/ata/libata-scsi.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index 86dbb1cdfabd..24a43d540d9f 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -1060,6 +1060,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
 		dev->flags |= ATA_DFLAG_NO_UNLOAD;
 
 	/* configure max sectors */
+	dev->max_sectors = min(dev->max_sectors, sdev->host->max_sectors);
 	blk_queue_max_hw_sectors(q, dev->max_sectors);
 
 	if (dev->class == ATA_DEV_ATAPI) {
-- 
2.35.3
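
To make the one-line change concrete: suppose shost->max_sectors was capped
to 256 by the DMA optimal limit, while libata had computed a larger
per-device value (hypothetically 65535 for an LBA48 disk). The queue setup
then becomes:

	dev->max_sectors = min(65535, 256);	/* = 256 */
	blk_queue_max_hw_sectors(q, dev->max_sectors);

whereas previously blk_queue_max_hw_sectors() would have reinstated the
larger libata value and bypassed the shost limit.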



* Re: [PATCH v4 5/5] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
From: Damien Le Moal @ 2022-06-27 23:24 UTC
  To: John Garry, joro, will, jejb, martin.petersen, hch, m.szyprowski,
	robin.murphy
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi, linuxarm

On 6/28/22 00:25, John Garry wrote:
> ATA devices (struct ata_device) have a max_sectors field which is
> configured internally in libata. This is then used to (re)configure the
> associated sdev request queue max_sectors value from how it is earlier set
> in __scsi_init_queue(). In __scsi_init_queue() the max_sectors value is set
> according to shost limits, which includes host DMA mapping limits.
> 
> Cap the ata_device max_sectors according to shost->max_sectors to respect
> this shost limit.
> 
> Signed-off-by: John Garry <john.garry@huawei.com>
> Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

Nit: please change the patch title to "ata: libata-scsi: Cap ..."



-- 
Damien Le Moal
Western Digital Research


* Re: [PATCH v4 5/5] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
From: John Garry @ 2022-06-28  7:54 UTC
  To: Damien Le Moal, joro, will, jejb, martin.petersen, hch,
	m.szyprowski, robin.murphy
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi, linuxarm

On 28/06/2022 00:24, Damien Le Moal wrote:
> On 6/28/22 00:25, John Garry wrote:
>> ATA devices (struct ata_device) have a max_sectors field which is
>> configured internally in libata. This is then used to (re)configure the
>> associated sdev request queue max_sectors value from how it is earlier set
>> in __scsi_init_queue(). In __scsi_init_queue() the max_sectors value is set
>> according to shost limits, which includes host DMA mapping limits.
>>
>> Cap the ata_device max_sectors according to shost->max_sectors to respect
>> this shost limit.
>>
>> Signed-off-by: John Garry<john.garry@huawei.com>
>> Acked-by: Damien Le Moal<damien.lemoal@opensource.wdc.com>
> Nit: please change the patch title to "ata: libata-scsi: Cap ..."
> 

ok, but it's going to be an even longer title :)

BTW, this patch has no real dependency on the rest of the series, so 
could be taken separately if you prefer.

Thanks,
John


* Re: [PATCH v4 5/5] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
From: Damien Le Moal @ 2022-06-28  9:14 UTC
  To: John Garry, joro, will, jejb, martin.petersen, hch, m.szyprowski,
	robin.murphy
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi, linuxarm

On 6/28/22 16:54, John Garry wrote:
> On 28/06/2022 00:24, Damien Le Moal wrote:
>> On 6/28/22 00:25, John Garry wrote:
>>> ATA devices (struct ata_device) have a max_sectors field which is
>>> configured internally in libata. This is then used to (re)configure the
>>> associated sdev request queue max_sectors value from how it is earlier set
>>> in __scsi_init_queue(). In __scsi_init_queue() the max_sectors value is set
>>> according to shost limits, which includes host DMA mapping limits.
>>>
>>> Cap the ata_device max_sectors according to shost->max_sectors to respect
>>> this shost limit.
>>>
>>> Signed-off-by: John Garry<john.garry@huawei.com>
>>> Acked-by: Damien Le Moal<damien.lemoal@opensource.wdc.com>
>> Nit: please change the patch title to "ata: libata-scsi: Cap ..."
>>
> 
> ok, but it's going to be an even longer title :)
> 
> BTW, this patch has no real dependency on the rest of the series, so 
> could be taken separately if you prefer.

Sure, you can send it separately. Adding it through the scsi tree is fine too.

> 
> Thanks,
> John


-- 
Damien Le Moal
Western Digital Research


* Re: [PATCH v4 2/5] dma-iommu: Add iommu_dma_opt_mapping_size()
From: Robin Murphy @ 2022-06-28 10:56 UTC
  To: John Garry, damien.lemoal, joro, will, jejb, martin.petersen,
	hch, m.szyprowski
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi, linuxarm

On 2022-06-27 16:25, John Garry wrote:
> Add the IOMMU callback for DMA mapping API dma_opt_mapping_size(), which
> allows the drivers to know the optimal mapping limit and thus limit the
> requested IOVA lengths.
> 
> This value is based on the IOVA rcache range limit, as IOVAs allocated
> above this limit must always be newly allocated, which may be quite slow.

Acked-by: Robin Murphy <robin.murphy@arm.com>


* Re: [PATCH v4 1/5] dma-mapping: Add dma_opt_mapping_size()
From: Robin Murphy @ 2022-06-28 11:23 UTC
  To: John Garry, damien.lemoal, joro, will, jejb, martin.petersen,
	hch, m.szyprowski
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi, linuxarm

On 2022-06-27 16:25, John Garry wrote:
> Streaming DMA mappings involving an IOMMU may be much slower for larger
> total mapping sizes. This is because every IOMMU DMA mapping requires an
> IOVA to be allocated and freed. IOVA sizes above a certain limit are not
> cached, which can have a big impact on DMA mapping performance.
> 
> Provide an API for device drivers to know this "optimal" limit, such that
> they may try to produce mappings which don't exceed it.
> 
> Signed-off-by: John Garry <john.garry@huawei.com>
> Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
> ---
>   Documentation/core-api/dma-api.rst |  9 +++++++++
>   include/linux/dma-map-ops.h        |  1 +
>   include/linux/dma-mapping.h        |  5 +++++
>   kernel/dma/mapping.c               | 12 ++++++++++++
>   4 files changed, 27 insertions(+)
> 
> diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
> index 6d6d0edd2d27..b3cd9763d28b 100644
> --- a/Documentation/core-api/dma-api.rst
> +++ b/Documentation/core-api/dma-api.rst
> @@ -204,6 +204,15 @@ Returns the maximum size of a mapping for the device. The size parameter
>   of the mapping functions like dma_map_single(), dma_map_page() and
>   others should not be larger than the returned value.
>   
> +::
> +
> +	size_t
> +	dma_opt_mapping_size(struct device *dev);
> +
> +Returns the maximum optimal size of a mapping for the device. Mapping large
> +buffers may take longer so device drivers are advised to limit total DMA
> +streaming mappings length to the returned value.

Nit: I'm not sure "advised" is necessarily the right thing to say in 
general - that's only really true for a caller who cares about 
throughput of churning through short-lived mappings more than anything 
else, and doesn't take a significant hit overall from splitting up 
larger requests. I do think it's good to clarify the exact context of 
"optimal" here, but I'd prefer to be objectively clear that it's for 
workloads where the up-front mapping overhead dominates.

Thanks,
Robin.


* Re: [PATCH v4 1/5] dma-mapping: Add dma_opt_mapping_size()
From: John Garry @ 2022-06-28 11:27 UTC
  To: Robin Murphy, damien.lemoal, joro, will, jejb, martin.petersen,
	hch, m.szyprowski
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi, linuxarm

On 28/06/2022 12:23, Robin Murphy wrote:
>> +
>> +    size_t
>> +    dma_opt_mapping_size(struct device *dev);
>> +
>> +Returns the maximum optimal size of a mapping for the device. Mapping 
>> large
>> +buffers may take longer so device drivers are advised to limit total DMA
>> +streaming mappings length to the returned value.
> 
> Nit: I'm not sure "advised" is necessarily the right thing to say in 
> general - that's only really true for a caller who cares about 
> throughput of churning through short-lived mappings more than anything 
> else, and doesn't take a significant hit overall from splitting up 
> larger requests. I do think it's good to clarify the exact context of 
> "optimal" here, but I'd prefer to be objectively clear that it's for 
> workloads where the up-front mapping overhead dominates.

Ok, sure, I can make that clear.

Thanks,
John


* Re: [PATCH v4 5/5] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
From: John Garry @ 2022-06-28 11:33 UTC
  To: Damien Le Moal, hch
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, joro, will, jejb, martin.petersen, m.szyprowski,
	robin.murphy

On 28/06/2022 10:14, Damien Le Moal wrote:
>> BTW, this patch has no real dependency on the rest of the series, so
>> could be taken separately if you prefer.
> Sure, you can send it separately. Adding it through the scsi tree is fine too.
> 

Well Christoph originally offered to take this series via the 
dma-mapping tree.

@Christoph, is that still ok with you? If so, would you rather I send 
this libata patch separately?

Thanks,
john


* Re: [PATCH v4 5/5] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
From: Christoph Hellwig @ 2022-06-29  5:40 UTC
  To: John Garry
  Cc: Damien Le Moal, hch, linux-doc, linux-kernel, linux-ide, iommu,
	iommu, linux-scsi, linuxarm, joro, will, jejb, martin.petersen,
	m.szyprowski, robin.murphy

On Tue, Jun 28, 2022 at 12:33:58PM +0100, John Garry wrote:
> Well Christoph originally offered to take this series via the dma-mapping 
> tree.
>
> @Christoph, is that still ok with you? If so, would you rather I send this 
> libata patch separately?

The offer still stands, and I don't really care where the libata
patch is routed.  Just tell me what you prefer.


* Re: [PATCH v4 5/5] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
From: Damien Le Moal @ 2022-06-29  5:58 UTC
  To: Christoph Hellwig, John Garry
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, joro, will, jejb, martin.petersen, m.szyprowski,
	robin.murphy

On 6/29/22 14:40, Christoph Hellwig wrote:
> On Tue, Jun 28, 2022 at 12:33:58PM +0100, John Garry wrote:
>> Well Christoph originally offered to take this series via the dma-mapping 
>> tree.
>>
>> @Christoph, is that still ok with you? If so, would you rather I send this 
>> libata patch separately?
> 
> The offer still stands, and I don't really care where the libata
> patch is routed.  Just tell me what you prefer.

If it is 100% independent from the other patches, I can take it.
Otherwise, feel free to take it !

-- 
Damien Le Moal
Western Digital Research


* Re: [PATCH v4 5/5] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
From: John Garry @ 2022-06-29  7:43 UTC
  To: Damien Le Moal, Christoph Hellwig
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, joro, will, jejb, martin.petersen, m.szyprowski,
	robin.murphy

On 29/06/2022 06:58, Damien Le Moal wrote:
> On 6/29/22 14:40, Christoph Hellwig wrote:
>> On Tue, Jun 28, 2022 at 12:33:58PM +0100, John Garry wrote:
>>> Well Christoph originally offered to take this series via the dma-mapping
>>> tree.
>>>
>>> @Christoph, is that still ok with you? If so, would you rather I send this
>>> libata patch separately?
>>
>> The offer still stands, and I don't really care where the libata
>> patch is routed.  Just tell me what you prefer.

Cheers.

> 
> If it is 100% independent from the other patches, I can take it.
> Otherwise, feel free to take it !
> 

I'll just keep them all together - it's easier in case I need to change 
anything.

Thanks!


* Re: [PATCH v4 5/5] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
From: Damien Le Moal @ 2022-06-29  8:24 UTC
  To: John Garry, Christoph Hellwig
  Cc: linux-doc, linux-kernel, linux-ide, iommu, iommu, linux-scsi,
	linuxarm, joro, will, jejb, martin.petersen, m.szyprowski,
	robin.murphy

On 6/29/22 16:43, John Garry wrote:
> On 29/06/2022 06:58, Damien Le Moal wrote:
>> On 6/29/22 14:40, Christoph Hellwig wrote:
>>> On Tue, Jun 28, 2022 at 12:33:58PM +0100, John Garry wrote:
>>>> Well Christoph originally offered to take this series via the dma-mapping
>>>> tree.
>>>>
>>>> @Christoph, is that still ok with you? If so, would you rather I send this
>>>> libata patch separately?
>>>
>>> The offer still stands, and I don't really care where the libata
>>> patch is routed.  Just tell me what you prefer.
> 
> Cheers.
> 
>>
>> If it is 100% independent from the other patches, I can take it.
>> Otherwise, feel free to take it !
>>
> 
> I'll just keep them all together - it's easier in case I need to change 
> anything.

Works for me.

> 
> Thanks!


-- 
Damien Le Moal
Western Digital Research


* Re: [PATCH v4 1/5] dma-mapping: Add dma_opt_mapping_size()
From: John Garry @ 2022-06-29 11:57 UTC
  To: Robin Murphy
  Cc: linux-scsi, linux-doc, linuxarm, iommu, linux-kernel, linux-ide,
	iommu, damien.lemoal, jejb, martin.petersen, m.szyprowski, joro,
	hch, will

On 28/06/2022 12:27, John Garry via iommu wrote:
> On 28/06/2022 12:23, Robin Murphy wrote:
>>> +
>>> +    size_t
>>> +    dma_opt_mapping_size(struct device *dev);
>>> +
>>> +Returns the maximum optimal size of a mapping for the device. 
>>> Mapping large
>>> +buffers may take longer so device drivers are advised to limit total 
>>> DMA
>>> +streaming mappings length to the returned value.
>>
>> Nit: I'm not sure "advised" is necessarily the right thing to say in 
>> general - that's only really true for a caller who cares about 
>> throughput of churning through short-lived mappings more than anything 
>> else, and doesn't take a significant hit overall from splitting up 
>> larger requests. I do think it's good to clarify the exact context of 
>> "optimal" here, but I'd prefer to be objectively clear that it's for 
>> workloads where the up-front mapping overhead dominates.
> 
I'm going to go with something like this:

size_t
dma_opt_mapping_size(struct device *dev);

Returns the maximum optimal size of a mapping for the device.

Mapping larger buffers may take much longer in certain scenarios. In 
addition, for high-rate short-lived streaming mappings the upfront time 
spent on the mapping may account for an appreciable part of the total 
request lifetime. As such, if splitting larger requests incurs no 
significant performance penalty, then device drivers are advised to 
limit total DMA streaming mappings length to the returned value.

Let me know if you would like it further amended.

Cheers,
John

