* [PATCH 0/4] DMA mapping changes for SCSI core
@ 2022-05-20 8:23 John Garry
2022-05-20 8:23 ` [PATCH 1/4] dma-mapping: Add dma_opt_mapping_size() John Garry
` (4 more replies)
0 siblings, 5 replies; 20+ messages in thread
From: John Garry @ 2022-05-20 8:23 UTC (permalink / raw)
To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
m.szyprowski, robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen, John Garry
As reported in [0], DMA mappings whose size exceeds the IOMMU IOVA caching
limit may see a big performance hit.
This series introduces a new DMA mapping API, dma_opt_mapping_size(), so
that drivers may know this limit when performance is a factor in the
mapping.
Robin didn't like using dma_max_mapping_size() for this [1]. An
alternative to adding the new API would be to add a "hard_limit" arg
to dma_max_mapping_size(). This would mean fixing up all current users,
but it would be good to do that anyway as not all users require a hard
limit.
The SCSI core code is modified to use this limit.
I also added a patch for libata-scsi as it does not currently honour the
shost max_sectors limit.
[0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
[1] https://lore.kernel.org/linux-iommu/f5b78c9c-312e-70ab-ecbb-f14623a4b6e3@arm.com/
John Garry (4):
dma-mapping: Add dma_opt_mapping_size()
dma-iommu: Add iommu_dma_opt_mapping_size()
scsi: core: Cap shost max_sectors according to DMA optimum mapping
limits
libata-scsi: Cap ata_device->max_sectors according to
shost->max_sectors
Documentation/core-api/dma-api.rst | 9 +++++++++
drivers/ata/libata-scsi.c | 1 +
drivers/iommu/dma-iommu.c | 6 ++++++
drivers/iommu/iova.c | 5 +++++
drivers/scsi/hosts.c | 5 +++++
drivers/scsi/scsi_lib.c | 4 ----
include/linux/dma-map-ops.h | 1 +
include/linux/dma-mapping.h | 5 +++++
include/linux/iova.h | 2 ++
kernel/dma/mapping.c | 12 ++++++++++++
10 files changed, 46 insertions(+), 4 deletions(-)
--
2.26.2
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 1/4] dma-mapping: Add dma_opt_mapping_size()
2022-05-20 8:23 [PATCH 0/4] DMA mapping changes for SCSI core John Garry
@ 2022-05-20 8:23 ` John Garry
2022-05-20 23:32 ` Damien Le Moal
2022-05-20 8:23 ` [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size() John Garry
` (3 subsequent siblings)
4 siblings, 1 reply; 20+ messages in thread
From: John Garry @ 2022-05-20 8:23 UTC (permalink / raw)
To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
m.szyprowski, robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen, John Garry
Streaming DMA mappings involving an IOMMU may be much slower for larger
total mapping sizes. This is because every IOMMU DMA mapping requires an
IOVA to be allocated and freed. IOVA sizes above a certain limit are not
cached, which can have a big impact on DMA mapping performance.
Provide an API for device drivers to know this "optimal" limit, such that
they may try to produce mappings which don't exceed it.
Signed-off-by: John Garry <john.garry@huawei.com>
---
Documentation/core-api/dma-api.rst | 9 +++++++++
include/linux/dma-map-ops.h | 1 +
include/linux/dma-mapping.h | 5 +++++
kernel/dma/mapping.c | 12 ++++++++++++
4 files changed, 27 insertions(+)
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 6d6d0edd2d27..b3cd9763d28b 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -204,6 +204,15 @@ Returns the maximum size of a mapping for the device. The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.
+::
+
+ size_t
+ dma_opt_mapping_size(struct device *dev);
+
+Returns the maximum optimal size of a mapping for the device. Mapping large
+buffers may take longer, so device drivers are advised to limit the total DMA
+streaming mapping length to the returned value.
+
::
bool
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 0d5b06b3a4a6..98ceba6fa848 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -69,6 +69,7 @@ struct dma_map_ops {
int (*dma_supported)(struct device *dev, u64 mask);
u64 (*get_required_mask)(struct device *dev);
size_t (*max_mapping_size)(struct device *dev);
+ size_t (*opt_mapping_size)(void);
unsigned long (*get_merge_boundary)(struct device *dev);
};
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index dca2b1355bb1..fe3849434b2a 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -144,6 +144,7 @@ int dma_set_mask(struct device *dev, u64 mask);
int dma_set_coherent_mask(struct device *dev, u64 mask);
u64 dma_get_required_mask(struct device *dev);
size_t dma_max_mapping_size(struct device *dev);
+size_t dma_opt_mapping_size(struct device *dev);
bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
unsigned long dma_get_merge_boundary(struct device *dev);
struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
@@ -266,6 +267,10 @@ static inline size_t dma_max_mapping_size(struct device *dev)
{
return 0;
}
+static inline size_t dma_opt_mapping_size(struct device *dev)
+{
+ return 0;
+}
static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
{
return false;
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index db7244291b74..1bfe11b1edb6 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -773,6 +773,18 @@ size_t dma_max_mapping_size(struct device *dev)
}
EXPORT_SYMBOL_GPL(dma_max_mapping_size);
+size_t dma_opt_mapping_size(struct device *dev)
+{
+ const struct dma_map_ops *ops = get_dma_ops(dev);
+ size_t size = SIZE_MAX;
+
+ if (ops && ops->opt_mapping_size)
+ size = ops->opt_mapping_size();
+
+ return min(dma_max_mapping_size(dev), size);
+}
+EXPORT_SYMBOL_GPL(dma_opt_mapping_size);
+
bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
{
const struct dma_map_ops *ops = get_dma_ops(dev);
--
2.26.2
^ permalink raw reply related [flat|nested] 20+ messages in thread
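The fallback logic in dma_opt_mapping_size() — use the ops-provided optimum when present, otherwise fall back to the hard maximum — can be sketched in user space. The struct, function names, and the 128 KiB value below are illustrative stand-ins for the kernel types, not the real implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for struct dma_map_ops: only the hook we need. */
struct fake_dma_map_ops {
	size_t (*opt_mapping_size)(void);
};

static size_t min_size(size_t a, size_t b)
{
	return a < b ? a : b;
}

/*
 * Mirrors the shape of dma_opt_mapping_size(): prefer the backend's
 * optimum, but never report more than the hard mapping limit.
 */
static size_t opt_mapping_size(const struct fake_dma_map_ops *ops,
			       size_t hard_max)
{
	size_t size = (size_t)-1; /* SIZE_MAX, as in the kernel version */

	if (ops && ops->opt_mapping_size)
		size = ops->opt_mapping_size();

	return min_size(hard_max, size);
}

/* A hypothetical IOMMU-style backend reporting a 128 KiB optimum. */
static size_t fake_iommu_opt(void)
{
	return 128 * 1024;
}

static const struct fake_dma_map_ops iommu_ops = {
	.opt_mapping_size = fake_iommu_opt,
};
```

When no backend hook is registered the hard limit wins, so drivers on DMA backends without an optimum see no change in behaviour.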
* [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size()
2022-05-20 8:23 [PATCH 0/4] DMA mapping changes for SCSI core John Garry
2022-05-20 8:23 ` [PATCH 1/4] dma-mapping: Add dma_opt_mapping_size() John Garry
@ 2022-05-20 8:23 ` John Garry
2022-05-20 23:33 ` Damien Le Moal
2022-05-23 7:34 ` Damien Le Moal
2022-05-20 8:23 ` [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits John Garry
` (2 subsequent siblings)
4 siblings, 2 replies; 20+ messages in thread
From: John Garry @ 2022-05-20 8:23 UTC (permalink / raw)
To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
m.szyprowski, robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen, John Garry
Add the IOMMU callback for DMA mapping API dma_opt_mapping_size(), which
allows the drivers to know the optimal mapping limit and thus limit the
requested IOVA lengths.
This value is based on the IOVA rcache range limit, as IOVAs allocated
above this limit must always be newly allocated, which may be quite slow.
Signed-off-by: John Garry <john.garry@huawei.com>
---
drivers/iommu/dma-iommu.c | 6 ++++++
drivers/iommu/iova.c | 5 +++++
include/linux/iova.h | 2 ++
3 files changed, 13 insertions(+)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 09f6e1c0f9c0..f619e41b9172 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1442,6 +1442,11 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
}
+static size_t iommu_dma_opt_mapping_size(void)
+{
+ return iova_rcache_range();
+}
+
static const struct dma_map_ops iommu_dma_ops = {
.alloc = iommu_dma_alloc,
.free = iommu_dma_free,
@@ -1462,6 +1467,7 @@ static const struct dma_map_ops iommu_dma_ops = {
.map_resource = iommu_dma_map_resource,
.unmap_resource = iommu_dma_unmap_resource,
.get_merge_boundary = iommu_dma_get_merge_boundary,
+ .opt_mapping_size = iommu_dma_opt_mapping_size,
};
/*
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index db77aa675145..9f00b58d546e 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -26,6 +26,11 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
static void free_iova_rcaches(struct iova_domain *iovad);
+unsigned long iova_rcache_range(void)
+{
+ return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
+}
+
static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
{
struct iova_domain *iovad;
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 320a70e40233..c6ba6d95d79c 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -79,6 +79,8 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
int iova_cache_get(void);
void iova_cache_put(void);
+unsigned long iova_rcache_range(void);
+
void free_iova(struct iova_domain *iovad, unsigned long pfn);
void __free_iova(struct iova_domain *iovad, struct iova *iova);
struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
--
2.26.2
^ permalink raw reply related [flat|nested] 20+ messages in thread
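The limit reported by iommu_dma_opt_mapping_size() is PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1). Assuming 4 KiB pages and the rcache depth of 6 used in drivers/iommu/iova.c around this time, that works out to 128 KiB. A quick user-space sketch of the arithmetic (the constants are assumed here, not taken from a kernel build):

```c
#include <assert.h>

/* Assumed values: 4 KiB pages and the in-tree rcache depth. */
#define FAKE_PAGE_SIZE 4096UL
#define IOVA_RANGE_CACHE_MAX_SIZE 6 /* log of max cached IOVA range (pages) */

/* Same computation as the patch's iova_rcache_range(). */
static unsigned long iova_rcache_range(void)
{
	return FAKE_PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
}
```

Any single mapping larger than this cannot be satisfied from the per-CPU rcaches and always forces a fresh IOVA allocation, which is the slow path the series is avoiding.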
* [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits
2022-05-20 8:23 [PATCH 0/4] DMA mapping changes for SCSI core John Garry
2022-05-20 8:23 ` [PATCH 1/4] dma-mapping: Add dma_opt_mapping_size() John Garry
2022-05-20 8:23 ` [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size() John Garry
@ 2022-05-20 8:23 ` John Garry
2022-05-20 23:30 ` Damien Le Moal
2022-05-23 11:08 ` Dan Carpenter
2022-05-20 8:23 ` [PATCH 4/4] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors John Garry
2022-05-22 13:13 ` [PATCH 0/4] DMA mapping changes for SCSI core Christoph Hellwig
4 siblings, 2 replies; 20+ messages in thread
From: John Garry @ 2022-05-20 8:23 UTC (permalink / raw)
To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
m.szyprowski, robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen, John Garry
Streaming DMA mappings may be considerably slower when mappings go through
an IOMMU and the total mapping length is somewhat long. This is because the
IOMMU IOVA code allocates and frees an IOVA for each mapping, which may
affect performance.
For performance reasons set the request_queue max_sectors from
dma_opt_mapping_size(), which knows this mapping limit.
In addition, the shost->max_sectors is repeatedly set for each sdev in
__scsi_init_queue(). This is unnecessary, so set once when adding the
host.
Signed-off-by: John Garry <john.garry@huawei.com>
---
drivers/scsi/hosts.c | 5 +++++
drivers/scsi/scsi_lib.c | 4 ----
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
index f69b77cbf538..a3ae6345473b 100644
--- a/drivers/scsi/hosts.c
+++ b/drivers/scsi/hosts.c
@@ -225,6 +225,11 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
shost->can_queue);
+ if (dma_dev->dma_mask) {
+ shost->max_sectors = min_t(unsigned int, shost->max_sectors,
+ dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
+ }
+
error = scsi_init_sense_cache(shost);
if (error)
goto fail;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 8d18cc7e510e..2d43bb8799bd 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1884,10 +1884,6 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
blk_queue_max_integrity_segments(q, shost->sg_prot_tablesize);
}
- if (dev->dma_mask) {
- shost->max_sectors = min_t(unsigned int, shost->max_sectors,
- dma_max_mapping_size(dev) >> SECTOR_SHIFT);
- }
blk_queue_max_hw_sectors(q, shost->max_sectors);
blk_queue_segment_boundary(q, shost->dma_boundary);
dma_set_seg_boundary(dev, shost->dma_boundary);
--
2.26.2
^ permalink raw reply related [flat|nested] 20+ messages in thread
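Patch 3's clamp converts the byte limit to 512-byte sectors with >> SECTOR_SHIFT before taking the minimum against the driver-supplied value. Assuming the 128 KiB optimum from the IOMMU backend, shost->max_sectors ends up capped at 256. A hypothetical stand-alone version of the clamp:

```c
#include <assert.h>

#define SECTOR_SHIFT 9 /* 512-byte sectors */

/*
 * Mirrors the min_t() clamp added to scsi_add_host_with_dma():
 * cap the host's max_sectors by the optimal DMA mapping size.
 * The values used in the test below are illustrative.
 */
static unsigned int cap_max_sectors(unsigned int max_sectors,
				    unsigned long opt_map_size)
{
	unsigned int limit = (unsigned int)(opt_map_size >> SECTOR_SHIFT);

	return max_sectors < limit ? max_sectors : limit;
}
```

A host that already advertises fewer sectors than the DMA optimum is left alone; only hosts whose limit exceeds the optimum are reduced.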
* [PATCH 4/4] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
2022-05-20 8:23 [PATCH 0/4] DMA mapping changes for SCSI core John Garry
` (2 preceding siblings ...)
2022-05-20 8:23 ` [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits John Garry
@ 2022-05-20 8:23 ` John Garry
2022-05-20 23:34 ` Damien Le Moal
2022-05-22 13:13 ` [PATCH 0/4] DMA mapping changes for SCSI core Christoph Hellwig
4 siblings, 1 reply; 20+ messages in thread
From: John Garry @ 2022-05-20 8:23 UTC (permalink / raw)
To: damien.lemoal, joro, will, jejb, martin.petersen, hch,
m.szyprowski, robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen, John Garry
ATA devices (struct ata_device) have a max_sectors field which is
configured internally in libata. This is then used to (re)configure the
associated sdev request queue max_sectors value, overriding what was set
earlier in __scsi_init_queue(). In __scsi_init_queue() the max_sectors
value is set according to shost limits, which include host DMA mapping
limits.
Cap the ata_device max_sectors according to shost->max_sectors to respect
this shost limit.
Signed-off-by: John Garry <john.garry@huawei.com>
---
drivers/ata/libata-scsi.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index 06c9d90238d9..25fe89791641 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -1036,6 +1036,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
dev->flags |= ATA_DFLAG_NO_UNLOAD;
/* configure max sectors */
+ dev->max_sectors = min(dev->max_sectors, sdev->host->max_sectors);
blk_queue_max_hw_sectors(q, dev->max_sectors);
if (dev->class == ATA_DEV_ATAPI) {
--
2.26.2
* Re: [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits
2022-05-20 8:23 ` [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits John Garry
@ 2022-05-20 23:30 ` Damien Le Moal
2022-05-23 6:53 ` John Garry
2022-05-23 11:08 ` Dan Carpenter
1 sibling, 1 reply; 20+ messages in thread
From: Damien Le Moal @ 2022-05-20 23:30 UTC (permalink / raw)
To: John Garry, joro, will, jejb, martin.petersen, hch, m.szyprowski,
robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 5/20/22 17:23, John Garry wrote:
> Streaming DMA mappings may be considerably slower when mappings go through
> an IOMMU and the total mapping length is somewhat long. This is because the
> IOMMU IOVA code allocates and frees an IOVA for each mapping, which may
> affect performance.
>
> For performance reasons set the request_queue max_sectors from
> dma_opt_mapping_size(), which knows this mapping limit.
>
> In addition, the shost->max_sectors is repeatedly set for each sdev in
> __scsi_init_queue(). This is unnecessary, so set once when adding the
> host.
>
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
> drivers/scsi/hosts.c | 5 +++++
> drivers/scsi/scsi_lib.c | 4 ----
> 2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
> index f69b77cbf538..a3ae6345473b 100644
> --- a/drivers/scsi/hosts.c
> +++ b/drivers/scsi/hosts.c
> @@ -225,6 +225,11 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
> shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
> shost->can_queue);
>
> + if (dma_dev->dma_mask) {
> + shost->max_sectors = min_t(unsigned int, shost->max_sectors,
> + dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
> + }
Nit: you could drop the curly brackets here.
> +
> error = scsi_init_sense_cache(shost);
> if (error)
> goto fail;
> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index 8d18cc7e510e..2d43bb8799bd 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -1884,10 +1884,6 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
> blk_queue_max_integrity_segments(q, shost->sg_prot_tablesize);
> }
>
> - if (dev->dma_mask) {
> - shost->max_sectors = min_t(unsigned int, shost->max_sectors,
> - dma_max_mapping_size(dev) >> SECTOR_SHIFT);
> - }
> blk_queue_max_hw_sectors(q, shost->max_sectors);
> blk_queue_segment_boundary(q, shost->dma_boundary);
> dma_set_seg_boundary(dev, shost->dma_boundary);
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 1/4] dma-mapping: Add dma_opt_mapping_size()
2022-05-20 8:23 ` [PATCH 1/4] dma-mapping: Add dma_opt_mapping_size() John Garry
@ 2022-05-20 23:32 ` Damien Le Moal
0 siblings, 0 replies; 20+ messages in thread
From: Damien Le Moal @ 2022-05-20 23:32 UTC (permalink / raw)
To: John Garry, joro, will, jejb, martin.petersen, hch, m.szyprowski,
robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 5/20/22 17:23, John Garry wrote:
> Streaming DMA mappings involving an IOMMU may be much slower for larger
> total mapping sizes. This is because every IOMMU DMA mapping requires an
> IOVA to be allocated and freed. IOVA sizes above a certain limit are not
> cached, which can have a big impact on DMA mapping performance.
>
> Provide an API for device drivers to know this "optimal" limit, such that
> they may try to produce mappings which don't exceed it.
>
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
> Documentation/core-api/dma-api.rst | 9 +++++++++
> include/linux/dma-map-ops.h | 1 +
> include/linux/dma-mapping.h | 5 +++++
> kernel/dma/mapping.c | 12 ++++++++++++
> 4 files changed, 27 insertions(+)
>
> diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
> index 6d6d0edd2d27..b3cd9763d28b 100644
> --- a/Documentation/core-api/dma-api.rst
> +++ b/Documentation/core-api/dma-api.rst
> @@ -204,6 +204,15 @@ Returns the maximum size of a mapping for the device. The size parameter
> of the mapping functions like dma_map_single(), dma_map_page() and
> others should not be larger than the returned value.
>
> +::
> +
> + size_t
> + dma_opt_mapping_size(struct device *dev);
> +
> +Returns the maximum optimal size of a mapping for the device. Mapping large
> +buffers may take longer, so device drivers are advised to limit the total DMA
> +streaming mapping length to the returned value.
> +
> ::
>
> bool
> diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
> index 0d5b06b3a4a6..98ceba6fa848 100644
> --- a/include/linux/dma-map-ops.h
> +++ b/include/linux/dma-map-ops.h
> @@ -69,6 +69,7 @@ struct dma_map_ops {
> int (*dma_supported)(struct device *dev, u64 mask);
> u64 (*get_required_mask)(struct device *dev);
> size_t (*max_mapping_size)(struct device *dev);
> + size_t (*opt_mapping_size)(void);
> unsigned long (*get_merge_boundary)(struct device *dev);
> };
>
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index dca2b1355bb1..fe3849434b2a 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -144,6 +144,7 @@ int dma_set_mask(struct device *dev, u64 mask);
> int dma_set_coherent_mask(struct device *dev, u64 mask);
> u64 dma_get_required_mask(struct device *dev);
> size_t dma_max_mapping_size(struct device *dev);
> +size_t dma_opt_mapping_size(struct device *dev);
> bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
> unsigned long dma_get_merge_boundary(struct device *dev);
> struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
> @@ -266,6 +267,10 @@ static inline size_t dma_max_mapping_size(struct device *dev)
> {
> return 0;
> }
> +static inline size_t dma_opt_mapping_size(struct device *dev)
> +{
> + return 0;
> +}
> static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
> {
> return false;
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index db7244291b74..1bfe11b1edb6 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -773,6 +773,18 @@ size_t dma_max_mapping_size(struct device *dev)
> }
> EXPORT_SYMBOL_GPL(dma_max_mapping_size);
>
> +size_t dma_opt_mapping_size(struct device *dev)
> +{
> + const struct dma_map_ops *ops = get_dma_ops(dev);
> + size_t size = SIZE_MAX;
> +
> + if (ops && ops->opt_mapping_size)
> + size = ops->opt_mapping_size();
> +
> + return min(dma_max_mapping_size(dev), size);
> +}
> +EXPORT_SYMBOL_GPL(dma_opt_mapping_size);
> +
> bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
> {
> const struct dma_map_ops *ops = get_dma_ops(dev);
Looks OK to me.
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size()
2022-05-20 8:23 ` [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size() John Garry
@ 2022-05-20 23:33 ` Damien Le Moal
2022-05-23 7:01 ` John Garry
2022-05-23 7:34 ` Damien Le Moal
1 sibling, 1 reply; 20+ messages in thread
From: Damien Le Moal @ 2022-05-20 23:33 UTC (permalink / raw)
To: John Garry, joro, will, jejb, martin.petersen, hch, m.szyprowski,
robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 5/20/22 17:23, John Garry wrote:
> Add the IOMMU callback for DMA mapping API dma_opt_mapping_size(), which
> allows the drivers to know the optimal mapping limit and thus limit the
> requested IOVA lengths.
>
> This value is based on the IOVA rcache range limit, as IOVAs allocated
> above this limit must always be newly allocated, which may be quite slow.
>
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
> drivers/iommu/dma-iommu.c | 6 ++++++
> drivers/iommu/iova.c | 5 +++++
> include/linux/iova.h | 2 ++
> 3 files changed, 13 insertions(+)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 09f6e1c0f9c0..f619e41b9172 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1442,6 +1442,11 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
> return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
> }
>
> +static size_t iommu_dma_opt_mapping_size(void)
> +{
> + return iova_rcache_range();
> +}
> +
> static const struct dma_map_ops iommu_dma_ops = {
> .alloc = iommu_dma_alloc,
> .free = iommu_dma_free,
> @@ -1462,6 +1467,7 @@ static const struct dma_map_ops iommu_dma_ops = {
> .map_resource = iommu_dma_map_resource,
> .unmap_resource = iommu_dma_unmap_resource,
> .get_merge_boundary = iommu_dma_get_merge_boundary,
> + .opt_mapping_size = iommu_dma_opt_mapping_size,
> };
>
> /*
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index db77aa675145..9f00b58d546e 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -26,6 +26,11 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
> static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
> static void free_iova_rcaches(struct iova_domain *iovad);
>
> +unsigned long iova_rcache_range(void)
Why not a size_t return type ?
> +{
> + return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
> +}
> +
> static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
> {
> struct iova_domain *iovad;
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index 320a70e40233..c6ba6d95d79c 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -79,6 +79,8 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
> int iova_cache_get(void);
> void iova_cache_put(void);
>
> +unsigned long iova_rcache_range(void);
> +
> void free_iova(struct iova_domain *iovad, unsigned long pfn);
> void __free_iova(struct iova_domain *iovad, struct iova *iova);
> struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 4/4] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors
2022-05-20 8:23 ` [PATCH 4/4] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors John Garry
@ 2022-05-20 23:34 ` Damien Le Moal
0 siblings, 0 replies; 20+ messages in thread
From: Damien Le Moal @ 2022-05-20 23:34 UTC (permalink / raw)
To: John Garry, joro, will, jejb, martin.petersen, hch, m.szyprowski,
robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 5/20/22 17:23, John Garry wrote:
> ATA devices (struct ata_device) have a max_sectors field which is
> configured internally in libata. This is then used to (re)configure the
> associated sdev request queue max_sectors value, overriding what was set
> earlier in __scsi_init_queue(). In __scsi_init_queue() the max_sectors
> value is set according to shost limits, which include host DMA mapping limits.
>
> Cap the ata_device max_sectors according to shost->max_sectors to respect
> this shost limit.
>
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
> drivers/ata/libata-scsi.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
> index 06c9d90238d9..25fe89791641 100644
> --- a/drivers/ata/libata-scsi.c
> +++ b/drivers/ata/libata-scsi.c
> @@ -1036,6 +1036,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
> dev->flags |= ATA_DFLAG_NO_UNLOAD;
>
> /* configure max sectors */
> + dev->max_sectors = min(dev->max_sectors, sdev->host->max_sectors);
> blk_queue_max_hw_sectors(q, dev->max_sectors);
>
> if (dev->class == ATA_DEV_ATAPI) {
Acked-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 0/4] DMA mapping changes for SCSI core
2022-05-20 8:23 [PATCH 0/4] DMA mapping changes for SCSI core John Garry
` (3 preceding siblings ...)
2022-05-20 8:23 ` [PATCH 4/4] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors John Garry
@ 2022-05-22 13:13 ` Christoph Hellwig
2022-05-22 22:22 ` Damien Le Moal
2022-05-24 2:41 ` Martin K. Petersen
4 siblings, 2 replies; 20+ messages in thread
From: Christoph Hellwig @ 2022-05-22 13:13 UTC (permalink / raw)
To: John Garry
Cc: damien.lemoal, joro, will, jejb, martin.petersen, hch,
m.szyprowski, robin.murphy, linux-doc, linux-kernel, linux-ide,
iommu, linux-scsi, liyihang6, chenxiang66, thunder.leizhen
The whole series looks fine to me. I'll happily queue it up in the
dma-mapping tree if the SCSI and ATA maintainers are ok with that.
* Re: [PATCH 0/4] DMA mapping changes for SCSI core
2022-05-22 13:13 ` [PATCH 0/4] DMA mapping changes for SCSI core Christoph Hellwig
@ 2022-05-22 22:22 ` Damien Le Moal
2022-05-23 12:00 ` John Garry
2022-05-24 2:41 ` Martin K. Petersen
1 sibling, 1 reply; 20+ messages in thread
From: Damien Le Moal @ 2022-05-22 22:22 UTC (permalink / raw)
To: Christoph Hellwig, John Garry
Cc: joro, will, jejb, martin.petersen, m.szyprowski, robin.murphy,
linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 2022/05/22 22:13, Christoph Hellwig wrote:
> The whole series looks fine to me. I'll happily queue it up in the
> dma-mapping tree if the SCSI and ATA maintainers are ok with that.
>
Fine with me. I sent an acked-by for the libata bit.
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits
2022-05-20 23:30 ` Damien Le Moal
@ 2022-05-23 6:53 ` John Garry
2022-05-23 7:33 ` Damien Le Moal
0 siblings, 1 reply; 20+ messages in thread
From: John Garry @ 2022-05-23 6:53 UTC (permalink / raw)
To: Damien Le Moal, joro, will, jejb, martin.petersen, hch,
m.szyprowski, robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 21/05/2022 00:30, Damien Le Moal wrote:
>> diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
>> index f69b77cbf538..a3ae6345473b 100644
>> --- a/drivers/scsi/hosts.c
>> +++ b/drivers/scsi/hosts.c
>> @@ -225,6 +225,11 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
>> shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
>> shost->can_queue);
>>
Hi Damien,
>> + if (dma_dev->dma_mask) {
>> + shost->max_sectors = min_t(unsigned int, shost->max_sectors,
>> + dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
>> + }
> Nit: you could drop the curly brackets here.
Some people prefer this style - multi-line statements have curly
brackets, while single-line statements conform to the official coding
style (and don't use brackets).
I'll just stick with what we have unless there is a consensus to change.
Thanks,
John
>
>> +
>> error = scsi_init_sense_cache(shost);
>> if (error)
>> goto fail;
* Re: [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size()
2022-05-20 23:33 ` Damien Le Moal
@ 2022-05-23 7:01 ` John Garry
2022-05-23 7:32 ` Damien Le Moal
0 siblings, 1 reply; 20+ messages in thread
From: John Garry @ 2022-05-23 7:01 UTC (permalink / raw)
To: Damien Le Moal, joro, will, jejb, martin.petersen, hch,
m.szyprowski, robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 21/05/2022 00:33, Damien Le Moal wrote:
Hi Damien,
>> +unsigned long iova_rcache_range(void)
> Why not a size_t return type ?
The IOVA code generally uses unsigned long for size/range while
dma-iommu uses size_t as appropriate, so I'm just sticking to that.
>
>> +{
>> + return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
>> +}
>> +
Thanks,
John
* Re: [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size()
2022-05-23 7:01 ` John Garry
@ 2022-05-23 7:32 ` Damien Le Moal
0 siblings, 0 replies; 20+ messages in thread
From: Damien Le Moal @ 2022-05-23 7:32 UTC (permalink / raw)
To: John Garry, joro, will, jejb, martin.petersen, hch, m.szyprowski,
robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 2022/05/23 16:01, John Garry wrote:
> On 21/05/2022 00:33, Damien Le Moal wrote:
>
> Hi Damien,
>
>>> +unsigned long iova_rcache_range(void)
>> Why not a size_t return type ?
>
> The IOVA code generally uses unsigned long for size/range while
> dma-iommu uses size_t as appropriate, so I'm just sticking to that.
OK.
>
>>
>>> +{
>>> + return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
>>> +}
>>> +
>
> Thanks,
> John
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits
2022-05-23 6:53 ` John Garry
@ 2022-05-23 7:33 ` Damien Le Moal
0 siblings, 0 replies; 20+ messages in thread
From: Damien Le Moal @ 2022-05-23 7:33 UTC (permalink / raw)
To: John Garry, joro, will, jejb, martin.petersen, hch, m.szyprowski,
robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 2022/05/23 15:53, John Garry wrote:
> On 21/05/2022 00:30, Damien Le Moal wrote:
>>> diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
>>> index f69b77cbf538..a3ae6345473b 100644
>>> --- a/drivers/scsi/hosts.c
>>> +++ b/drivers/scsi/hosts.c
>>> @@ -225,6 +225,11 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
>>> shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
>>> shost->can_queue);
>>>
>
> Hi Damien,
>
>>> + if (dma_dev->dma_mask) {
>>> + shost->max_sectors = min_t(unsigned int, shost->max_sectors,
>>> + dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
>>> + }
>> Nit: you could drop the curly brackets here.
>
> Some people prefer this style - multi-line statements have curly
> brackets, while single-line statements conform to the official coding
> style (and don't use brackets).
OK.
>
> I'll just stick with what we have unless there is a consensus to change.
>
> Thanks,
> John
>
>>
>>> +
>>> error = scsi_init_sense_cache(shost);
>>> if (error)
>>> goto fail;
>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size()
2022-05-20 8:23 ` [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size() John Garry
2022-05-20 23:33 ` Damien Le Moal
@ 2022-05-23 7:34 ` Damien Le Moal
1 sibling, 0 replies; 20+ messages in thread
From: Damien Le Moal @ 2022-05-23 7:34 UTC (permalink / raw)
To: John Garry, joro, will, jejb, martin.petersen, hch, m.szyprowski,
robin.murphy
Cc: linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 2022/05/20 17:23, John Garry wrote:
> Add the IOMMU callback for DMA mapping API dma_opt_mapping_size(), which
> allows the drivers to know the optimal mapping limit and thus limit the
> requested IOVA lengths.
>
> This value is based on the IOVA rcache range limit, as IOVAs allocated
> above this limit must always be newly allocated, which may be quite slow.
>
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
> drivers/iommu/dma-iommu.c | 6 ++++++
> drivers/iommu/iova.c | 5 +++++
> include/linux/iova.h | 2 ++
> 3 files changed, 13 insertions(+)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 09f6e1c0f9c0..f619e41b9172 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1442,6 +1442,11 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
> return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
> }
>
> +static size_t iommu_dma_opt_mapping_size(void)
> +{
> + return iova_rcache_range();
> +}
> +
> static const struct dma_map_ops iommu_dma_ops = {
> .alloc = iommu_dma_alloc,
> .free = iommu_dma_free,
> @@ -1462,6 +1467,7 @@ static const struct dma_map_ops iommu_dma_ops = {
> .map_resource = iommu_dma_map_resource,
> .unmap_resource = iommu_dma_unmap_resource,
> .get_merge_boundary = iommu_dma_get_merge_boundary,
> + .opt_mapping_size = iommu_dma_opt_mapping_size,
> };
>
> /*
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index db77aa675145..9f00b58d546e 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -26,6 +26,11 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
> static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
> static void free_iova_rcaches(struct iova_domain *iovad);
>
> +unsigned long iova_rcache_range(void)
> +{
> + return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
> +}
> +
> static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
> {
> struct iova_domain *iovad;
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index 320a70e40233..c6ba6d95d79c 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -79,6 +79,8 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
> int iova_cache_get(void);
> void iova_cache_put(void);
>
> +unsigned long iova_rcache_range(void);
> +
> void free_iova(struct iova_domain *iovad, unsigned long pfn);
> void __free_iova(struct iova_domain *iovad, struct iova *iova);
> struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits
2022-05-20 8:23 ` [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits John Garry
2022-05-20 23:30 ` Damien Le Moal
@ 2022-05-23 11:08 ` Dan Carpenter
2022-05-23 11:56 ` John Garry
1 sibling, 1 reply; 20+ messages in thread
From: Dan Carpenter @ 2022-05-23 11:08 UTC (permalink / raw)
To: kbuild, John Garry, damien.lemoal, joro, will, jejb,
martin.petersen, hch, m.szyprowski, robin.murphy
Cc: lkp, kbuild-all, linux-doc, linux-kernel, linux-ide, iommu,
linux-scsi, liyihang6, chenxiang66, thunder.leizhen, John Garry
Hi John,
url: https://github.com/intel-lab-lkp/linux/commits/John-Garry/DMA-mapping-changes-for-SCSI-core/20220520-163049
base: https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: s390-randconfig-m031-20220519 (https://download.01.org/0day-ci/archive/20220521/202205210545.gkS834ds-lkp@intel.com/config)
compiler: s390-linux-gcc (GCC) 11.3.0
If you fix the issue, kindly add the following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
smatch warnings:
drivers/scsi/hosts.c:243 scsi_add_host_with_dma() warn: variable dereferenced before check 'dma_dev' (see line 228)
vim +/dma_dev +243 drivers/scsi/hosts.c
d139b9bd0e52dd James Bottomley 2009-11-05 209 int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
d139b9bd0e52dd James Bottomley 2009-11-05 210 struct device *dma_dev)
^1da177e4c3f41 Linus Torvalds 2005-04-16 211 {
^1da177e4c3f41 Linus Torvalds 2005-04-16 212 struct scsi_host_template *sht = shost->hostt;
^1da177e4c3f41 Linus Torvalds 2005-04-16 213 int error = -EINVAL;
^1da177e4c3f41 Linus Torvalds 2005-04-16 214
91921e016a2199 Hannes Reinecke 2014-06-25 215 shost_printk(KERN_INFO, shost, "%s\n",
^1da177e4c3f41 Linus Torvalds 2005-04-16 216 sht->info ? sht->info(shost) : sht->name);
^1da177e4c3f41 Linus Torvalds 2005-04-16 217
^1da177e4c3f41 Linus Torvalds 2005-04-16 218 if (!shost->can_queue) {
91921e016a2199 Hannes Reinecke 2014-06-25 219 shost_printk(KERN_ERR, shost,
91921e016a2199 Hannes Reinecke 2014-06-25 220 "can_queue = 0 no longer supported\n");
542bd1377a9630 James Bottomley 2008-04-21 221 goto fail;
^1da177e4c3f41 Linus Torvalds 2005-04-16 222 }
^1da177e4c3f41 Linus Torvalds 2005-04-16 223
50b6cb3516365c Dexuan Cui 2021-10-07 224 /* Use min_t(int, ...) in case shost->can_queue exceeds SHRT_MAX */
50b6cb3516365c Dexuan Cui 2021-10-07 225 shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
ea2f0f77538c50 John Garry 2021-05-19 226 shost->can_queue);
ea2f0f77538c50 John Garry 2021-05-19 227
2ad7ba6ca08593 John Garry 2022-05-20 @228 if (dma_dev->dma_mask) {
^^^^^^^^^^^^^^^^^
The patch adds a new unchecked dereference
2ad7ba6ca08593 John Garry 2022-05-20 229 shost->max_sectors = min_t(unsigned int, shost->max_sectors,
2ad7ba6ca08593 John Garry 2022-05-20 230 dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
2ad7ba6ca08593 John Garry 2022-05-20 231 }
2ad7ba6ca08593 John Garry 2022-05-20 232
0a6ac4ee7c2109 Christoph Hellwig 2017-01-03 233 error = scsi_init_sense_cache(shost);
0a6ac4ee7c2109 Christoph Hellwig 2017-01-03 234 if (error)
0a6ac4ee7c2109 Christoph Hellwig 2017-01-03 235 goto fail;
0a6ac4ee7c2109 Christoph Hellwig 2017-01-03 236
d285203cf647d7 Christoph Hellwig 2014-01-17 237 error = scsi_mq_setup_tags(shost);
542bd1377a9630 James Bottomley 2008-04-21 238 if (error)
542bd1377a9630 James Bottomley 2008-04-21 239 goto fail;
d285203cf647d7 Christoph Hellwig 2014-01-17 240
^1da177e4c3f41 Linus Torvalds 2005-04-16 241 if (!shost->shost_gendev.parent)
^1da177e4c3f41 Linus Torvalds 2005-04-16 242 shost->shost_gendev.parent = dev ? dev : &platform_bus;
3c8d9a957d0ae6 James Bottomley 2012-05-04 @243 if (!dma_dev)
^^^^^^^^
The old code checked for NULL
3c8d9a957d0ae6 James Bottomley 2012-05-04 244 dma_dev = shost->shost_gendev.parent;
3c8d9a957d0ae6 James Bottomley 2012-05-04 245
d139b9bd0e52dd James Bottomley 2009-11-05 246 shost->dma_dev = dma_dev;
^1da177e4c3f41 Linus Torvalds 2005-04-16 247
5c6fab9d558470 Mika Westerberg 2016-02-18 248 /*
5c6fab9d558470 Mika Westerberg 2016-02-18 249 * Increase usage count temporarily here so that calling
5c6fab9d558470 Mika Westerberg 2016-02-18 250 * scsi_autopm_put_host() will trigger runtime idle if there is
5c6fab9d558470 Mika Westerberg 2016-02-18 251 * nothing else preventing suspending the device.
5c6fab9d558470 Mika Westerberg 2016-02-18 252 */
5c6fab9d558470 Mika Westerberg 2016-02-18 253 pm_runtime_get_noresume(&shost->shost_gendev);
bc4f24014de58f Alan Stern 2010-06-17 254 pm_runtime_set_active(&shost->shost_gendev);
bc4f24014de58f Alan Stern 2010-06-17 255 pm_runtime_enable(&shost->shost_gendev);
bc4f24014de58f Alan Stern 2010-06-17 256 device_enable_async_suspend(&shost->shost_gendev);
bc4f24014de58f Alan Stern 2010-06-17 257
--
0-DAY CI Kernel Test Service
https://01.org/lkp
* Re: [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits
2022-05-23 11:08 ` Dan Carpenter
@ 2022-05-23 11:56 ` John Garry
0 siblings, 0 replies; 20+ messages in thread
From: John Garry @ 2022-05-23 11:56 UTC (permalink / raw)
To: Dan Carpenter, kbuild, damien.lemoal, joro, will, jejb,
martin.petersen, hch, m.szyprowski, robin.murphy
Cc: lkp, kbuild-all, linux-doc, linux-kernel, linux-ide, iommu,
linux-scsi, liyihang6, chenxiang66, thunder.leizhen
On 23/05/2022 12:08, Dan Carpenter wrote:
Thanks for the report
> 50b6cb3516365c Dexuan Cui 2021-10-07 224 /* Use min_t(int, ...) in case shost->can_queue exceeds SHRT_MAX */
> 50b6cb3516365c Dexuan Cui 2021-10-07 225 shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
> ea2f0f77538c50 John Garry 2021-05-19 226 shost->can_queue);
> ea2f0f77538c50 John Garry 2021-05-19 227
> 2ad7ba6ca08593 John Garry 2022-05-20 @228 if (dma_dev->dma_mask) {
> ^^^^^^^^^^^^^^^^^
I knew that we fixed up dma_dev to be non-NULL, but I thought it was
earlier in this function...
> The patch adds a new unchecked dereference
>
> 2ad7ba6ca08593 John Garry 2022-05-20 229 shost->max_sectors = min_t(unsigned int, shost->max_sectors,
> 2ad7ba6ca08593 John Garry 2022-05-20 230 dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
> 2ad7ba6ca08593 John Garry 2022-05-20 231 }
> 2ad7ba6ca08593 John Garry 2022-05-20 232
> 0a6ac4ee7c2109 Christoph Hellwig 2017-01-03 233 error = scsi_init_sense_cache(shost);
> 0a6ac4ee7c2109 Christoph Hellwig 2017-01-03 234 if (error)
> 0a6ac4ee7c2109 Christoph Hellwig 2017-01-03 235 goto fail;
> 0a6ac4ee7c2109 Christoph Hellwig 2017-01-03 236
> d285203cf647d7 Christoph Hellwig 2014-01-17 237 error = scsi_mq_setup_tags(shost);
> 542bd1377a9630 James Bottomley 2008-04-21 238 if (error)
> 542bd1377a9630 James Bottomley 2008-04-21 239 goto fail;
> d285203cf647d7 Christoph Hellwig 2014-01-17 240
> ^1da177e4c3f41 Linus Torvalds 2005-04-16 241 if (!shost->shost_gendev.parent)
> ^1da177e4c3f41 Linus Torvalds 2005-04-16 242 shost->shost_gendev.parent = dev ? dev : &platform_bus;
> 3c8d9a957d0ae6 James Bottomley 2012-05-04 @243 if (!dma_dev)
> ^^^^^^^^
Cheers,
John
* Re: [PATCH 0/4] DMA mapping changes for SCSI core
2022-05-22 22:22 ` Damien Le Moal
@ 2022-05-23 12:00 ` John Garry
0 siblings, 0 replies; 20+ messages in thread
From: John Garry @ 2022-05-23 12:00 UTC (permalink / raw)
To: Damien Le Moal, Christoph Hellwig
Cc: joro, will, jejb, martin.petersen, m.szyprowski, robin.murphy,
linux-doc, linux-kernel, linux-ide, iommu, linux-scsi, liyihang6,
chenxiang66, thunder.leizhen
On 22/05/2022 23:22, Damien Le Moal wrote:
> On 2022/05/22 22:13, Christoph Hellwig wrote:
>> The whole series looks fine to me. I'll happily queue it up in the
>> dma-mapping tree if the SCSI and ATA maintainers are ok with that.
>>
>
> Fine with me. I sent an acked-by for the libata bit.
>
Thanks. I'm going to have to post a v2 and, given the timing, I figure
it will have to wait for v5.20 now.
Cheers,
John
* Re: [PATCH 0/4] DMA mapping changes for SCSI core
2022-05-22 13:13 ` [PATCH 0/4] DMA mapping changes for SCSI core Christoph Hellwig
2022-05-22 22:22 ` Damien Le Moal
@ 2022-05-24 2:41 ` Martin K. Petersen
1 sibling, 0 replies; 20+ messages in thread
From: Martin K. Petersen @ 2022-05-24 2:41 UTC (permalink / raw)
To: Christoph Hellwig
Cc: John Garry, damien.lemoal, joro, will, jejb, martin.petersen,
m.szyprowski, robin.murphy, linux-doc, linux-kernel, linux-ide,
iommu, linux-scsi, liyihang6, chenxiang66, thunder.leizhen
Christoph,
> The whole series looks fine to me. I'll happily queue it up in the
> dma-mapping tree if the SCSI and ATA maintainers are ok with that.
Works for me.
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
--
Martin K. Petersen Oracle Linux Engineering
end of thread, other threads:[~2022-05-24 2:42 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-20 8:23 [PATCH 0/4] DMA mapping changes for SCSI core John Garry
2022-05-20 8:23 ` [PATCH 1/4] dma-mapping: Add dma_opt_mapping_size() John Garry
2022-05-20 23:32 ` Damien Le Moal
2022-05-20 8:23 ` [PATCH 2/4] dma-iommu: Add iommu_dma_opt_mapping_size() John Garry
2022-05-20 23:33 ` Damien Le Moal
2022-05-23 7:01 ` John Garry
2022-05-23 7:32 ` Damien Le Moal
2022-05-23 7:34 ` Damien Le Moal
2022-05-20 8:23 ` [PATCH 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits John Garry
2022-05-20 23:30 ` Damien Le Moal
2022-05-23 6:53 ` John Garry
2022-05-23 7:33 ` Damien Le Moal
2022-05-23 11:08 ` Dan Carpenter
2022-05-23 11:56 ` John Garry
2022-05-20 8:23 ` [PATCH 4/4] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors John Garry
2022-05-20 23:34 ` Damien Le Moal
2022-05-22 13:13 ` [PATCH 0/4] DMA mapping changes for SCSI core Christoph Hellwig
2022-05-22 22:22 ` Damien Le Moal
2022-05-23 12:00 ` John Garry
2022-05-24 2:41 ` Martin K. Petersen