* [PATCH rdma-next] RDMA/core: Add weak ordering dma attr to dma mapping
@ 2020-02-12  7:35 Leon Romanovsky
  2020-02-13 19:20 ` Jason Gunthorpe
  0 siblings, 1 reply; 2+ messages in thread
From: Leon Romanovsky @ 2020-02-12  7:35 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Michael Guralnik, RDMA mailing list, Leon Romanovsky

From: Michael Guralnik <michaelgur@mellanox.com>

Memory regions registered with IB_ACCESS_RELAXED_ORDERING will be DMA
mapped with DMA_ATTR_WEAK_ORDERING.

This allows reads and writes to the mapping to be weakly ordered; such a
change can enhance performance on architectures that support it.
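
As an illustration only (not part of this patch), a minimal userspace
sketch of how an application might request relaxed ordering when
registering a memory region, assuming an rdma-core/libibverbs build that
exposes IBV_ACCESS_RELAXED_ORDERING; once that flag reaches the kernel,
ib_umem_get() maps the region with DMA_ATTR_WEAK_ORDERING:

  #include <stdio.h>
  #include <stdlib.h>
  #include <infiniband/verbs.h>

  int main(void)
  {
  	struct ibv_device **dev_list = ibv_get_device_list(NULL);
  	if (!dev_list || !dev_list[0])
  		return 1;

  	struct ibv_context *ctx = ibv_open_device(dev_list[0]);
  	if (!ctx)
  		return 1;

  	struct ibv_pd *pd = ibv_alloc_pd(ctx);
  	void *buf = aligned_alloc(4096, 4096);
  	if (!pd || !buf)
  		return 1;

  	/* Request weakly ordered DMA in addition to the usual access rights. */
  	struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
  				       IBV_ACCESS_LOCAL_WRITE |
  				       IBV_ACCESS_REMOTE_WRITE |
  				       IBV_ACCESS_RELAXED_ORDERING);
  	if (!mr)
  		perror("ibv_reg_mr");
  	else
  		ibv_dereg_mr(mr);

  	ibv_dealloc_pd(pd);
  	ibv_close_device(ctx);
  	ibv_free_device_list(dev_list);
  	free(buf);
  	return 0;
  }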

Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/umem.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 06b6125b5ae1..82455a1392f1 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -197,6 +197,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	unsigned long lock_limit;
 	unsigned long new_pinned;
 	unsigned long cur_base;
+	unsigned long dma_attr = 0;
 	struct mm_struct *mm;
 	unsigned long npages;
 	int ret;
@@ -278,10 +279,12 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 
 	sg_mark_end(sg);
 
-	umem->nmap = ib_dma_map_sg(device,
-				   umem->sg_head.sgl,
-				   umem->sg_nents,
-				   DMA_BIDIRECTIONAL);
+	if (access & IB_ACCESS_RELAXED_ORDERING)
+		dma_attr |= DMA_ATTR_WEAK_ORDERING;
+
+	umem->nmap =
+		ib_dma_map_sg_attrs(device, umem->sg_head.sgl, umem->sg_nents,
+				    DMA_BIDIRECTIONAL, dma_attr);
 
 	if (!umem->nmap) {
 		ret = -ENOMEM;
-- 
2.24.1



* Re: [PATCH rdma-next] RDMA/core: Add weak ordering dma attr to dma mapping
  2020-02-12  7:35 [PATCH rdma-next] RDMA/core: Add weak ordering dma attr to dma mapping Leon Romanovsky
@ 2020-02-13 19:20 ` Jason Gunthorpe
  0 siblings, 0 replies; 2+ messages in thread
From: Jason Gunthorpe @ 2020-02-13 19:20 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Michael Guralnik, RDMA mailing list, Leon Romanovsky

On Wed, Feb 12, 2020 at 09:35:59AM +0200, Leon Romanovsky wrote:
> From: Michael Guralnik <michaelgur@mellanox.com>
> 
> Memory regions registered with IB_ACCESS_RELAXED_ORDERING will be DMA
> mapped with DMA_ATTR_WEAK_ORDERING.
> 
> This allows reads and writes to the mapping to be weakly ordered; such a
> change can enhance performance on architectures that support it.
> 
> Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> ---
>  drivers/infiniband/core/umem.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)

Applied to for-next

Thanks,
Jason

