linux-rdma.vger.kernel.org archive mirror
* [PATCH rdma-next 0/4] Improve ODP by using HMM API
@ 2020-09-14 11:39 Leon Romanovsky
  2020-09-14 11:39 ` [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault() Leon Romanovsky
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Leon Romanovsky @ 2020-09-14 11:39 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, linux-kernel, linux-rdma, Yishai Hadas

From: Leon Romanovsky <leonro@nvidia.com>

Based on:
https://lore.kernel.org/lkml/20200914112653.345244-1-leon@kernel.org/

---------------------------------------------------------------------------------------
From Yishai:

This series improves ODP performance by moving to the HMM API, as described
below.

The get_user_pages_remote() functionality was replaced by HMM:
- There is no longer a need to allocate and free memory to hold its output per
  call.
- There is no longer a need to call put_page() to unpin the pages.
- Contiguous pages are detected based on the order returned by HMM, with no
  need to iterate over and evaluate each page.

Moving to HMM also makes it possible to reduce page faults in the system by
using snapshot mode. This mode allows pages that already exist in the CPU page
table to be presented to the device without faulting.

This non-faulting mode may be used explicitly by an application through a new
advice MR option (i.e. PREFETCH_NO_FAULT), and is also used internally upon
ODP MR registration as part of initializing the device page table.

To achieve the above, the internal ODP data structures were changed and some
flows were cleaned up/adapted accordingly.
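
For illustration only, an application could request the non-faulting prefetch
roughly as below. This is just a sketch: it assumes an already registered ODP
MR and a matching rdma-core advice name
(IBV_ADVISE_MR_ADVICE_PREFETCH_NO_FAULT), which is not part of this kernel
series.

	#include <infiniband/verbs.h>

	/* Sketch: mirror whatever is already present in the CPU page table
	 * for the MR range into the device page table, without faulting
	 * anything in.
	 */
	static int prefetch_no_fault(struct ibv_pd *pd, struct ibv_mr *mr)
	{
		struct ibv_sge sge = {
			.addr	= (uintptr_t)mr->addr,
			.length	= mr->length,
			.lkey	= mr->lkey,
		};

		/* IBV_ADVISE_MR_FLAG_FLUSH makes the call synchronous */
		return ibv_advise_mr(pd, IBV_ADVISE_MR_ADVICE_PREFETCH_NO_FAULT,
				     IBV_ADVISE_MR_FLAG_FLUSH, &sge, 1);
	}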

Thanks

Yishai Hadas (4):
  IB/core: Improve ODP to use hmm_range_fault()
  IB/core: Enable ODP sync without faulting
  RDMA/mlx5: Extend advice MR to support non faulting mode
  RDMA/mlx5: Sync device with CPU pages upon ODP MR registration

 drivers/infiniband/Kconfig              |   1 +
 drivers/infiniband/core/umem_odp.c      | 286 ++++++++++--------------
 drivers/infiniband/hw/mlx5/mlx5_ib.h    |   6 +
 drivers/infiniband/hw/mlx5/mr.c         |  14 +-
 drivers/infiniband/hw/mlx5/odp.c        |  50 +++--
 include/rdma/ib_umem_odp.h              |  21 +-
 include/uapi/rdma/ib_user_ioctl_verbs.h |   1 +
 7 files changed, 178 insertions(+), 201 deletions(-)

--
2.26.2



* [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault()
  2020-09-14 11:39 [PATCH rdma-next 0/4] Improve ODP by using HMM API Leon Romanovsky
@ 2020-09-14 11:39 ` Leon Romanovsky
  2020-09-16 16:45   ` Christoph Hellwig
  2020-09-14 11:39 ` [PATCH rdma-next 2/4] IB/core: Enable ODP sync without faulting Leon Romanovsky
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Leon Romanovsky @ 2020-09-14 11:39 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Yishai Hadas, linux-rdma

From: Yishai Hadas <yishaih@mellanox.com>

Move to use hmm_range_fault(), which improves performance in a few aspects
compared to the previous get_user_pages_remote() based implementation.

This includes:
- Dropping the need to allocate and free memory to hold its output.
- No longer needing to call put_page() to unpin the pages.
- Detecting contiguous pages based on the order returned by HMM, with no need
  to iterate over and evaluate each page.

In addition, moving to HMM makes it possible to reduce page faults in the
system, as it exposes a snapshot mode; this will be introduced in the next
patches of this series.

As part of this change, some flows were cleaned up to use the data structures
required by HMM.
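
For reference, the new code follows the standard mmu_interval_notifier plus
hmm_range_fault() retry pattern (see Documentation/vm/hmm.rst). A condensed
sketch of that pattern, assuming mm, start/end and the pfns array are set up
by the caller (the real version is in the diff below):

	struct hmm_range range = {
		.notifier	= &umem_odp->notifier,
		.start		= start,
		.end		= end,
		.hmm_pfns	= pfns,
		.default_flags	= HMM_PFN_REQ_FAULT,
	};
	int ret;

again:
	range.notifier_seq = mmu_interval_read_begin(range.notifier);

	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);		/* fills pfns[] */
	mmap_read_unlock(mm);
	if (ret) {
		if (ret == -EBUSY)		/* raced with an invalidation */
			goto again;
		return ret;
	}

	mutex_lock(&umem_odp->umem_mutex);
	if (mmu_interval_read_retry(range.notifier, range.notifier_seq)) {
		/* the range was invalidated meanwhile, redo the fault */
		mutex_unlock(&umem_odp->umem_mutex);
		goto again;
	}
	/* ... DMA map the pfns and update the device page table ... */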

Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/Kconfig         |   1 +
 drivers/infiniband/core/umem_odp.c | 273 +++++++++++------------------
 drivers/infiniband/hw/mlx5/odp.c   |  25 +--
 include/rdma/ib_umem_odp.h         |  21 +--
 4 files changed, 123 insertions(+), 197 deletions(-)

diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index 2f299c0be2e5..b8bf1ca2b72c 100644
--- a/drivers/infiniband/Kconfig
+++ b/drivers/infiniband/Kconfig
@@ -49,6 +49,7 @@ config INFINIBAND_ON_DEMAND_PAGING
 	depends on INFINIBAND_USER_MEM
 	select MMU_NOTIFIER
 	select INTERVAL_TREE
+	select HMM_MIRROR
 	default y
 	help
 	  On demand paging support for the InfiniBand subsystem.
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index cc6b4befde7c..8db3e78eb087 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -40,6 +40,7 @@
 #include <linux/vmalloc.h>
 #include <linux/hugetlb.h>
 #include <linux/interval_tree.h>
+#include <linux/hmm.h>
 #include <linux/pagemap.h>
 
 #include <rdma/ib_verbs.h>
@@ -60,7 +61,7 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
 		size_t page_size = 1UL << umem_odp->page_shift;
 		unsigned long start;
 		unsigned long end;
-		size_t pages;
+		size_t ndmas, npfns;
 
 		start = ALIGN_DOWN(umem_odp->umem.address, page_size);
 		if (check_add_overflow(umem_odp->umem.address,
@@ -71,20 +72,21 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
 		if (unlikely(end < page_size))
 			return -EOVERFLOW;
 
-		pages = (end - start) >> umem_odp->page_shift;
-		if (!pages)
+		ndmas = (end - start) >> umem_odp->page_shift;
+		if (!ndmas)
 			return -EINVAL;
 
-		umem_odp->page_list = kvcalloc(
-			pages, sizeof(*umem_odp->page_list), GFP_KERNEL);
-		if (!umem_odp->page_list)
+		npfns = (end - start) >> PAGE_SHIFT;
+		umem_odp->pfn_list = kvcalloc(
+			npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL);
+		if (!umem_odp->pfn_list)
 			return -ENOMEM;
 
 		umem_odp->dma_list = kvcalloc(
-			pages, sizeof(*umem_odp->dma_list), GFP_KERNEL);
+			ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL);
 		if (!umem_odp->dma_list) {
 			ret = -ENOMEM;
-			goto out_page_list;
+			goto out_pfn_list;
 		}
 
 		ret = mmu_interval_notifier_insert(&umem_odp->notifier,
@@ -98,8 +100,8 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
 
 out_dma_list:
 	kvfree(umem_odp->dma_list);
-out_page_list:
-	kvfree(umem_odp->page_list);
+out_pfn_list:
+	kvfree(umem_odp->pfn_list);
 	return ret;
 }
 
@@ -276,7 +278,7 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp)
 		mutex_unlock(&umem_odp->umem_mutex);
 		mmu_interval_notifier_remove(&umem_odp->notifier);
 		kvfree(umem_odp->dma_list);
-		kvfree(umem_odp->page_list);
+		kvfree(umem_odp->pfn_list);
 	}
 	put_pid(umem_odp->tgid);
 	kfree(umem_odp);
@@ -287,87 +289,52 @@ EXPORT_SYMBOL(ib_umem_odp_release);
  * Map for DMA and insert a single page into the on-demand paging page tables.
  *
  * @umem: the umem to insert the page to.
- * @page_index: index in the umem to add the page to.
+ * @dma_index: index in the umem to add the dma to.
  * @page: the page struct to map and add.
  * @access_mask: access permissions needed for this page.
  * @current_seq: sequence number for synchronization with invalidations.
  *               the sequence number is taken from
  *               umem_odp->notifiers_seq.
  *
- * The function returns -EFAULT if the DMA mapping operation fails. It returns
- * -EAGAIN if a concurrent invalidation prevents us from updating the page.
+ * The function returns -EFAULT if the DMA mapping operation fails.
  *
- * The page is released via put_page even if the operation failed. For on-demand
- * pinning, the page is released whenever it isn't stored in the umem.
  */
 static int ib_umem_odp_map_dma_single_page(
 		struct ib_umem_odp *umem_odp,
-		unsigned int page_index,
+		unsigned int dma_index,
 		struct page *page,
-		u64 access_mask,
-		unsigned long current_seq)
+		u64 access_mask)
 {
 	struct ib_device *dev = umem_odp->umem.ibdev;
 	dma_addr_t dma_addr;
-	int ret = 0;
 
-	if (mmu_interval_check_retry(&umem_odp->notifier, current_seq)) {
-		ret = -EAGAIN;
-		goto out;
-	}
-	if (!(umem_odp->dma_list[page_index])) {
-		dma_addr =
-			ib_dma_map_page(dev, page, 0, BIT(umem_odp->page_shift),
-					DMA_BIDIRECTIONAL);
-		if (ib_dma_mapping_error(dev, dma_addr)) {
-			ret = -EFAULT;
-			goto out;
-		}
-		umem_odp->dma_list[page_index] = dma_addr | access_mask;
-		umem_odp->page_list[page_index] = page;
-		umem_odp->npages++;
-	} else if (umem_odp->page_list[page_index] == page) {
-		umem_odp->dma_list[page_index] |= access_mask;
-	} else {
-		/*
-		 * This is a race here where we could have done:
-		 *
-		 *         CPU0                             CPU1
-		 *   get_user_pages()
-		 *                                       invalidate()
-		 *                                       page_fault()
-		 *   mutex_lock(umem_mutex)
-		 *    page from GUP != page in ODP
-		 *
-		 * It should be prevented by the retry test above as reading
-		 * the seq number should be reliable under the
-		 * umem_mutex. Thus something is really not working right if
-		 * things get here.
-		 */
-		WARN(true,
-		     "Got different pages in IB device and from get_user_pages. IB device page: %p, gup page: %p\n",
-		     umem_odp->page_list[page_index], page);
-		ret = -EAGAIN;
+	if (umem_odp->dma_list[dma_index]) {
+		umem_odp->dma_list[dma_index] &= ODP_DMA_ADDR_MASK;
+		umem_odp->dma_list[dma_index] |= access_mask;
+		return 0;
 	}
 
-out:
-	put_page(page);
-	return ret;
+	dma_addr =
+		ib_dma_map_page(dev, page, 0, BIT(umem_odp->page_shift),
+				DMA_BIDIRECTIONAL);
+	if (ib_dma_mapping_error(dev, dma_addr))
+		return -EFAULT;
+
+	umem_odp->dma_list[dma_index] = dma_addr | access_mask;
+	umem_odp->npages++;
+	return 0;
 }
 
 /**
- * ib_umem_odp_map_dma_pages - Pin and DMA map userspace memory in an ODP MR.
+ * ib_umem_odp_map_dma_and_lock - DMA map userspace memory in an ODP MR and lock it.
  *
- * Pins the range of pages passed in the argument, and maps them to
- * DMA addresses. The DMA addresses of the mapped pages is updated in
- * umem_odp->dma_list.
+ * Maps the range passed in the argument to DMA addresses.
+ * The DMA addresses of the mapped pages is updated in umem_odp->dma_list.
+ * Upon success the ODP MR will be locked to let caller complete its device
+ * page table update.
  *
  * Returns the number of pages mapped in success, negative error code
  * for failure.
- * An -EAGAIN error code is returned when a concurrent mmu notifier prevents
- * the function from completing its task.
- * An -ENOENT error code indicates that userspace process is being terminated
- * and mm was already destroyed.
  * @umem_odp: the umem to map and pin
  * @user_virt: the address from which we need to map.
  * @bcnt: the minimal number of bytes to pin and map. The mapping might be
@@ -376,21 +343,18 @@ static int ib_umem_odp_map_dma_single_page(
  *        the return value.
  * @access_mask: bit mask of the requested access permissions for the given
  *               range.
- * @current_seq: the MMU notifiers sequance value for synchronization with
- *               invalidations. the sequance number is read from
- *               umem_odp->notifiers_seq before calling this function
  */
-int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
-			      u64 bcnt, u64 access_mask,
-			      unsigned long current_seq)
+int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
+				 u64 bcnt, u64 access_mask)
+			__acquires(&umem_odp->umem_mutex)
 {
 	struct task_struct *owning_process  = NULL;
 	struct mm_struct *owning_mm = umem_odp->umem.owning_mm;
-	struct page       **local_page_list = NULL;
-	u64 page_mask, off;
-	int j, k, ret = 0, start_idx, npages = 0;
-	unsigned int flags = 0, page_shift;
-	phys_addr_t p = 0;
+	int pfn_index, dma_index, ret = 0, start_idx;
+	unsigned int page_shift, hmm_order, pfn_start_idx;
+	unsigned long num_pfns, current_seq;
+	struct hmm_range range = {};
+	unsigned long timeout;
 
 	if (access_mask == 0)
 		return -EINVAL;
@@ -399,15 +363,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 	    user_virt + bcnt > ib_umem_end(umem_odp))
 		return -EFAULT;
 
-	local_page_list = (struct page **)__get_free_page(GFP_KERNEL);
-	if (!local_page_list)
-		return -ENOMEM;
-
 	page_shift = umem_odp->page_shift;
-	page_mask = ~(BIT(page_shift) - 1);
-	off = user_virt & (~page_mask);
-	user_virt = user_virt & page_mask;
-	bcnt += off; /* Charge for the first page offset as well. */
 
 	/*
 	 * owning_process is allowed to be NULL, this means somehow the mm is
@@ -420,99 +376,88 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 		goto out_put_task;
 	}
 
+	range.notifier = &umem_odp->notifier;
+	range.start = ALIGN_DOWN(user_virt, 1UL << page_shift);
+	range.end = ALIGN(user_virt + bcnt, 1UL << page_shift);
+	pfn_start_idx = (range.start - ib_umem_start(umem_odp)) >> PAGE_SHIFT;
+	num_pfns = (range.end - range.start) >> PAGE_SHIFT;
+	range.default_flags = HMM_PFN_REQ_FAULT;
+
 	if (access_mask & ODP_WRITE_ALLOWED_BIT)
-		flags |= FOLL_WRITE;
+		range.default_flags |= HMM_PFN_REQ_WRITE;
+
+	range.hmm_pfns = &(umem_odp->pfn_list[pfn_start_idx]);
+	timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
+
+retry:
+	current_seq = range.notifier_seq =
+		mmu_interval_read_begin(&umem_odp->notifier);
+
+	mmap_read_lock(owning_mm);
+	ret = hmm_range_fault(&range);
+	mmap_read_unlock(owning_mm);
+	if (unlikely(ret)) {
+		if (ret == -EBUSY && !time_after(jiffies, timeout))
+			goto retry;
+		goto out_put_mm;
+	}
 
-	start_idx = (user_virt - ib_umem_start(umem_odp)) >> page_shift;
-	k = start_idx;
+	start_idx = (range.start - ib_umem_start(umem_odp)) >> page_shift;
+	dma_index = start_idx;
 
-	while (bcnt > 0) {
-		const size_t gup_num_pages = min_t(size_t,
-				ALIGN(bcnt, PAGE_SIZE) / PAGE_SIZE,
-				PAGE_SIZE / sizeof(struct page *));
+	mutex_lock(&umem_odp->umem_mutex);
+	if (mmu_interval_read_retry(&umem_odp->notifier, current_seq)) {
+		mutex_unlock(&umem_odp->umem_mutex);
+		goto retry;
+	}
 
-		mmap_read_lock(owning_mm);
+	for (pfn_index = 0; pfn_index < num_pfns;
+		pfn_index += 1 << (page_shift - PAGE_SHIFT), dma_index++) {
 		/*
-		 * Note: this might result in redundent page getting. We can
-		 * avoid this by checking dma_list to be 0 before calling
-		 * get_user_pages. However, this make the code much more
-		 * complex (and doesn't gain us much performance in most use
-		 * cases).
+		 * Since we asked for hmm_range_fault() to populate pages,
+		 * it shouldn't return an error entry on success.
 		 */
-		npages = get_user_pages_remote(owning_mm,
-				user_virt, gup_num_pages,
-				flags, local_page_list, NULL, NULL);
-		mmap_read_unlock(owning_mm);
-
-		if (npages < 0) {
-			if (npages != -EAGAIN)
-				pr_warn("fail to get %zu user pages with error %d\n", gup_num_pages, npages);
-			else
-				pr_debug("fail to get %zu user pages with error %d\n", gup_num_pages, npages);
+		WARN_ON(range.hmm_pfns[pfn_index] & HMM_PFN_ERROR);
+		WARN_ON(!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID));
+		hmm_order = hmm_pfn_to_map_order(range.hmm_pfns[pfn_index]);
+		/* If a hugepage was detected and ODP wasn't set for it, the umem
+		 * page_shift will be used; the opposite case is an error.
+		 */
+		if (hmm_order + PAGE_SHIFT < page_shift) {
+			ret = -EINVAL;
+			pr_debug("%s: un-expected hmm_order %d, page_shift %d\n",
+				 __func__, hmm_order, page_shift);
 			break;
 		}
 
-		bcnt -= min_t(size_t, npages << PAGE_SHIFT, bcnt);
-		mutex_lock(&umem_odp->umem_mutex);
-		for (j = 0; j < npages; j++, user_virt += PAGE_SIZE) {
-			if (user_virt & ~page_mask) {
-				p += PAGE_SIZE;
-				if (page_to_phys(local_page_list[j]) != p) {
-					ret = -EFAULT;
-					break;
-				}
-				put_page(local_page_list[j]);
-				continue;
-			}
-
-			ret = ib_umem_odp_map_dma_single_page(
-					umem_odp, k, local_page_list[j],
-					access_mask, current_seq);
-			if (ret < 0) {
-				if (ret != -EAGAIN)
-					pr_warn("ib_umem_odp_map_dma_single_page failed with error %d\n", ret);
-				else
-					pr_debug("ib_umem_odp_map_dma_single_page failed with error %d\n", ret);
-				break;
-			}
-
-			p = page_to_phys(local_page_list[j]);
-			k++;
-		}
-		mutex_unlock(&umem_odp->umem_mutex);
-
+		ret = ib_umem_odp_map_dma_single_page(
+				umem_odp, dma_index, hmm_pfn_to_page(range.hmm_pfns[pfn_index]),
+				access_mask);
 		if (ret < 0) {
-			/*
-			 * Release pages, remembering that the first page
-			 * to hit an error was already released by
-			 * ib_umem_odp_map_dma_single_page().
-			 */
-			if (npages - (j + 1) > 0)
-				release_pages(&local_page_list[j+1],
-					      npages - (j + 1));
+			pr_debug("ib_umem_odp_map_dma_single_page failed with error %d\n", ret);
 			break;
 		}
 	}
+	/* upon success the lock should stay held for the caller */
+	if (!ret)
+		ret = dma_index - start_idx;
+	else
+		mutex_unlock(&umem_odp->umem_mutex);
 
-	if (ret >= 0) {
-		if (npages < 0 && k == start_idx)
-			ret = npages;
-		else
-			ret = k - start_idx;
-	}
-
+out_put_mm:
 	mmput(owning_mm);
 out_put_task:
 	if (owning_process)
 		put_task_struct(owning_process);
-	free_page((unsigned long)local_page_list);
 	return ret;
 }
-EXPORT_SYMBOL(ib_umem_odp_map_dma_pages);
+EXPORT_SYMBOL(ib_umem_odp_map_dma_and_lock);
 
 void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 				 u64 bound)
 {
+	dma_addr_t dma_addr;
+	dma_addr_t dma;
 	int idx;
 	u64 addr;
 	struct ib_device *dev = umem_odp->umem.ibdev;
@@ -521,19 +466,14 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 
 	virt = max_t(u64, virt, ib_umem_start(umem_odp));
 	bound = min_t(u64, bound, ib_umem_end(umem_odp));
-	/* Note that during the run of this function, the
-	 * notifiers_count of the MR is > 0, preventing any racing
-	 * faults from completion. We might be racing with other
-	 * invalidations, so we must make sure we free each page only
-	 * once. */
 	for (addr = virt; addr < bound; addr += BIT(umem_odp->page_shift)) {
 		idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift;
-		if (umem_odp->page_list[idx]) {
-			struct page *page = umem_odp->page_list[idx];
-			dma_addr_t dma = umem_odp->dma_list[idx];
-			dma_addr_t dma_addr = dma & ODP_DMA_ADDR_MASK;
+		dma = umem_odp->dma_list[idx];
+		dma_addr = dma & ODP_DMA_ADDR_MASK;
 
-			WARN_ON(!dma_addr);
+		if (dma_addr) {
+			unsigned long pfn_idx = (addr - ib_umem_start(umem_odp)) >> PAGE_SHIFT;
+			struct page *page = hmm_pfn_to_page(umem_odp->pfn_list[pfn_idx]);
 
 			ib_dma_unmap_page(dev, dma_addr,
 					  BIT(umem_odp->page_shift),
@@ -551,7 +491,6 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 				 */
 				set_page_dirty(head_page);
 			}
-			umem_odp->page_list[idx] = NULL;
 			umem_odp->dma_list[idx] = 0;
 			umem_odp->npages--;
 		}
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index e40a80c6636c..962ee36abc7b 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -671,7 +671,6 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 {
 	int page_shift, ret, np;
 	bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE;
-	unsigned long current_seq;
 	u64 access_mask;
 	u64 start_idx;
 
@@ -682,25 +681,17 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	if (odp->umem.writable && !downgrade)
 		access_mask |= ODP_WRITE_ALLOWED_BIT;
 
-	current_seq = mmu_interval_read_begin(&odp->notifier);
-
-	np = ib_umem_odp_map_dma_pages(odp, user_va, bcnt, access_mask,
-				       current_seq);
+	np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask);
 	if (np < 0)
 		return np;
 
-	mutex_lock(&odp->umem_mutex);
-	if (!mmu_interval_read_retry(&odp->notifier, current_seq)) {
-		/*
-		 * No need to check whether the MTTs really belong to
-		 * this MR, since ib_umem_odp_map_dma_pages already
-		 * checks this.
-		 */
-		ret = mlx5_ib_update_xlt(mr, start_idx, np,
-					 page_shift, MLX5_IB_UPD_XLT_ATOMIC);
-	} else {
-		ret = -EAGAIN;
-	}
+	/*
+	 * No need to check whether the MTTs really belong to
+	 * this MR, since ib_umem_odp_map_dma_and_lock already
+	 * checks this.
+	 */
+	ret = mlx5_ib_update_xlt(mr, start_idx, np,
+				 page_shift, MLX5_IB_UPD_XLT_ATOMIC);
 	mutex_unlock(&odp->umem_mutex);
 
 	if (ret < 0) {
diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h
index d16d2c17e733..a53b62ac8a9d 100644
--- a/include/rdma/ib_umem_odp.h
+++ b/include/rdma/ib_umem_odp.h
@@ -14,17 +14,13 @@ struct ib_umem_odp {
 	struct mmu_interval_notifier notifier;
 	struct pid *tgid;
 
+	/* An array of the pfns included in the on-demand paging umem. */
+	unsigned long *pfn_list;
+
 	/*
-	 * An array of the pages included in the on-demand paging umem.
-	 * Indices of pages that are currently not mapped into the device will
-	 * contain NULL.
-	 */
-	struct page		**page_list;
-	/*
-	 * An array of the same size as page_list, with DMA addresses mapped
-	 * for pages the pages in page_list. The lower two bits designate
-	 * access permissions. See ODP_READ_ALLOWED_BIT and
-	 * ODP_WRITE_ALLOWED_BIT.
+	 * An array with DMA addresses mapped for pfns in pfn_list.
+	 * The lower two bits designate access permissions.
+	 * See ODP_READ_ALLOWED_BIT and ODP_WRITE_ALLOWED_BIT.
 	 */
 	dma_addr_t		*dma_list;
 	/*
@@ -97,9 +93,8 @@ ib_umem_odp_alloc_child(struct ib_umem_odp *root_umem, unsigned long addr,
 			const struct mmu_interval_notifier_ops *ops);
 void ib_umem_odp_release(struct ib_umem_odp *umem_odp);
 
-int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offset,
-			      u64 bcnt, u64 access_mask,
-			      unsigned long current_seq);
+int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 start_offset,
+				 u64 bcnt, u64 access_mask);
 
 void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offset,
 				 u64 bound);
-- 
2.26.2



* [PATCH rdma-next 2/4] IB/core: Enable ODP sync without faulting
  2020-09-14 11:39 [PATCH rdma-next 0/4] Improve ODP by using HMM API Leon Romanovsky
  2020-09-14 11:39 ` [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault() Leon Romanovsky
@ 2020-09-14 11:39 ` Leon Romanovsky
  2020-09-16 16:47   ` Christoph Hellwig
  2020-09-14 11:39 ` [PATCH rdma-next 3/4] RDMA/mlx5: Extend advice MR to support non faulting mode Leon Romanovsky
  2020-09-14 11:39 ` [PATCH rdma-next 4/4] RDMA/mlx5: Sync device with CPU pages upon ODP MR registration Leon Romanovsky
  3 siblings, 1 reply; 15+ messages in thread
From: Leon Romanovsky @ 2020-09-14 11:39 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Yishai Hadas, linux-rdma

From: Yishai Hadas <yishaih@mellanox.com>

Enable ODP sync without faulting; this improves performance by reducing
the number of page faults in the system.

The gain from this option is that the device page table can be aligned
with the pages already present in the CPU page table without causing
page faults.

As a result, the data-path overhead of the hardware triggering a fault,
which ends up calling into the driver to bring in the pages, is avoided.
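
The whole difference boils down to which flags are requested from
hmm_range_fault(). A minimal sketch of the intent (the actual change is in
the diff below):

	/*
	 * fault == true: populate missing pages, requesting write access
	 * when the caller asked for it.
	 * fault == false (snapshot): leave default_flags at 0 so that
	 * hmm_range_fault() only reports pages already present in the CPU
	 * page table; holes stay invalid and are simply skipped.
	 */
	if (fault) {
		range.default_flags = HMM_PFN_REQ_FAULT;
		if (access_mask & ODP_WRITE_ALLOWED_BIT)
			range.default_flags |= HMM_PFN_REQ_WRITE;
	}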

Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/core/umem_odp.c | 35 +++++++++++++++++++++---------
 drivers/infiniband/hw/mlx5/odp.c   |  2 +-
 include/rdma/ib_umem_odp.h         |  2 +-
 3 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 8db3e78eb087..e47ccb196516 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -343,9 +343,10 @@ static int ib_umem_odp_map_dma_single_page(
  *        the return value.
  * @access_mask: bit mask of the requested access permissions for the given
  *               range.
+ * @fault: is faulting required for the given range
  */
 int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
-				 u64 bcnt, u64 access_mask)
+				 u64 bcnt, u64 access_mask, bool fault)
 			__acquires(&umem_odp->umem_mutex)
 {
 	struct task_struct *owning_process  = NULL;
@@ -381,10 +382,12 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
 	range.end = ALIGN(user_virt + bcnt, 1UL << page_shift);
 	pfn_start_idx = (range.start - ib_umem_start(umem_odp)) >> PAGE_SHIFT;
 	num_pfns = (range.end - range.start) >> PAGE_SHIFT;
-	range.default_flags = HMM_PFN_REQ_FAULT;
+	if (fault) {
+		range.default_flags = HMM_PFN_REQ_FAULT;
 
-	if (access_mask & ODP_WRITE_ALLOWED_BIT)
-		range.default_flags |= HMM_PFN_REQ_WRITE;
+		if (access_mask & ODP_WRITE_ALLOWED_BIT)
+			range.default_flags |= HMM_PFN_REQ_WRITE;
+	}
 
 	range.hmm_pfns = &(umem_odp->pfn_list[pfn_start_idx]);
 	timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
@@ -413,12 +416,24 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
 
 	for (pfn_index = 0; pfn_index < num_pfns;
 		pfn_index += 1 << (page_shift - PAGE_SHIFT), dma_index++) {
-		/*
-		 * Since we asked for hmm_range_fault() to populate pages,
-		 * it shouldn't return an error entry on success.
-		 */
-		WARN_ON(range.hmm_pfns[pfn_index] & HMM_PFN_ERROR);
-		WARN_ON(!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID));
+
+		if (fault) {
+			/*
+			 * Since we asked for hmm_range_fault() to populate pages,
+			 * it shouldn't return an error entry on success.
+			 */
+			WARN_ON(range.hmm_pfns[pfn_index] & HMM_PFN_ERROR);
+			WARN_ON(!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID));
+		} else {
+			if (!(range.hmm_pfns[pfn_index] & HMM_PFN_VALID)) {
+				WARN_ON(umem_odp->dma_list[dma_index]);
+				continue;
+			}
+			access_mask = (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE) ?
+				(ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT) :
+				 ODP_READ_ALLOWED_BIT;
+		}
+
 		hmm_order = hmm_pfn_to_map_order(range.hmm_pfns[pfn_index]);
 		/* If a hugepage was detected and ODP wasn't set for it, the umem
 		 * page_shift will be used; the opposite case is an error.
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 962ee36abc7b..133d54b6f447 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -681,7 +681,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	if (odp->umem.writable && !downgrade)
 		access_mask |= ODP_WRITE_ALLOWED_BIT;
 
-	np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask);
+	np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, true);
 	if (np < 0)
 		return np;
 
diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h
index a53b62ac8a9d..0844c1d05ac6 100644
--- a/include/rdma/ib_umem_odp.h
+++ b/include/rdma/ib_umem_odp.h
@@ -94,7 +94,7 @@ ib_umem_odp_alloc_child(struct ib_umem_odp *root_umem, unsigned long addr,
 void ib_umem_odp_release(struct ib_umem_odp *umem_odp);
 
 int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 start_offset,
-				 u64 bcnt, u64 access_mask);
+				 u64 bcnt, u64 access_mask, bool fault);
 
 void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offset,
 				 u64 bound);
-- 
2.26.2



* [PATCH rdma-next 3/4] RDMA/mlx5: Extend advice MR to support non faulting mode
  2020-09-14 11:39 [PATCH rdma-next 0/4] Improve ODP by using HMM API Leon Romanovsky
  2020-09-14 11:39 ` [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault() Leon Romanovsky
  2020-09-14 11:39 ` [PATCH rdma-next 2/4] IB/core: Enable ODP sync without faulting Leon Romanovsky
@ 2020-09-14 11:39 ` Leon Romanovsky
  2020-09-14 11:39 ` [PATCH rdma-next 4/4] RDMA/mlx5: Sync device with CPU pages upon ODP MR registration Leon Romanovsky
  3 siblings, 0 replies; 15+ messages in thread
From: Leon Romanovsky @ 2020-09-14 11:39 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Yishai Hadas, linux-rdma

From: Yishai Hadas <yishaih@mellanox.com>

Extend the advice MR interface to support a non-faulting mode; this improves
performance by eliminating page faults and bringing in only the pages that
already exist in the CPU page table.

Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/mr.c         | 3 ++-
 drivers/infiniband/hw/mlx5/odp.c        | 7 ++++++-
 include/uapi/rdma/ib_user_ioctl_verbs.h | 1 +
 3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 6b0d6109afc6..dea65e511a3e 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1321,7 +1321,8 @@ int mlx5_ib_advise_mr(struct ib_pd *pd,
 		      struct uverbs_attr_bundle *attrs)
 {
 	if (advice != IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH &&
-	    advice != IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_WRITE)
+	    advice != IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_WRITE &&
+	    advice != IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_NO_FAULT)
 		return -EOPNOTSUPP;
 
 	return mlx5_ib_advise_mr_prefetch(pd, advice, flags,
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 133d54b6f447..adfb926a7906 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -665,6 +665,7 @@ void mlx5_ib_fence_odp_mr(struct mlx5_ib_mr *mr)
 }
 
 #define MLX5_PF_FLAGS_DOWNGRADE BIT(1)
+#define MLX5_PF_FLAGS_SNAPSHOT BIT(2)
 static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 			     u64 user_va, size_t bcnt, u32 *bytes_mapped,
 			     u32 flags)
@@ -673,6 +674,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE;
 	u64 access_mask;
 	u64 start_idx;
+	bool fault = !(flags & MLX5_PF_FLAGS_SNAPSHOT);
 
 	page_shift = odp->page_shift;
 	start_idx = (user_va - ib_umem_start(odp)) >> page_shift;
@@ -681,7 +683,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	if (odp->umem.writable && !downgrade)
 		access_mask |= ODP_WRITE_ALLOWED_BIT;
 
-	np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, true);
+	np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault);
 	if (np < 0)
 		return np;
 
@@ -1852,6 +1854,9 @@ int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd,
 	if (advice == IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH)
 		pf_flags |= MLX5_PF_FLAGS_DOWNGRADE;
 
+	if (advice == IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_NO_FAULT)
+		pf_flags |= MLX5_PF_FLAGS_SNAPSHOT;
+
 	if (flags & IB_UVERBS_ADVISE_MR_FLAG_FLUSH)
 		return mlx5_ib_prefetch_sg_list(pd, advice, pf_flags, sg_list,
 						num_sge);
diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h
index cfea82acfe57..22483799cd07 100644
--- a/include/uapi/rdma/ib_user_ioctl_verbs.h
+++ b/include/uapi/rdma/ib_user_ioctl_verbs.h
@@ -208,6 +208,7 @@ enum ib_uverbs_read_counters_flags {
 enum ib_uverbs_advise_mr_advice {
 	IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH,
 	IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_WRITE,
+	IB_UVERBS_ADVISE_MR_ADVICE_PREFETCH_NO_FAULT,
 };
 
 enum ib_uverbs_advise_mr_flag {
-- 
2.26.2



* [PATCH rdma-next 4/4] RDMA/mlx5: Sync device with CPU pages upon ODP MR registration
  2020-09-14 11:39 [PATCH rdma-next 0/4] Improve ODP by using HMM API Leon Romanovsky
                   ` (2 preceding siblings ...)
  2020-09-14 11:39 ` [PATCH rdma-next 3/4] RDMA/mlx5: Extend advice MR to support non faulting mode Leon Romanovsky
@ 2020-09-14 11:39 ` Leon Romanovsky
  3 siblings, 0 replies; 15+ messages in thread
From: Leon Romanovsky @ 2020-09-14 11:39 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Yishai Hadas, linux-rdma

From: Yishai Hadas <yishaih@mellanox.com>

Sync the device with the CPU pages upon ODP MR registration.
This reduces potential page faults down the road and improves performance.

Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  6 ++++++
 drivers/infiniband/hw/mlx5/mr.c      | 11 +++++++----
 drivers/infiniband/hw/mlx5/odp.c     | 22 +++++++++++++++++++++-
 3 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 6ab3efb75b21..8e77a262e44c 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1283,6 +1283,7 @@ void mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries,
 int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd,
 			       enum ib_uverbs_advise_mr_advice advice,
 			       u32 flags, struct ib_sge *sg_list, u32 num_sge);
+int mlx5_ib_init_odp_mr(struct mlx5_ib_mr *mr, u64 user_va, size_t bcnt, bool enable);
 #else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
 static inline void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev)
 {
@@ -1304,6 +1305,11 @@ mlx5_ib_advise_mr_prefetch(struct ib_pd *pd,
 {
 	return -EOPNOTSUPP;
 }
+static inline int mlx5_ib_init_odp_mr(struct mlx5_ib_mr *mr, u64 user_va,
+				      size_t bcnt, bool enable)
+{
+	return -EOPNOTSUPP;
+}
 #endif /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */
 
 extern const struct mmu_interval_notifier_ops mlx5_mn_ops;
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index dea65e511a3e..234a5d25a072 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1431,7 +1431,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	mr->umem = umem;
 	set_mr_fields(dev, mr, npages, length, access_flags);
 
-	if (xlt_with_umr) {
+	if (xlt_with_umr && !(access_flags & IB_ACCESS_ON_DEMAND)) {
 		/*
 		 * If the MR was created with reg_create then it will be
 		 * configured properly but left disabled. It is safe to go ahead
@@ -1439,9 +1439,6 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 		 */
 		int update_xlt_flags = MLX5_IB_UPD_XLT_ENABLE;
 
-		if (access_flags & IB_ACCESS_ON_DEMAND)
-			update_xlt_flags |= MLX5_IB_UPD_XLT_ZAP;
-
 		err = mlx5_ib_update_xlt(mr, 0, ncont, page_shift,
 					 update_xlt_flags);
 		if (err) {
@@ -1467,6 +1464,12 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 			dereg_mr(dev, mr);
 			return ERR_PTR(err);
 		}
+
+		err = mlx5_ib_init_odp_mr(mr, start, length, xlt_with_umr);
+		if (err) {
+			dereg_mr(dev, mr);
+			return ERR_PTR(err);
+		}
 	}
 
 	return &mr->ibmr;
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index adfb926a7906..05767c1f4ab9 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -666,6 +666,7 @@ void mlx5_ib_fence_odp_mr(struct mlx5_ib_mr *mr)
 
 #define MLX5_PF_FLAGS_DOWNGRADE BIT(1)
 #define MLX5_PF_FLAGS_SNAPSHOT BIT(2)
+#define MLX5_PF_FLAGS_ENABLE BIT(3)
 static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 			     u64 user_va, size_t bcnt, u32 *bytes_mapped,
 			     u32 flags)
@@ -675,6 +676,10 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	u64 access_mask;
 	u64 start_idx;
 	bool fault = !(flags & MLX5_PF_FLAGS_SNAPSHOT);
+	u32 xlt_flags = MLX5_IB_UPD_XLT_ATOMIC;
+
+	if (flags & MLX5_PF_FLAGS_ENABLE)
+		xlt_flags |= MLX5_IB_UPD_XLT_ENABLE;
 
 	page_shift = odp->page_shift;
 	start_idx = (user_va - ib_umem_start(odp)) >> page_shift;
@@ -693,7 +698,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	 * checks this.
 	 */
 	ret = mlx5_ib_update_xlt(mr, start_idx, np,
-				 page_shift, MLX5_IB_UPD_XLT_ATOMIC);
+				 page_shift, xlt_flags);
 	mutex_unlock(&odp->umem_mutex);
 
 	if (ret < 0) {
@@ -828,6 +833,21 @@ static int pagefault_mr(struct mlx5_ib_mr *mr, u64 io_virt, size_t bcnt,
 				     flags);
 }
 
+int mlx5_ib_init_odp_mr(struct mlx5_ib_mr *mr, u64 user_va, size_t bcnt,
+			bool enable)
+{
+	u32 flags = MLX5_PF_FLAGS_SNAPSHOT;
+	int ret;
+
+	if (enable)
+		flags |= MLX5_PF_FLAGS_ENABLE;
+
+	ret = pagefault_real_mr(mr, to_ib_umem_odp(mr->umem),
+				user_va, bcnt, NULL,
+				flags);
+	return ret >= 0 ? 0 : ret;
+}
+
 struct pf_frame {
 	struct pf_frame *next;
 	u32 key;
-- 
2.26.2



* Re: [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault()
  2020-09-14 11:39 ` [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault() Leon Romanovsky
@ 2020-09-16 16:45   ` Christoph Hellwig
  2020-09-16 17:21     ` Jason Gunthorpe
  2020-09-17  9:54     ` Yishai Hadas
  0 siblings, 2 replies; 15+ messages in thread
From: Christoph Hellwig @ 2020-09-16 16:45 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Doug Ledford, Jason Gunthorpe, Yishai Hadas, linux-rdma

> In addition, moving to HMM makes it possible to reduce page faults in the
> system, as it exposes a snapshot mode; this will be introduced in the next
> patches of this series.
> 
> As part of this change, some flows were cleaned up to use the data structures
> required by HMM.

Just saying HMM here seems weird.  The function is hmm_range_fault.
And it really needs to grow a better name eventually..

>  		unsigned long start;
>  		unsigned long end;
> -		size_t pages;
> +		size_t ndmas, npfns;
>  
>  		start = ALIGN_DOWN(umem_odp->umem.address, page_size);
>  		if (check_add_overflow(umem_odp->umem.address,
> @@ -71,20 +72,21 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
>  		if (unlikely(end < page_size))
>  			return -EOVERFLOW;
>  
> -		pages = (end - start) >> umem_odp->page_shift;
> -		if (!pages)
> +		ndmas = (end - start) >> umem_odp->page_shift;
> +		if (!ndmas)
>  			return -EINVAL;
>  
> -		umem_odp->page_list = kvcalloc(
> -			pages, sizeof(*umem_odp->page_list), GFP_KERNEL);
> -		if (!umem_odp->page_list)
> +		npfns = (end - start) >> PAGE_SHIFT;
> +		umem_odp->pfn_list = kvcalloc(
> +			npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL);
> +		if (!umem_odp->pfn_list)
>  			return -ENOMEM;
>  
>  		umem_odp->dma_list = kvcalloc(
> -			pages, sizeof(*umem_odp->dma_list), GFP_KERNEL);
> +			ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL);
>  		if (!umem_odp->dma_list) {
>  			ret = -ENOMEM;
> -			goto out_page_list;
> +			goto out_pfn_list;

Why do you rename these variables?  We're still mapping pages.

>  static int ib_umem_odp_map_dma_single_page(
>  		struct ib_umem_odp *umem_odp,
> +		unsigned int dma_index,
>  		struct page *page,
> +		u64 access_mask)
>  {
>  	struct ib_device *dev = umem_odp->umem.ibdev;
>  	dma_addr_t dma_addr;
>  
> +	if (umem_odp->dma_list[dma_index]) {

Note that 0 is a valid DMA address.  I think due to the access bit this
works, but it is a little subtle..

> +		umem_odp->dma_list[dma_index] &= ODP_DMA_ADDR_MASK;
> +		umem_odp->dma_list[dma_index] |= access_mask;

This looks a little weird.  Instead of &= ODP_DMA_ADDR_MASK I'd do a
&= ~ACCESS_MASK as that makes the code a lot more logical.

But more importantly except for (dma_addr_t)-1 (DMA_MAPPING_ERROR)
all dma_addr_t values are valid, so taking more than a single bit
from a dma_addr_t is not strictly speaking correct.

> +		return 0;
>  	}
>  
> +	dma_addr =
> +		ib_dma_map_page(dev, page, 0, BIT(umem_odp->page_shift),
> +				DMA_BIDIRECTIONAL);

The use of the BIT macro which is already obfuscating in its intended
place is really out of place here.

> +	if (ib_dma_mapping_error(dev, dma_addr))
> +		return -EFAULT;
> +
> +	umem_odp->dma_list[dma_index] = dma_addr | access_mask;
> +	umem_odp->npages++;
> +	return 0;

I'd change the calling conventions and write the whole thing as:

static int ib_umem_odp_map_dma_single_page(struct ib_umem_odp *umem_odp
		unsigned int dma_index, struct page *page, u64 access_mask)
{
 	struct ib_device *dev = umem_odp->umem.ibdev;
	dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index];

	if (!dma_addr) {
		*dma_addr = ib_dma_map_page(dev, page, 0,
				1 << umem_odp->page_shift,
				DMA_BIDIRECTIONAL);
		if (ib_dma_mapping_error(dev, *dma_addr))
			return -EFAULT;
		umem_odp->npages++;
	}
	*dma_addr &= ~ODP_ACCESS_MASK;
	*dma_addr |= access_mask;
	return 0;
}

> +	/*
> +	 * No need to check whether the MTTs really belong to
> +	 * this MR, since ib_umem_odp_map_dma_and_lock already
> +	 * checks this.
> +	 */

You could easily squeeze the three lines into two while still staying
under 80 characters for each line.


* Re: [PATCH rdma-next 2/4] IB/core: Enable ODP sync without faulting
  2020-09-14 11:39 ` [PATCH rdma-next 2/4] IB/core: Enable ODP sync without faulting Leon Romanovsky
@ 2020-09-16 16:47   ` Christoph Hellwig
  2020-09-16 18:19     ` Leon Romanovsky
  2020-09-17 10:05     ` Yishai Hadas
  0 siblings, 2 replies; 15+ messages in thread
From: Christoph Hellwig @ 2020-09-16 16:47 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Doug Ledford, Jason Gunthorpe, Yishai Hadas, linux-rdma

> +		if (fault) {
> +			/*
> +			 * Since we asked for hmm_range_fault() to populate pages,

Totally pointless line over 80 characters.

> +			access_mask = (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE) ?
> +				(ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT) :
> +				 ODP_READ_ALLOWED_BIT;
> +		}

Another weird overly long line, caused by rather obfuscated code.  This
really should be something like:

			access_mask = ODP_READ_ALLOWED_BIT;
			if (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE)
				access_mask |= ODP_WRITE_ALLOWED_BIT;



* Re: [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault()
  2020-09-16 16:45   ` Christoph Hellwig
@ 2020-09-16 17:21     ` Jason Gunthorpe
  2020-09-16 17:22       ` Christoph Hellwig
  2020-09-17 10:01       ` Yishai Hadas
  2020-09-17  9:54     ` Yishai Hadas
  1 sibling, 2 replies; 15+ messages in thread
From: Jason Gunthorpe @ 2020-09-16 17:21 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Leon Romanovsky, Doug Ledford, Yishai Hadas, linux-rdma

On Wed, Sep 16, 2020 at 05:45:16PM +0100, Christoph Hellwig wrote:
 
> Note that 0 is a valid DMA address.  I think due to the access bit this
> works, but it is a little subtle..

It should be checked again carefully.. This looks a bit questionable
if the flags are what makes it work:

+               dma_addr = dma & ODP_DMA_ADDR_MASK;
+               if (dma_addr) {

> But more importantly except for (dma_addr_t)-1 (DMA_MAPPING_ERROR)
> all dma_addr_t values are valid, so taking more than a single bit
> from a dma_addr_t is not strictly speaking correct.

This is the result of dma_map_page(). The HW requires that
dma_map_page(page_size) returns a DMA address that is page_size
aligned, even for huge pages.

So at least the last 12 bits of the DMA address are always 0, they can
be used for flags. This is also the HW representation in the page
table.
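
For illustration, the packing this relies on looks roughly like this (the
authoritative ODP_* definitions are in include/rdma/ib_umem_odp.h):

	/* What ib_umem_odp_map_dma_single_page() stores per entry: the
	 * page_size aligned DMA address has zero low bits, so the two access
	 * bits never clash with the address itself.
	 */
	umem_odp->dma_list[dma_index] = dma_addr | access_mask;

	/* And how the unmap path recovers the two halves: */
	dma_addr_t addr = umem_odp->dma_list[dma_index] & ODP_DMA_ADDR_MASK;
	u64 access      = umem_odp->dma_list[dma_index] & ~ODP_DMA_ADDR_MASK;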

Last time you pointed at this code you agreed this alignment was met,
is it still OK?

Jason


* Re: [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault()
  2020-09-16 17:21     ` Jason Gunthorpe
@ 2020-09-16 17:22       ` Christoph Hellwig
  2020-09-17 10:01       ` Yishai Hadas
  1 sibling, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2020-09-16 17:22 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, Leon Romanovsky, Doug Ledford, Yishai Hadas,
	linux-rdma

On Wed, Sep 16, 2020 at 02:21:00PM -0300, Jason Gunthorpe wrote:
> +               dma_addr = dma & ODP_DMA_ADDR_MASK;
> +               if (dma_addr) {
> 
> > But more importantly except for (dma_addr_t)-1 (DMA_MAPPING_ERROR)
> > all dma_addr_t values are valid, so taking more than a single bit
> > from a dma_addr_t is not strictly speaking correct.
> 
> This is the result of dma_map_page(). The HW requires that
> dma_map_page(page_size) returns a DMA address that is page_size
> aligned, even for huge pages.
> 
> So at least the last 12 bits of the DMA address are always 0, they can
> be used for flags. This is also the HW representation in the page
> table.
> 
> Last time you pointed at this code you agreed this alignment was met,
> is it still OK?

True, the alignment actually makes this work.  I'd still suggest to
throw in a comment about the actual usage.


* Re: [PATCH rdma-next 2/4] IB/core: Enable ODP sync without faulting
  2020-09-16 16:47   ` Christoph Hellwig
@ 2020-09-16 18:19     ` Leon Romanovsky
  2020-09-16 18:25       ` Christoph Hellwig
  2020-09-17 10:05     ` Yishai Hadas
  1 sibling, 1 reply; 15+ messages in thread
From: Leon Romanovsky @ 2020-09-16 18:19 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Doug Ledford, Jason Gunthorpe, Yishai Hadas, linux-rdma

On Wed, Sep 16, 2020 at 05:47:06PM +0100, Christoph Hellwig wrote:
> > +		if (fault) {
> > +			/*
> > +			 * Since we asked for hmm_range_fault() to populate pages,
>
> Totally pointless line over 80 characters.

checkpatch.pl was updated to allow 100 symbols.

Regarding "pointless", at least for me it wasn't clear why we don't
have HMM_PFN_ERROR check in non-fault path so I asked form Yishai to
add this comment.

>
> > +			access_mask = (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE) ?
> > +				(ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT) :
> > +				 ODP_READ_ALLOWED_BIT;
> > +		}
>
> Another weird overly long line, caused by rather obfuscated code.  This
> really should be something like:
>
> 			access_mask = ODP_READ_ALLOWED_BIT;
> 			if (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE)
> 				access_mask |= ODP_WRITE_ALLOWED_BIT;

Sure

>


* Re: [PATCH rdma-next 2/4] IB/core: Enable ODP sync without faulting
  2020-09-16 18:19     ` Leon Romanovsky
@ 2020-09-16 18:25       ` Christoph Hellwig
  2020-09-16 18:30         ` Leon Romanovsky
  0 siblings, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2020-09-16 18:25 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Christoph Hellwig, Doug Ledford, Jason Gunthorpe, Yishai Hadas,
	linux-rdma

On Wed, Sep 16, 2020 at 09:19:11PM +0300, Leon Romanovsky wrote:
> On Wed, Sep 16, 2020 at 05:47:06PM +0100, Christoph Hellwig wrote:
> > > +		if (fault) {
> > > +			/*
> > > +			 * Since we asked for hmm_range_fault() to populate pages,
> >
> > Totally pointless line over 80 characters.
> 
> checkpatch.pl was updated to allow 100 symbols.

checkpatch.pl doesn't matter, it is in fact pretty silly.  The coding
style document says:

"The preferred limit on the length of a single line is 80 columns.

Statements longer than 80 columns should be broken into sensible chunks,
unless exceeding 80 columns significantly increases readability and does
not hide information."

and that absolutely is not the case here, you could trivially move the
last word into the next line and would not even increase the line count.


* Re: [PATCH rdma-next 2/4] IB/core: Enable ODP sync without faulting
  2020-09-16 18:25       ` Christoph Hellwig
@ 2020-09-16 18:30         ` Leon Romanovsky
  0 siblings, 0 replies; 15+ messages in thread
From: Leon Romanovsky @ 2020-09-16 18:30 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Doug Ledford, Jason Gunthorpe, Yishai Hadas, linux-rdma

On Wed, Sep 16, 2020 at 07:25:23PM +0100, Christoph Hellwig wrote:
> On Wed, Sep 16, 2020 at 09:19:11PM +0300, Leon Romanovsky wrote:
> > On Wed, Sep 16, 2020 at 05:47:06PM +0100, Christoph Hellwig wrote:
> > > > +		if (fault) {
> > > > +			/*
> > > > +			 * Since we asked for hmm_range_fault() to populate pages,
> > >
> > > Totally pointless line over 80 characters.
> >
> > checkpatch.pl was updated to allow 100 symbols.
>
> checkpatch.pl doesn't matter, it is in fact pretty silly.  The coding
> style document says:
>
> "The preferred limit on the length of a single line is 80 columns.
>
> Statements longer than 80 columns should be broken into sensible chunks,
> unless exceeding 80 columns significantly increases readability and does
> not hide information."
>
> and that absolutely is not the case here, you could trivially move the
> last word into the next line and would not even increase the line count.

I'm not arguing, just explained why it was missed.

Thanks


* Re: [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault()
  2020-09-16 16:45   ` Christoph Hellwig
  2020-09-16 17:21     ` Jason Gunthorpe
@ 2020-09-17  9:54     ` Yishai Hadas
  1 sibling, 0 replies; 15+ messages in thread
From: Yishai Hadas @ 2020-09-17  9:54 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Leon Romanovsky, Doug Ledford, Jason Gunthorpe, Yishai Hadas, linux-rdma

On 9/16/2020 7:45 PM, Christoph Hellwig wrote:
>> In addition, moving to HMM makes it possible to reduce page faults in the
>> system, as it exposes a snapshot mode; this will be introduced in the next
>> patches of this series.
>>
>> As part of this change, some flows were cleaned up to use the data structures
>> required by HMM.
> Just saying HMM here seems weird.  The function is hmm_range_fault.
> And it really needs to grow a better name eventually..

Sure, will rephrase.

>>   		unsigned long start;
>>   		unsigned long end;
>> -		size_t pages;
>> +		size_t ndmas, npfns;
>>   
>>   		start = ALIGN_DOWN(umem_odp->umem.address, page_size);
>>   		if (check_add_overflow(umem_odp->umem.address,
>> @@ -71,20 +72,21 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
>>   		if (unlikely(end < page_size))
>>   			return -EOVERFLOW;
>>   
>> -		pages = (end - start) >> umem_odp->page_shift;
>> -		if (!pages)
>> +		ndmas = (end - start) >> umem_odp->page_shift;
>> +		if (!ndmas)
>>   			return -EINVAL;
>>   
>> -		umem_odp->page_list = kvcalloc(
>> -			pages, sizeof(*umem_odp->page_list), GFP_KERNEL);
>> -		if (!umem_odp->page_list)
>> +		npfns = (end - start) >> PAGE_SHIFT;
>> +		umem_odp->pfn_list = kvcalloc(
>> +			npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL);
>> +		if (!umem_odp->pfn_list)
>>   			return -ENOMEM;
>>   
>>   		umem_odp->dma_list = kvcalloc(
>> -			pages, sizeof(*umem_odp->dma_list), GFP_KERNEL);
>> +			ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL);
>>   		if (!umem_odp->dma_list) {
>>   			ret = -ENOMEM;
>> -			goto out_page_list;
>> +			goto out_pfn_list;
> Why do you rename these variables?  We're still mapping pages.

The pfn_list is allocated with a size matching the system page size, as used
by hmm_range_fault(), in contrast to the dma_list, whose size depends on the
umem page size.
Because of that, I separated them into two variables and renamed them to
clarify their usage (i.e. no struct page ** is allocated, but an unsigned
long *).
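
To illustrate the relation between the two arrays (just a sketch, using the
names from the patch):

	/* One dma_list entry covers a umem page, one pfn_list entry covers a
	 * system page, so with page_shift >= PAGE_SHIFT:
	 */
	npfns = ndmas << (page_shift - PAGE_SHIFT);

	/* and in the mapping loop the pfn index advances accordingly: */
	pfn_index += 1 << (page_shift - PAGE_SHIFT);	/* per dma_index++ */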

>>   static int ib_umem_odp_map_dma_single_page(
>>   		struct ib_umem_odp *umem_odp,
>> +		unsigned int dma_index,
>>   		struct page *page,
>> +		u64 access_mask)
>>   {
>>   	struct ib_device *dev = umem_odp->umem.ibdev;
>>   	dma_addr_t dma_addr;
>>   
>> +	if (umem_odp->dma_list[dma_index]) {
> Note that 0 is a valid DMA address.  I think due to the access bit this
> works, but it is a little subtle..

I have added a note to clarify the usage, as you suggested in the next email;
it will be part of V1. The flags are what make this work.

By the way, see below [1]: the existing code assumes that NULL is not valid,
but the new code won't rely on that anymore.

[1] 
https://elixir.bootlin.com/linux/v5.3-rc7/source/drivers/infiniband/core/umem_odp.c#L727

>> +		umem_odp->dma_list[dma_index] &= ODP_DMA_ADDR_MASK;
>> +		umem_odp->dma_list[dma_index] |= access_mask;
> This looks a little weird.  Instead of &= ODP_DMA_ADDR_MASK I'd do a
> &= ~ACCESS_MASK as that makes the code a lot more logical.
>
> But more importantly except for (dma_addr_t)-1 (DMA_MAPPING_ERROR)
> all dma_addr_t values are valid, so taking more than a single bit
> from a dma_addr_t is not strictly speaking correct.
>
>> +		return 0;
>>   	}
>>   
>> +	dma_addr =
>> +		ib_dma_map_page(dev, page, 0, BIT(umem_odp->page_shift),
>> +				DMA_BIDIRECTIONAL);
> The use of the BIT macro which is already obfuscating in its intended
> place is really out of place here.
>
>> +	if (ib_dma_mapping_error(dev, dma_addr))
>> +		return -EFAULT;
>> +
>> +	umem_odp->dma_list[dma_index] = dma_addr | access_mask;
>> +	umem_odp->npages++;
>> +	return 0;
> I'd change the calling conventions and write the whole thing as:
>
> static int ib_umem_odp_map_dma_single_page(struct ib_umem_odp *umem_odp
> 		unsigned int dma_index, struct page *page, u64 access_mask)
> {
>   	struct ib_device *dev = umem_odp->umem.ibdev;
> 	dma_addr_t *dma_addr = &umem_odp->dma_list[dma_index];
>
> 	if (!dma_addr) {

You referred to if (!*dma_addr), right? No problem, I will use your
suggestion below.

> 		*dma_addr = ib_dma_map_page(dev, page, 0,
> 				1 << umem_odp->page_shift,
> 				DMA_BIDIRECTIONAL);
> 		if (ib_dma_mapping_error(dev, *dma_addr))
> 			return -EFAULT;
> 		umem_odp->npages++;
> 	}
> 	*dma_addr &= ~ODP_ACCESS_MASK;
There isn't such a mask; I found it clearer to use ODP_DMA_ADDR_MASK.
> 	*dma_addr |= access_mask;
> 	return 0;
> }
>
>> +	/*
>> +	 * No need to check whether the MTTs really belong to
>> +	 * this MR, since ib_umem_odp_map_dma_and_lock already
>> +	 * checks this.
>> +	 */
> You could easily squeeze the three lines into two while still staying
> under 80 characters for each line.

No problem, will change.

Yishai



* Re: [PATCH rdma-next 1/4] IB/core: Improve ODP to use hmm_range_fault()
  2020-09-16 17:21     ` Jason Gunthorpe
  2020-09-16 17:22       ` Christoph Hellwig
@ 2020-09-17 10:01       ` Yishai Hadas
  1 sibling, 0 replies; 15+ messages in thread
From: Yishai Hadas @ 2020-09-17 10:01 UTC (permalink / raw)
  To: Jason Gunthorpe, Christoph Hellwig
  Cc: Leon Romanovsky, Doug Ledford, Yishai Hadas, linux-rdma

On 9/16/2020 8:21 PM, Jason Gunthorpe wrote:
> On Wed, Sep 16, 2020 at 05:45:16PM +0100, Christoph Hellwig wrote:
>   
>> Note that 0 is a valid DMA address.  I think due to the access bit this
>> works, but it is a little subtle..
> It should be checked again carefully.. This looks a bit questionable
> if the flags are what makes it work:
>
> +               dma_addr = dma & ODP_DMA_ADDR_MASK;
> +               if (dma_addr) {

I have changed the check to be on dma; the masking to get the dma_address
happens only later on, so this should be fine even for a potentially valid
NULL dma address.
This will be part of V1.
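
i.e. the unmap path will look roughly like this (the exact code will be in
V1):

	dma = umem_odp->dma_list[idx];
	/* The access bits keep a mapped entry non-zero even if the DMA
	 * address itself happens to be 0, so testing dma directly is safe.
	 */
	if (dma) {
		dma_addr = dma & ODP_DMA_ADDR_MASK;
		ib_dma_unmap_page(dev, dma_addr, BIT(umem_odp->page_shift),
				  DMA_BIDIRECTIONAL);
		/* ... dirty the head page and clear the entry ... */
	}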

>> But more importantly except for (dma_addr_t)-1 (DMA_MAPPING_ERROR)
>> all dma_addr_t values are valid, so taking more than a single bit
>> from a dma_addr_t is not strictly speaking correct.
> This is the result of dma_map_page(). The HW requires that
> dma_map_page(page_size) returns a DMA address that is page_size
> aligned, even for huge pages.
>
> So at least the last 12 bits of the DMA address are always 0, they can
> be used for flags. This is also the HW representation in the page
> table.
>
> Last time you pointed at this code you agreed this alignment was met,
> is it still OK?
>
> Jason

Yes, this was confirmed by Christoph in the next mail.

Thanks,
Yishai



* Re: [PATCH rdma-next 2/4] IB/core: Enable ODP sync without faulting
  2020-09-16 16:47   ` Christoph Hellwig
  2020-09-16 18:19     ` Leon Romanovsky
@ 2020-09-17 10:05     ` Yishai Hadas
  1 sibling, 0 replies; 15+ messages in thread
From: Yishai Hadas @ 2020-09-17 10:05 UTC (permalink / raw)
  To: Christoph Hellwig, Leon Romanovsky
  Cc: Doug Ledford, Jason Gunthorpe, Yishai Hadas, linux-rdma

On 9/16/2020 7:47 PM, Christoph Hellwig wrote:
>> +		if (fault) {
>> +			/*
>> +			 * Since we asked for hmm_range_fault() to populate pages,
> Totally pointless line over 80 characters.
I'll leave the comment, as was asked by Leon, and will fix it to fit within
80 characters.
>> +			access_mask = (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE) ?
>> +				(ODP_READ_ALLOWED_BIT | ODP_WRITE_ALLOWED_BIT) :
>> +				 ODP_READ_ALLOWED_BIT;
>> +		}
> Another weird overly long line, caused by rather obfuscated code.  This
> really should be something like:
>
> 			access_mask = ODP_READ_ALLOWED_BIT;
> 			if (range.hmm_pfns[pfn_index] & HMM_PFN_WRITE)
> 				access_mask |= ODP_WRITE_ALLOWED_BIT;
>
No problem, will be part of V1.


