linux-kernel.vger.kernel.org archive mirror
* [PATCH rdma-next v4 0/4] Dynamically allocate SG table from the pages
@ 2020-09-27  6:46 Leon Romanovsky
  2020-09-27  6:46 ` [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages Leon Romanovsky
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Leon Romanovsky @ 2020-09-27  6:46 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, Christoph Hellwig, Daniel Vetter, David Airlie,
	dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, linux-kernel,
	linux-rdma, Maor Gottlieb, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics

From: Leon Romanovsky <leonro@nvidia.com>


Changelog:
v4:
 * Fixed formatting in first patch.
 * Added a fix (clear tmp_nents) in the first patch to fix an i915 failure.
 * Added test patches
v3: https://lore.kernel.org/linux-rdma/20200922083958.2150803-1-leon@kernel.org/
 * Squashed in Christoph's suggestion: avoid introducing a new API and extend the existing one instead.
v2: https://lore.kernel.org/linux-rdma/20200916140726.839377-1-leon@kernel.org
 * Fixed indentations and comments
 * Deleted sg_alloc_next()
 * Squashed lib/scatterlist patches into one
v1: https://lore.kernel.org/lkml/20200910134259.1304543-1-leon@kernel.org
 * Changed _sg_chain to be __sg_chain
 * Added dependency on ARCH_NO_SG_CHAIN
 * Removed struct sg_append
v0:
 * https://lore.kernel.org/lkml/20200903121853.1145976-1-leon@kernel.org

--------------------------------------------------------------------------
From Maor:

This series extends __sg_alloc_table_from_pages to allow chaining of
new pages to already initialized SG table.

This allows drivers to benefit from the optimization of merging contiguous
pages without having to pre-allocate all the pages and hold them in a very
large temporary buffer before the call to SG table initialization.

The last patch changes the InfiniBand driver to use the new API. It
removes duplicated functionality from the code and benefits from the
optimization of allocating the SG table dynamically from pages.

On a system using 2MB huge pages, without this change the SG table
would contain 512x as many SG entries.
E.g. for a 100GB memory registration:

             Number of entries      Size
    Before        26214400          600.0MB
    After            51200            1.2MB
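
For reference, these numbers follow from the entry count multiplied by the
per-entry size; a back-of-the-envelope check, assuming roughly 24 bytes per
struct scatterlist on a 64-bit build without CONFIG_NEED_SG_DMA_LENGTH:

    Before:  100GB / 4KB pages  = 26,214,400 entries * ~24B ~= 600.0MB
    After:   100GB / 2MB chunks =     51,200 entries * ~24B ~=   1.2MB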

Thanks

Maor Gottlieb (2):
  lib/scatterlist: Add support in dynamic allocation of SG table from
    pages
  RDMA/umem: Move to allocate SG table from pages

Tvrtko Ursulin (2):
  tools/testing/scatterlist: Rejuvenate bit-rotten test
  tools/testing/scatterlist: Show errors in human readable form

 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  12 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c  |  15 +-
 drivers/infiniband/core/umem.c              |  92 ++---------
 include/linux/scatterlist.h                 |  43 +++---
 lib/scatterlist.c                           | 160 +++++++++++++++-----
 lib/sg_pool.c                               |   3 +-
 tools/testing/scatterlist/Makefile          |   3 +-
 tools/testing/scatterlist/linux/mm.h        |  35 +++++
 tools/testing/scatterlist/main.c            |  53 +++++--
 9 files changed, 248 insertions(+), 168 deletions(-)

--
2.26.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-27  6:46 [PATCH rdma-next v4 0/4] Dynamically allocate SG table from the pages Leon Romanovsky
@ 2020-09-27  6:46 ` Leon Romanovsky
  2020-10-02 15:02   ` Jason Gunthorpe
  2020-09-27  6:46 ` [PATCH rdma-next v4 2/4] tools/testing/scatterlist: Rejuvenate bit-rotten test Leon Romanovsky
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2020-09-27  6:46 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Maor Gottlieb, Christoph Hellwig, Daniel Vetter, David Airlie,
	dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, linux-kernel,
	linux-rdma, Maor Gottlieb, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics

From: Maor Gottlieb <maorg@mellanox.com>

Extend __sg_alloc_table_from_pages() to support dynamic allocation of the
SG table from pages. It should be used by drivers that can't supply all
the pages at once.

The function returns the last populated SGE in the table. Callers should
pass it back in on the second and subsequent calls. As before, nents will
be equal to the number of populated SGEs (chunks).

With this extension, drivers can benefit from the optimization of merging
contiguous pages without needing to allocate all the pages in advance and
hold them in a large buffer.

E.g. the InfiniBand driver now allocates only a single page to hold the
page pointers for each batch. For a 1TB memory registration, the temporary
buffer consumes only 4KB instead of 2GB.
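
A rough sketch of the intended calling pattern (illustrative only;
get_next_batch(), total_pages, page_list, max_segment and the error label
are hypothetical placeholders, see the i915/vmwgfx/umem call sites in the
diffs for the real users):

	struct sg_table sgt;
	struct scatterlist *sg = NULL;	/* NULL for the first call */
	unsigned long left = total_pages;

	while (left) {
		/* obtain the next batch of pages, e.g. via pin_user_pages_fast() */
		unsigned long n = get_next_batch(page_list, left);

		left -= n;
		sg = __sg_alloc_table_from_pages(&sgt, page_list, n, 0,
						 n << PAGE_SHIFT, max_segment,
						 sg, left, GFP_KERNEL);
		if (IS_ERR(sg)) {
			/* sgt must still be released with sg_free_table() */
			goto err_free_table;
		}
	}
	sg_mark_end(sg);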

Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  12 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c  |  15 +-
 include/linux/scatterlist.h                 |  43 +++---
 lib/scatterlist.c                           | 160 +++++++++++++++-----
 lib/sg_pool.c                               |   3 +-
 tools/testing/scatterlist/main.c            |   9 +-
 6 files changed, 165 insertions(+), 77 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 12b30075134a..f2eaed6aca3d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -403,6 +403,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 	unsigned int max_segment = i915_sg_segment_size();
 	struct sg_table *st;
 	unsigned int sg_page_sizes;
+	struct scatterlist *sg;
 	int ret;

 	st = kmalloc(sizeof(*st), GFP_KERNEL);
@@ -410,13 +411,12 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 		return ERR_PTR(-ENOMEM);

 alloc_table:
-	ret = __sg_alloc_table_from_pages(st, pvec, num_pages,
-					  0, num_pages << PAGE_SHIFT,
-					  max_segment,
-					  GFP_KERNEL);
-	if (ret) {
+	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
+					 num_pages << PAGE_SHIFT, max_segment,
+					 NULL, 0, GFP_KERNEL);
+	if (IS_ERR(sg)) {
 		kfree(st);
-		return ERR_PTR(ret);
+		return ERR_CAST(sg);
 	}

 	ret = i915_gem_gtt_prepare_pages(obj, st);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
index ab524ab3b0b4..f22acd398b1f 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
@@ -419,6 +419,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
 	int ret = 0;
 	static size_t sgl_size;
 	static size_t sgt_size;
+	struct scatterlist *sg;

 	if (vmw_tt->mapped)
 		return 0;
@@ -441,13 +442,15 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
 		if (unlikely(ret != 0))
 			return ret;

-		ret = __sg_alloc_table_from_pages
-			(&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0,
-			 (unsigned long) vsgt->num_pages << PAGE_SHIFT,
-			 dma_get_max_seg_size(dev_priv->dev->dev),
-			 GFP_KERNEL);
-		if (unlikely(ret != 0))
+		sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages,
+				vsgt->num_pages, 0,
+				(unsigned long) vsgt->num_pages << PAGE_SHIFT,
+				dma_get_max_seg_size(dev_priv->dev->dev),
+				NULL, 0, GFP_KERNEL);
+		if (IS_ERR(sg)) {
+			ret = PTR_ERR(sg);
 			goto out_sg_alloc_fail;
+		}

 		if (vsgt->num_pages > vmw_tt->sgt.nents) {
 			uint64_t over_alloc =
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 45cf7b69d852..c24cc667b56b 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -165,6 +165,22 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 #define for_each_sgtable_dma_sg(sgt, sg, i)	\
 	for_each_sg((sgt)->sgl, sg, (sgt)->nents, i)

+static inline void __sg_chain(struct scatterlist *chain_sg,
+			      struct scatterlist *sgl)
+{
+	/*
+	 * offset and length are unused for chain entry. Clear them.
+	 */
+	chain_sg->offset = 0;
+	chain_sg->length = 0;
+
+	/*
+	 * Set lowest bit to indicate a link pointer, and make sure to clear
+	 * the termination bit if it happens to be set.
+	 */
+	chain_sg->page_link = ((unsigned long) sgl | SG_CHAIN) & ~SG_END;
+}
+
 /**
  * sg_chain - Chain two sglists together
  * @prv:	First scatterlist
@@ -178,18 +194,7 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
 			    struct scatterlist *sgl)
 {
-	/*
-	 * offset and length are unused for chain entry.  Clear them.
-	 */
-	prv[prv_nents - 1].offset = 0;
-	prv[prv_nents - 1].length = 0;
-
-	/*
-	 * Set lowest bit to indicate a link pointer, and make sure to clear
-	 * the termination bit if it happens to be set.
-	 */
-	prv[prv_nents - 1].page_link = ((unsigned long) sgl | SG_CHAIN)
-					& ~SG_END;
+	__sg_chain(&prv[prv_nents - 1], sgl);
 }

 /**
@@ -283,13 +288,15 @@ typedef void (sg_free_fn)(struct scatterlist *, unsigned int);
 void __sg_free_table(struct sg_table *, unsigned int, unsigned int,
 		     sg_free_fn *);
 void sg_free_table(struct sg_table *);
-int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int,
-		     struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *);
+int __sg_alloc_table(struct sg_table *, struct scatterlist *, unsigned int,
+		unsigned int, struct scatterlist *, unsigned int,
+		gfp_t, sg_alloc_fn *);
 int sg_alloc_table(struct sg_table *, unsigned int, gfp_t);
-int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
-				unsigned int n_pages, unsigned int offset,
-				unsigned long size, unsigned int max_segment,
-				gfp_t gfp_mask);
+struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
+		struct page **pages, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment,
+		struct scatterlist *prv, unsigned int left_pages,
+		gfp_t gfp_mask);
 int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			      unsigned int n_pages, unsigned int offset,
 			      unsigned long size, gfp_t gfp_mask);
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 5d63a8857f36..c200d3442f47 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -245,6 +245,7 @@ EXPORT_SYMBOL(sg_free_table);
 /**
  * __sg_alloc_table - Allocate and initialize an sg table with given allocator
  * @table:	The sg table header to use
+ * @prv:	Last populated sge in sgt
  * @nents:	Number of entries in sg list
  * @max_ents:	The maximum number of entries the allocator returns per call
  * @nents_first_chunk: Number of entries int the (preallocated) first
@@ -263,17 +264,15 @@ EXPORT_SYMBOL(sg_free_table);
  *   __sg_free_table() to cleanup any leftover allocations.
  *
  **/
-int __sg_alloc_table(struct sg_table *table, unsigned int nents,
-		     unsigned int max_ents, struct scatterlist *first_chunk,
-		     unsigned int nents_first_chunk, gfp_t gfp_mask,
-		     sg_alloc_fn *alloc_fn)
+int __sg_alloc_table(struct sg_table *table, struct scatterlist *prv,
+		unsigned int nents, unsigned int max_ents,
+		struct scatterlist *first_chunk,
+		unsigned int nents_first_chunk, gfp_t gfp_mask,
+		sg_alloc_fn *alloc_fn)
 {
-	struct scatterlist *sg, *prv;
-	unsigned int left;
-	unsigned curr_max_ents = nents_first_chunk ?: max_ents;
-	unsigned prv_max_ents;
-
-	memset(table, 0, sizeof(*table));
+	unsigned int curr_max_ents = nents_first_chunk ?: max_ents;
+	unsigned int left, prv_max_ents = 0;
+	struct scatterlist *sg;

 	if (nents == 0)
 		return -EINVAL;
@@ -283,7 +282,6 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 #endif

 	left = nents;
-	prv = NULL;
 	do {
 		unsigned int sg_size, alloc_size = left;

@@ -308,7 +306,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 			 * linkage.  Without this, sg_kfree() may get
 			 * confused.
 			 */
-			if (prv)
+			if (prv_max_ents)
 				table->nents = ++table->orig_nents;

 			return -ENOMEM;
@@ -321,10 +319,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		 * If this is the first mapping, assign the sg table header.
 		 * If this is not the first mapping, chain previous part.
 		 */
-		if (prv)
-			sg_chain(prv, prv_max_ents, sg);
-		else
+		if (!prv)
 			table->sgl = sg;
+		else if (prv_max_ents)
+			sg_chain(prv, prv_max_ents, sg);
+		else {
+			__sg_chain(prv, sg);
+			/*
+			 * We decrease by one since the previous last SGE is
+			 * used to chain the chunks together.
+			 */
+			table->nents = table->orig_nents -= 1;
+		}

 		/*
 		 * If no more entries after this one, mark the end
@@ -356,7 +362,8 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 {
 	int ret;

-	ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC,
+	memset(table, 0, sizeof(*table));
+	ret = __sg_alloc_table(table, NULL, nents, SG_MAX_SINGLE_ALLOC,
 			       NULL, 0, gfp_mask, sg_kmalloc);
 	if (unlikely(ret))
 		__sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree);
@@ -365,6 +372,30 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(sg_alloc_table);

+static struct scatterlist *get_next_sg(struct sg_table *table,
+		struct scatterlist *prv, unsigned long left_npages,
+		gfp_t gfp_mask)
+{
+	struct scatterlist *next_sg;
+	int ret;
+
+	/* If table was just allocated */
+	if (!prv)
+		return table->sgl;
+
+	/* Check if the last entry should be kept for chaining */
+	next_sg = sg_next(prv);
+	if (!sg_is_last(next_sg) || left_npages == 1)
+		return next_sg;
+
+	ret = __sg_alloc_table(table, next_sg,
+			min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC),
+			SG_MAX_SINGLE_ALLOC, NULL, 0, gfp_mask, sg_kmalloc);
+	if (ret)
+		return ERR_PTR(ret);
+	return sg_next(prv);
+}
+
 /**
  * __sg_alloc_table_from_pages - Allocate and initialize an sg table from
  *			         an array of pages
@@ -374,29 +405,49 @@ EXPORT_SYMBOL(sg_alloc_table);
  * @offset:      Offset from start of the first page to the start of a buffer
  * @size:        Number of valid bytes in the buffer (after offset)
  * @max_segment: Maximum size of a scatterlist node in bytes (page aligned)
+ * @prv:	 Last populated sge in sgt
+ * @left_pages:  Number of pages the caller will add after this call
  * @gfp_mask:	 GFP allocation mask
  *
- *  Description:
- *    Allocate and initialize an sg table from a list of pages. Contiguous
- *    ranges of the pages are squashed into a single scatterlist node up to the
- *    maximum size specified in @max_segment. An user may provide an offset at a
- *    start and a size of valid data in a buffer specified by the page array.
- *    The returned sg table is released by sg_free_table.
+ * Description:
+ *    If @prv is NULL, allocate and initialize an sg table from a list of pages,
+ *    else reuse the scatterlist passed in at @prv.
+ *    Contiguous ranges of the pages are squashed into a single scatterlist
+ *    entry up to the maximum size specified in @max_segment.  A user may
+ *    provide an offset at a start and a size of valid data in a buffer
+ *    specified by the page array.
  *
  * Returns:
- *   0 on success, negative error on failure
+ *   Last SGE in sgt on success, PTR_ERR() otherwise.
+ *   The allocation in @sgt must be released by sg_free_table.
+ *
+ * Notes:
+ *   If this function returns an error, the caller must call
+ *   sg_free_table() to clean up any leftover allocations.
  */
-int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
-				unsigned int n_pages, unsigned int offset,
-				unsigned long size, unsigned int max_segment,
-				gfp_t gfp_mask)
+struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
+		struct page **pages, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment,
+		struct scatterlist *prv, unsigned int left_pages,
+		gfp_t gfp_mask)
 {
-	unsigned int chunks, cur_page, seg_len, i;
+	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
+	struct scatterlist *s = prv;
+	unsigned int table_size;
+	unsigned int tmp_nents;
 	int ret;
-	struct scatterlist *s;

 	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
-		return -EINVAL;
+		return ERR_PTR(-EINVAL);
+	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	tmp_nents = prv ? sgt->nents : 0;
+
+	if (prv &&
+	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==
+	    page_to_pfn(pages[0]))
+		prv_len = prv->length;

 	/* compute number of contiguous chunks */
 	chunks = 1;
@@ -410,13 +461,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 		}
 	}

-	ret = sg_alloc_table(sgt, chunks, gfp_mask);
-	if (unlikely(ret))
-		return ret;
+	if (!prv) {
+		/* Only the last allocation could be less than the maximum */
+		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
+		ret = sg_alloc_table(sgt, table_size, gfp_mask);
+		if (unlikely(ret))
+			return ERR_PTR(ret);
+	}

 	/* merging chunks and putting them into the scatterlist */
 	cur_page = 0;
-	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
+	for (i = 0; i < chunks; i++) {
 		unsigned int j, chunk_size;

 		/* look for the end of the current chunk */
@@ -425,19 +480,41 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			seg_len += PAGE_SIZE;
 			if (seg_len >= max_segment ||
 			    page_to_pfn(pages[j]) !=
-			    page_to_pfn(pages[j - 1]) + 1)
+				    page_to_pfn(pages[j - 1]) + 1)
 				break;
 		}

 		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
-		sg_set_page(s, pages[cur_page],
-			    min_t(unsigned long, size, chunk_size), offset);
+		chunk_size = min_t(unsigned long, size, chunk_size);
+		if (!i && prv_len) {
+			if (max_segment - prv->length >= chunk_size) {
+				sg_set_page(s, sg_page(s),
+					    s->length + chunk_size, s->offset);
+				goto next;
+			}
+		}
+
+		/* Pass how many chunks might be left */
+		s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask);
+		if (IS_ERR(s)) {
+			/*
+			 * Adjust the entry length to be as it was before this
+			 * function was called.
+			 */
+			if (prv_len)
+				prv->length = prv_len;
+			goto out;
+		}
+		sg_set_page(s, pages[cur_page], chunk_size, offset);
+		tmp_nents++;
+next:
 		size -= chunk_size;
 		offset = 0;
 		cur_page = j;
 	}
-
-	return 0;
+	sgt->nents = tmp_nents;
+out:
+	return s;
 }
 EXPORT_SYMBOL(__sg_alloc_table_from_pages);

@@ -465,8 +542,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			      unsigned int n_pages, unsigned int offset,
 			      unsigned long size, gfp_t gfp_mask)
 {
-	return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size,
-					   SCATTERLIST_MAX_SEGMENT, gfp_mask);
+	return PTR_ERR_OR_ZERO(__sg_alloc_table_from_pages(sgt, pages, n_pages,
+			offset, size, SCATTERLIST_MAX_SEGMENT, NULL, 0,
+			gfp_mask));
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages);

diff --git a/lib/sg_pool.c b/lib/sg_pool.c
index db29e5c1f790..c449248bf5d5 100644
--- a/lib/sg_pool.c
+++ b/lib/sg_pool.c
@@ -129,7 +129,8 @@ int sg_alloc_table_chained(struct sg_table *table, int nents,
 		nents_first_chunk = 0;
 	}

-	ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE,
+	memset(table, 0, sizeof(*table));
+	ret = __sg_alloc_table(table, NULL, nents, SG_CHUNK_SIZE,
 			       first_chunk, nents_first_chunk,
 			       GFP_ATOMIC, sg_pool_alloc);
 	if (unlikely(ret))
diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
index 0a1464181226..4899359a31ac 100644
--- a/tools/testing/scatterlist/main.c
+++ b/tools/testing/scatterlist/main.c
@@ -55,14 +55,13 @@ int main(void)
 	for (i = 0, test = tests; test->expected_segments; test++, i++) {
 		struct page *pages[MAX_PAGES];
 		struct sg_table st;
-		int ret;
+		struct scatterlist *sg;

 		set_pages(pages, test->pfn, test->num_pages);

-		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
-						  0, test->size, test->max_seg,
-						  GFP_KERNEL);
-		assert(ret == test->alloc_ret);
+		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
+				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
+		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);

 		if (test->alloc_ret)
 			continue;
--
2.26.2


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH rdma-next v4 2/4] tools/testing/scatterlist: Rejuvenate bit-rotten test
  2020-09-27  6:46 [PATCH rdma-next v4 0/4] Dynamically allocate SG table from the pages Leon Romanovsky
  2020-09-27  6:46 ` [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages Leon Romanovsky
@ 2020-09-27  6:46 ` Leon Romanovsky
  2020-09-27  6:46 ` [PATCH rdma-next v4 3/4] tools/testing/scatterlist: Show errors in human readable form Leon Romanovsky
  2020-09-27  6:46 ` [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages Leon Romanovsky
  3 siblings, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2020-09-27  6:46 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Tvrtko Ursulin, Christoph Hellwig, Daniel Vetter, David Airlie,
	dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, linux-kernel,
	linux-rdma, Maor Gottlieb, Rodrigo Vivi, Roland Scheidegger,
	VMware Graphics

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

A couple of small tweaks are needed to make the test build and run
on current kernels.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 tools/testing/scatterlist/Makefile   |  3 ++-
 tools/testing/scatterlist/linux/mm.h | 35 ++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/tools/testing/scatterlist/Makefile b/tools/testing/scatterlist/Makefile
index cbb003d9305e..c65233876622 100644
--- a/tools/testing/scatterlist/Makefile
+++ b/tools/testing/scatterlist/Makefile
@@ -14,7 +14,7 @@ targets: include $(TARGETS)
 main: $(OFILES)

 clean:
-	$(RM) $(TARGETS) $(OFILES) scatterlist.c linux/scatterlist.h linux/highmem.h linux/kmemleak.h asm/io.h
+	$(RM) $(TARGETS) $(OFILES) scatterlist.c linux/scatterlist.h linux/highmem.h linux/kmemleak.h linux/slab.h asm/io.h
 	@rmdir asm

 scatterlist.c: ../../../lib/scatterlist.c
@@ -28,4 +28,5 @@ include: ../../../include/linux/scatterlist.h
 	@touch asm/io.h
 	@touch linux/highmem.h
 	@touch linux/kmemleak.h
+	@touch linux/slab.h
 	@cp $< linux/scatterlist.h
diff --git a/tools/testing/scatterlist/linux/mm.h b/tools/testing/scatterlist/linux/mm.h
index 6f9ac14aa800..6ae907f375d2 100644
--- a/tools/testing/scatterlist/linux/mm.h
+++ b/tools/testing/scatterlist/linux/mm.h
@@ -114,6 +114,12 @@ static inline void *kmalloc(unsigned int size, unsigned int flags)
 	return malloc(size);
 }

+static inline void *
+kmalloc_array(unsigned int n, unsigned int size, unsigned int flags)
+{
+	return malloc(n * size);
+}
+
 #define kfree(x) free(x)

 #define kmemleak_alloc(a, b, c, d)
@@ -122,4 +128,33 @@ static inline void *kmalloc(unsigned int size, unsigned int flags)
 #define PageSlab(p) (0)
 #define flush_kernel_dcache_page(p)

+#define MAX_ERRNO	4095
+
+#define IS_ERR_VALUE(x) unlikely((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)
+
+static inline void * __must_check ERR_PTR(long error)
+{
+	return (void *) error;
+}
+
+static inline long __must_check PTR_ERR(__force const void *ptr)
+{
+	return (long) ptr;
+}
+
+static inline bool __must_check IS_ERR(__force const void *ptr)
+{
+	return IS_ERR_VALUE((unsigned long)ptr);
+}
+
+static inline int __must_check PTR_ERR_OR_ZERO(__force const void *ptr)
+{
+	if (IS_ERR(ptr))
+		return PTR_ERR(ptr);
+	else
+		return 0;
+}
+
+#define IS_ENABLED(x) (0)
+
 #endif
--
2.26.2


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH rdma-next v4 3/4] tools/testing/scatterlist: Show errors in human readable form
  2020-09-27  6:46 [PATCH rdma-next v4 0/4] Dynamically allocate SG table from the pages Leon Romanovsky
  2020-09-27  6:46 ` [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages Leon Romanovsky
  2020-09-27  6:46 ` [PATCH rdma-next v4 2/4] tools/testing/scatterlist: Rejuvenate bit-rotten test Leon Romanovsky
@ 2020-09-27  6:46 ` Leon Romanovsky
  2020-09-27  6:46 ` [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages Leon Romanovsky
  3 siblings, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2020-09-27  6:46 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Tvrtko Ursulin, Christoph Hellwig, Daniel Vetter, David Airlie,
	dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, linux-kernel,
	linux-rdma, Maor Gottlieb, Rodrigo Vivi, Roland Scheidegger,
	VMware Graphics

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Instead of just asserting, dump some more useful info about what the test
saw versus what it expected to see.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 tools/testing/scatterlist/main.c | 44 ++++++++++++++++++++++++--------
 1 file changed, 34 insertions(+), 10 deletions(-)

diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
index 4899359a31ac..b2c7e9f7b8d3 100644
--- a/tools/testing/scatterlist/main.c
+++ b/tools/testing/scatterlist/main.c
@@ -5,6 +5,15 @@

 #define MAX_PAGES (64)

+struct test {
+	int alloc_ret;
+	unsigned num_pages;
+	unsigned *pfn;
+	unsigned size;
+	unsigned int max_seg;
+	unsigned int expected_segments;
+};
+
 static void set_pages(struct page **pages, const unsigned *array, unsigned num)
 {
 	unsigned int i;
@@ -17,17 +26,32 @@ static void set_pages(struct page **pages, const unsigned *array, unsigned num)

 #define pfn(...) (unsigned []){ __VA_ARGS__ }

+static void fail(struct test *test, struct sg_table *st, const char *cond)
+{
+	unsigned int i;
+
+	fprintf(stderr, "Failed on '%s'!\n\n", cond);
+
+	printf("size = %u, max segment = %u, expected nents = %u\nst->nents = %u, st->orig_nents= %u\n",
+	       test->size, test->max_seg, test->expected_segments, st->nents,
+	       st->orig_nents);
+
+	printf("%u input PFNs:", test->num_pages);
+	for (i = 0; i < test->num_pages; i++)
+		printf(" %x", test->pfn[i]);
+	printf("\n");
+
+	exit(1);
+}
+
+#define VALIDATE(cond, st, test) \
+	if (!(cond)) \
+		fail((test), (st), #cond);
+
 int main(void)
 {
 	const unsigned int sgmax = SCATTERLIST_MAX_SEGMENT;
-	struct test {
-		int alloc_ret;
-		unsigned num_pages;
-		unsigned *pfn;
-		unsigned size;
-		unsigned int max_seg;
-		unsigned int expected_segments;
-	} *test, tests[] = {
+	struct test *test, tests[] = {
 		{ -EINVAL, 1, pfn(0), PAGE_SIZE, PAGE_SIZE + 1, 1 },
 		{ -EINVAL, 1, pfn(0), PAGE_SIZE, 0, 1 },
 		{ -EINVAL, 1, pfn(0), PAGE_SIZE, sgmax + 1, 1 },
@@ -66,8 +90,8 @@ int main(void)
 		if (test->alloc_ret)
 			continue;

-		assert(st.nents == test->expected_segments);
-		assert(st.orig_nents == test->expected_segments);
+		VALIDATE(st.nents == test->expected_segments, &st, test);
+		VALIDATE(st.orig_nents == test->expected_segments, &st, test);

 		sg_free_table(&st);
 	}
--
2.26.2


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-27  6:46 [PATCH rdma-next v4 0/4] Dynamically allocate SG table from the pages Leon Romanovsky
                   ` (2 preceding siblings ...)
  2020-09-27  6:46 ` [PATCH rdma-next v4 3/4] tools/testing/scatterlist: Show errors in human readable form Leon Romanovsky
@ 2020-09-27  6:46 ` Leon Romanovsky
  2020-09-29 19:59   ` Jason Gunthorpe
  3 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2020-09-27  6:46 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Maor Gottlieb, Christoph Hellwig, Daniel Vetter, David Airlie,
	dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, linux-kernel,
	linux-rdma, Rodrigo Vivi, Roland Scheidegger, Tvrtko Ursulin,
	VMware Graphics

From: Maor Gottlieb <maorg@nvidia.com>

Remove the implementation of ib_umem_add_sg_table and instead
call __sg_alloc_table_from_pages(), which already has the logic to
merge contiguous pages.

Besides removing duplicated functionality, this reduces the memory
consumption of the SG table significantly. Prior to this patch, the
SG table was allocated in advance regardless of whether the pages
were contiguous.

On a system using 2MB huge pages, without this change the SG table
would contain 512x as many SG entries. E.g. for a 100GB memory
registration:

             Number of entries      Size
    Before        26214400          600.0MB
    After            51200            1.2MB

Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/core/umem.c | 92 +++++-----------------------------
 1 file changed, 12 insertions(+), 80 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 01b680b62846..0ef736970aba 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -63,73 +63,6 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	sg_free_table(&umem->sg_head);
 }

-/* ib_umem_add_sg_table - Add N contiguous pages to scatter table
- *
- * sg: current scatterlist entry
- * page_list: array of npage struct page pointers
- * npages: number of pages in page_list
- * max_seg_sz: maximum segment size in bytes
- * nents: [out] number of entries in the scatterlist
- *
- * Return new end of scatterlist
- */
-static struct scatterlist *ib_umem_add_sg_table(struct scatterlist *sg,
-						struct page **page_list,
-						unsigned long npages,
-						unsigned int max_seg_sz,
-						int *nents)
-{
-	unsigned long first_pfn;
-	unsigned long i = 0;
-	bool update_cur_sg = false;
-	bool first = !sg_page(sg);
-
-	/* Check if new page_list is contiguous with end of previous page_list.
-	 * sg->length here is a multiple of PAGE_SIZE and sg->offset is 0.
-	 */
-	if (!first && (page_to_pfn(sg_page(sg)) + (sg->length >> PAGE_SHIFT) ==
-		       page_to_pfn(page_list[0])))
-		update_cur_sg = true;
-
-	while (i != npages) {
-		unsigned long len;
-		struct page *first_page = page_list[i];
-
-		first_pfn = page_to_pfn(first_page);
-
-		/* Compute the number of contiguous pages we have starting
-		 * at i
-		 */
-		for (len = 0; i != npages &&
-			      first_pfn + len == page_to_pfn(page_list[i]) &&
-			      len < (max_seg_sz >> PAGE_SHIFT);
-		     len++)
-			i++;
-
-		/* Squash N contiguous pages from page_list into current sge */
-		if (update_cur_sg) {
-			if ((max_seg_sz - sg->length) >= (len << PAGE_SHIFT)) {
-				sg_set_page(sg, sg_page(sg),
-					    sg->length + (len << PAGE_SHIFT),
-					    0);
-				update_cur_sg = false;
-				continue;
-			}
-			update_cur_sg = false;
-		}
-
-		/* Squash N contiguous pages into next sge or first sge */
-		if (!first)
-			sg = sg_next(sg);
-
-		(*nents)++;
-		sg_set_page(sg, first_page, len << PAGE_SHIFT, 0);
-		first = false;
-	}
-
-	return sg;
-}
-
 /**
  * ib_umem_find_best_pgsz - Find best HW page size to use for this MR
  *
@@ -221,7 +154,7 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
 	struct mm_struct *mm;
 	unsigned long npages;
 	int ret;
-	struct scatterlist *sg;
+	struct scatterlist *sg = NULL;
 	unsigned int gup_flags = FOLL_WRITE;

 	/*
@@ -276,15 +209,9 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,

 	cur_base = addr & PAGE_MASK;

-	ret = sg_alloc_table(&umem->sg_head, npages, GFP_KERNEL);
-	if (ret)
-		goto vma;
-
 	if (!umem->writable)
 		gup_flags |= FOLL_FORCE;

-	sg = umem->sg_head.sgl;
-
 	while (npages) {
 		cond_resched();
 		ret = pin_user_pages_fast(cur_base,
@@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
 			goto umem_release;

 		cur_base += ret * PAGE_SIZE;
-		npages   -= ret;
-
-		sg = ib_umem_add_sg_table(sg, page_list, ret,
-			dma_get_max_seg_size(device->dma_device),
-			&umem->sg_nents);
+		npages -= ret;
+		sg = __sg_alloc_table_from_pages(
+			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
+			dma_get_max_seg_size(device->dma_device), sg, npages,
+			GFP_KERNEL);
+		umem->sg_nents = umem->sg_head.nents;
+		if (IS_ERR(sg)) {
+			unpin_user_pages_dirty_lock(page_list, ret, 0);
+			ret = PTR_ERR(sg);
+			goto umem_release;
+		}
 	}

 	sg_mark_end(sg);
@@ -322,7 +255,6 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,

 umem_release:
 	__ib_umem_release(device, umem, 0);
-vma:
 	atomic64_sub(ib_umem_num_pages(umem), &mm->pinned_vm);
 out:
 	free_page((unsigned long) page_list);
--
2.26.2


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-27  6:46 ` [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages Leon Romanovsky
@ 2020-09-29 19:59   ` Jason Gunthorpe
  2020-09-30  9:53     ` Leon Romanovsky
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2020-09-29 19:59 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Maor Gottlieb, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics

On Sun, Sep 27, 2020 at 09:46:47AM +0300, Leon Romanovsky wrote:
> @@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
>  			goto umem_release;
> 
>  		cur_base += ret * PAGE_SIZE;
> -		npages   -= ret;
> -
> -		sg = ib_umem_add_sg_table(sg, page_list, ret,
> -			dma_get_max_seg_size(device->dma_device),
> -			&umem->sg_nents);
> +		npages -= ret;
> +		sg = __sg_alloc_table_from_pages(
> +			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
> +			dma_get_max_seg_size(device->dma_device), sg, npages,
> +			GFP_KERNEL);
> +		umem->sg_nents = umem->sg_head.nents;
> +		if (IS_ERR(sg)) {
> +			unpin_user_pages_dirty_lock(page_list, ret, 0);
> +			ret = PTR_ERR(sg);
> +			goto umem_release;
> +		}
>  	}
> 
>  	sg_mark_end(sg);

Does it still need the sg_mark_end?

Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-29 19:59   ` Jason Gunthorpe
@ 2020-09-30  9:53     ` Leon Romanovsky
  2020-09-30 11:45       ` Jason Gunthorpe
  0 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2020-09-30  9:53 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Doug Ledford, Maor Gottlieb, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics

On Tue, Sep 29, 2020 at 04:59:29PM -0300, Jason Gunthorpe wrote:
> On Sun, Sep 27, 2020 at 09:46:47AM +0300, Leon Romanovsky wrote:
> > @@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
> >  			goto umem_release;
> >
> >  		cur_base += ret * PAGE_SIZE;
> > -		npages   -= ret;
> > -
> > -		sg = ib_umem_add_sg_table(sg, page_list, ret,
> > -			dma_get_max_seg_size(device->dma_device),
> > -			&umem->sg_nents);
> > +		npages -= ret;
> > +		sg = __sg_alloc_table_from_pages(
> > +			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
> > +			dma_get_max_seg_size(device->dma_device), sg, npages,
> > +			GFP_KERNEL);
> > +		umem->sg_nents = umem->sg_head.nents;
> > +		if (IS_ERR(sg)) {
> > +			unpin_user_pages_dirty_lock(page_list, ret, 0);
> > +			ret = PTR_ERR(sg);
> > +			goto umem_release;
> > +		}
> >  	}
> >
> >  	sg_mark_end(sg);
>
> Does it still need the sg_mark_end?

It is preserved here for correctness; the release logic doesn't rely on
this marker, but it is better to leave it.

Thanks

>
> Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-30  9:53     ` Leon Romanovsky
@ 2020-09-30 11:45       ` Jason Gunthorpe
  2020-09-30 11:53         ` Maor Gottlieb
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2020-09-30 11:45 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Maor Gottlieb, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics

On Wed, Sep 30, 2020 at 12:53:21PM +0300, Leon Romanovsky wrote:
> On Tue, Sep 29, 2020 at 04:59:29PM -0300, Jason Gunthorpe wrote:
> > On Sun, Sep 27, 2020 at 09:46:47AM +0300, Leon Romanovsky wrote:
> > > @@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
> > >  			goto umem_release;
> > >
> > >  		cur_base += ret * PAGE_SIZE;
> > > -		npages   -= ret;
> > > -
> > > -		sg = ib_umem_add_sg_table(sg, page_list, ret,
> > > -			dma_get_max_seg_size(device->dma_device),
> > > -			&umem->sg_nents);
> > > +		npages -= ret;
> > > +		sg = __sg_alloc_table_from_pages(
> > > +			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
> > > +			dma_get_max_seg_size(device->dma_device), sg, npages,
> > > +			GFP_KERNEL);
> > > +		umem->sg_nents = umem->sg_head.nents;
> > > +		if (IS_ERR(sg)) {
> > > +			unpin_user_pages_dirty_lock(page_list, ret, 0);
> > > +			ret = PTR_ERR(sg);
> > > +			goto umem_release;
> > > +		}
> > >  	}
> > >
> > >  	sg_mark_end(sg);
> >
> > Does it still need the sg_mark_end?
> 
> It is preserved here for correctness, the release logic doesn't rely on
> this marker, but it is better to leave it.

I mean, my read of __sg_alloc_table_from_pages() is that it already
placed it, the final __alloc_table() does it?

Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-30 11:45       ` Jason Gunthorpe
@ 2020-09-30 11:53         ` Maor Gottlieb
  2020-09-30 11:58           ` Jason Gunthorpe
  0 siblings, 1 reply; 17+ messages in thread
From: Maor Gottlieb @ 2020-09-30 11:53 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: Doug Ledford, Christoph Hellwig, Daniel Vetter, David Airlie,
	dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, linux-kernel,
	linux-rdma, Rodrigo Vivi, Roland Scheidegger, Tvrtko Ursulin,
	VMware Graphics


On 9/30/2020 2:45 PM, Jason Gunthorpe wrote:
> On Wed, Sep 30, 2020 at 12:53:21PM +0300, Leon Romanovsky wrote:
>> On Tue, Sep 29, 2020 at 04:59:29PM -0300, Jason Gunthorpe wrote:
>>> On Sun, Sep 27, 2020 at 09:46:47AM +0300, Leon Romanovsky wrote:
>>>> @@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
>>>>   			goto umem_release;
>>>>
>>>>   		cur_base += ret * PAGE_SIZE;
>>>> -		npages   -= ret;
>>>> -
>>>> -		sg = ib_umem_add_sg_table(sg, page_list, ret,
>>>> -			dma_get_max_seg_size(device->dma_device),
>>>> -			&umem->sg_nents);
>>>> +		npages -= ret;
>>>> +		sg = __sg_alloc_table_from_pages(
>>>> +			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
>>>> +			dma_get_max_seg_size(device->dma_device), sg, npages,
>>>> +			GFP_KERNEL);
>>>> +		umem->sg_nents = umem->sg_head.nents;
>>>> +		if (IS_ERR(sg)) {
>>>> +			unpin_user_pages_dirty_lock(page_list, ret, 0);
>>>> +			ret = PTR_ERR(sg);
>>>> +			goto umem_release;
>>>> +		}
>>>>   	}
>>>>
>>>>   	sg_mark_end(sg);
>>> Does it still need the sg_mark_end?
>> It is preserved here for correctness, the release logic doesn't rely on
>> this marker, but it is better to leave it.
> I mean, my read of __sg_alloc_table_from_pages() is that it already
> placed it, the final __alloc_table() does it?
>
> Jason


It marks the last allocated sge, but not the last populated sge (with page).


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-30 11:53         ` Maor Gottlieb
@ 2020-09-30 11:58           ` Jason Gunthorpe
  2020-09-30 15:05             ` Maor Gottlieb
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2020-09-30 11:58 UTC (permalink / raw)
  To: Maor Gottlieb
  Cc: Leon Romanovsky, Doug Ledford, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics

On Wed, Sep 30, 2020 at 02:53:58PM +0300, Maor Gottlieb wrote:
> 
> On 9/30/2020 2:45 PM, Jason Gunthorpe wrote:
> > On Wed, Sep 30, 2020 at 12:53:21PM +0300, Leon Romanovsky wrote:
> > > On Tue, Sep 29, 2020 at 04:59:29PM -0300, Jason Gunthorpe wrote:
> > > > On Sun, Sep 27, 2020 at 09:46:47AM +0300, Leon Romanovsky wrote:
> > > > > @@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
> > > > >   			goto umem_release;
> > > > > 
> > > > >   		cur_base += ret * PAGE_SIZE;
> > > > > -		npages   -= ret;
> > > > > -
> > > > > -		sg = ib_umem_add_sg_table(sg, page_list, ret,
> > > > > -			dma_get_max_seg_size(device->dma_device),
> > > > > -			&umem->sg_nents);
> > > > > +		npages -= ret;
> > > > > +		sg = __sg_alloc_table_from_pages(
> > > > > +			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
> > > > > +			dma_get_max_seg_size(device->dma_device), sg, npages,
> > > > > +			GFP_KERNEL);
> > > > > +		umem->sg_nents = umem->sg_head.nents;
> > > > > +		if (IS_ERR(sg)) {
> > > > > +			unpin_user_pages_dirty_lock(page_list, ret, 0);
> > > > > +			ret = PTR_ERR(sg);
> > > > > +			goto umem_release;
> > > > > +		}
> > > > >   	}
> > > > > 
> > > > >   	sg_mark_end(sg);
> > > > Does it still need the sg_mark_end?
> > > It is preserved here for correctness, the release logic doesn't rely on
> > > this marker, but it is better to leave it.
> > I mean, my read of __sg_alloc_table_from_pages() is that it already
> > placed it, the final __alloc_table() does it?
> 
> It marks the last allocated sge, but not the last populated sge (with page).

Why are those different?

It looks like the last iteration calls __alloc_table() with an exact
number of sges

+	if (!prv) {
+		/* Only the last allocation could be less than the maximum */
+		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
+		ret = sg_alloc_table(sgt, table_size, gfp_mask);
+		if (unlikely(ret))
+			return ERR_PTR(ret);
+	}

Jason 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-30 11:58           ` Jason Gunthorpe
@ 2020-09-30 15:05             ` Maor Gottlieb
  2020-09-30 15:14               ` Jason Gunthorpe
  0 siblings, 1 reply; 17+ messages in thread
From: Maor Gottlieb @ 2020-09-30 15:05 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Leon Romanovsky, Doug Ledford, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics


On 9/30/2020 2:58 PM, Jason Gunthorpe wrote:
> On Wed, Sep 30, 2020 at 02:53:58PM +0300, Maor Gottlieb wrote:
>> On 9/30/2020 2:45 PM, Jason Gunthorpe wrote:
>>> On Wed, Sep 30, 2020 at 12:53:21PM +0300, Leon Romanovsky wrote:
>>>> On Tue, Sep 29, 2020 at 04:59:29PM -0300, Jason Gunthorpe wrote:
>>>>> On Sun, Sep 27, 2020 at 09:46:47AM +0300, Leon Romanovsky wrote:
>>>>>> @@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
>>>>>>    			goto umem_release;
>>>>>>
>>>>>>    		cur_base += ret * PAGE_SIZE;
>>>>>> -		npages   -= ret;
>>>>>> -
>>>>>> -		sg = ib_umem_add_sg_table(sg, page_list, ret,
>>>>>> -			dma_get_max_seg_size(device->dma_device),
>>>>>> -			&umem->sg_nents);
>>>>>> +		npages -= ret;
>>>>>> +		sg = __sg_alloc_table_from_pages(
>>>>>> +			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
>>>>>> +			dma_get_max_seg_size(device->dma_device), sg, npages,
>>>>>> +			GFP_KERNEL);
>>>>>> +		umem->sg_nents = umem->sg_head.nents;
>>>>>> +		if (IS_ERR(sg)) {
>>>>>> +			unpin_user_pages_dirty_lock(page_list, ret, 0);
>>>>>> +			ret = PTR_ERR(sg);
>>>>>> +			goto umem_release;
>>>>>> +		}
>>>>>>    	}
>>>>>>
>>>>>>    	sg_mark_end(sg);
>>>>> Does it still need the sg_mark_end?
>>>> It is preserved here for correctness, the release logic doesn't rely on
>>>> this marker, but it is better to leave it.
>>> I mean, my read of __sg_alloc_table_from_pages() is that it already
>>> placed it, the final __alloc_table() does it?
>> It marks the last allocated sge, but not the last populated sge (with page).
> Why are those different?
>
> It looks like the last iteration calls __alloc_table() with an exact
> number of sges
>
> +	if (!prv) {
> +		/* Only the last allocation could be less than the maximum */
> +		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
> +		ret = sg_alloc_table(sgt, table_size, gfp_mask);
> +		if (unlikely(ret))
> +			return ERR_PTR(ret);
> +	}
>
> Jason

This is right only for the last iteration. E.g. in the first iteration,
in case there are more pages (left_pages), we allocate
SG_MAX_SINGLE_ALLOC.  We don't know how many pages from the second
iteration will be squashed into the SGE from the first iteration.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-30 15:05             ` Maor Gottlieb
@ 2020-09-30 15:14               ` Jason Gunthorpe
  2020-09-30 15:40                 ` Maor Gottlieb
  2020-09-30 16:51                 ` Leon Romanovsky
  0 siblings, 2 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2020-09-30 15:14 UTC (permalink / raw)
  To: Maor Gottlieb
  Cc: Leon Romanovsky, Doug Ledford, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics

On Wed, Sep 30, 2020 at 06:05:15PM +0300, Maor Gottlieb wrote:
> This is right only for the last iteration. E.g. in the first iteration in
> case that there are more pages (left_pages), then we allocate
> SG_MAX_SINGLE_ALLOC.  We don't know how many pages from the second iteration
> will be squashed to the SGE from the first iteration.

Well, it is 0 or 1 SGEs. Check if the first page is mergeable and
subtract one from the required length?

I dislike this sg_mark_end() it is something that should be internal,
IMHO.

Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-30 15:14               ` Jason Gunthorpe
@ 2020-09-30 15:40                 ` Maor Gottlieb
  2020-09-30 16:51                 ` Leon Romanovsky
  1 sibling, 0 replies; 17+ messages in thread
From: Maor Gottlieb @ 2020-09-30 15:40 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Leon Romanovsky, Doug Ledford, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics


On 9/30/2020 6:14 PM, Jason Gunthorpe wrote:
> On Wed, Sep 30, 2020 at 06:05:15PM +0300, Maor Gottlieb wrote:
>> This is right only for the last iteration. E.g. in the first iteration in
>> case that there are more pages (left_pages), then we allocate
>> SG_MAX_SINGLE_ALLOC.  We don't know how many pages from the second iteration
>> will be squashed to the SGE from the first iteration.
> Well, it is 0 or 1 SGE's. Check if the first page is mergable and
> subtract one from the required length?
>
> I dislike this sg_mark_end() it is something that should be internal,
> IMHO.

I can move it to __sg_alloc_table_from_pages:

 	sgt->nents = tmp_nents;
+	if (!left_pages)
+		sg_mark_end(s);
 out:
 	return s;

>
> Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
  2020-09-30 15:14               ` Jason Gunthorpe
  2020-09-30 15:40                 ` Maor Gottlieb
@ 2020-09-30 16:51                 ` Leon Romanovsky
  1 sibling, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2020-09-30 16:51 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Maor Gottlieb, Doug Ledford, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics

On Wed, Sep 30, 2020 at 12:14:06PM -0300, Jason Gunthorpe wrote:
> On Wed, Sep 30, 2020 at 06:05:15PM +0300, Maor Gottlieb wrote:
> > This is right only for the last iteration. E.g. in the first iteration in
> > case that there are more pages (left_pages), then we allocate
> > SG_MAX_SINGLE_ALLOC.  We don't know how many pages from the second iteration
> > will be squashed to the SGE from the first iteration.
>
> Well, it is 0 or 1 SGE's. Check if the first page is mergable and
> subtract one from the required length?
>
> I dislike this sg_mark_end() it is something that should be internal,
> IMHO.

I don't think so, but Maor provided a possible solution.
Can you take the patches?

Thanks

>
> Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-27  6:46 ` [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages Leon Romanovsky
@ 2020-10-02 15:02   ` Jason Gunthorpe
  2020-10-02 16:11     ` Maor Gottlieb
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2020-10-02 15:02 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Maor Gottlieb, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Maor Gottlieb, Rodrigo Vivi,
	Roland Scheidegger, Tvrtko Ursulin, VMware Graphics

On Sun, Sep 27, 2020 at 09:46:44AM +0300, Leon Romanovsky wrote:
> +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
> +		struct page **pages, unsigned int n_pages, unsigned int offset,
> +		unsigned long size, unsigned int max_segment,
> +		struct scatterlist *prv, unsigned int left_pages,
> +		gfp_t gfp_mask)
>  {
> -	unsigned int chunks, cur_page, seg_len, i;
> +	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
> +	struct scatterlist *s = prv;
> +	unsigned int table_size;
> +	unsigned int tmp_nents;
>  	int ret;
> -	struct scatterlist *s;
> 
>  	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
> -		return -EINVAL;
> +		return ERR_PTR(-EINVAL);
> +	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
> +		return ERR_PTR(-EOPNOTSUPP);
> +
> +	tmp_nents = prv ? sgt->nents : 0;
> +
> +	if (prv &&
> +	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==

This calculation of the end doesn't consider sg->offset

> +	    page_to_pfn(pages[0]))
> +		prv_len = prv->length;
> 
>  	/* compute number of contiguous chunks */
>  	chunks = 1;
> @@ -410,13 +461,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
>  		}
>  	}
> 
> -	ret = sg_alloc_table(sgt, chunks, gfp_mask);
> -	if (unlikely(ret))
> -		return ret;
> +	if (!prv) {
> +		/* Only the last allocation could be less than the maximum */
> +		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
> +		ret = sg_alloc_table(sgt, table_size, gfp_mask);
> +		if (unlikely(ret))
> +			return ERR_PTR(ret);
> +	}

This is basically redundant right? Now that get_next_sg() can allocate
SGs it can just build them one by one, no need to preallocate.

Actually all the changes to the allocation seem like overkill, just
allocate a single new array directly in get_next_sg() whenever it
needs to.

Something like this:

@@ -365,6 +372,37 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(sg_alloc_table);
 
+static struct scatterlist *get_next_sg(struct sg_table *table,
+		struct scatterlist *cur, unsigned long needed_sges,
+		gfp_t gfp_mask)
+{
+	struct scatterlist *new_sg;
+	unsigned int alloc_size;
+
+	if (cur) {
+		struct scatterlist *next_sg = sg_next(cur);
+
+		/* Check if the last entry should be kept for chaining */
+		if (!sg_is_last(next_sg) || needed_sges == 1)
+			return next_sg;
+	}
+
+	alloc_size = min_t(unsigned long, needed_sges, SG_MAX_SINGLE_ALLOC);
+	new_sg = sg_kmalloc(alloc_size, gfp_mask);
+	if (!new_sg)
+		return ERR_PTR(-ENOMEM);
+	sg_init_table(new_sg, alloc_size);
+	if (cur) {
+		__sg_chain(cur, new_sg);
+		table->orig_nents += alloc_size - 1;
+	} else {
+		table->sgl = new_sg;
+		table->orig_nents = alloc_size;
+		table->nents = 0;
+	}
+	return new_sg;
+}
+
 /**
  * __sg_alloc_table_from_pages - Allocate and initialize an sg table from
  *			         an array of pages
@@ -374,29 +412,64 @@ EXPORT_SYMBOL(sg_alloc_table);
  * @offset:      Offset from start of the first page to the start of a buffer
  * @size:        Number of valid bytes in the buffer (after offset)
  * @max_segment: Maximum size of a scatterlist node in bytes (page aligned)
+ * @prv:	 Last populated sge in sgt
+ * @left_pages:  Number of pages the caller will add after this call
  * @gfp_mask:	 GFP allocation mask
  *
- *  Description:
- *    Allocate and initialize an sg table from a list of pages. Contiguous
- *    ranges of the pages are squashed into a single scatterlist node up to the
- *    maximum size specified in @max_segment. An user may provide an offset at a
- *    start and a size of valid data in a buffer specified by the page array.
- *    The returned sg table is released by sg_free_table.
+ * Description:
+ *    If @prv is NULL, allocate and initialize an sg table from a list of pages,
+ *    else reuse the scatterlist passed in at @prv.
+ *    Contiguous ranges of the pages are squashed into a single scatterlist
+ *    entry up to the maximum size specified in @max_segment.  A user may
+ *    provide an offset at a start and a size of valid data in a buffer
+ *    specified by the page array.
  *
  * Returns:
- *   0 on success, negative error on failure
+ *   Last SGE in sgt on success, PTR_ERR() otherwise.
+ *   The allocation in @sgt must be released by sg_free_table.
+ *
+ * Notes:
+ *   If this function returns an error, the caller must call
+ *   sg_free_table() to clean up any leftover allocations.
  */
-int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
-				unsigned int n_pages, unsigned int offset,
-				unsigned long size, unsigned int max_segment,
-				gfp_t gfp_mask)
+struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
+		struct page **pages, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment,
+		struct scatterlist *prv, unsigned int left_pages,
+		gfp_t gfp_mask)
 {
-	unsigned int chunks, cur_page, seg_len, i;
-	int ret;
-	struct scatterlist *s;
+	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
+	unsigned int added_nents = 0;
+	struct scatterlist *s = prv;
 
 	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
-		return -EINVAL;
+		return ERR_PTR(-EINVAL);
+	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	if (prv) {
+		unsigned long paddr = (page_to_pfn(sg_page(prv)) * PAGE_SIZE +
+				       prv->offset + prv->length) /
+				      PAGE_SIZE;
+
+		if (WARN_ON(offset))
+			return ERR_PTR(-EINVAL);
+
+		/* Merge contiguous pages into the last SG */
+		prv_len = prv->length;
+		while (n_pages && page_to_pfn(pages[0]) == paddr) {
+			if (prv->length + PAGE_SIZE > max_segment)
+				break;
+			prv->length += PAGE_SIZE;
+			paddr++;
+			pages++;
+			n_pages--;
+		}
+		if (!n_pages) {
+			sg_mark_end(sg_next(prv));
+			return prv;
+		}
+	}
 
 	/* compute number of contiguous chunks */
 	chunks = 1;
@@ -410,13 +483,9 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 		}
 	}
 
-	ret = sg_alloc_table(sgt, chunks, gfp_mask);
-	if (unlikely(ret))
-		return ret;
-
 	/* merging chunks and putting them into the scatterlist */
 	cur_page = 0;
-	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
+	for (i = 0; i < chunks; i++) {
 		unsigned int j, chunk_size;
 
 		/* look for the end of the current chunk */
@@ -429,15 +498,28 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 				break;
 		}
 
+		/* Pass how many chunks might be left */
+		s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask);
+		if (IS_ERR(s)) {
+			/*
+			 * Restore the entry length to the value it had
+			 * before this function was called.
+			 */
+			if (prv)
+				prv->length = prv_len;
+			return s;
+		}
 		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
 		sg_set_page(s, pages[cur_page],
 			    min_t(unsigned long, size, chunk_size), offset);
+		added_nents++;
 		size -= chunk_size;
 		offset = 0;
 		cur_page = j;
 	}
-
-	return 0;
+	sgt->nents += added_nents;
+	sg_mark_end(s);
+	return s;
 }
 EXPORT_SYMBOL(__sg_alloc_table_from_pages);
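
To illustrate the calling convention, a rough sketch of a caller appending
pinned pages in batches follows (illustrative only: build_sgt_in_batches() and
get_next_batch() are made-up names, the batch size is arbitrary, and pinning,
unpinning and DMA mapping are left out; on failure the table is torn down with
sg_free_table() as required by the Notes above):

static int build_sgt_in_batches(struct sg_table *sgt, unsigned long npages,
				gfp_t gfp)
{
	struct scatterlist *sg = NULL;	/* NULL: first call sets up sgt */
	struct page *pages[64];		/* arbitrary batch size */
	unsigned long done = 0;

	while (done < npages) {
		unsigned long n = min_t(unsigned long, npages - done,
					ARRAY_SIZE(pages));

		get_next_batch(pages, n);	/* made up: fill pages[0..n-1] */

		sg = __sg_alloc_table_from_pages(sgt, pages, n,
				0,			/* offset must be 0 once prv is set */
				n << PAGE_SHIFT,	/* bytes covered by this batch */
				UINT_MAX & PAGE_MASK,	/* page-aligned "no limit" segment */
				sg,			/* append after the previous SGE */
				npages - done - n,	/* pages still to come */
				gfp);
		if (IS_ERR(sg)) {
			sg_free_table(sgt);
			return PTR_ERR(sg);
		}
		done += n;
	}
	return 0;
}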
 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-10-02 15:02   ` Jason Gunthorpe
@ 2020-10-02 16:11     ` Maor Gottlieb
  2020-10-02 16:19       ` Jason Gunthorpe
  0 siblings, 1 reply; 17+ messages in thread
From: Maor Gottlieb @ 2020-10-02 16:11 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: Doug Ledford, Maor Gottlieb, Christoph Hellwig, Daniel Vetter,
	David Airlie, dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	linux-kernel, linux-rdma, Rodrigo Vivi, Roland Scheidegger,
	Tvrtko Ursulin, VMware Graphics


On 10/2/2020 6:02 PM, Jason Gunthorpe wrote:
> On Sun, Sep 27, 2020 at 09:46:44AM +0300, Leon Romanovsky wrote:
>> +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
>> +		struct page **pages, unsigned int n_pages, unsigned int offset,
>> +		unsigned long size, unsigned int max_segment,
>> +		struct scatterlist *prv, unsigned int left_pages,
>> +		gfp_t gfp_mask)
>>   {
>> -	unsigned int chunks, cur_page, seg_len, i;
>> +	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
>> +	struct scatterlist *s = prv;
>> +	unsigned int table_size;
>> +	unsigned int tmp_nents;
>>   	int ret;
>> -	struct scatterlist *s;
>>
>>   	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
>> -		return -EINVAL;
>> +		return ERR_PTR(-EINVAL);
>> +	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
>> +		return ERR_PTR(-EOPNOTSUPP);
>> +
>> +	tmp_nents = prv ? sgt->nents : 0;
>> +
>> +	if (prv &&
>> +	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==
> This calculation of the end doesn't consider sg->offset

Right, should be fixed.
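
For reference, folding sg->offset into the end calculation gives the form used
in the suggested hunk quoted below:

	unsigned long paddr = (page_to_pfn(sg_page(prv)) * PAGE_SIZE +
			       prv->offset + prv->length) / PAGE_SIZE;
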
>
>> +	    page_to_pfn(pages[0]))
>> +		prv_len = prv->length;
>>
>>   	/* compute number of contiguous chunks */
>>   	chunks = 1;
>> @@ -410,13 +461,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
>>   		}
>>   	}
>>
>> -	ret = sg_alloc_table(sgt, chunks, gfp_mask);
>> -	if (unlikely(ret))
>> -		return ret;
>> +	if (!prv) {
>> +		/* Only the last allocation could be less than the maximum */
>> +		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
>> +		ret = sg_alloc_table(sgt, table_size, gfp_mask);
>> +		if (unlikely(ret))
>> +			return ERR_PTR(ret);
>> +	}
> This is basically redundant, right? Now that get_next_sg() can allocate
> SGs it can just build them one by one, no need to preallocate.
> 
> Actually all the changes to the allocation seem like overkill, just
> allocate a single new array directly in get_next_sg() whenever it
> needs to.

No, only the last allocation could be less than the maximum (as written in
the comment).
I prefer to stick with the current implementation and fix the offset.
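
To spell the constraint out: the generic sg_free_table() walk expects each
block's chain pointer at a fixed slot, so every allocation except the final
one has to be a full SG_MAX_SINGLE_ALLOC entries; a rough sketch of that walk
(illustrative only, not the actual __sg_free_table() source):

	/* remaining = sgt->orig_nents, sgl = sgt->sgl */
	while (remaining) {
		struct scatterlist *next;

		if (remaining > SG_MAX_SINGLE_ALLOC) {
			/* the chain link is assumed to sit in the last
			 * slot of a full block
			 */
			next = sg_chain_ptr(&sgl[SG_MAX_SINGLE_ALLOC - 1]);
			remaining -= SG_MAX_SINGLE_ALLOC - 1;
		} else {
			next = NULL;
			remaining = 0;
		}
		/* the block starting at sgl is freed here */
		sgl = next;
	}

Hence the over-allocation to SG_MAX_SINGLE_ALLOC on the first call whenever
left_pages is non-zero.
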
>
> Something like this:
>
> @@ -365,6 +372,37 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
>   }
>   EXPORT_SYMBOL(sg_alloc_table);
>   
> +static struct scatterlist *get_next_sg(struct sg_table *table,
> +		struct scatterlist *cur, unsigned long needed_sges,
> +		gfp_t gfp_mask)
> +{
> +	struct scatterlist *new_sg;
> +	unsigned int alloc_size;
> +
> +	if (cur) {
> +		struct scatterlist *next_sg = sg_next(cur);
> +
> +		/* Check if the last entry should be kept for chaining */
> +		if (!sg_is_last(next_sg) || needed_sges == 1)
> +			return next_sg;
> +	}
> +
> +	alloc_size = min_t(unsigned long, needed_sges, SG_MAX_SINGLE_ALLOC);
> +	new_sg = sg_kmalloc(alloc_size, gfp_mask);
> +	if (!new_sg)
> +		return ERR_PTR(-ENOMEM);
> +	sg_init_table(new_sg, alloc_size);
> +	if (cur) {
> +		__sg_chain(cur, new_sg);
> +		table->orig_nents += alloc_size - 1;
> +	} else {
> +		table->sgl = new_sg;
> +		table->orig_nents = alloc_size;
> +		table->nents = 0;
> +	}
> +	return new_sg;
> +}
> +
>   /**
>    * __sg_alloc_table_from_pages - Allocate and initialize an sg table from
>    *			         an array of pages
> @@ -374,29 +412,64 @@ EXPORT_SYMBOL(sg_alloc_table);
>    * @offset:      Offset from start of the first page to the start of a buffer
>    * @size:        Number of valid bytes in the buffer (after offset)
>    * @max_segment: Maximum size of a scatterlist node in bytes (page aligned)
> + * @prv:	 Last populated SGE in sgt
> + * @left_pages:  Number of pages left to be added after this call
>    * @gfp_mask:	 GFP allocation mask
>    *
> - *  Description:
> - *    Allocate and initialize an sg table from a list of pages. Contiguous
> - *    ranges of the pages are squashed into a single scatterlist node up to the
> - *    maximum size specified in @max_segment. An user may provide an offset at a
> - *    start and a size of valid data in a buffer specified by the page array.
> - *    The returned sg table is released by sg_free_table.
> + * Description:
> + *    If @prv is NULL, allocate and initialize an sg table from a list of pages,
> + *    else reuse the scatterlist passed in at @prv.
> + *    Contiguous ranges of the pages are squashed into a single scatterlist
> + *    entry up to the maximum size specified in @max_segment.  A user may
> + *    provide an offset at the start and a size of valid data in the buffer
> + *    specified by the page array.
>    *
>    * Returns:
> - *   0 on success, negative error on failure
> + *   Last SGE in sgt on success, PTR_ERR() on failure.
> + *   The allocation in @sgt must be released by sg_free_table.
> + *
> + * Notes:
> + *   If this function returns an error, the caller must call
> + *   sg_free_table() to clean up any leftover allocations.
>    */
> -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> -				unsigned int n_pages, unsigned int offset,
> -				unsigned long size, unsigned int max_segment,
> -				gfp_t gfp_mask)
> +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
> +		struct page **pages, unsigned int n_pages, unsigned int offset,
> +		unsigned long size, unsigned int max_segment,
> +		struct scatterlist *prv, unsigned int left_pages,
> +		gfp_t gfp_mask)
>   {
> -	unsigned int chunks, cur_page, seg_len, i;
> -	int ret;
> -	struct scatterlist *s;
> +	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
> +	unsigned int added_nents = 0;
> +	struct scatterlist *s = prv;
>   
>   	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
> -		return -EINVAL;
> +		return ERR_PTR(-EINVAL);
> +	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
> +		return ERR_PTR(-EOPNOTSUPP);
> +
> +	if (prv) {
> +		unsigned long paddr = (page_to_pfn(sg_page(prv)) * PAGE_SIZE +
> +				       prv->offset + prv->length) /
> +				      PAGE_SIZE;
> +
> +		if (WARN_ON(offset))
> +			return ERR_PTR(-EINVAL);
> +
> +		/* Merge contiguous pages into the last SG */
> +		prv_len = prv->length;
> +		while (n_pages && page_to_pfn(pages[0]) == paddr) {
> +			if (prv->length + PAGE_SIZE > max_segment)
> +				break;
> +			prv->length += PAGE_SIZE;
> +			paddr++;
> +			pages++;
> +			n_pages--;
> +		}
> +		if (!n_pages) {
> +			sg_mark_end(sg_next(prv));
> +			return prv;
> +		}
> +	}
>   
>   	/* compute number of contiguous chunks */
>   	chunks = 1;
> @@ -410,13 +483,9 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
>   		}
>   	}
>   
> -	ret = sg_alloc_table(sgt, chunks, gfp_mask);
> -	if (unlikely(ret))
> -		return ret;
> -
>   	/* merging chunks and putting them into the scatterlist */
>   	cur_page = 0;
> -	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
> +	for (i = 0; i < chunks; i++) {
>   		unsigned int j, chunk_size;
>   
>   		/* look for the end of the current chunk */
> @@ -429,15 +498,28 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
>   				break;
>   		}
>   
> +		/* Pass how many chunks might be left */
> +		s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask);
> +		if (IS_ERR(s)) {
> +			/*
> +			 * Restore the entry length to the value it had
> +			 * before this function was called.
> +			 */
> +			if (prv)
> +				prv->length = prv_len;
> +			return s;
> +		}
>   		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
>   		sg_set_page(s, pages[cur_page],
>   			    min_t(unsigned long, size, chunk_size), offset);
> +		added_nents++;
>   		size -= chunk_size;
>   		offset = 0;
>   		cur_page = j;
>   	}
> -
> -	return 0;
> +	sgt->nents += added_nents;
> +	sg_mark_end(s);
> +	return s;
>   }
>   EXPORT_SYMBOL(__sg_alloc_table_from_pages);
>   

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-10-02 16:11     ` Maor Gottlieb
@ 2020-10-02 16:19       ` Jason Gunthorpe
  0 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2020-10-02 16:19 UTC (permalink / raw)
  To: Maor Gottlieb
  Cc: Leon Romanovsky, Doug Ledford, Maor Gottlieb, Christoph Hellwig,
	Daniel Vetter, David Airlie, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, linux-kernel, linux-rdma, Rodrigo Vivi,
	Roland Scheidegger, Tvrtko Ursulin, VMware Graphics

On Fri, Oct 02, 2020 at 07:11:33PM +0300, Maor Gottlieb wrote:
> 
> On 10/2/2020 6:02 PM, Jason Gunthorpe wrote:
> > On Sun, Sep 27, 2020 at 09:46:44AM +0300, Leon Romanovsky wrote:
> > > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
> > > +		struct page **pages, unsigned int n_pages, unsigned int offset,
> > > +		unsigned long size, unsigned int max_segment,
> > > +		struct scatterlist *prv, unsigned int left_pages,
> > > +		gfp_t gfp_mask)
> > >   {
> > > -	unsigned int chunks, cur_page, seg_len, i;
> > > +	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
> > > +	struct scatterlist *s = prv;
> > > +	unsigned int table_size;
> > > +	unsigned int tmp_nents;
> > >   	int ret;
> > > -	struct scatterlist *s;
> > > 
> > >   	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
> > > -		return -EINVAL;
> > > +		return ERR_PTR(-EINVAL);
> > > +	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
> > > +		return ERR_PTR(-EOPNOTSUPP);
> > > +
> > > +	tmp_nents = prv ? sgt->nents : 0;
> > > +
> > > +	if (prv &&
> > > +	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==
> > This calculation of the end doesn't consider sg->offset
> 
> Right, should be fixed.
> > 
> > > +	    page_to_pfn(pages[0]))
> > > +		prv_len = prv->length;
> > > 
> > >   	/* compute number of contiguous chunks */
> > >   	chunks = 1;
> > > @@ -410,13 +461,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> > >   		}
> > >   	}
> > > 
> > > -	ret = sg_alloc_table(sgt, chunks, gfp_mask);
> > > -	if (unlikely(ret))
> > > -		return ret;
> > > +	if (!prv) {
> > > +		/* Only the last allocation could be less than the maximum */
> > > +		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
> > > +		ret = sg_alloc_table(sgt, table_size, gfp_mask);
> > > +		if (unlikely(ret))
> > > +			return ERR_PTR(ret);
> > > +	}
> > This is basically redundant, right? Now that get_next_sg() can allocate
> > SGs it can just build them one by one, no need to preallocate.
> > 
> > Actually all the changes to the allocation seem like overkill, just
> > allocate a single new array directly in get_next_sg() whenever it
> > needs to.
> 
> No, only the last allocation could be less than the maximum (as written in
> the comment).

The point is that get_next_sg() is fully redundant with
sg_alloc_table() because it is always used in cases when prv is
set. There is zero reason to call sg_alloc_table() here in the one case
where prv is not set.

Further, this cleans up the spaghetti goto in the middle of the for
loop and avoids allocating an extra chunk if the page list fully fits
in prv.

Given how much smaller it is, I think you should look more carefully.

Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2020-10-02 16:19 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-27  6:46 [PATCH rdma-next v4 0/4] Dynamicaly allocate SG table from the pages Leon Romanovsky
2020-09-27  6:46 ` [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages Leon Romanovsky
2020-10-02 15:02   ` Jason Gunthorpe
2020-10-02 16:11     ` Maor Gottlieb
2020-10-02 16:19       ` Jason Gunthorpe
2020-09-27  6:46 ` [PATCH rdma-next v4 2/4] tools/testing/scatterlist: Rejuvenate bit-rotten test Leon Romanovsky
2020-09-27  6:46 ` [PATCH rdma-next v4 3/4] tools/testing/scatterlist: Show errors in human readable form Leon Romanovsky
2020-09-27  6:46 ` [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages Leon Romanovsky
2020-09-29 19:59   ` Jason Gunthorpe
2020-09-30  9:53     ` Leon Romanovsky
2020-09-30 11:45       ` Jason Gunthorpe
2020-09-30 11:53         ` Maor Gottlieb
2020-09-30 11:58           ` Jason Gunthorpe
2020-09-30 15:05             ` Maor Gottlieb
2020-09-30 15:14               ` Jason Gunthorpe
2020-09-30 15:40                 ` Maor Gottlieb
2020-09-30 16:51                 ` Leon Romanovsky
