All of lore.kernel.org
* [PATCH rdma-next v3 0/2] Dynamically allocate SG table from the pages
@ 2020-09-22  8:39 ` Leon Romanovsky
  0 siblings, 0 replies; 43+ messages in thread
From: Leon Romanovsky @ 2020-09-22  8:39 UTC (permalink / raw)
  Cc: Leon Romanovsky, Daniel Vetter, David Airlie, dri-devel,
	intel-gfx, Jani Nikula, Joonas Lahtinen, linux-kernel,
	linux-rdma, Maor Gottlieb, Rodrigo Vivi, Roland Scheidegger,
	VMware Graphics

From: Leon Romanovsky <leonro@nvidia.com>

Changelog:
v3:
 * Followed Christoph's suggestion to avoid introducing a new API and extend the existing one instead.
v2: https://lore.kernel.org/linux-rdma/20200916140726.839377-1-leon@kernel.org
 * Fixed indentations and comments
 * Deleted sg_alloc_next()
 * Squashed lib/scatterlist patches into one
v1: https://lore.kernel.org/lkml/20200910134259.1304543-1-leon@kernel.org
 * Changed _sg_chain to be __sg_chain
 * Added dependency on ARCH_NO_SG_CHAIN
 * Removed struct sg_append
v0:
 * https://lore.kernel.org/lkml/20200903121853.1145976-1-leon@kernel.org

--------------------------------------------------------------------------
From Maor:

This series extends __sg_alloc_table_from_pages to allow chaining of
new pages to an already initialized SG table.

This allows drivers to utilize the optimization of merging contiguous
pages without having to pre-allocate all the pages and hold them in a
very large temporary buffer before initializing the SG table.

The second patch changes the Infiniband driver to use the new API. It
removes duplicated functionality from the code and benefits from the
optimization of dynamically allocating the SG table from the pages.
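
To illustrate the intended call pattern (a minimal sketch only, not code
from this series; the page-array, count and max_segment names are made
up, and error handling is trimmed):

	struct sg_table sgt = {};
	struct scatterlist *last;

	/* First batch: allocates the table, returns the last populated SGE. */
	last = __sg_alloc_table_from_pages(&sgt, first_pages, first_npages, 0,
			(unsigned long)first_npages << PAGE_SHIFT,
			max_segment, NULL, more_npages, GFP_KERNEL);

	/* Second batch: chained onto the same table via the returned SGE. */
	if (!IS_ERR(last))
		last = __sg_alloc_table_from_pages(&sgt, more_pages,
				more_npages, 0,
				(unsigned long)more_npages << PAGE_SHIFT,
				max_segment, last, 0, GFP_KERNEL);

	if (IS_ERR(last))
		sg_free_table(&sgt);	/* required on failure, see patch 1 */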

On a system using 2MB huge pages, without this change the SG table would
contain 512x more SG entries (one per 4KB page instead of one per 2MB
huge page).
E.g. for 100GB memory registration:

             Number of entries      Size
    Before        26214400          600.0MB
    After            51200            1.2MB

Thanks

Maor Gottlieb (2):
  lib/scatterlist: Add support in dynamic allocation of SG table from
    pages
  RDMA/umem: Move to allocate SG table from pages

 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  12 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c  |  15 +-
 drivers/infiniband/core/umem.c              |  92 ++----------
 include/linux/scatterlist.h                 |  43 +++---
 lib/scatterlist.c                           | 158 +++++++++++++++-----
 lib/sg_pool.c                               |   3 +-
 tools/testing/scatterlist/main.c            |   9 +-
 7 files changed, 175 insertions(+), 157 deletions(-)

--
2.26.2


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-22  8:39 ` Leon Romanovsky
@ 2020-09-22  8:39   ` Leon Romanovsky
  -1 siblings, 0 replies; 43+ messages in thread
From: Leon Romanovsky @ 2020-09-22  8:39 UTC (permalink / raw)
  To: Christoph Hellwig, Doug Ledford, Jason Gunthorpe
  Cc: Maor Gottlieb, linux-rdma, Daniel Vetter, David Airlie,
	dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen,
	Maor Gottlieb, Rodrigo Vivi, Roland Scheidegger, VMware Graphics

From: Maor Gottlieb <maorg@mellanox.com>

Extend __sg_alloc_table_from_pages to support dynamic allocation of
SG table from pages. It should be used by drivers that can't supply
all the pages at one time.

This function returns the last populated SGE in the table. Users should
pass it as an argument to the function on subsequent calls.
As before, nents will be equal to the number of populated SGEs (chunks).

With this new extension, drivers can benefit from the optimization of merging
contiguous pages without a need to allocate all pages in advance and
hold them in a large buffer.

E.g. the Infiniband driver allocates a single page to hold the page
pointers. For a 1TB memory registration, the temporary buffer would then
consume only 4KB instead of 2GB.
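
As an illustration only (this sketch is not part of the patch; the
pinning flags, batch size and variable names are assumptions, and
unwinding of already-pinned pages on error is omitted), a driver that
builds the table while pinning user pages in small batches would look
roughly like:

	struct sg_table sgt = {};
	struct scatterlist *sg = NULL;
	unsigned long done = 0;
	int pinned;

	while (done < npages) {
		/* refill a single-page buffer of page pointers */
		pinned = pin_user_pages_fast(addr + done * PAGE_SIZE,
				min_t(unsigned long, npages - done,
				      PAGE_SIZE / sizeof(struct page *)),
				FOLL_WRITE, page_list);
		if (pinned <= 0) {
			sg_free_table(&sgt);
			return pinned ? pinned : -EFAULT;
		}

		sg = __sg_alloc_table_from_pages(&sgt, page_list, pinned, 0,
				(unsigned long)pinned << PAGE_SHIFT,
				max_segment, sg, npages - done - pinned,
				GFP_KERNEL);
		if (IS_ERR(sg)) {
			sg_free_table(&sgt);	/* mandated by the kernel-doc notes */
			return PTR_ERR(sg);
		}
		done += pinned;
	}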

Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  12 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c  |  15 +-
 include/linux/scatterlist.h                 |  43 +++---
 lib/scatterlist.c                           | 158 +++++++++++++++-----
 lib/sg_pool.c                               |   3 +-
 tools/testing/scatterlist/main.c            |   9 +-
 6 files changed, 163 insertions(+), 77 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 12b30075134a..f2eaed6aca3d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -403,6 +403,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 	unsigned int max_segment = i915_sg_segment_size();
 	struct sg_table *st;
 	unsigned int sg_page_sizes;
+	struct scatterlist *sg;
 	int ret;

 	st = kmalloc(sizeof(*st), GFP_KERNEL);
@@ -410,13 +411,12 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 		return ERR_PTR(-ENOMEM);

 alloc_table:
-	ret = __sg_alloc_table_from_pages(st, pvec, num_pages,
-					  0, num_pages << PAGE_SHIFT,
-					  max_segment,
-					  GFP_KERNEL);
-	if (ret) {
+	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
+					 num_pages << PAGE_SHIFT, max_segment,
+					 NULL, 0, GFP_KERNEL);
+	if (IS_ERR(sg)) {
 		kfree(st);
-		return ERR_PTR(ret);
+		return ERR_CAST(sg);
 	}

 	ret = i915_gem_gtt_prepare_pages(obj, st);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
index ab524ab3b0b4..f22acd398b1f 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
@@ -419,6 +419,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
 	int ret = 0;
 	static size_t sgl_size;
 	static size_t sgt_size;
+	struct scatterlist *sg;

 	if (vmw_tt->mapped)
 		return 0;
@@ -441,13 +442,15 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
 		if (unlikely(ret != 0))
 			return ret;

-		ret = __sg_alloc_table_from_pages
-			(&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0,
-			 (unsigned long) vsgt->num_pages << PAGE_SHIFT,
-			 dma_get_max_seg_size(dev_priv->dev->dev),
-			 GFP_KERNEL);
-		if (unlikely(ret != 0))
+		sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages,
+				vsgt->num_pages, 0,
+				(unsigned long) vsgt->num_pages << PAGE_SHIFT,
+				dma_get_max_seg_size(dev_priv->dev->dev),
+				NULL, 0, GFP_KERNEL);
+		if (IS_ERR(sg)) {
+			ret = PTR_ERR(sg);
 			goto out_sg_alloc_fail;
+		}

 		if (vsgt->num_pages > vmw_tt->sgt.nents) {
 			uint64_t over_alloc =
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 45cf7b69d852..c24cc667b56b 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -165,6 +165,22 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 #define for_each_sgtable_dma_sg(sgt, sg, i)	\
 	for_each_sg((sgt)->sgl, sg, (sgt)->nents, i)

+static inline void __sg_chain(struct scatterlist *chain_sg,
+			      struct scatterlist *sgl)
+{
+	/*
+	 * offset and length are unused for chain entry. Clear them.
+	 */
+	chain_sg->offset = 0;
+	chain_sg->length = 0;
+
+	/*
+	 * Set lowest bit to indicate a link pointer, and make sure to clear
+	 * the termination bit if it happens to be set.
+	 */
+	chain_sg->page_link = ((unsigned long) sgl | SG_CHAIN) & ~SG_END;
+}
+
 /**
  * sg_chain - Chain two sglists together
  * @prv:	First scatterlist
@@ -178,18 +194,7 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
 			    struct scatterlist *sgl)
 {
-	/*
-	 * offset and length are unused for chain entry.  Clear them.
-	 */
-	prv[prv_nents - 1].offset = 0;
-	prv[prv_nents - 1].length = 0;
-
-	/*
-	 * Set lowest bit to indicate a link pointer, and make sure to clear
-	 * the termination bit if it happens to be set.
-	 */
-	prv[prv_nents - 1].page_link = ((unsigned long) sgl | SG_CHAIN)
-					& ~SG_END;
+	__sg_chain(&prv[prv_nents - 1], sgl);
 }

 /**
@@ -283,13 +288,15 @@ typedef void (sg_free_fn)(struct scatterlist *, unsigned int);
 void __sg_free_table(struct sg_table *, unsigned int, unsigned int,
 		     sg_free_fn *);
 void sg_free_table(struct sg_table *);
-int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int,
-		     struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *);
+int __sg_alloc_table(struct sg_table *, struct scatterlist *, unsigned int,
+		unsigned int, struct scatterlist *, unsigned int,
+		gfp_t, sg_alloc_fn *);
 int sg_alloc_table(struct sg_table *, unsigned int, gfp_t);
-int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
-				unsigned int n_pages, unsigned int offset,
-				unsigned long size, unsigned int max_segment,
-				gfp_t gfp_mask);
+struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
+		struct page **pages, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment,
+		struct scatterlist *prv, unsigned int left_pages,
+		gfp_t gfp_mask);
 int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			      unsigned int n_pages, unsigned int offset,
 			      unsigned long size, gfp_t gfp_mask);
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 5d63a8857f36..91587560497d 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -245,6 +245,7 @@ EXPORT_SYMBOL(sg_free_table);
 /**
  * __sg_alloc_table - Allocate and initialize an sg table with given allocator
  * @table:	The sg table header to use
+ * @prv:	Last populated sge in sgt
  * @nents:	Number of entries in sg list
  * @max_ents:	The maximum number of entries the allocator returns per call
  * @nents_first_chunk: Number of entries int the (preallocated) first
@@ -263,17 +264,15 @@ EXPORT_SYMBOL(sg_free_table);
  *   __sg_free_table() to cleanup any leftover allocations.
  *
  **/
-int __sg_alloc_table(struct sg_table *table, unsigned int nents,
-		     unsigned int max_ents, struct scatterlist *first_chunk,
-		     unsigned int nents_first_chunk, gfp_t gfp_mask,
-		     sg_alloc_fn *alloc_fn)
+int __sg_alloc_table(struct sg_table *table, struct scatterlist *prv,
+		unsigned int nents, unsigned int max_ents,
+		struct scatterlist *first_chunk,
+		unsigned int nents_first_chunk, gfp_t gfp_mask,
+		sg_alloc_fn *alloc_fn)
 {
-	struct scatterlist *sg, *prv;
-	unsigned int left;
-	unsigned curr_max_ents = nents_first_chunk ?: max_ents;
-	unsigned prv_max_ents;
-
-	memset(table, 0, sizeof(*table));
+	unsigned int curr_max_ents = nents_first_chunk ?: max_ents;
+	unsigned int left, prv_max_ents = 0;
+	struct scatterlist *sg;

 	if (nents == 0)
 		return -EINVAL;
@@ -283,7 +282,6 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 #endif

 	left = nents;
-	prv = NULL;
 	do {
 		unsigned int sg_size, alloc_size = left;

@@ -308,7 +306,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 			 * linkage.  Without this, sg_kfree() may get
 			 * confused.
 			 */
-			if (prv)
+			if (prv_max_ents)
 				table->nents = ++table->orig_nents;

 			return -ENOMEM;
@@ -321,10 +319,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		 * If this is the first mapping, assign the sg table header.
 		 * If this is not the first mapping, chain previous part.
 		 */
-		if (prv)
-			sg_chain(prv, prv_max_ents, sg);
-		else
+		if (!prv)
 			table->sgl = sg;
+		else if (prv_max_ents)
+			sg_chain(prv, prv_max_ents, sg);
+		else {
+			__sg_chain(prv, sg);
+			/*
+			 * We decrease by one since the previous last SGE is
+			 * used to chain the chunks together.
+			 */
+			table->nents = table->orig_nents -= 1;
+		}

 		/*
 		 * If no more entries after this one, mark the end
@@ -356,7 +362,8 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 {
 	int ret;

-	ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC,
+	memset(table, 0, sizeof(*table));
+	ret = __sg_alloc_table(table, NULL, nents, SG_MAX_SINGLE_ALLOC,
 			       NULL, 0, gfp_mask, sg_kmalloc);
 	if (unlikely(ret))
 		__sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree);
@@ -365,6 +372,30 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(sg_alloc_table);

+static struct scatterlist *get_next_sg(struct sg_table *table,
+		struct scatterlist *prv, unsigned long left_npages,
+		gfp_t gfp_mask)
+{
+	struct scatterlist *next_sg;
+	int ret;
+
+	/* If table was just allocated */
+	if (!prv)
+		return table->sgl;
+
+	/* Check if the last entry should be kept for chaining */
+	next_sg = sg_next(prv);
+	if (!sg_is_last(next_sg) || left_npages == 1)
+		return next_sg;
+
+	ret = __sg_alloc_table(table, next_sg,
+			min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC),
+			SG_MAX_SINGLE_ALLOC, NULL, 0, gfp_mask, sg_kmalloc);
+	if (ret)
+		return ERR_PTR(ret);
+	return sg_next(prv);
+}
+
 /**
  * __sg_alloc_table_from_pages - Allocate and initialize an sg table from
  *			         an array of pages
@@ -374,29 +405,47 @@ EXPORT_SYMBOL(sg_alloc_table);
  * @offset:      Offset from start of the first page to the start of a buffer
  * @size:        Number of valid bytes in the buffer (after offset)
  * @max_segment: Maximum size of a scatterlist node in bytes (page aligned)
+ * @prv:	 Last populated sge in sgt
+ * @left_pages:  Number of pages the caller still has to add after this call
  * @gfp_mask:	 GFP allocation mask
  *
- *  Description:
- *    Allocate and initialize an sg table from a list of pages. Contiguous
- *    ranges of the pages are squashed into a single scatterlist node up to the
- *    maximum size specified in @max_segment. An user may provide an offset at a
- *    start and a size of valid data in a buffer specified by the page array.
- *    The returned sg table is released by sg_free_table.
+ * Description:
+ *    If @prv is NULL, allocate and initialize an sg table from a list of pages,
+ *    else reuse the scatterlist passed in at @prv.
+ *    Contiguous ranges of the pages are squashed into a single scatterlist
+ *    entry up to the maximum size specified in @max_segment.  A user may
+ *    provide an offset at a start and a size of valid data in a buffer
+ *    specified by the page array.
  *
  * Returns:
- *   0 on success, negative error on failure
+ *   Last SGE in sgt on success, PTR_ERR() otherwise.
+ *   The allocation in @sgt must be released by sg_free_table.
+ *
+ * Notes:
+ *   If this function returns an error, the caller must call
+ *   sg_free_table() to clean up any leftover allocations.
  */
-int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
-				unsigned int n_pages, unsigned int offset,
-				unsigned long size, unsigned int max_segment,
-				gfp_t gfp_mask)
+struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
+		struct page **pages, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment,
+		struct scatterlist *prv, unsigned int left_pages,
+		gfp_t gfp_mask)
 {
-	unsigned int chunks, cur_page, seg_len, i;
+	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
+	unsigned int tmp_nents = sgt->nents;
+	struct scatterlist *s = prv;
+	unsigned int table_size;
 	int ret;
-	struct scatterlist *s;

 	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
-		return -EINVAL;
+		return ERR_PTR(-EINVAL);
+	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	if (prv &&
+	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==
+	    page_to_pfn(pages[0]))
+		prv_len = prv->length;

 	/* compute number of contiguous chunks */
 	chunks = 1;
@@ -410,13 +459,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 		}
 	}

-	ret = sg_alloc_table(sgt, chunks, gfp_mask);
-	if (unlikely(ret))
-		return ret;
+	if (!prv) {
+		/* Only the last allocation could be less than the maximum */
+		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
+		ret = sg_alloc_table(sgt, table_size, gfp_mask);
+		if (unlikely(ret))
+			return ERR_PTR(ret);
+	}

 	/* merging chunks and putting them into the scatterlist */
 	cur_page = 0;
-	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
+	for (i = 0; i < chunks; i++) {
 		unsigned int j, chunk_size;

 		/* look for the end of the current chunk */
@@ -425,19 +478,41 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			seg_len += PAGE_SIZE;
 			if (seg_len >= max_segment ||
 			    page_to_pfn(pages[j]) !=
-			    page_to_pfn(pages[j - 1]) + 1)
+				    page_to_pfn(pages[j - 1]) + 1)
 				break;
 		}

 		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
-		sg_set_page(s, pages[cur_page],
-			    min_t(unsigned long, size, chunk_size), offset);
+		chunk_size = min_t(unsigned long, size, chunk_size);
+		if (!i && prv_len) {
+			if (max_segment - prv->length >= chunk_size) {
+				sg_set_page(s, sg_page(s),
+					    s->length + chunk_size, s->offset);
+				goto next;
+			}
+		}
+
+		/* Pass how many chunks might be left */
+		s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask);
+		if (IS_ERR(s)) {
+			/*
+			 * Adjust the entry length back to what it was before
+			 * this function was called.
+			 */
+			if (prv_len)
+				prv->length = prv_len;
+			goto out;
+		}
+		sg_set_page(s, pages[cur_page], chunk_size, offset);
+		tmp_nents++;
+next:
 		size -= chunk_size;
 		offset = 0;
 		cur_page = j;
 	}
-
-	return 0;
+	sgt->nents = tmp_nents;
+out:
+	return s;
 }
 EXPORT_SYMBOL(__sg_alloc_table_from_pages);

@@ -465,8 +540,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			      unsigned int n_pages, unsigned int offset,
 			      unsigned long size, gfp_t gfp_mask)
 {
-	return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size,
-					   SCATTERLIST_MAX_SEGMENT, gfp_mask);
+	return PTR_ERR_OR_ZERO(__sg_alloc_table_from_pages(sgt, pages, n_pages,
+			offset, size, SCATTERLIST_MAX_SEGMENT, NULL, 0,
+			gfp_mask));
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages);

diff --git a/lib/sg_pool.c b/lib/sg_pool.c
index db29e5c1f790..c449248bf5d5 100644
--- a/lib/sg_pool.c
+++ b/lib/sg_pool.c
@@ -129,7 +129,8 @@ int sg_alloc_table_chained(struct sg_table *table, int nents,
 		nents_first_chunk = 0;
 	}

-	ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE,
+	memset(table, 0, sizeof(*table));
+	ret = __sg_alloc_table(table, NULL, nents, SG_CHUNK_SIZE,
 			       first_chunk, nents_first_chunk,
 			       GFP_ATOMIC, sg_pool_alloc);
 	if (unlikely(ret))
diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
index 0a1464181226..4899359a31ac 100644
--- a/tools/testing/scatterlist/main.c
+++ b/tools/testing/scatterlist/main.c
@@ -55,14 +55,13 @@ int main(void)
 	for (i = 0, test = tests; test->expected_segments; test++, i++) {
 		struct page *pages[MAX_PAGES];
 		struct sg_table st;
-		int ret;
+		struct scatterlist *sg;

 		set_pages(pages, test->pfn, test->num_pages);

-		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
-						  0, test->size, test->max_seg,
-						  GFP_KERNEL);
-		assert(ret == test->alloc_ret);
+		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
+				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
+		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);

 		if (test->alloc_ret)
 			continue;
--
2.26.2


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
@ 2020-09-22  8:39   ` Leon Romanovsky
  0 siblings, 0 replies; 43+ messages in thread
From: Leon Romanovsky @ 2020-09-22  8:39 UTC (permalink / raw)
  To: Christoph Hellwig, Doug Ledford, Jason Gunthorpe
  Cc: linux-rdma, intel-gfx, Roland Scheidegger, dri-devel,
	David Airlie, VMware Graphics, Rodrigo Vivi, Maor Gottlieb,
	Maor Gottlieb

From: Maor Gottlieb <maorg@mellanox.com>

Extend __sg_alloc_table_from_pages to support dynamic allocation of
SG table from pages. It should be used by drivers that can't supply
all the pages at one time.

This function returns the last populated SGE in the table. Users should
pass it as an argument to the function from the second call and forward.
As before, nents will be equal to the number of populated SGEs (chunks).

With this new extension, drivers can benefit the optimization of merging
contiguous pages without a need to allocate all pages in advance and
hold them in a large buffer.

E.g. with the Infiniband driver that allocates a single page for hold
the
pages. For 1TB memory registration, the temporary buffer would consume
only
4KB, instead of 2GB.

Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  12 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c  |  15 +-
 include/linux/scatterlist.h                 |  43 +++---
 lib/scatterlist.c                           | 158 +++++++++++++++-----
 lib/sg_pool.c                               |   3 +-
 tools/testing/scatterlist/main.c            |   9 +-
 6 files changed, 163 insertions(+), 77 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 12b30075134a..f2eaed6aca3d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -403,6 +403,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 	unsigned int max_segment = i915_sg_segment_size();
 	struct sg_table *st;
 	unsigned int sg_page_sizes;
+	struct scatterlist *sg;
 	int ret;

 	st = kmalloc(sizeof(*st), GFP_KERNEL);
@@ -410,13 +411,12 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 		return ERR_PTR(-ENOMEM);

 alloc_table:
-	ret = __sg_alloc_table_from_pages(st, pvec, num_pages,
-					  0, num_pages << PAGE_SHIFT,
-					  max_segment,
-					  GFP_KERNEL);
-	if (ret) {
+	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
+					 num_pages << PAGE_SHIFT, max_segment,
+					 NULL, 0, GFP_KERNEL);
+	if (IS_ERR(sg)) {
 		kfree(st);
-		return ERR_PTR(ret);
+		return ERR_CAST(sg);
 	}

 	ret = i915_gem_gtt_prepare_pages(obj, st);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
index ab524ab3b0b4..f22acd398b1f 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
@@ -419,6 +419,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
 	int ret = 0;
 	static size_t sgl_size;
 	static size_t sgt_size;
+	struct scatterlist *sg;

 	if (vmw_tt->mapped)
 		return 0;
@@ -441,13 +442,15 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
 		if (unlikely(ret != 0))
 			return ret;

-		ret = __sg_alloc_table_from_pages
-			(&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0,
-			 (unsigned long) vsgt->num_pages << PAGE_SHIFT,
-			 dma_get_max_seg_size(dev_priv->dev->dev),
-			 GFP_KERNEL);
-		if (unlikely(ret != 0))
+		sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages,
+				vsgt->num_pages, 0,
+				(unsigned long) vsgt->num_pages << PAGE_SHIFT,
+				dma_get_max_seg_size(dev_priv->dev->dev),
+				NULL, 0, GFP_KERNEL);
+		if (IS_ERR(sg)) {
+			ret = PTR_ERR(sg);
 			goto out_sg_alloc_fail;
+		}

 		if (vsgt->num_pages > vmw_tt->sgt.nents) {
 			uint64_t over_alloc =
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 45cf7b69d852..c24cc667b56b 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -165,6 +165,22 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 #define for_each_sgtable_dma_sg(sgt, sg, i)	\
 	for_each_sg((sgt)->sgl, sg, (sgt)->nents, i)

+static inline void __sg_chain(struct scatterlist *chain_sg,
+			      struct scatterlist *sgl)
+{
+	/*
+	 * offset and length are unused for chain entry. Clear them.
+	 */
+	chain_sg->offset = 0;
+	chain_sg->length = 0;
+
+	/*
+	 * Set lowest bit to indicate a link pointer, and make sure to clear
+	 * the termination bit if it happens to be set.
+	 */
+	chain_sg->page_link = ((unsigned long) sgl | SG_CHAIN) & ~SG_END;
+}
+
 /**
  * sg_chain - Chain two sglists together
  * @prv:	First scatterlist
@@ -178,18 +194,7 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
 			    struct scatterlist *sgl)
 {
-	/*
-	 * offset and length are unused for chain entry.  Clear them.
-	 */
-	prv[prv_nents - 1].offset = 0;
-	prv[prv_nents - 1].length = 0;
-
-	/*
-	 * Set lowest bit to indicate a link pointer, and make sure to clear
-	 * the termination bit if it happens to be set.
-	 */
-	prv[prv_nents - 1].page_link = ((unsigned long) sgl | SG_CHAIN)
-					& ~SG_END;
+	__sg_chain(&prv[prv_nents - 1], sgl);
 }

 /**
@@ -283,13 +288,15 @@ typedef void (sg_free_fn)(struct scatterlist *, unsigned int);
 void __sg_free_table(struct sg_table *, unsigned int, unsigned int,
 		     sg_free_fn *);
 void sg_free_table(struct sg_table *);
-int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int,
-		     struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *);
+int __sg_alloc_table(struct sg_table *, struct scatterlist *, unsigned int,
+		unsigned int, struct scatterlist *, unsigned int,
+		gfp_t, sg_alloc_fn *);
 int sg_alloc_table(struct sg_table *, unsigned int, gfp_t);
-int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
-				unsigned int n_pages, unsigned int offset,
-				unsigned long size, unsigned int max_segment,
-				gfp_t gfp_mask);
+struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
+		struct page **pages, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment,
+		struct scatterlist *prv, unsigned int left_pages,
+		gfp_t gfp_mask);
 int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			      unsigned int n_pages, unsigned int offset,
 			      unsigned long size, gfp_t gfp_mask);
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 5d63a8857f36..91587560497d 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -245,6 +245,7 @@ EXPORT_SYMBOL(sg_free_table);
 /**
  * __sg_alloc_table - Allocate and initialize an sg table with given allocator
  * @table:	The sg table header to use
+ * @prv:	Last populated sge in sgt
  * @nents:	Number of entries in sg list
  * @max_ents:	The maximum number of entries the allocator returns per call
  * @nents_first_chunk: Number of entries int the (preallocated) first
@@ -263,17 +264,15 @@ EXPORT_SYMBOL(sg_free_table);
  *   __sg_free_table() to cleanup any leftover allocations.
  *
  **/
-int __sg_alloc_table(struct sg_table *table, unsigned int nents,
-		     unsigned int max_ents, struct scatterlist *first_chunk,
-		     unsigned int nents_first_chunk, gfp_t gfp_mask,
-		     sg_alloc_fn *alloc_fn)
+int __sg_alloc_table(struct sg_table *table, struct scatterlist *prv,
+		unsigned int nents, unsigned int max_ents,
+		struct scatterlist *first_chunk,
+		unsigned int nents_first_chunk, gfp_t gfp_mask,
+		sg_alloc_fn *alloc_fn)
 {
-	struct scatterlist *sg, *prv;
-	unsigned int left;
-	unsigned curr_max_ents = nents_first_chunk ?: max_ents;
-	unsigned prv_max_ents;
-
-	memset(table, 0, sizeof(*table));
+	unsigned int curr_max_ents = nents_first_chunk ?: max_ents;
+	unsigned int left, prv_max_ents = 0;
+	struct scatterlist *sg;

 	if (nents == 0)
 		return -EINVAL;
@@ -283,7 +282,6 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 #endif

 	left = nents;
-	prv = NULL;
 	do {
 		unsigned int sg_size, alloc_size = left;

@@ -308,7 +306,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 			 * linkage.  Without this, sg_kfree() may get
 			 * confused.
 			 */
-			if (prv)
+			if (prv_max_ents)
 				table->nents = ++table->orig_nents;

 			return -ENOMEM;
@@ -321,10 +319,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		 * If this is the first mapping, assign the sg table header.
 		 * If this is not the first mapping, chain previous part.
 		 */
-		if (prv)
-			sg_chain(prv, prv_max_ents, sg);
-		else
+		if (!prv)
 			table->sgl = sg;
+		else if (prv_max_ents)
+			sg_chain(prv, prv_max_ents, sg);
+		else {
+			__sg_chain(prv, sg);
+			/*
+			 * We decrease one since the prvious last sge in used to
+			 * chain the chunks together.
+			 */
+			table->nents = table->orig_nents -= 1;
+		}

 		/*
 		 * If no more entries after this one, mark the end
@@ -356,7 +362,8 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 {
 	int ret;

-	ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC,
+	memset(table, 0, sizeof(*table));
+	ret = __sg_alloc_table(table, NULL, nents, SG_MAX_SINGLE_ALLOC,
 			       NULL, 0, gfp_mask, sg_kmalloc);
 	if (unlikely(ret))
 		__sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree);
@@ -365,6 +372,30 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(sg_alloc_table);

+static struct scatterlist *get_next_sg(struct sg_table *table,
+		struct scatterlist *prv, unsigned long left_npages,
+		gfp_t gfp_mask)
+{
+	struct scatterlist *next_sg;
+	int ret;
+
+	/* If table was just allocated */
+	if (!prv)
+		return table->sgl;
+
+	/* Check if last entry should be keeped for chainning */
+	next_sg = sg_next(prv);
+	if (!sg_is_last(next_sg) || left_npages == 1)
+		return next_sg;
+
+	ret = __sg_alloc_table(table, next_sg,
+			min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC),
+			SG_MAX_SINGLE_ALLOC, NULL, 0, gfp_mask, sg_kmalloc);
+	if (ret)
+		return ERR_PTR(ret);
+	return sg_next(prv);
+}
+
 /**
  * __sg_alloc_table_from_pages - Allocate and initialize an sg table from
  *			         an array of pages
@@ -374,29 +405,47 @@ EXPORT_SYMBOL(sg_alloc_table);
  * @offset:      Offset from start of the first page to the start of a buffer
  * @size:        Number of valid bytes in the buffer (after offset)
  * @max_segment: Maximum size of a scatterlist node in bytes (page aligned)
+ * @prv:	 Last populated sge in sgt
+ * @left_pages:  Left pages caller have to set after this call
  * @gfp_mask:	 GFP allocation mask
  *
- *  Description:
- *    Allocate and initialize an sg table from a list of pages. Contiguous
- *    ranges of the pages are squashed into a single scatterlist node up to the
- *    maximum size specified in @max_segment. An user may provide an offset at a
- *    start and a size of valid data in a buffer specified by the page array.
- *    The returned sg table is released by sg_free_table.
+ * Description:
+ *    If @prv is NULL, allocate and initialize an sg table from a list of pages,
+ *    else reuse the scatterlist passed in at @prv.
+ *    Contiguous ranges of the pages are squashed into a single scatterlist
+ *    entry up to the maximum size specified in @max_segment.  A user may
+ *    provide an offset at a start and a size of valid data in a buffer
+ *    specified by the page array.
  *
  * Returns:
- *   0 on success, negative error on failure
+ *   Last SGE in sgt on success, PTR_ERR on otherwise.
+ *   The allocation in @sgt must be released by sg_free_table.
+ *
+ * Notes:
+ *   If this function returns non-0 (eg failure), the caller must call
+ *   sg_free_table() to cleanup any leftover allocations.
  */
-int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
-				unsigned int n_pages, unsigned int offset,
-				unsigned long size, unsigned int max_segment,
-				gfp_t gfp_mask)
+struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
+		struct page **pages, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment,
+		struct scatterlist *prv, unsigned int left_pages,
+		gfp_t gfp_mask)
 {
-	unsigned int chunks, cur_page, seg_len, i;
+	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
+	unsigned int tmp_nents = sgt->nents;
+	struct scatterlist *s = prv;
+	unsigned int table_size;
 	int ret;
-	struct scatterlist *s;

 	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
-		return -EINVAL;
+		return ERR_PTR(-EINVAL);
+	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	if (prv &&
+	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==
+	    page_to_pfn(pages[0]))
+		prv_len = prv->length;

 	/* compute number of contiguous chunks */
 	chunks = 1;
@@ -410,13 +459,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 		}
 	}

-	ret = sg_alloc_table(sgt, chunks, gfp_mask);
-	if (unlikely(ret))
-		return ret;
+	if (!prv) {
+		/* Only the last allocation could be less than the maximum */
+		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
+		ret = sg_alloc_table(sgt, table_size, gfp_mask);
+		if (unlikely(ret))
+			return ERR_PTR(ret);
+	}

 	/* merging chunks and putting them into the scatterlist */
 	cur_page = 0;
-	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
+	for (i = 0; i < chunks; i++) {
 		unsigned int j, chunk_size;

 		/* look for the end of the current chunk */
@@ -425,19 +478,41 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			seg_len += PAGE_SIZE;
 			if (seg_len >= max_segment ||
 			    page_to_pfn(pages[j]) !=
-			    page_to_pfn(pages[j - 1]) + 1)
+				    page_to_pfn(pages[j - 1]) + 1)
 				break;
 		}

 		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
-		sg_set_page(s, pages[cur_page],
-			    min_t(unsigned long, size, chunk_size), offset);
+		chunk_size = min_t(unsigned long, size, chunk_size);
+		if (!i && prv_len) {
+			if (max_segment - prv->length >= chunk_size) {
+				sg_set_page(s, sg_page(s),
+					    s->length + chunk_size, s->offset);
+				goto next;
+			}
+		}
+
+		/* Pass how many chunks might left */
+		s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask);
+		if (IS_ERR(s)) {
+			/*
+			 * Adjust entry length to be as before function was
+			 * called.
+			 */
+			if (prv_len)
+				prv->length = prv_len;
+			goto out;
+		}
+		sg_set_page(s, pages[cur_page], chunk_size, offset);
+		tmp_nents++;
+next:
 		size -= chunk_size;
 		offset = 0;
 		cur_page = j;
 	}
-
-	return 0;
+	sgt->nents = tmp_nents;
+out:
+	return s;
 }
 EXPORT_SYMBOL(__sg_alloc_table_from_pages);

@@ -465,8 +540,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			      unsigned int n_pages, unsigned int offset,
 			      unsigned long size, gfp_t gfp_mask)
 {
-	return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size,
-					   SCATTERLIST_MAX_SEGMENT, gfp_mask);
+	return PTR_ERR_OR_ZERO(__sg_alloc_table_from_pages(sgt, pages, n_pages,
+			offset, size, SCATTERLIST_MAX_SEGMENT, NULL, 0,
+			gfp_mask));
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages);

diff --git a/lib/sg_pool.c b/lib/sg_pool.c
index db29e5c1f790..c449248bf5d5 100644
--- a/lib/sg_pool.c
+++ b/lib/sg_pool.c
@@ -129,7 +129,8 @@ int sg_alloc_table_chained(struct sg_table *table, int nents,
 		nents_first_chunk = 0;
 	}

-	ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE,
+	memset(table, 0, sizeof(*table));
+	ret = __sg_alloc_table(table, NULL, nents, SG_CHUNK_SIZE,
 			       first_chunk, nents_first_chunk,
 			       GFP_ATOMIC, sg_pool_alloc);
 	if (unlikely(ret))
diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
index 0a1464181226..4899359a31ac 100644
--- a/tools/testing/scatterlist/main.c
+++ b/tools/testing/scatterlist/main.c
@@ -55,14 +55,13 @@ int main(void)
 	for (i = 0, test = tests; test->expected_segments; test++, i++) {
 		struct page *pages[MAX_PAGES];
 		struct sg_table st;
-		int ret;
+		struct scatterlist *sg;

 		set_pages(pages, test->pfn, test->num_pages);

-		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
-						  0, test->size, test->max_seg,
-						  GFP_KERNEL);
-		assert(ret == test->alloc_ret);
+		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
+				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
+		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);

 		if (test->alloc_ret)
 			continue;
--
2.26.2

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
@ 2020-09-22  8:39   ` Leon Romanovsky
  0 siblings, 0 replies; 43+ messages in thread
From: Leon Romanovsky @ 2020-09-22  8:39 UTC (permalink / raw)
  To: Christoph Hellwig, Doug Ledford, Jason Gunthorpe
  Cc: linux-rdma, intel-gfx, Roland Scheidegger, dri-devel,
	David Airlie, VMware Graphics, Maor Gottlieb, Maor Gottlieb

From: Maor Gottlieb <maorg@mellanox.com>

Extend __sg_alloc_table_from_pages to support dynamic allocation of
SG table from pages. It should be used by drivers that can't supply
all the pages at one time.

This function returns the last populated SGE in the table. Users should
pass it as an argument to the function from the second call and forward.
As before, nents will be equal to the number of populated SGEs (chunks).

With this new extension, drivers can benefit the optimization of merging
contiguous pages without a need to allocate all pages in advance and
hold them in a large buffer.

E.g. with the Infiniband driver that allocates a single page for hold
the
pages. For 1TB memory registration, the temporary buffer would consume
only
4KB, instead of 2GB.

Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  12 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c  |  15 +-
 include/linux/scatterlist.h                 |  43 +++---
 lib/scatterlist.c                           | 158 +++++++++++++++-----
 lib/sg_pool.c                               |   3 +-
 tools/testing/scatterlist/main.c            |   9 +-
 6 files changed, 163 insertions(+), 77 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 12b30075134a..f2eaed6aca3d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -403,6 +403,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 	unsigned int max_segment = i915_sg_segment_size();
 	struct sg_table *st;
 	unsigned int sg_page_sizes;
+	struct scatterlist *sg;
 	int ret;

 	st = kmalloc(sizeof(*st), GFP_KERNEL);
@@ -410,13 +411,12 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
 		return ERR_PTR(-ENOMEM);

 alloc_table:
-	ret = __sg_alloc_table_from_pages(st, pvec, num_pages,
-					  0, num_pages << PAGE_SHIFT,
-					  max_segment,
-					  GFP_KERNEL);
-	if (ret) {
+	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
+					 num_pages << PAGE_SHIFT, max_segment,
+					 NULL, 0, GFP_KERNEL);
+	if (IS_ERR(sg)) {
 		kfree(st);
-		return ERR_PTR(ret);
+		return ERR_CAST(sg);
 	}

 	ret = i915_gem_gtt_prepare_pages(obj, st);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
index ab524ab3b0b4..f22acd398b1f 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
@@ -419,6 +419,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
 	int ret = 0;
 	static size_t sgl_size;
 	static size_t sgt_size;
+	struct scatterlist *sg;

 	if (vmw_tt->mapped)
 		return 0;
@@ -441,13 +442,15 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
 		if (unlikely(ret != 0))
 			return ret;

-		ret = __sg_alloc_table_from_pages
-			(&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0,
-			 (unsigned long) vsgt->num_pages << PAGE_SHIFT,
-			 dma_get_max_seg_size(dev_priv->dev->dev),
-			 GFP_KERNEL);
-		if (unlikely(ret != 0))
+		sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages,
+				vsgt->num_pages, 0,
+				(unsigned long) vsgt->num_pages << PAGE_SHIFT,
+				dma_get_max_seg_size(dev_priv->dev->dev),
+				NULL, 0, GFP_KERNEL);
+		if (IS_ERR(sg)) {
+			ret = PTR_ERR(sg);
 			goto out_sg_alloc_fail;
+		}

 		if (vsgt->num_pages > vmw_tt->sgt.nents) {
 			uint64_t over_alloc =
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 45cf7b69d852..c24cc667b56b 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -165,6 +165,22 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 #define for_each_sgtable_dma_sg(sgt, sg, i)	\
 	for_each_sg((sgt)->sgl, sg, (sgt)->nents, i)

+static inline void __sg_chain(struct scatterlist *chain_sg,
+			      struct scatterlist *sgl)
+{
+	/*
+	 * offset and length are unused for chain entry. Clear them.
+	 */
+	chain_sg->offset = 0;
+	chain_sg->length = 0;
+
+	/*
+	 * Set lowest bit to indicate a link pointer, and make sure to clear
+	 * the termination bit if it happens to be set.
+	 */
+	chain_sg->page_link = ((unsigned long) sgl | SG_CHAIN) & ~SG_END;
+}
+
 /**
  * sg_chain - Chain two sglists together
  * @prv:	First scatterlist
@@ -178,18 +194,7 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
 			    struct scatterlist *sgl)
 {
-	/*
-	 * offset and length are unused for chain entry.  Clear them.
-	 */
-	prv[prv_nents - 1].offset = 0;
-	prv[prv_nents - 1].length = 0;
-
-	/*
-	 * Set lowest bit to indicate a link pointer, and make sure to clear
-	 * the termination bit if it happens to be set.
-	 */
-	prv[prv_nents - 1].page_link = ((unsigned long) sgl | SG_CHAIN)
-					& ~SG_END;
+	__sg_chain(&prv[prv_nents - 1], sgl);
 }

 /**
@@ -283,13 +288,15 @@ typedef void (sg_free_fn)(struct scatterlist *, unsigned int);
 void __sg_free_table(struct sg_table *, unsigned int, unsigned int,
 		     sg_free_fn *);
 void sg_free_table(struct sg_table *);
-int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int,
-		     struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *);
+int __sg_alloc_table(struct sg_table *, struct scatterlist *, unsigned int,
+		unsigned int, struct scatterlist *, unsigned int,
+		gfp_t, sg_alloc_fn *);
 int sg_alloc_table(struct sg_table *, unsigned int, gfp_t);
-int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
-				unsigned int n_pages, unsigned int offset,
-				unsigned long size, unsigned int max_segment,
-				gfp_t gfp_mask);
+struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
+		struct page **pages, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment,
+		struct scatterlist *prv, unsigned int left_pages,
+		gfp_t gfp_mask);
 int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			      unsigned int n_pages, unsigned int offset,
 			      unsigned long size, gfp_t gfp_mask);
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 5d63a8857f36..91587560497d 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -245,6 +245,7 @@ EXPORT_SYMBOL(sg_free_table);
 /**
  * __sg_alloc_table - Allocate and initialize an sg table with given allocator
  * @table:	The sg table header to use
+ * @prv:	Last populated sge in sgt
  * @nents:	Number of entries in sg list
  * @max_ents:	The maximum number of entries the allocator returns per call
  * @nents_first_chunk: Number of entries int the (preallocated) first
@@ -263,17 +264,15 @@ EXPORT_SYMBOL(sg_free_table);
  *   __sg_free_table() to cleanup any leftover allocations.
  *
  **/
-int __sg_alloc_table(struct sg_table *table, unsigned int nents,
-		     unsigned int max_ents, struct scatterlist *first_chunk,
-		     unsigned int nents_first_chunk, gfp_t gfp_mask,
-		     sg_alloc_fn *alloc_fn)
+int __sg_alloc_table(struct sg_table *table, struct scatterlist *prv,
+		unsigned int nents, unsigned int max_ents,
+		struct scatterlist *first_chunk,
+		unsigned int nents_first_chunk, gfp_t gfp_mask,
+		sg_alloc_fn *alloc_fn)
 {
-	struct scatterlist *sg, *prv;
-	unsigned int left;
-	unsigned curr_max_ents = nents_first_chunk ?: max_ents;
-	unsigned prv_max_ents;
-
-	memset(table, 0, sizeof(*table));
+	unsigned int curr_max_ents = nents_first_chunk ?: max_ents;
+	unsigned int left, prv_max_ents = 0;
+	struct scatterlist *sg;

 	if (nents == 0)
 		return -EINVAL;
@@ -283,7 +282,6 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 #endif

 	left = nents;
-	prv = NULL;
 	do {
 		unsigned int sg_size, alloc_size = left;

@@ -308,7 +306,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 			 * linkage.  Without this, sg_kfree() may get
 			 * confused.
 			 */
-			if (prv)
+			if (prv_max_ents)
 				table->nents = ++table->orig_nents;

 			return -ENOMEM;
@@ -321,10 +319,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		 * If this is the first mapping, assign the sg table header.
 		 * If this is not the first mapping, chain previous part.
 		 */
-		if (prv)
-			sg_chain(prv, prv_max_ents, sg);
-		else
+		if (!prv)
 			table->sgl = sg;
+		else if (prv_max_ents)
+			sg_chain(prv, prv_max_ents, sg);
+		else {
+			__sg_chain(prv, sg);
+			/*
+			 * We decrease one since the prvious last sge in used to
+			 * chain the chunks together.
+			 */
+			table->nents = table->orig_nents -= 1;
+		}

 		/*
 		 * If no more entries after this one, mark the end
@@ -356,7 +362,8 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 {
 	int ret;

-	ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC,
+	memset(table, 0, sizeof(*table));
+	ret = __sg_alloc_table(table, NULL, nents, SG_MAX_SINGLE_ALLOC,
 			       NULL, 0, gfp_mask, sg_kmalloc);
 	if (unlikely(ret))
 		__sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree);
@@ -365,6 +372,30 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(sg_alloc_table);

+static struct scatterlist *get_next_sg(struct sg_table *table,
+		struct scatterlist *prv, unsigned long left_npages,
+		gfp_t gfp_mask)
+{
+	struct scatterlist *next_sg;
+	int ret;
+
+	/* If table was just allocated */
+	if (!prv)
+		return table->sgl;
+
+	/* Check if last entry should be keeped for chainning */
+	next_sg = sg_next(prv);
+	if (!sg_is_last(next_sg) || left_npages == 1)
+		return next_sg;
+
+	ret = __sg_alloc_table(table, next_sg,
+			min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC),
+			SG_MAX_SINGLE_ALLOC, NULL, 0, gfp_mask, sg_kmalloc);
+	if (ret)
+		return ERR_PTR(ret);
+	return sg_next(prv);
+}
+
 /**
  * __sg_alloc_table_from_pages - Allocate and initialize an sg table from
  *			         an array of pages
@@ -374,29 +405,47 @@ EXPORT_SYMBOL(sg_alloc_table);
  * @offset:      Offset from start of the first page to the start of a buffer
  * @size:        Number of valid bytes in the buffer (after offset)
  * @max_segment: Maximum size of a scatterlist node in bytes (page aligned)
+ * @prv:	 Last populated sge in sgt
+ * @left_pages:  Left pages caller have to set after this call
  * @gfp_mask:	 GFP allocation mask
  *
- *  Description:
- *    Allocate and initialize an sg table from a list of pages. Contiguous
- *    ranges of the pages are squashed into a single scatterlist node up to the
- *    maximum size specified in @max_segment. An user may provide an offset at a
- *    start and a size of valid data in a buffer specified by the page array.
- *    The returned sg table is released by sg_free_table.
+ * Description:
+ *    If @prv is NULL, allocate and initialize an sg table from a list of pages,
+ *    else reuse the scatterlist passed in at @prv.
+ *    Contiguous ranges of the pages are squashed into a single scatterlist
+ *    entry up to the maximum size specified in @max_segment.  A user may
+ *    provide an offset at a start and a size of valid data in a buffer
+ *    specified by the page array.
  *
  * Returns:
- *   0 on success, negative error on failure
+ *   Last SGE in sgt on success, PTR_ERR on otherwise.
+ *   The allocation in @sgt must be released by sg_free_table.
+ *
+ * Notes:
+ *   If this function returns non-0 (eg failure), the caller must call
+ *   sg_free_table() to cleanup any leftover allocations.
  */
-int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
-				unsigned int n_pages, unsigned int offset,
-				unsigned long size, unsigned int max_segment,
-				gfp_t gfp_mask)
+struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
+		struct page **pages, unsigned int n_pages, unsigned int offset,
+		unsigned long size, unsigned int max_segment,
+		struct scatterlist *prv, unsigned int left_pages,
+		gfp_t gfp_mask)
 {
-	unsigned int chunks, cur_page, seg_len, i;
+	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
+	unsigned int tmp_nents = sgt->nents;
+	struct scatterlist *s = prv;
+	unsigned int table_size;
 	int ret;
-	struct scatterlist *s;

 	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
-		return -EINVAL;
+		return ERR_PTR(-EINVAL);
+	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	if (prv &&
+	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==
+	    page_to_pfn(pages[0]))
+		prv_len = prv->length;

 	/* compute number of contiguous chunks */
 	chunks = 1;
@@ -410,13 +459,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 		}
 	}

-	ret = sg_alloc_table(sgt, chunks, gfp_mask);
-	if (unlikely(ret))
-		return ret;
+	if (!prv) {
+		/* Only the last allocation could be less than the maximum */
+		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
+		ret = sg_alloc_table(sgt, table_size, gfp_mask);
+		if (unlikely(ret))
+			return ERR_PTR(ret);
+	}

 	/* merging chunks and putting them into the scatterlist */
 	cur_page = 0;
-	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
+	for (i = 0; i < chunks; i++) {
 		unsigned int j, chunk_size;

 		/* look for the end of the current chunk */
@@ -425,19 +478,41 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			seg_len += PAGE_SIZE;
 			if (seg_len >= max_segment ||
 			    page_to_pfn(pages[j]) !=
-			    page_to_pfn(pages[j - 1]) + 1)
+				    page_to_pfn(pages[j - 1]) + 1)
 				break;
 		}

 		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
-		sg_set_page(s, pages[cur_page],
-			    min_t(unsigned long, size, chunk_size), offset);
+		chunk_size = min_t(unsigned long, size, chunk_size);
+		if (!i && prv_len) {
+			if (max_segment - prv->length >= chunk_size) {
+				sg_set_page(s, sg_page(s),
+					    s->length + chunk_size, s->offset);
+				goto next;
+			}
+		}
+
+		/* Pass how many chunks might left */
+		s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask);
+		if (IS_ERR(s)) {
+			/*
+			 * Adjust entry length to be as before function was
+			 * called.
+			 */
+			if (prv_len)
+				prv->length = prv_len;
+			goto out;
+		}
+		sg_set_page(s, pages[cur_page], chunk_size, offset);
+		tmp_nents++;
+next:
 		size -= chunk_size;
 		offset = 0;
 		cur_page = j;
 	}
-
-	return 0;
+	sgt->nents = tmp_nents;
+out:
+	return s;
 }
 EXPORT_SYMBOL(__sg_alloc_table_from_pages);

@@ -465,8 +540,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 			      unsigned int n_pages, unsigned int offset,
 			      unsigned long size, gfp_t gfp_mask)
 {
-	return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size,
-					   SCATTERLIST_MAX_SEGMENT, gfp_mask);
+	return PTR_ERR_OR_ZERO(__sg_alloc_table_from_pages(sgt, pages, n_pages,
+			offset, size, SCATTERLIST_MAX_SEGMENT, NULL, 0,
+			gfp_mask));
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages);

diff --git a/lib/sg_pool.c b/lib/sg_pool.c
index db29e5c1f790..c449248bf5d5 100644
--- a/lib/sg_pool.c
+++ b/lib/sg_pool.c
@@ -129,7 +129,8 @@ int sg_alloc_table_chained(struct sg_table *table, int nents,
 		nents_first_chunk = 0;
 	}

-	ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE,
+	memset(table, 0, sizeof(*table));
+	ret = __sg_alloc_table(table, NULL, nents, SG_CHUNK_SIZE,
 			       first_chunk, nents_first_chunk,
 			       GFP_ATOMIC, sg_pool_alloc);
 	if (unlikely(ret))
diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
index 0a1464181226..4899359a31ac 100644
--- a/tools/testing/scatterlist/main.c
+++ b/tools/testing/scatterlist/main.c
@@ -55,14 +55,13 @@ int main(void)
 	for (i = 0, test = tests; test->expected_segments; test++, i++) {
 		struct page *pages[MAX_PAGES];
 		struct sg_table st;
-		int ret;
+		struct scatterlist *sg;

 		set_pages(pages, test->pfn, test->num_pages);

-		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
-						  0, test->size, test->max_seg,
-						  GFP_KERNEL);
-		assert(ret == test->alloc_ret);
+		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
+				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
+		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);

 		if (test->alloc_ret)
 			continue;
--
2.26.2

^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH rdma-next v3 2/2] RDMA/umem: Move to allocate SG table from pages
  2020-09-22  8:39 ` Leon Romanovsky
                   ` (2 preceding siblings ...)
  (?)
@ 2020-09-22  8:39 ` Leon Romanovsky
  -1 siblings, 0 replies; 43+ messages in thread
From: Leon Romanovsky @ 2020-09-22  8:39 UTC (permalink / raw)
  To: Christoph Hellwig, Doug Ledford, Jason Gunthorpe
  Cc: Maor Gottlieb, linux-rdma

From: Maor Gottlieb <maorg@nvidia.com>

Remove the implementation of ib_umem_add_sg_table and instead
call __sg_alloc_table_from_pages(), which already has the logic to
merge contiguous pages.

Besides removing duplicated functionality, this reduces the memory
consumption of the SG table significantly. Prior to this patch, the
SG table was allocated in advance regardless of whether the pages
were contiguous.

On a system using 2MB huge pages, without this change, the SG table
would contain 512x more SG entries.
E.g. for 100GB memory registration:


        Number of entries          Size
Before           26214400       600.0MB
After               51200         1.2MB
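
For reference, the arithmetic behind these numbers (a back-of-the-envelope
check assuming a 4KB base page size and sizeof(struct scatterlist) == 24
bytes; the exact struct size depends on the kernel configuration):

    100GB / 4KB = 26214400 pages  -> 26214400 * 24B ~= 600.0MB of SGEs
    100GB / 2MB =    51200 chunks ->    51200 * 24B ~=   1.2MB of SGEs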

Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/core/umem.c | 92 +++++-----------------------------
 1 file changed, 12 insertions(+), 80 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 01b680b62846..0ef736970aba 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -63,73 +63,6 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	sg_free_table(&umem->sg_head);
 }

-/* ib_umem_add_sg_table - Add N contiguous pages to scatter table
- *
- * sg: current scatterlist entry
- * page_list: array of npage struct page pointers
- * npages: number of pages in page_list
- * max_seg_sz: maximum segment size in bytes
- * nents: [out] number of entries in the scatterlist
- *
- * Return new end of scatterlist
- */
-static struct scatterlist *ib_umem_add_sg_table(struct scatterlist *sg,
-						struct page **page_list,
-						unsigned long npages,
-						unsigned int max_seg_sz,
-						int *nents)
-{
-	unsigned long first_pfn;
-	unsigned long i = 0;
-	bool update_cur_sg = false;
-	bool first = !sg_page(sg);
-
-	/* Check if new page_list is contiguous with end of previous page_list.
-	 * sg->length here is a multiple of PAGE_SIZE and sg->offset is 0.
-	 */
-	if (!first && (page_to_pfn(sg_page(sg)) + (sg->length >> PAGE_SHIFT) ==
-		       page_to_pfn(page_list[0])))
-		update_cur_sg = true;
-
-	while (i != npages) {
-		unsigned long len;
-		struct page *first_page = page_list[i];
-
-		first_pfn = page_to_pfn(first_page);
-
-		/* Compute the number of contiguous pages we have starting
-		 * at i
-		 */
-		for (len = 0; i != npages &&
-			      first_pfn + len == page_to_pfn(page_list[i]) &&
-			      len < (max_seg_sz >> PAGE_SHIFT);
-		     len++)
-			i++;
-
-		/* Squash N contiguous pages from page_list into current sge */
-		if (update_cur_sg) {
-			if ((max_seg_sz - sg->length) >= (len << PAGE_SHIFT)) {
-				sg_set_page(sg, sg_page(sg),
-					    sg->length + (len << PAGE_SHIFT),
-					    0);
-				update_cur_sg = false;
-				continue;
-			}
-			update_cur_sg = false;
-		}
-
-		/* Squash N contiguous pages into next sge or first sge */
-		if (!first)
-			sg = sg_next(sg);
-
-		(*nents)++;
-		sg_set_page(sg, first_page, len << PAGE_SHIFT, 0);
-		first = false;
-	}
-
-	return sg;
-}
-
 /**
  * ib_umem_find_best_pgsz - Find best HW page size to use for this MR
  *
@@ -221,7 +154,7 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
 	struct mm_struct *mm;
 	unsigned long npages;
 	int ret;
-	struct scatterlist *sg;
+	struct scatterlist *sg = NULL;
 	unsigned int gup_flags = FOLL_WRITE;

 	/*
@@ -276,15 +209,9 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,

 	cur_base = addr & PAGE_MASK;

-	ret = sg_alloc_table(&umem->sg_head, npages, GFP_KERNEL);
-	if (ret)
-		goto vma;
-
 	if (!umem->writable)
 		gup_flags |= FOLL_FORCE;

-	sg = umem->sg_head.sgl;
-
 	while (npages) {
 		cond_resched();
 		ret = pin_user_pages_fast(cur_base,
@@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
 			goto umem_release;

 		cur_base += ret * PAGE_SIZE;
-		npages   -= ret;
-
-		sg = ib_umem_add_sg_table(sg, page_list, ret,
-			dma_get_max_seg_size(device->dma_device),
-			&umem->sg_nents);
+		npages -= ret;
+		sg = __sg_alloc_table_from_pages(
+			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
+			dma_get_max_seg_size(device->dma_device), sg, npages,
+			GFP_KERNEL);
+		umem->sg_nents = umem->sg_head.nents;
+		if (IS_ERR(sg)) {
+			unpin_user_pages_dirty_lock(page_list, ret, 0);
+			ret = PTR_ERR(sg);
+			goto umem_release;
+		}
 	}

 	sg_mark_end(sg);
@@ -322,7 +255,6 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,

 umem_release:
 	__ib_umem_release(device, umem, 0);
-vma:
 	atomic64_sub(ib_umem_num_pages(umem), &mm->pinned_vm);
 out:
 	free_page((unsigned long) page_list);
--
2.26.2


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-22  8:39   ` Leon Romanovsky
@ 2020-09-23  5:42     ` Christoph Hellwig
  -1 siblings, 0 replies; 43+ messages in thread
From: Christoph Hellwig @ 2020-09-23  5:42 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Christoph Hellwig, Doug Ledford, Jason Gunthorpe, Maor Gottlieb,
	linux-rdma, Daniel Vetter, David Airlie, dri-devel, intel-gfx,
	Jani Nikula, Joonas Lahtinen, Maor Gottlieb, Rodrigo Vivi,
	Roland Scheidegger, VMware Graphics

On Tue, Sep 22, 2020 at 11:39:57AM +0300, Leon Romanovsky wrote:
> E.g. with the Infiniband driver that allocates a single page for hold
> the
> pages. For 1TB memory registration, the temporary buffer would consume
> only
> 4KB, instead of 2GB.

Formatting looks a little weird here.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-22  8:39   ` Leon Romanovsky
@ 2020-09-24  8:21     ` Tvrtko Ursulin
  -1 siblings, 0 replies; 43+ messages in thread
From: Tvrtko Ursulin @ 2020-09-24  8:21 UTC (permalink / raw)
  To: Leon Romanovsky, Christoph Hellwig, Doug Ledford, Jason Gunthorpe
  Cc: linux-rdma, intel-gfx, Roland Scheidegger, dri-devel,
	David Airlie, VMware Graphics, Maor Gottlieb, Maor Gottlieb


On 22/09/2020 09:39, Leon Romanovsky wrote:
> From: Maor Gottlieb <maorg@mellanox.com>
> 
> Extend __sg_alloc_table_from_pages to support dynamic allocation of
> SG table from pages. It should be used by drivers that can't supply
> all the pages at one time.
> 
> This function returns the last populated SGE in the table. Users should
> pass it as an argument to the function from the second call and forward.
> As before, nents will be equal to the number of populated SGEs (chunks).

So it's appending and growing the "list", did I get that right? Sounds 
handy indeed. Some comments/questions below.

> 
> With this new extension, drivers can benefit the optimization of merging
> contiguous pages without a need to allocate all pages in advance and
> hold them in a large buffer.
> 
> E.g. with the Infiniband driver that allocates a single page for hold
> the
> pages. For 1TB memory registration, the temporary buffer would consume
> only
> 4KB, instead of 2GB.
> 
> Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>   drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  12 +-
>   drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c  |  15 +-
>   include/linux/scatterlist.h                 |  43 +++---
>   lib/scatterlist.c                           | 158 +++++++++++++++-----
>   lib/sg_pool.c                               |   3 +-
>   tools/testing/scatterlist/main.c            |   9 +-
>   6 files changed, 163 insertions(+), 77 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> index 12b30075134a..f2eaed6aca3d 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> @@ -403,6 +403,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>   	unsigned int max_segment = i915_sg_segment_size();
>   	struct sg_table *st;
>   	unsigned int sg_page_sizes;
> +	struct scatterlist *sg;
>   	int ret;
> 
>   	st = kmalloc(sizeof(*st), GFP_KERNEL);
> @@ -410,13 +411,12 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
>   		return ERR_PTR(-ENOMEM);
> 
>   alloc_table:
> -	ret = __sg_alloc_table_from_pages(st, pvec, num_pages,
> -					  0, num_pages << PAGE_SHIFT,
> -					  max_segment,
> -					  GFP_KERNEL);
> -	if (ret) {
> +	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
> +					 num_pages << PAGE_SHIFT, max_segment,
> +					 NULL, 0, GFP_KERNEL);
> +	if (IS_ERR(sg)) {
>   		kfree(st);
> -		return ERR_PTR(ret);
> +		return ERR_CAST(sg);
>   	}
> 
>   	ret = i915_gem_gtt_prepare_pages(obj, st);
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
> index ab524ab3b0b4..f22acd398b1f 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
> @@ -419,6 +419,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
>   	int ret = 0;
>   	static size_t sgl_size;
>   	static size_t sgt_size;
> +	struct scatterlist *sg;
> 
>   	if (vmw_tt->mapped)
>   		return 0;
> @@ -441,13 +442,15 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
>   		if (unlikely(ret != 0))
>   			return ret;
> 
> -		ret = __sg_alloc_table_from_pages
> -			(&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0,
> -			 (unsigned long) vsgt->num_pages << PAGE_SHIFT,
> -			 dma_get_max_seg_size(dev_priv->dev->dev),
> -			 GFP_KERNEL);
> -		if (unlikely(ret != 0))
> +		sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages,
> +				vsgt->num_pages, 0,
> +				(unsigned long) vsgt->num_pages << PAGE_SHIFT,
> +				dma_get_max_seg_size(dev_priv->dev->dev),
> +				NULL, 0, GFP_KERNEL);
> +		if (IS_ERR(sg)) {
> +			ret = PTR_ERR(sg);
>   			goto out_sg_alloc_fail;
> +		}
> 
>   		if (vsgt->num_pages > vmw_tt->sgt.nents) {
>   			uint64_t over_alloc =
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 45cf7b69d852..c24cc667b56b 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -165,6 +165,22 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
>   #define for_each_sgtable_dma_sg(sgt, sg, i)	\
>   	for_each_sg((sgt)->sgl, sg, (sgt)->nents, i)
> 
> +static inline void __sg_chain(struct scatterlist *chain_sg,
> +			      struct scatterlist *sgl)
> +{
> +	/*
> +	 * offset and length are unused for chain entry. Clear them.
> +	 */
> +	chain_sg->offset = 0;
> +	chain_sg->length = 0;
> +
> +	/*
> +	 * Set lowest bit to indicate a link pointer, and make sure to clear
> +	 * the termination bit if it happens to be set.
> +	 */
> +	chain_sg->page_link = ((unsigned long) sgl | SG_CHAIN) & ~SG_END;
> +}
> +
>   /**
>    * sg_chain - Chain two sglists together
>    * @prv:	First scatterlist
> @@ -178,18 +194,7 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
>   static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
>   			    struct scatterlist *sgl)
>   {
> -	/*
> -	 * offset and length are unused for chain entry.  Clear them.
> -	 */
> -	prv[prv_nents - 1].offset = 0;
> -	prv[prv_nents - 1].length = 0;
> -
> -	/*
> -	 * Set lowest bit to indicate a link pointer, and make sure to clear
> -	 * the termination bit if it happens to be set.
> -	 */
> -	prv[prv_nents - 1].page_link = ((unsigned long) sgl | SG_CHAIN)
> -					& ~SG_END;
> +	__sg_chain(&prv[prv_nents - 1], sgl);
>   }
> 
>   /**
> @@ -283,13 +288,15 @@ typedef void (sg_free_fn)(struct scatterlist *, unsigned int);
>   void __sg_free_table(struct sg_table *, unsigned int, unsigned int,
>   		     sg_free_fn *);
>   void sg_free_table(struct sg_table *);
> -int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int,
> -		     struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *);
> +int __sg_alloc_table(struct sg_table *, struct scatterlist *, unsigned int,
> +		unsigned int, struct scatterlist *, unsigned int,
> +		gfp_t, sg_alloc_fn *);
>   int sg_alloc_table(struct sg_table *, unsigned int, gfp_t);
> -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> -				unsigned int n_pages, unsigned int offset,
> -				unsigned long size, unsigned int max_segment,
> -				gfp_t gfp_mask);
> +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
> +		struct page **pages, unsigned int n_pages, unsigned int offset,
> +		unsigned long size, unsigned int max_segment,
> +		struct scatterlist *prv, unsigned int left_pages,
> +		gfp_t gfp_mask);
>   int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
>   			      unsigned int n_pages, unsigned int offset,
>   			      unsigned long size, gfp_t gfp_mask);
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index 5d63a8857f36..91587560497d 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -245,6 +245,7 @@ EXPORT_SYMBOL(sg_free_table);
>   /**
>    * __sg_alloc_table - Allocate and initialize an sg table with given allocator
>    * @table:	The sg table header to use
> + * @prv:	Last populated sge in sgt
>    * @nents:	Number of entries in sg list
>    * @max_ents:	The maximum number of entries the allocator returns per call
>    * @nents_first_chunk: Number of entries int the (preallocated) first
> @@ -263,17 +264,15 @@ EXPORT_SYMBOL(sg_free_table);
>    *   __sg_free_table() to cleanup any leftover allocations.
>    *
>    **/
> -int __sg_alloc_table(struct sg_table *table, unsigned int nents,
> -		     unsigned int max_ents, struct scatterlist *first_chunk,
> -		     unsigned int nents_first_chunk, gfp_t gfp_mask,
> -		     sg_alloc_fn *alloc_fn)
> +int __sg_alloc_table(struct sg_table *table, struct scatterlist *prv,
> +		unsigned int nents, unsigned int max_ents,
> +		struct scatterlist *first_chunk,
> +		unsigned int nents_first_chunk, gfp_t gfp_mask,
> +		sg_alloc_fn *alloc_fn)
>   {
> -	struct scatterlist *sg, *prv;
> -	unsigned int left;
> -	unsigned curr_max_ents = nents_first_chunk ?: max_ents;
> -	unsigned prv_max_ents;
> -
> -	memset(table, 0, sizeof(*table));
> +	unsigned int curr_max_ents = nents_first_chunk ?: max_ents;
> +	unsigned int left, prv_max_ents = 0;
> +	struct scatterlist *sg;
> 
>   	if (nents == 0)
>   		return -EINVAL;
> @@ -283,7 +282,6 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
>   #endif
> 
>   	left = nents;
> -	prv = NULL;
>   	do {
>   		unsigned int sg_size, alloc_size = left;
> 
> @@ -308,7 +306,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
>   			 * linkage.  Without this, sg_kfree() may get
>   			 * confused.
>   			 */
> -			if (prv)
> +			if (prv_max_ents)
>   				table->nents = ++table->orig_nents;
> 
>   			return -ENOMEM;
> @@ -321,10 +319,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
>   		 * If this is the first mapping, assign the sg table header.
>   		 * If this is not the first mapping, chain previous part.
>   		 */
> -		if (prv)
> -			sg_chain(prv, prv_max_ents, sg);
> -		else
> +		if (!prv)
>   			table->sgl = sg;
> +		else if (prv_max_ents)
> +			sg_chain(prv, prv_max_ents, sg);
> +		else {
> +			__sg_chain(prv, sg);
> +			/*
> +			 * We decrease one since the prvious last sge in used to
> +			 * chain the chunks together.
> +			 */
> +			table->nents = table->orig_nents -= 1;
> +		}
> 
>   		/*
>   		 * If no more entries after this one, mark the end
> @@ -356,7 +362,8 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
>   {
>   	int ret;
> 
> -	ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC,
> +	memset(table, 0, sizeof(*table));
> +	ret = __sg_alloc_table(table, NULL, nents, SG_MAX_SINGLE_ALLOC,
>   			       NULL, 0, gfp_mask, sg_kmalloc);
>   	if (unlikely(ret))
>   		__sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree);
> @@ -365,6 +372,30 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
>   }
>   EXPORT_SYMBOL(sg_alloc_table);
> 
> +static struct scatterlist *get_next_sg(struct sg_table *table,
> +		struct scatterlist *prv, unsigned long left_npages,
> +		gfp_t gfp_mask)
> +{
> +	struct scatterlist *next_sg;
> +	int ret;
> +
> +	/* If table was just allocated */
> +	if (!prv)
> +		return table->sgl;
> +
> +	/* Check if last entry should be keeped for chainning */
> +	next_sg = sg_next(prv);
> +	if (!sg_is_last(next_sg) || left_npages == 1)
> +		return next_sg;
> +
> +	ret = __sg_alloc_table(table, next_sg,
> +			min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC),
> +			SG_MAX_SINGLE_ALLOC, NULL, 0, gfp_mask, sg_kmalloc);
> +	if (ret)
> +		return ERR_PTR(ret);
> +	return sg_next(prv);
> +}
> +
>   /**
>    * __sg_alloc_table_from_pages - Allocate and initialize an sg table from
>    *			         an array of pages
> @@ -374,29 +405,47 @@ EXPORT_SYMBOL(sg_alloc_table);
>    * @offset:      Offset from start of the first page to the start of a buffer
>    * @size:        Number of valid bytes in the buffer (after offset)
>    * @max_segment: Maximum size of a scatterlist node in bytes (page aligned)
> + * @prv:	 Last populated sge in sgt
> + * @left_pages:  Left pages caller have to set after this call
>    * @gfp_mask:	 GFP allocation mask
>    *
> - *  Description:
> - *    Allocate and initialize an sg table from a list of pages. Contiguous
> - *    ranges of the pages are squashed into a single scatterlist node up to the
> - *    maximum size specified in @max_segment. An user may provide an offset at a
> - *    start and a size of valid data in a buffer specified by the page array.
> - *    The returned sg table is released by sg_free_table.
> + * Description:
> + *    If @prv is NULL, allocate and initialize an sg table from a list of pages,
> + *    else reuse the scatterlist passed in at @prv.
> + *    Contiguous ranges of the pages are squashed into a single scatterlist
> + *    entry up to the maximum size specified in @max_segment.  A user may
> + *    provide an offset at a start and a size of valid data in a buffer
> + *    specified by the page array.
>    *
>    * Returns:
> - *   0 on success, negative error on failure
> + *   Last SGE in sgt on success, PTR_ERR on otherwise.
> + *   The allocation in @sgt must be released by sg_free_table.
> + *
> + * Notes:
> + *   If this function returns non-0 (eg failure), the caller must call
> + *   sg_free_table() to cleanup any leftover allocations.
>    */
> -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> -				unsigned int n_pages, unsigned int offset,
> -				unsigned long size, unsigned int max_segment,
> -				gfp_t gfp_mask)
> +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
> +		struct page **pages, unsigned int n_pages, unsigned int offset,
> +		unsigned long size, unsigned int max_segment,
> +		struct scatterlist *prv, unsigned int left_pages,
> +		gfp_t gfp_mask)
>   {
> -	unsigned int chunks, cur_page, seg_len, i;
> +	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
> +	unsigned int tmp_nents = sgt->nents;
> +	struct scatterlist *s = prv;
> +	unsigned int table_size;
>   	int ret;
> -	struct scatterlist *s;
> 
>   	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
> -		return -EINVAL;
> +		return ERR_PTR(-EINVAL);
> +	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
> +		return ERR_PTR(-EOPNOTSUPP);

I would consider trying to make the failure caught at compile time. It
would probably need a static inline wrapper to BUILD_BUG_ON if prv is
not a compile time constant. Because my gut feeling is that the runtime
check is a bit awkward.
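
For example, an untested, hypothetical sketch of what I mean (the wrapper
name is made up; it only catches call sites where prv is a compile-time
constant and otherwise falls through to the existing runtime check):

static inline struct scatterlist *
sg_append_table_from_pages(struct sg_table *sgt, struct page **pages,
			   unsigned int n_pages, unsigned int offset,
			   unsigned long size, unsigned int max_segment,
			   struct scatterlist *prv, unsigned int left_pages,
			   gfp_t gfp_mask)
{
	/*
	 * Reject chaining at build time on architectures that cannot chain
	 * scatterlists, when prv is provably non-NULL at compile time; the
	 * branch is compiled away otherwise.
	 */
	if (__builtin_constant_p(prv) && prv)
		BUILD_BUG_ON(IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN));

	return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size,
					   max_segment, prv, left_pages,
					   gfp_mask);
}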

Hm, but also isn't the check too strict? It would be possible to append
to the last sgt as long as it is under max_ents, no? (Like the current
check in __sg_alloc_table.)

> +
> +	if (prv &&
> +	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==
> +	    page_to_pfn(pages[0]))
> +		prv_len = prv->length;
> 
>   	/* compute number of contiguous chunks */
>   	chunks = 1;
> @@ -410,13 +459,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
>   		}
>   	}
> 
> -	ret = sg_alloc_table(sgt, chunks, gfp_mask);
> -	if (unlikely(ret))
> -		return ret;
> +	if (!prv) {
> +		/* Only the last allocation could be less than the maximum */
> +		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
> +		ret = sg_alloc_table(sgt, table_size, gfp_mask);
> +		if (unlikely(ret))
> +			return ERR_PTR(ret);
> +	}
> 
>   	/* merging chunks and putting them into the scatterlist */
>   	cur_page = 0;
> -	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
> +	for (i = 0; i < chunks; i++) {
>   		unsigned int j, chunk_size;
> 
>   		/* look for the end of the current chunk */
> @@ -425,19 +478,41 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
>   			seg_len += PAGE_SIZE;
>   			if (seg_len >= max_segment ||
>   			    page_to_pfn(pages[j]) !=
> -			    page_to_pfn(pages[j - 1]) + 1)
> +				    page_to_pfn(pages[j - 1]) + 1)
>   				break;
>   		}
> 
>   		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
> -		sg_set_page(s, pages[cur_page],
> -			    min_t(unsigned long, size, chunk_size), offset);
> +		chunk_size = min_t(unsigned long, size, chunk_size);
> +		if (!i && prv_len) {
> +			if (max_segment - prv->length >= chunk_size) {
> +				sg_set_page(s, sg_page(s),
> +					    s->length + chunk_size, s->offset);
> +				goto next;
> +			}
> +		}
> +
> +		/* Pass how many chunks might left */
> +		s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask);
> +		if (IS_ERR(s)) {
> +			/*
> +			 * Adjust entry length to be as before function was
> +			 * called.
> +			 */
> +			if (prv_len)
> +				prv->length = prv_len;
> +			goto out;
> +		}
> +		sg_set_page(s, pages[cur_page], chunk_size, offset);
> +		tmp_nents++;
> +next:
>   		size -= chunk_size;
>   		offset = 0;
>   		cur_page = j;
>   	}
> -
> -	return 0;
> +	sgt->nents = tmp_nents;
> +out:
> +	return s;
>   }
>   EXPORT_SYMBOL(__sg_alloc_table_from_pages);
> 
> @@ -465,8 +540,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
>   			      unsigned int n_pages, unsigned int offset,
>   			      unsigned long size, gfp_t gfp_mask)
>   {
> -	return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size,
> -					   SCATTERLIST_MAX_SEGMENT, gfp_mask);
> +	return PTR_ERR_OR_ZERO(__sg_alloc_table_from_pages(sgt, pages, n_pages,
> +			offset, size, SCATTERLIST_MAX_SEGMENT, NULL, 0,
> +			gfp_mask));
>   }
>   EXPORT_SYMBOL(sg_alloc_table_from_pages);
> 
> diff --git a/lib/sg_pool.c b/lib/sg_pool.c
> index db29e5c1f790..c449248bf5d5 100644
> --- a/lib/sg_pool.c
> +++ b/lib/sg_pool.c
> @@ -129,7 +129,8 @@ int sg_alloc_table_chained(struct sg_table *table, int nents,
>   		nents_first_chunk = 0;
>   	}
> 
> -	ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE,
> +	memset(table, 0, sizeof(*table));
> +	ret = __sg_alloc_table(table, NULL, nents, SG_CHUNK_SIZE,
>   			       first_chunk, nents_first_chunk,
>   			       GFP_ATOMIC, sg_pool_alloc);
>   	if (unlikely(ret))
> diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
> index 0a1464181226..4899359a31ac 100644
> --- a/tools/testing/scatterlist/main.c
> +++ b/tools/testing/scatterlist/main.c
> @@ -55,14 +55,13 @@ int main(void)
>   	for (i = 0, test = tests; test->expected_segments; test++, i++) {
>   		struct page *pages[MAX_PAGES];
>   		struct sg_table st;
> -		int ret;
> +		struct scatterlist *sg;
> 
>   		set_pages(pages, test->pfn, test->num_pages);
> 
> -		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
> -						  0, test->size, test->max_seg,
> -						  GFP_KERNEL);
> -		assert(ret == test->alloc_ret);
> +		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
> +				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
> +		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);

Some test coverage for relatively complex code would be very welcome.
The testing framework is already there, and even if it has bit-rotted a
bit it shouldn't be hard to fix.

A few tests to check append/grow works as expected, in terms of how the
end table looks given the initial state and some different page patterns
added to it. And both crossing and not crossing into sg chaining
scenarios.
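
For instance, something along these lines (a rough, uncompiled sketch
against the existing harness; all pfns are contiguous, so the two appends
should end up merged into a single SGE):

	unsigned int pfns[] = { 0, 1, 2, 3 };
	struct page *pages[4];
	struct sg_table st;
	struct scatterlist *sg;

	set_pages(pages, pfns, 4);

	sg = __sg_alloc_table_from_pages(&st, pages, 2, 0, 2 * PAGE_SIZE,
					 SCATTERLIST_MAX_SEGMENT, NULL, 2,
					 GFP_KERNEL);
	assert(!IS_ERR(sg));

	sg = __sg_alloc_table_from_pages(&st, &pages[2], 2, 0, 2 * PAGE_SIZE,
					 SCATTERLIST_MAX_SEGMENT, sg, 0,
					 GFP_KERNEL);
	assert(!IS_ERR(sg));
	sg_mark_end(sg);
	assert(st.nents == 1);

	sg_free_table(&st);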

Regards,

Tvrtko

> 
>   		if (test->alloc_ret)
>   			continue;
> --
> 2.26.2
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-24  8:21     ` Tvrtko Ursulin
  (?)
@ 2020-09-25  7:13       ` Leon Romanovsky
  -1 siblings, 0 replies; 43+ messages in thread
From: Leon Romanovsky @ 2020-09-25  7:13 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Christoph Hellwig, Doug Ledford, Jason Gunthorpe, linux-rdma,
	intel-gfx, Roland Scheidegger, dri-devel, David Airlie,
	VMware Graphics, Maor Gottlieb, Maor Gottlieb

On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote:
>
> On 22/09/2020 09:39, Leon Romanovsky wrote:
> > From: Maor Gottlieb <maorg@mellanox.com>
> >
> > Extend __sg_alloc_table_from_pages to support dynamic allocation of
> > SG table from pages. It should be used by drivers that can't supply
> > all the pages at one time.
> >
> > This function returns the last populated SGE in the table. Users should
> > pass it as an argument to the function from the second call and forward.
> > As before, nents will be equal to the number of populated SGEs (chunks).
>
> So it's appending and growing the "list", did I get that right? Sounds handy
> indeed. Some comments/questions below.

Yes, we (RDMA) use this function to chain contiguous pages.
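
A minimal sketch of the calling pattern (a hypothetical caller, simplified
from what the umem conversion in patch 2/2 does; the function name and the
512-pages batch size are made up):

/*
 * Pages arrive in fixed-size batches; every batch is appended to the same
 * (zero-initialized) sg_table, so physically contiguous batches are merged
 * into a single SGE.
 */
static int build_sgt(struct sg_table *sgt, struct page **pages,
		     unsigned long npages, unsigned int max_seg, gfp_t gfp)
{
	struct scatterlist *sg = NULL;
	unsigned long done = 0;

	if (!npages)
		return -EINVAL;

	while (done < npages) {
		unsigned long n = min(npages - done, 512UL);

		sg = __sg_alloc_table_from_pages(sgt, pages + done, n, 0,
						 n << PAGE_SHIFT, max_seg, sg,
						 npages - done - n, gfp);
		if (IS_ERR(sg)) {
			sg_free_table(sgt);
			return PTR_ERR(sg);
		}
		done += n;
	}
	sg_mark_end(sg);
	return 0;
}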

>
> >
> > With this new extension, drivers can benefit the optimization of merging
> > contiguous pages without a need to allocate all pages in advance and
> > hold them in a large buffer.
> >
> > E.g. with the Infiniband driver that allocates a single page for hold
> > the
> > pages. For 1TB memory registration, the temporary buffer would consume
> > only
> > 4KB, instead of 2GB.
> >
> > Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
> > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > ---
> >   drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  12 +-
> >   drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c  |  15 +-
> >   include/linux/scatterlist.h                 |  43 +++---
> >   lib/scatterlist.c                           | 158 +++++++++++++++-----
> >   lib/sg_pool.c                               |   3 +-
> >   tools/testing/scatterlist/main.c            |   9 +-
> >   6 files changed, 163 insertions(+), 77 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > index 12b30075134a..f2eaed6aca3d 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> > @@ -403,6 +403,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
> >   	unsigned int max_segment = i915_sg_segment_size();
> >   	struct sg_table *st;
> >   	unsigned int sg_page_sizes;
> > +	struct scatterlist *sg;
> >   	int ret;
> >
> >   	st = kmalloc(sizeof(*st), GFP_KERNEL);
> > @@ -410,13 +411,12 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj,
> >   		return ERR_PTR(-ENOMEM);
> >
> >   alloc_table:
> > -	ret = __sg_alloc_table_from_pages(st, pvec, num_pages,
> > -					  0, num_pages << PAGE_SHIFT,
> > -					  max_segment,
> > -					  GFP_KERNEL);
> > -	if (ret) {
> > +	sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0,
> > +					 num_pages << PAGE_SHIFT, max_segment,
> > +					 NULL, 0, GFP_KERNEL);
> > +	if (IS_ERR(sg)) {
> >   		kfree(st);
> > -		return ERR_PTR(ret);
> > +		return ERR_CAST(sg);
> >   	}
> >
> >   	ret = i915_gem_gtt_prepare_pages(obj, st);
> > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
> > index ab524ab3b0b4..f22acd398b1f 100644
> > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
> > +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
> > @@ -419,6 +419,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
> >   	int ret = 0;
> >   	static size_t sgl_size;
> >   	static size_t sgt_size;
> > +	struct scatterlist *sg;
> >
> >   	if (vmw_tt->mapped)
> >   		return 0;
> > @@ -441,13 +442,15 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
> >   		if (unlikely(ret != 0))
> >   			return ret;
> >
> > -		ret = __sg_alloc_table_from_pages
> > -			(&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0,
> > -			 (unsigned long) vsgt->num_pages << PAGE_SHIFT,
> > -			 dma_get_max_seg_size(dev_priv->dev->dev),
> > -			 GFP_KERNEL);
> > -		if (unlikely(ret != 0))
> > +		sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages,
> > +				vsgt->num_pages, 0,
> > +				(unsigned long) vsgt->num_pages << PAGE_SHIFT,
> > +				dma_get_max_seg_size(dev_priv->dev->dev),
> > +				NULL, 0, GFP_KERNEL);
> > +		if (IS_ERR(sg)) {
> > +			ret = PTR_ERR(sg);
> >   			goto out_sg_alloc_fail;
> > +		}
> >
> >   		if (vsgt->num_pages > vmw_tt->sgt.nents) {
> >   			uint64_t over_alloc =
> > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> > index 45cf7b69d852..c24cc667b56b 100644
> > --- a/include/linux/scatterlist.h
> > +++ b/include/linux/scatterlist.h
> > @@ -165,6 +165,22 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
> >   #define for_each_sgtable_dma_sg(sgt, sg, i)	\
> >   	for_each_sg((sgt)->sgl, sg, (sgt)->nents, i)
> >
> > +static inline void __sg_chain(struct scatterlist *chain_sg,
> > +			      struct scatterlist *sgl)
> > +{
> > +	/*
> > +	 * offset and length are unused for chain entry. Clear them.
> > +	 */
> > +	chain_sg->offset = 0;
> > +	chain_sg->length = 0;
> > +
> > +	/*
> > +	 * Set lowest bit to indicate a link pointer, and make sure to clear
> > +	 * the termination bit if it happens to be set.
> > +	 */
> > +	chain_sg->page_link = ((unsigned long) sgl | SG_CHAIN) & ~SG_END;
> > +}
> > +
> >   /**
> >    * sg_chain - Chain two sglists together
> >    * @prv:	First scatterlist
> > @@ -178,18 +194,7 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
> >   static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
> >   			    struct scatterlist *sgl)
> >   {
> > -	/*
> > -	 * offset and length are unused for chain entry.  Clear them.
> > -	 */
> > -	prv[prv_nents - 1].offset = 0;
> > -	prv[prv_nents - 1].length = 0;
> > -
> > -	/*
> > -	 * Set lowest bit to indicate a link pointer, and make sure to clear
> > -	 * the termination bit if it happens to be set.
> > -	 */
> > -	prv[prv_nents - 1].page_link = ((unsigned long) sgl | SG_CHAIN)
> > -					& ~SG_END;
> > +	__sg_chain(&prv[prv_nents - 1], sgl);
> >   }
> >
> >   /**
> > @@ -283,13 +288,15 @@ typedef void (sg_free_fn)(struct scatterlist *, unsigned int);
> >   void __sg_free_table(struct sg_table *, unsigned int, unsigned int,
> >   		     sg_free_fn *);
> >   void sg_free_table(struct sg_table *);
> > -int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int,
> > -		     struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *);
> > +int __sg_alloc_table(struct sg_table *, struct scatterlist *, unsigned int,
> > +		unsigned int, struct scatterlist *, unsigned int,
> > +		gfp_t, sg_alloc_fn *);
> >   int sg_alloc_table(struct sg_table *, unsigned int, gfp_t);
> > -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> > -				unsigned int n_pages, unsigned int offset,
> > -				unsigned long size, unsigned int max_segment,
> > -				gfp_t gfp_mask);
> > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
> > +		struct page **pages, unsigned int n_pages, unsigned int offset,
> > +		unsigned long size, unsigned int max_segment,
> > +		struct scatterlist *prv, unsigned int left_pages,
> > +		gfp_t gfp_mask);
> >   int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> >   			      unsigned int n_pages, unsigned int offset,
> >   			      unsigned long size, gfp_t gfp_mask);
> > diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> > index 5d63a8857f36..91587560497d 100644
> > --- a/lib/scatterlist.c
> > +++ b/lib/scatterlist.c
> > @@ -245,6 +245,7 @@ EXPORT_SYMBOL(sg_free_table);
> >   /**
> >    * __sg_alloc_table - Allocate and initialize an sg table with given allocator
> >    * @table:	The sg table header to use
> > + * @prv:	Last populated sge in sgt
> >    * @nents:	Number of entries in sg list
> >    * @max_ents:	The maximum number of entries the allocator returns per call
> >    * @nents_first_chunk: Number of entries int the (preallocated) first
> > @@ -263,17 +264,15 @@ EXPORT_SYMBOL(sg_free_table);
> >    *   __sg_free_table() to cleanup any leftover allocations.
> >    *
> >    **/
> > -int __sg_alloc_table(struct sg_table *table, unsigned int nents,
> > -		     unsigned int max_ents, struct scatterlist *first_chunk,
> > -		     unsigned int nents_first_chunk, gfp_t gfp_mask,
> > -		     sg_alloc_fn *alloc_fn)
> > +int __sg_alloc_table(struct sg_table *table, struct scatterlist *prv,
> > +		unsigned int nents, unsigned int max_ents,
> > +		struct scatterlist *first_chunk,
> > +		unsigned int nents_first_chunk, gfp_t gfp_mask,
> > +		sg_alloc_fn *alloc_fn)
> >   {
> > -	struct scatterlist *sg, *prv;
> > -	unsigned int left;
> > -	unsigned curr_max_ents = nents_first_chunk ?: max_ents;
> > -	unsigned prv_max_ents;
> > -
> > -	memset(table, 0, sizeof(*table));
> > +	unsigned int curr_max_ents = nents_first_chunk ?: max_ents;
> > +	unsigned int left, prv_max_ents = 0;
> > +	struct scatterlist *sg;
> >
> >   	if (nents == 0)
> >   		return -EINVAL;
> > @@ -283,7 +282,6 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
> >   #endif
> >
> >   	left = nents;
> > -	prv = NULL;
> >   	do {
> >   		unsigned int sg_size, alloc_size = left;
> >
> > @@ -308,7 +306,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
> >   			 * linkage.  Without this, sg_kfree() may get
> >   			 * confused.
> >   			 */
> > -			if (prv)
> > +			if (prv_max_ents)
> >   				table->nents = ++table->orig_nents;
> >
> >   			return -ENOMEM;
> > @@ -321,10 +319,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
> >   		 * If this is the first mapping, assign the sg table header.
> >   		 * If this is not the first mapping, chain previous part.
> >   		 */
> > -		if (prv)
> > -			sg_chain(prv, prv_max_ents, sg);
> > -		else
> > +		if (!prv)
> >   			table->sgl = sg;
> > +		else if (prv_max_ents)
> > +			sg_chain(prv, prv_max_ents, sg);
> > +		else {
> > +			__sg_chain(prv, sg);
> > +			/*
> > +			 * We decrease by one since the previous last sge is used to
> > +			 * chain the chunks together.
> > +			 */
> > +			table->nents = table->orig_nents -= 1;
> > +		}
> >
> >   		/*
> >   		 * If no more entries after this one, mark the end
> > @@ -356,7 +362,8 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
> >   {
> >   	int ret;
> >
> > -	ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC,
> > +	memset(table, 0, sizeof(*table));
> > +	ret = __sg_alloc_table(table, NULL, nents, SG_MAX_SINGLE_ALLOC,
> >   			       NULL, 0, gfp_mask, sg_kmalloc);
> >   	if (unlikely(ret))
> >   		__sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree);
> > @@ -365,6 +372,30 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
> >   }
> >   EXPORT_SYMBOL(sg_alloc_table);
> >
> > +static struct scatterlist *get_next_sg(struct sg_table *table,
> > +		struct scatterlist *prv, unsigned long left_npages,
> > +		gfp_t gfp_mask)
> > +{
> > +	struct scatterlist *next_sg;
> > +	int ret;
> > +
> > +	/* If table was just allocated */
> > +	if (!prv)
> > +		return table->sgl;
> > +
> > +	/* Check if the last entry should be kept for chaining */
> > +	next_sg = sg_next(prv);
> > +	if (!sg_is_last(next_sg) || left_npages == 1)
> > +		return next_sg;
> > +
> > +	ret = __sg_alloc_table(table, next_sg,
> > +			min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC),
> > +			SG_MAX_SINGLE_ALLOC, NULL, 0, gfp_mask, sg_kmalloc);
> > +	if (ret)
> > +		return ERR_PTR(ret);
> > +	return sg_next(prv);
> > +}
> > +
> >   /**
> >    * __sg_alloc_table_from_pages - Allocate and initialize an sg table from
> >    *			         an array of pages
> > @@ -374,29 +405,47 @@ EXPORT_SYMBOL(sg_alloc_table);
> >    * @offset:      Offset from start of the first page to the start of a buffer
> >    * @size:        Number of valid bytes in the buffer (after offset)
> >    * @max_segment: Maximum size of a scatterlist node in bytes (page aligned)
> > + * @prv:	 Last populated sge in sgt
> > + * @left_pages:  Number of pages the caller still has to set after this call
> >    * @gfp_mask:	 GFP allocation mask
> >    *
> > - *  Description:
> > - *    Allocate and initialize an sg table from a list of pages. Contiguous
> > - *    ranges of the pages are squashed into a single scatterlist node up to the
> > - *    maximum size specified in @max_segment. An user may provide an offset at a
> > - *    start and a size of valid data in a buffer specified by the page array.
> > - *    The returned sg table is released by sg_free_table.
> > + * Description:
> > + *    If @prv is NULL, allocate and initialize an sg table from a list of pages,
> > + *    else reuse the scatterlist passed in at @prv.
> > + *    Contiguous ranges of the pages are squashed into a single scatterlist
> > + *    entry up to the maximum size specified in @max_segment.  A user may
> > + *    provide an offset at a start and a size of valid data in a buffer
> > + *    specified by the page array.
> >    *
> >    * Returns:
> > - *   0 on success, negative error on failure
> > + *   Last SGE in sgt on success, negative ERR_PTR on failure.
> > + *   The allocation in @sgt must be released by sg_free_table.
> > + *
> > + * Notes:
> > + *   If this function fails (i.e. returns an ERR_PTR), the caller must call
> > + *   sg_free_table() to clean up any leftover allocations.
> >    */
> > -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> > -				unsigned int n_pages, unsigned int offset,
> > -				unsigned long size, unsigned int max_segment,
> > -				gfp_t gfp_mask)
> > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
> > +		struct page **pages, unsigned int n_pages, unsigned int offset,
> > +		unsigned long size, unsigned int max_segment,
> > +		struct scatterlist *prv, unsigned int left_pages,
> > +		gfp_t gfp_mask)
> >   {
> > -	unsigned int chunks, cur_page, seg_len, i;
> > +	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
> > +	unsigned int tmp_nents = sgt->nents;
> > +	struct scatterlist *s = prv;
> > +	unsigned int table_size;
> >   	int ret;
> > -	struct scatterlist *s;
> >
> >   	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
> > -		return -EINVAL;
> > +		return ERR_PTR(-EINVAL);
> > +	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
> > +		return ERR_PTR(-EOPNOTSUPP);
>
> I would consider trying to have the failure caught at compile time. It would
> probably need a static inline wrapper to BUILD_BUG_ON if prv is not a compile-
> time constant. Because my gut feeling is that a runtime check is a bit awkward.

In the second patch [1], prv is a dynamic pointer that can't be checked at
compile time.

[1] https://lore.kernel.org/linux-rdma/20200923054251.GA15249@lst.de/T/#m19b0836f23db9d626309c3e70939ce884946e2f6
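
The prv that gets passed in comes out of a loop along these lines (only a
rough sketch of the calling pattern, not the actual ib_umem code from the
second patch; the helper name, the chunk sizing and the elided pinning step
are made up for illustration):

static int map_all_pages(struct sg_table *sgt, unsigned long total_npages,
			 unsigned int max_segment)
{
	unsigned long chunk_npages = PAGE_SIZE / sizeof(struct page *);
	unsigned long left = total_npages;
	struct scatterlist *sg = NULL;
	struct page **pages;
	int ret = 0;

	/* a single page is enough to hold each batch of page pointers */
	pages = (struct page **)__get_free_page(GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	memset(sgt, 0, sizeof(*sgt));
	while (left) {
		unsigned long n = min(left, chunk_npages);

		/* fill pages[0..n-1] with the next n pinned pages (elided) */

		sg = __sg_alloc_table_from_pages(sgt, pages, n, 0,
						 n << PAGE_SHIFT, max_segment,
						 sg, left - n, GFP_KERNEL);
		if (IS_ERR(sg)) {
			sg_free_table(sgt);
			ret = PTR_ERR(sg);
			break;
		}
		left -= n;
	}
	free_page((unsigned long)pages);
	return ret;
}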

>
> Hm, but also isn't the check too strict? It would be possible to append to
> the last sgt as long as it is under max_ents, no? (Like the current check in
> __sg_alloc_table.)

It could be, but it is a corner case that isn't worth the extra code. Right
now, RDMA is the only user of this append functionality and our setups are
!CONFIG_ARCH_NO_SG_CHAIN.

>
> > +
> > +	if (prv &&
> > +	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==
> > +	    page_to_pfn(pages[0]))
> > +		prv_len = prv->length;
> >
> >   	/* compute number of contiguous chunks */
> >   	chunks = 1;
> > @@ -410,13 +459,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> >   		}
> >   	}
> >
> > -	ret = sg_alloc_table(sgt, chunks, gfp_mask);
> > -	if (unlikely(ret))
> > -		return ret;
> > +	if (!prv) {
> > +		/* Only the last allocation could be less than the maximum */
> > +		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
> > +		ret = sg_alloc_table(sgt, table_size, gfp_mask);
> > +		if (unlikely(ret))
> > +			return ERR_PTR(ret);
> > +	}
> >
> >   	/* merging chunks and putting them into the scatterlist */
> >   	cur_page = 0;
> > -	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
> > +	for (i = 0; i < chunks; i++) {
> >   		unsigned int j, chunk_size;
> >
> >   		/* look for the end of the current chunk */
> > @@ -425,19 +478,41 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> >   			seg_len += PAGE_SIZE;
> >   			if (seg_len >= max_segment ||
> >   			    page_to_pfn(pages[j]) !=
> > -			    page_to_pfn(pages[j - 1]) + 1)
> > +				    page_to_pfn(pages[j - 1]) + 1)
> >   				break;
> >   		}
> >
> >   		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
> > -		sg_set_page(s, pages[cur_page],
> > -			    min_t(unsigned long, size, chunk_size), offset);
> > +		chunk_size = min_t(unsigned long, size, chunk_size);
> > +		if (!i && prv_len) {
> > +			if (max_segment - prv->length >= chunk_size) {
> > +				sg_set_page(s, sg_page(s),
> > +					    s->length + chunk_size, s->offset);
> > +				goto next;
> > +			}
> > +		}
> > +
> > +		/* Pass how many chunks might be left */
> > +		s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask);
> > +		if (IS_ERR(s)) {
> > +			/*
> > +			 * Adjust entry length to be as before function was
> > +			 * Adjust the entry length to what it was before this
> > +			 * function was called.
> > +			if (prv_len)
> > +				prv->length = prv_len;
> > +			goto out;
> > +		}
> > +		sg_set_page(s, pages[cur_page], chunk_size, offset);
> > +		tmp_nents++;
> > +next:
> >   		size -= chunk_size;
> >   		offset = 0;
> >   		cur_page = j;
> >   	}
> > -
> > -	return 0;
> > +	sgt->nents = tmp_nents;
> > +out:
> > +	return s;
> >   }
> >   EXPORT_SYMBOL(__sg_alloc_table_from_pages);
> >
> > @@ -465,8 +540,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> >   			      unsigned int n_pages, unsigned int offset,
> >   			      unsigned long size, gfp_t gfp_mask)
> >   {
> > -	return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size,
> > -					   SCATTERLIST_MAX_SEGMENT, gfp_mask);
> > +	return PTR_ERR_OR_ZERO(__sg_alloc_table_from_pages(sgt, pages, n_pages,
> > +			offset, size, SCATTERLIST_MAX_SEGMENT, NULL, 0,
> > +			gfp_mask));
> >   }
> >   EXPORT_SYMBOL(sg_alloc_table_from_pages);
> >
> > diff --git a/lib/sg_pool.c b/lib/sg_pool.c
> > index db29e5c1f790..c449248bf5d5 100644
> > --- a/lib/sg_pool.c
> > +++ b/lib/sg_pool.c
> > @@ -129,7 +129,8 @@ int sg_alloc_table_chained(struct sg_table *table, int nents,
> >   		nents_first_chunk = 0;
> >   	}
> >
> > -	ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE,
> > +	memset(table, 0, sizeof(*table));
> > +	ret = __sg_alloc_table(table, NULL, nents, SG_CHUNK_SIZE,
> >   			       first_chunk, nents_first_chunk,
> >   			       GFP_ATOMIC, sg_pool_alloc);
> >   	if (unlikely(ret))
> > diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
> > index 0a1464181226..4899359a31ac 100644
> > --- a/tools/testing/scatterlist/main.c
> > +++ b/tools/testing/scatterlist/main.c
> > @@ -55,14 +55,13 @@ int main(void)
> >   	for (i = 0, test = tests; test->expected_segments; test++, i++) {
> >   		struct page *pages[MAX_PAGES];
> >   		struct sg_table st;
> > -		int ret;
> > +		struct scatterlist *sg;
> >
> >   		set_pages(pages, test->pfn, test->num_pages);
> >
> > -		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
> > -						  0, test->size, test->max_seg,
> > -						  GFP_KERNEL);
> > -		assert(ret == test->alloc_ret);
> > +		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
> > +				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
> > +		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);
>
> Some test coverage for relatively complex code like this would be very welcome.
> The testing framework is already there; even if it has bit-rotted a bit, it
> shouldn't be hard to fix.
>
> A few tests to check that append/grow works as expected, in terms of what the
> end table looks like given the initial state and some different page patterns
> added to it. And both crossing and not crossing into sg chaining scenarios.

This function is fundamental for all RDMA devices and we are fairly confident
that both the old and new flows are tested thoroughly.

We will add a proper test in the next kernel cycle.
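
Roughly, an append test in tools/testing/scatterlist/main.c could look like
the sketch below (illustrative only: it assumes set_pages() takes an array of
pfns as in the existing tests, and the expected nents value follows the merge
logic of this patch):

static void test_append(void)
{
	static const unsigned int pfns_a[] = { 1, 2, 3, 4 };
	static const unsigned int pfns_b[] = { 5, 6, 7, 8 };
	struct page *pages[MAX_PAGES];
	struct sg_table st = {};
	struct scatterlist *sg;

	/* First call: 4 contiguous pages, 4 more still to come */
	set_pages(pages, pfns_a, 4);
	sg = __sg_alloc_table_from_pages(&st, pages, 4, 0, 4 * PAGE_SIZE,
					 SCATTERLIST_MAX_SEGMENT, NULL, 4,
					 GFP_KERNEL);
	assert(!IS_ERR(sg));

	/* Second call: 4 more pages, contiguous with the first chunk */
	set_pages(pages, pfns_b, 4);
	sg = __sg_alloc_table_from_pages(&st, pages, 4, 0, 4 * PAGE_SIZE,
					 SCATTERLIST_MAX_SEGMENT, sg, 0,
					 GFP_KERNEL);
	assert(!IS_ERR(sg));

	/* Everything was contiguous, so it should have merged into one SGE */
	assert(st.nents == 1);

	sg_free_table(&st);
}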

Thanks

>
> Regards,
>
> Tvrtko
>
> >
> >   		if (test->alloc_ret)
> >   			continue;
> > --
> > 2.26.2
> >
> > _______________________________________________
> > Intel-gfx mailing list
> > Intel-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/intel-gfx
> >

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25  7:13       ` Leon Romanovsky
@ 2020-09-25 11:41         ` Tvrtko Ursulin
  -1 siblings, 0 replies; 43+ messages in thread
From: Tvrtko Ursulin @ 2020-09-25 11:41 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: linux-rdma, intel-gfx, Roland Scheidegger, dri-devel,
	Maor Gottlieb, David Airlie, Doug Ledford, VMware Graphics,
	Jason Gunthorpe, Maor Gottlieb, Christoph Hellwig


On 25/09/2020 08:13, Leon Romanovsky wrote:
> On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote:
>>
>> On 22/09/2020 09:39, Leon Romanovsky wrote:
>>> From: Maor Gottlieb <maorg@mellanox.com>
>>>
>>> Extend __sg_alloc_table_from_pages to support dynamic allocation of
>>> SG table from pages. It should be used by drivers that can't supply
>>> all the pages at one time.
>>>
>>> This function returns the last populated SGE in the table. Users should
>>> pass it as an argument to the function from the second call and forward.
>>> As before, nents will be equal to the number of populated SGEs (chunks).
>>
>> So it's appending and growing the "list", did I get that right? Sounds handy
>> indeed. Some comments/questions below.
> 
> Yes, we (RDMA) use this function to chain contiguous pages.

I will evaluate whether i915 could start using it. We have some loops which
build the table page by page and coalesce.
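
For context, the kind of open-coded loop I mean looks roughly like this
(illustrative only, not the actual i915 code; it assumes st was already
allocated with room for n_pages entries):

struct scatterlist *sg = st->sgl;
unsigned long last_pfn = 0;
unsigned int nents = 0;
unsigned int i;

for (i = 0; i < n_pages; i++) {
	struct page *page = pages[i];

	if (nents && page_to_pfn(page) == last_pfn + 1 &&
	    sg->length + PAGE_SIZE <= max_segment) {
		/* Contiguous with the previous page, grow the current SGE */
		sg->length += PAGE_SIZE;
	} else {
		/* Start a new SGE */
		if (nents)
			sg = sg_next(sg);
		sg_set_page(sg, page, PAGE_SIZE, 0);
		nents++;
	}
	last_pfn = page_to_pfn(page);
}
sg_mark_end(sg);
st->nents = st->orig_nents = nents;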

[snip]

>>>    	if (unlikely(ret))
>>> diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
>>> index 0a1464181226..4899359a31ac 100644
>>> --- a/tools/testing/scatterlist/main.c
>>> +++ b/tools/testing/scatterlist/main.c
>>> @@ -55,14 +55,13 @@ int main(void)
>>>    	for (i = 0, test = tests; test->expected_segments; test++, i++) {
>>>    		struct page *pages[MAX_PAGES];
>>>    		struct sg_table st;
>>> -		int ret;
>>> +		struct scatterlist *sg;
>>>
>>>    		set_pages(pages, test->pfn, test->num_pages);
>>>
>>> -		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
>>> -						  0, test->size, test->max_seg,
>>> -						  GFP_KERNEL);
>>> -		assert(ret == test->alloc_ret);
>>> +		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
>>> +				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
>>> +		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);
>>
>> Some test coverage for relatively complex code like this would be very welcome.
>> The testing framework is already there; even if it has bit-rotted a bit, it
>> shouldn't be hard to fix.
>>
>> A few tests to check that append/grow works as expected, in terms of what the
>> end table looks like given the initial state and some different page patterns
>> added to it. And both crossing and not crossing into sg chaining scenarios.
> 
> This function is fundamental for all RDMA devices and we are fairly confident
> that both the old and new flows are tested thoroughly.
> 
> We will add a proper test in the next kernel cycle.

The patch seems to add a requirement that all callers of
(__)sg_alloc_table_from_pages pass in a zeroed struct sg_table, which
wasn't the case so far.

Have you audited all the callers and/or fixed them? There seem to be
quite a few. My gut feeling is that the problem would be better solved in
lib/scatterlist.c rather than by making all the callers memset. It should be
possible if you make sure you only read st->nents if prv was passed in?
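
Something along these lines, perhaps (an untested sketch of that idea, on
top of this patch):

-	unsigned int tmp_nents = sgt->nents;
+	/* Only look at existing table state when we are actually appending */
+	unsigned int tmp_nents = prv ? sgt->nents : 0;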

I've fixed the unit test, and with this change the existing tests do
pass. Without zeroing, however, it fails on the very first, single-page
test scenario.

You can pull the unit test hacks from 
git://people.freedesktop.org/~tursulin/drm-intel sgtest.

Regards,

Tvrtko
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25  7:13       ` Leon Romanovsky
  (?)
@ 2020-09-25 11:55         ` Jason Gunthorpe
  -1 siblings, 0 replies; 43+ messages in thread
From: Jason Gunthorpe @ 2020-09-25 11:55 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Tvrtko Ursulin, Christoph Hellwig, Doug Ledford, linux-rdma,
	intel-gfx, Roland Scheidegger, dri-devel, David Airlie,
	VMware Graphics, Maor Gottlieb, Maor Gottlieb

On Fri, Sep 25, 2020 at 10:13:30AM +0300, Leon Romanovsky wrote:
> > > diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
> > > index 0a1464181226..4899359a31ac 100644
> > > +++ b/tools/testing/scatterlist/main.c
> > > @@ -55,14 +55,13 @@ int main(void)
> > >   	for (i = 0, test = tests; test->expected_segments; test++, i++) {
> > >   		struct page *pages[MAX_PAGES];
> > >   		struct sg_table st;
> > > -		int ret;
> > > +		struct scatterlist *sg;
> > >
> > >   		set_pages(pages, test->pfn, test->num_pages);
> > >
> > > -		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
> > > -						  0, test->size, test->max_seg,
> > > -						  GFP_KERNEL);
> > > -		assert(ret == test->alloc_ret);
> > > +		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
> > > +				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
> > > +		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);
> >
> > Some test coverage for relatively complex code would be very welcomed. Since
> > the testing framework is already there, even if it bit-rotted a bit, but
> > shouldn't be hard to fix.
> >
> > A few tests to check append/grow works as expected, in terms of how the end
> > table looks like given the initial state and some different page patterns
> > added to it. And both crossing and not crossing into sg chaining scenarios.
> 
> This function is basic for all RDMA devices and we are pretty confident
> that the old and new flows are tested thoroughly.

Well, since 0-day is reporting that __i915_gem_userptr_alloc_pages is
crashing on this, it probably does need some tests :\

Jason

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25 11:41         ` Tvrtko Ursulin
  (?)
@ 2020-09-25 11:58           ` Jason Gunthorpe
  -1 siblings, 0 replies; 43+ messages in thread
From: Jason Gunthorpe @ 2020-09-25 11:58 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Leon Romanovsky, Christoph Hellwig, Doug Ledford, linux-rdma,
	intel-gfx, Roland Scheidegger, dri-devel, David Airlie,
	VMware Graphics, Maor Gottlieb, Maor Gottlieb

On Fri, Sep 25, 2020 at 12:41:29PM +0100, Tvrtko Ursulin wrote:
> 
> On 25/09/2020 08:13, Leon Romanovsky wrote:
> > On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote:
> > > 
> > > On 22/09/2020 09:39, Leon Romanovsky wrote:
> > > > From: Maor Gottlieb <maorg@mellanox.com>
> > > > 
> > > > Extend __sg_alloc_table_from_pages to support dynamic allocation of
> > > > SG table from pages. It should be used by drivers that can't supply
> > > > all the pages at one time.
> > > > 
> > > > This function returns the last populated SGE in the table. Users should
> > > > pass it as an argument to the function from the second call and forward.
> > > > As before, nents will be equal to the number of populated SGEs (chunks).
> > > 
> > > So it's appending and growing the "list", did I get that right? Sounds handy
> > > indeed. Some comments/questions below.
> > 
> > Yes, we (RDMA) use this function to chain contiguous pages.
> 
> I will evaluate if i915 could start using it. We have some loops which build
> page by page and coalesce.

Christoph H doesn't like it, but if there are enough cases we should
really have a pin_user_pages_to_sg() rather than open-coding this all
over the place.

With THP the chance of getting a coalescing SG is much higher, and
everything is more efficient with larger SGEs.
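
Something along these lines, perhaps (pure sketch: the helper does not 
exist today, the names are illustrative, and unpinning on the error paths 
is omitted):

static struct scatterlist *pin_user_pages_to_sg(struct sg_table *sgt,
                unsigned long start, unsigned long n_pages,
                unsigned int max_segment, struct scatterlist *prev,
                unsigned long left_pages, gfp_t gfp)
{
        struct page **pages;
        struct scatterlist *sg;
        long pinned;

        pages = kvmalloc_array(n_pages, sizeof(*pages), gfp);
        if (!pages)
                return ERR_PTR(-ENOMEM);

        pinned = pin_user_pages_fast(start, n_pages, FOLL_WRITE, pages);
        if (pinned != n_pages) {
                sg = ERR_PTR(pinned < 0 ? pinned : -EFAULT);
                goto out;
        }

        /* Chain/coalesce this chunk onto whatever was built so far. */
        sg = __sg_alloc_table_from_pages(sgt, pages, n_pages, 0,
                                         n_pages * PAGE_SIZE, max_segment,
                                         prev, left_pages, gfp);
out:
        kvfree(pages);
        return sg;
}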

Jason

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25 11:41         ` Tvrtko Ursulin
  (?)
@ 2020-09-25 12:13           ` Maor Gottlieb
  -1 siblings, 0 replies; 43+ messages in thread
From: Maor Gottlieb @ 2020-09-25 12:13 UTC (permalink / raw)
  To: Tvrtko Ursulin, Leon Romanovsky
  Cc: Christoph Hellwig, Doug Ledford, Jason Gunthorpe, linux-rdma,
	intel-gfx, Roland Scheidegger, dri-devel, David Airlie,
	VMware Graphics, Maor Gottlieb


On 9/25/2020 2:41 PM, Tvrtko Ursulin wrote:
>
> On 25/09/2020 08:13, Leon Romanovsky wrote:
>> On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote:
>>>
>>> On 22/09/2020 09:39, Leon Romanovsky wrote:
>>>> From: Maor Gottlieb <maorg@mellanox.com>
>>>>
>>>> Extend __sg_alloc_table_from_pages to support dynamic allocation of
>>>> SG table from pages. It should be used by drivers that can't supply
>>>> all the pages at one time.
>>>>
>>>> This function returns the last populated SGE in the table. Users 
>>>> should
>>>> pass it as an argument to the function from the second call and 
>>>> forward.
>>>> As before, nents will be equal to the number of populated SGEs 
>>>> (chunks).
>>>
>>> So it's appending and growing the "list", did I get that right? 
>>> Sounds handy
>>> indeed. Some comments/questions below.
>>
>> Yes, we (RDMA) use this function to chain contiguous pages.
>
> I will evaluate if i915 could start using it. We have some loops which 
> build page by page and coalesce.
>
> [snip]
>
>>>>        if (unlikely(ret))
>>>> diff --git a/tools/testing/scatterlist/main.c 
>>>> b/tools/testing/scatterlist/main.c
>>>> index 0a1464181226..4899359a31ac 100644
>>>> --- a/tools/testing/scatterlist/main.c
>>>> +++ b/tools/testing/scatterlist/main.c
>>>> @@ -55,14 +55,13 @@ int main(void)
>>>>        for (i = 0, test = tests; test->expected_segments; test++, 
>>>> i++) {
>>>>            struct page *pages[MAX_PAGES];
>>>>            struct sg_table st;
>>>> -        int ret;
>>>> +        struct scatterlist *sg;
>>>>
>>>>            set_pages(pages, test->pfn, test->num_pages);
>>>>
>>>> -        ret = __sg_alloc_table_from_pages(&st, pages, 
>>>> test->num_pages,
>>>> -                          0, test->size, test->max_seg,
>>>> -                          GFP_KERNEL);
>>>> -        assert(ret == test->alloc_ret);
>>>> +        sg = __sg_alloc_table_from_pages(&st, pages, 
>>>> test->num_pages, 0,
>>>> +                test->size, test->max_seg, NULL, 0, GFP_KERNEL);
>>>> +        assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);
>>>
>>> Some test coverage for relatively complex code would be very 
>>> welcomed. Since
>>> the testing framework is already there, even if it bit-rotted a bit, 
>>> but
>>> shouldn't be hard to fix.
>>>
>>> A few tests to check append/grow works as expected, in terms of how 
>>> the end
>>> table looks like given the initial state and some different page 
>>> patterns
>>> added to it. And both crossing and not crossing into sg chaining 
>>> scenarios.
>>
>> This function is basic for all RDMA devices and we are pretty confident
>> that the old and new flows are tested thoroughly.
>>
>> We will add proper test in next kernel cycle.
>
> Patch seems to be adding a requirement that all callers of 
> (__)sg_alloc_table_from_pages pass in zeroed struct sg_table, which 
> wasn't the case so far.
>
> Have you audited all the callers and/or fixed them? There seems to be 
> quite a few. Gut feel says problem would probably be better solved in 
> lib/scatterlist.c and not by making all the callers memset. Should be 
> possible if you make sure you only read st->nents if prev was passed in?
>
> I've fixed the unit test and with this change the existing tests do 
> pass. But without zeroing it does fail on the very first, single page, 
> test scenario.
>
> You can pull the unit test hacks from 
> git://people.freedesktop.org/~tursulin/drm-intel sgtest.
>
> Regards,
>
> Tvrtko

Thanks for finding this issue. In the regular flow, 
__sg_alloc_table_from_pages memsets the sg_table struct, but currently 
the code accesses this struct before that. It will be fixed internally in 
lib/scatterlist.c.
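
Something along the lines Tvrtko suggested should work (sketch only, not 
the actual fix; names are illustrative):

        /* Inside __sg_alloc_table_from_pages(), roughly: */
        if (!prev) {
                /* Fresh table: initialise it here instead of relying on
                 * the caller to pass in a zeroed struct sg_table. */
                memset(sgt, 0, sizeof(*sgt));
        } else {
                /* Appending: sgt->nents is valid, continue after prev. */
                sg = prev;
        }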


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25 11:55         ` Jason Gunthorpe
  (?)
@ 2020-09-25 12:18           ` Maor Gottlieb
  -1 siblings, 0 replies; 43+ messages in thread
From: Maor Gottlieb @ 2020-09-25 12:18 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: Tvrtko Ursulin, Christoph Hellwig, Doug Ledford, linux-rdma,
	intel-gfx, Roland Scheidegger, dri-devel, David Airlie,
	VMware Graphics, Maor Gottlieb


On 9/25/2020 2:55 PM, Jason Gunthorpe wrote:
> On Fri, Sep 25, 2020 at 10:13:30AM +0300, Leon Romanovsky wrote:
>>>> diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
>>>> index 0a1464181226..4899359a31ac 100644
>>>> +++ b/tools/testing/scatterlist/main.c
>>>> @@ -55,14 +55,13 @@ int main(void)
>>>>    	for (i = 0, test = tests; test->expected_segments; test++, i++) {
>>>>    		struct page *pages[MAX_PAGES];
>>>>    		struct sg_table st;
>>>> -		int ret;
>>>> +		struct scatterlist *sg;
>>>>
>>>>    		set_pages(pages, test->pfn, test->num_pages);
>>>>
>>>> -		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
>>>> -						  0, test->size, test->max_seg,
>>>> -						  GFP_KERNEL);
>>>> -		assert(ret == test->alloc_ret);
>>>> +		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
>>>> +				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
>>>> +		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);
>>> Some test coverage for relatively complex code would be very welcomed. Since
>>> the testing framework is already there, even if it bit-rotted a bit, but
>>> shouldn't be hard to fix.
>>>
>>> A few tests to check append/grow works as expected, in terms of how the end
>>> table looks like given the initial state and some different page patterns
>>> added to it. And both crossing and not crossing into sg chaining scenarios.
>> This function is basic for all RDMA devices and we are pretty confident
>> that the old and new flows are tested thoroughly.
> Well, since 0-day is reporting that __i915_gem_userptr_alloc_pages is
> crashing on this, it probably does need some tests :\
>
> Jason

It is crashing in the regular old flow, which is already tested.
However, I will add more tests.
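
For example, something like this could exercise the append path against 
the existing harness (a sketch only; it assumes num_pages > 1, page-sized 
chunks, and that coalescing keeps working across the call boundary):

                /* Hypothetical append test: feed the first page, then the
                 * rest, and expect the same segments as a single call. */
                struct sg_table st = {};
                struct scatterlist *sg;

                set_pages(pages, test->pfn, test->num_pages);

                sg = __sg_alloc_table_from_pages(&st, pages, 1, 0, PAGE_SIZE,
                                test->max_seg, NULL, test->num_pages - 1,
                                GFP_KERNEL);
                assert(PTR_ERR_OR_ZERO(sg) == 0);
                sg = __sg_alloc_table_from_pages(&st, pages + 1,
                                test->num_pages - 1, 0,
                                (test->num_pages - 1) * PAGE_SIZE,
                                test->max_seg, sg, 0, GFP_KERNEL);
                assert(PTR_ERR_OR_ZERO(sg) == 0);
                assert(st.nents == test->expected_segments);
                sg_free_table(&st);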

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25 11:58           ` Jason Gunthorpe
  (?)
@ 2020-09-25 12:29             ` Tvrtko Ursulin
  -1 siblings, 0 replies; 43+ messages in thread
From: Tvrtko Ursulin @ 2020-09-25 12:29 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Leon Romanovsky, Christoph Hellwig, Doug Ledford, linux-rdma,
	intel-gfx, Roland Scheidegger, dri-devel, David Airlie,
	VMware Graphics, Maor Gottlieb, Maor Gottlieb


On 25/09/2020 12:58, Jason Gunthorpe wrote:
> On Fri, Sep 25, 2020 at 12:41:29PM +0100, Tvrtko Ursulin wrote:
>>
>> On 25/09/2020 08:13, Leon Romanovsky wrote:
>>> On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote:
>>>>
>>>> On 22/09/2020 09:39, Leon Romanovsky wrote:
>>>>> From: Maor Gottlieb <maorg@mellanox.com>
>>>>>
>>>>> Extend __sg_alloc_table_from_pages to support dynamic allocation of
>>>>> SG table from pages. It should be used by drivers that can't supply
>>>>> all the pages at one time.
>>>>>
>>>>> This function returns the last populated SGE in the table. Users should
>>>>> pass it as an argument to the function from the second call and forward.
>>>>> As before, nents will be equal to the number of populated SGEs (chunks).
>>>>
>>>> So it's appending and growing the "list", did I get that right? Sounds handy
>>>> indeed. Some comments/questions below.
>>>
>>> Yes, we (RDMA) use this function to chain contiguous pages.
>>
>> I will evaluate if i915 could start using it. We have some loops which build
>> page by page and coalesce.
> 
> Christoph H doesn't like it, but if there are enough cases we should
> really have a pin_user_pages_to_sg() rather than open code this all
> over the place.
> 
> With THP the chance of getting a coalescing SG is much higher, and
> everything is more efficient with larger SGEs.

Right, I was actually referring to i915 sites where we build sg tables 
out of shmem and plain kernel pages. In those areas we have some 
open-coded coalescing loops (see for instance our shmem_get_pages). Plus 
a local "trim" to discard the unused entries, since we allocate 
pessimistically, not knowing how coalescing will pan out. This kind of 
core function which appends pages could replace some of that. Maybe it 
would be slightly less efficient, but I will pencil it in to at least 
evaluate it.
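
For context, those loops look roughly like this (simplified from memory, 
not the actual i915 code; names are illustrative):

        /* Table is allocated pessimistically, then filled while
         * coalescing physically contiguous pages as we go. */
        sg = st->sgl;
        st->nents = 0;
        last_pfn = 0;
        for (i = 0; i < n_pages; i++) {
                if (st->nents &&
                    page_to_pfn(pages[i]) == last_pfn + 1 &&
                    sg->length + PAGE_SIZE <= max_segment) {
                        sg->length += PAGE_SIZE;
                } else {
                        if (st->nents)
                                sg = sg_next(sg);
                        st->nents++;
                        sg_set_page(sg, pages[i], PAGE_SIZE, 0);
                }
                last_pfn = page_to_pfn(pages[i]);
        }
        sg_mark_end(sg);
        /* ...followed by the local trim of the unused entries. */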

Otherwise I do agree that coalescing is a win and in the past I have 
measured savings in a few MiB range just for struct scatterlist storage.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25 12:18           ` Maor Gottlieb
  (?)
@ 2020-09-25 12:33             ` Tvrtko Ursulin
  -1 siblings, 0 replies; 43+ messages in thread
From: Tvrtko Ursulin @ 2020-09-25 12:33 UTC (permalink / raw)
  To: Maor Gottlieb, Jason Gunthorpe, Leon Romanovsky
  Cc: Christoph Hellwig, Doug Ledford, linux-rdma, intel-gfx,
	Roland Scheidegger, dri-devel, David Airlie, VMware Graphics,
	Maor Gottlieb


On 25/09/2020 13:18, Maor Gottlieb wrote:
> On 9/25/2020 2:55 PM, Jason Gunthorpe wrote:
>> On Fri, Sep 25, 2020 at 10:13:30AM +0300, Leon Romanovsky wrote:
>>>>> diff --git a/tools/testing/scatterlist/main.c 
>>>>> b/tools/testing/scatterlist/main.c
>>>>> index 0a1464181226..4899359a31ac 100644
>>>>> +++ b/tools/testing/scatterlist/main.c
>>>>> @@ -55,14 +55,13 @@ int main(void)
>>>>>        for (i = 0, test = tests; test->expected_segments; test++, 
>>>>> i++) {
>>>>>            struct page *pages[MAX_PAGES];
>>>>>            struct sg_table st;
>>>>> -        int ret;
>>>>> +        struct scatterlist *sg;
>>>>>
>>>>>            set_pages(pages, test->pfn, test->num_pages);
>>>>>
>>>>> -        ret = __sg_alloc_table_from_pages(&st, pages, 
>>>>> test->num_pages,
>>>>> -                          0, test->size, test->max_seg,
>>>>> -                          GFP_KERNEL);
>>>>> -        assert(ret == test->alloc_ret);
>>>>> +        sg = __sg_alloc_table_from_pages(&st, pages, 
>>>>> test->num_pages, 0,
>>>>> +                test->size, test->max_seg, NULL, 0, GFP_KERNEL);
>>>>> +        assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);
>>>> Some test coverage for relatively complex code would be very 
>>>> welcomed. Since
>>>> the testing framework is already there, even if it bit-rotted a bit, 
>>>> but
>>>> shouldn't be hard to fix.
>>>>
>>>> A few tests to check append/grow works as expected, in terms of how 
>>>> the end
>>>> table looks like given the initial state and some different page 
>>>> patterns
>>>> added to it. And both crossing and not crossing into sg chaining 
>>>> scenarios.
>>> This function is basic for all RDMA devices and we are pretty confident
>>> that the old and new flows are tested thoroughly.
>> Well, since 0-day is reporting that __i915_gem_userptr_alloc_pages is
>> crashing on this, it probably does need some tests :\
>>
>> Jason
> 
> It is crashing in the regular old flow which already tested.
> However, I will add more tests.

Do you want to take some of the commits from 
git://people.freedesktop.org/~tursulin/drm-intel sgtest? It would be 
fine by me. I can clean up the commit messages if you want.

https://cgit.freedesktop.org/~tursulin/drm-intel/commit/?h=sgtest&id=79102f4d795c4769431fc44a6cf7ed5c5b1b5214 
- this one undoes the bit rot and makes the test just work on the 
current kernel.

https://cgit.freedesktop.org/~tursulin/drm-intel/commit/?h=sgtest&id=b09bfe80486c4d93ee1d8ae17d5b46397b1c6ee1 
- this one you should probably squash into your patch. Minus the zeroing 
of struct sg_table since that would hide the issue.

https://cgit.freedesktop.org/~tursulin/drm-intel/commit/?h=sgtest&id=97f5df37e612f798ced90541eece13e2ef639181 
- final commit is optional but I guess handy for debugging.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25 12:29             ` Tvrtko Ursulin
  (?)
@ 2020-09-25 12:34               ` Jason Gunthorpe
  -1 siblings, 0 replies; 43+ messages in thread
From: Jason Gunthorpe @ 2020-09-25 12:34 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Leon Romanovsky, Christoph Hellwig, Doug Ledford, linux-rdma,
	intel-gfx, Roland Scheidegger, dri-devel, David Airlie,
	VMware Graphics, Maor Gottlieb, Maor Gottlieb

On Fri, Sep 25, 2020 at 01:29:49PM +0100, Tvrtko Ursulin wrote:
> 
> On 25/09/2020 12:58, Jason Gunthorpe wrote:
> > On Fri, Sep 25, 2020 at 12:41:29PM +0100, Tvrtko Ursulin wrote:
> > > 
> > > On 25/09/2020 08:13, Leon Romanovsky wrote:
> > > > On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote:
> > > > > 
> > > > > On 22/09/2020 09:39, Leon Romanovsky wrote:
> > > > > > From: Maor Gottlieb <maorg@mellanox.com>
> > > > > > 
> > > > > > Extend __sg_alloc_table_from_pages to support dynamic allocation of
> > > > > > SG table from pages. It should be used by drivers that can't supply
> > > > > > all the pages at one time.
> > > > > > 
> > > > > > This function returns the last populated SGE in the table. Users should
> > > > > > pass it as an argument to the function from the second call and forward.
> > > > > > As before, nents will be equal to the number of populated SGEs (chunks).
> > > > > 
> > > > > So it's appending and growing the "list", did I get that right? Sounds handy
> > > > > indeed. Some comments/questions below.
> > > > 
> > > > Yes, we (RDMA) use this function to chain contiguous pages.
> > > 
> > > I will evaluate if i915 could start using it. We have some loops which build
> > > page by page and coalesce.
> > 
> > Christoph H doesn't like it, but if there are enough cases we should
> > really have a pin_user_pages_to_sg() rather than open code this all
> > over the place.
> > 
> > With THP the chance of getting a coalescing SG is much higher, and
> > everything is more efficient with larger SGEs.
> 
> Right, I was actually referring to i915 sites where we build sg tables out
> of shmem and plain kernel pages. In those areas we have some open coded
> coalescing loops (see for instance our shmem_get_pages). Plus a local "trim"
> to discard the unused entries, since we allocate pessimistically not knowing
> how coalescing will pan out. This kind of core function which appends pages
> could replace some of that. Maybe it would be slightly less efficient but I
> will pencil in to at least evaluate it.
> 
> Otherwise I do agree that coalescing is a win and in the past I have
> measured savings in a few MiB range just for struct scatterlist storage.
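
For reference, those open-coded coalescing loops typically have roughly this
shape (a simplified sketch, not lifted from i915; it assumes pages[],
n_pages > 0, a device segment limit max_seg, and an sgt pessimistically
pre-allocated with n_pages entries):

  struct scatterlist *sg = sgt->sgl;
  unsigned int nents = 0, i;

  for (i = 0; i < n_pages; i++) {
          if (nents &&
              page_to_pfn(pages[i]) == page_to_pfn(pages[i - 1]) + 1 &&
              sg->length + PAGE_SIZE <= max_seg) {
                  /* physically contiguous with the current chunk: grow it */
                  sg->length += PAGE_SIZE;
          } else {
                  /* otherwise start a new entry in the over-allocated table */
                  if (nents)
                          sg = sg_next(sg);
                  sg_set_page(sg, pages[i], PAGE_SIZE, 0);
                  nents++;
          }
  }
  sg_mark_end(sg);
  sgt->nents = nents;
  /* a local "trim" helper then discards the unused tail entries */
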

I think the eventual dream is to have a pin_user_pages_bvec or similar
that is integrated into the GUP logic, so it avoids all the extra work
and just allocates pages of bvecs on the fly. No extra step through a
linear array of page *'s.

Starting to structure things to take advantage of that makes some
sense.

Jason

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25 12:33             ` Tvrtko Ursulin
  (?)
@ 2020-09-25 13:39               ` Maor Gottlieb
  -1 siblings, 0 replies; 43+ messages in thread
From: Maor Gottlieb @ 2020-09-25 13:39 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jason Gunthorpe, Leon Romanovsky
  Cc: Christoph Hellwig, Doug Ledford, linux-rdma, intel-gfx,
	Roland Scheidegger, dri-devel, David Airlie, VMware Graphics,
	Maor Gottlieb


On 9/25/2020 3:33 PM, Tvrtko Ursulin wrote:
>
> On 25/09/2020 13:18, Maor Gottlieb wrote:
>> On 9/25/2020 2:55 PM, Jason Gunthorpe wrote:
>>> On Fri, Sep 25, 2020 at 10:13:30AM +0300, Leon Romanovsky wrote:
>>>>>> diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
>>>>>> index 0a1464181226..4899359a31ac 100644
>>>>>> +++ b/tools/testing/scatterlist/main.c
>>>>>> @@ -55,14 +55,13 @@ int main(void)
>>>>>>        for (i = 0, test = tests; test->expected_segments; test++, i++) {
>>>>>>            struct page *pages[MAX_PAGES];
>>>>>>            struct sg_table st;
>>>>>> -        int ret;
>>>>>> +        struct scatterlist *sg;
>>>>>>
>>>>>>            set_pages(pages, test->pfn, test->num_pages);
>>>>>>
>>>>>> -        ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
>>>>>> -                          0, test->size, test->max_seg,
>>>>>> -                          GFP_KERNEL);
>>>>>> -        assert(ret == test->alloc_ret);
>>>>>> +        sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
>>>>>> +                test->size, test->max_seg, NULL, 0, GFP_KERNEL);
>>>>>> +        assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);
>>>>> Some test coverage for relatively complex code would be very
>>>>> welcome. The testing framework is already there; even if it has
>>>>> bit-rotted a bit, it shouldn't be hard to fix.
>>>>>
>>>>> A few tests to check that append/grow works as expected, in terms
>>>>> of what the final table looks like given the initial state and the
>>>>> different page patterns added to it. And both crossing and not
>>>>> crossing into sg chaining scenarios.
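
An append test along these lines could be added to main.c (an illustrative
sketch only, reusing the file's existing pfn()/set_pages() helpers and the
extended signature from the hunk above; the use of SCATTERLIST_MAX_SEGMENT as
the segment limit and the expected chunk count are assumptions, not part of
the posted series):

  struct page *pages[8];
  struct sg_table st;
  struct scatterlist *sg;

  /* eight physically contiguous pages, added in two batches of four */
  set_pages(pages, pfn(0, 1, 2, 3, 4, 5, 6, 7), 8);

  /* first call: four more pages are still expected (left_pages = 4) */
  sg = __sg_alloc_table_from_pages(&st, pages, 4, 0, 4 * PAGE_SIZE,
                                   SCATTERLIST_MAX_SEGMENT, NULL, 4, GFP_KERNEL);
  assert(PTR_ERR_OR_ZERO(sg) == 0);

  /* second call: append the remaining pages after the returned SGE */
  sg = __sg_alloc_table_from_pages(&st, pages + 4, 4, 0, 4 * PAGE_SIZE,
                                   SCATTERLIST_MAX_SEGMENT, sg, 0, GFP_KERNEL);
  assert(PTR_ERR_OR_ZERO(sg) == 0);

  /* contiguous pfns should coalesce into a single chunk */
  assert(st.nents == 1);
  sg_free_table(&st);
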
>>>> This function is basic for all RDMA devices and we are pretty 
>>>> confident
>>>> that the old and new flows are tested thoroughly.
>>> Well, since 0-day is reporting that __i915_gem_userptr_alloc_pages is
>>> crashing on this, it probably does need some tests :\
>>>
>>> Jason
>>
>> It is crashing in the regular old flow, which was already tested.
>> However, I will add more tests.
>
> Do you want to take some of the commits from 
> git://people.freedesktop.org/~tursulin/drm-intel sgtest? It would be 
> fine by me. I can clean up the commit messages if you want.

I would very much appreciate it. Thanks
>
> https://cgit.freedesktop.org/~tursulin/drm-intel/commit/?h=sgtest&id=79102f4d795c4769431fc44a6cf7ed5c5b1b5214 
> - this one undoes the bit rot and makes the test just work on the 
> current kernel.
>
> https://cgit.freedesktop.org/~tursulin/drm-intel/commit/?h=sgtest&id=b09bfe80486c4d93ee1d8ae17d5b46397b1c6ee1 
> - this one you should probably squash into your patch, minus the zeroing
> of struct sg_table since that would hide the issue.
>
> https://cgit.freedesktop.org/~tursulin/drm-intel/commit/?h=sgtest&id=97f5df37e612f798ced90541eece13e2ef639181 
> - final commit is optional but I guess handy for debugging.
>
> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
  2020-09-25 13:39               ` Maor Gottlieb
  (?)
@ 2020-09-25 13:54                 ` Tvrtko Ursulin
  -1 siblings, 0 replies; 43+ messages in thread
From: Tvrtko Ursulin @ 2020-09-25 13:54 UTC (permalink / raw)
  To: Maor Gottlieb, Jason Gunthorpe, Leon Romanovsky
  Cc: Christoph Hellwig, Doug Ledford, linux-rdma, intel-gfx,
	Roland Scheidegger, dri-devel, David Airlie, VMware Graphics,
	Maor Gottlieb


On 25/09/2020 14:39, Maor Gottlieb wrote:
> 
> On 9/25/2020 3:33 PM, Tvrtko Ursulin wrote:
>>
>> On 25/09/2020 13:18, Maor Gottlieb wrote:
>>> On 9/25/2020 2:55 PM, Jason Gunthorpe wrote:
>>>> On Fri, Sep 25, 2020 at 10:13:30AM +0300, Leon Romanovsky wrote:
>>>>>>> diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
>>>>>>> index 0a1464181226..4899359a31ac 100644
>>>>>>> +++ b/tools/testing/scatterlist/main.c
>>>>>>> @@ -55,14 +55,13 @@ int main(void)
>>>>>>>        for (i = 0, test = tests; test->expected_segments; test++, i++) {
>>>>>>>            struct page *pages[MAX_PAGES];
>>>>>>>            struct sg_table st;
>>>>>>> -        int ret;
>>>>>>> +        struct scatterlist *sg;
>>>>>>>
>>>>>>>            set_pages(pages, test->pfn, test->num_pages);
>>>>>>>
>>>>>>> -        ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
>>>>>>> -                          0, test->size, test->max_seg,
>>>>>>> -                          GFP_KERNEL);
>>>>>>> -        assert(ret == test->alloc_ret);
>>>>>>> +        sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
>>>>>>> +                test->size, test->max_seg, NULL, 0, GFP_KERNEL);
>>>>>>> +        assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);
>>>>>> Some test coverage for relatively complex code would be very
>>>>>> welcome. The testing framework is already there; even if it has
>>>>>> bit-rotted a bit, it shouldn't be hard to fix.
>>>>>>
>>>>>> A few tests to check that append/grow works as expected, in terms
>>>>>> of what the final table looks like given the initial state and the
>>>>>> different page patterns added to it. And both crossing and not
>>>>>> crossing into sg chaining scenarios.
>>>>> This function is basic for all RDMA devices and we are pretty 
>>>>> confident
>>>>> that the old and new flows are tested thoroughly.
>>>> Well, since 0-day is reporting that __i915_gem_userptr_alloc_pages is
>>>> crashing on this, it probably does need some tests :\
>>>>
>>>> Jason
>>>
>>> It is crashing in the regular old flow, which was already tested.
>>> However, I will add more tests.
>>
>> Do you want to take some of the commits from 
>> git://people.freedesktop.org/~tursulin/drm-intel sgtest? It would be 
>> fine by me. I can clean up the commit messages if you want.
> 
> I would very much appreciate it. Thanks

I've pushed a branch with tidied commit messages, re-ordered a bit, to
the same location. You can pull it and include these in your series:

  tools/testing/scatterlist: Rejuvenate bit-rotten test
  tools/testing/scatterlist: Show errors in human readable form

And "test fixes for sg append" you can squash (minus the sg_table 
zeroing) into your patch.

If this plan does not work for you, I can send two of my patches to lkml 
independently. Whatever you prefer.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2020-09-28  7:07 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-22  8:39 [PATCH rdma-next v3 0/2] Dynamicaly allocate SG table from the pages Leon Romanovsky
2020-09-22  8:39 ` [Intel-gfx] " Leon Romanovsky
2020-09-22  8:39 ` Leon Romanovsky
2020-09-22  8:39 ` [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages Leon Romanovsky
2020-09-22  8:39   ` [Intel-gfx] " Leon Romanovsky
2020-09-22  8:39   ` Leon Romanovsky
2020-09-23  5:42   ` Christoph Hellwig
2020-09-23  5:42     ` [Intel-gfx] " Christoph Hellwig
2020-09-24  8:21   ` Tvrtko Ursulin
2020-09-24  8:21     ` Tvrtko Ursulin
2020-09-25  7:13     ` Leon Romanovsky
2020-09-25  7:13       ` Leon Romanovsky
2020-09-25  7:13       ` Leon Romanovsky
2020-09-25 11:41       ` Tvrtko Ursulin
2020-09-25 11:41         ` Tvrtko Ursulin
2020-09-25 11:58         ` Jason Gunthorpe
2020-09-25 11:58           ` Jason Gunthorpe
2020-09-25 11:58           ` Jason Gunthorpe
2020-09-25 12:29           ` Tvrtko Ursulin
2020-09-25 12:29             ` Tvrtko Ursulin
2020-09-25 12:29             ` Tvrtko Ursulin
2020-09-25 12:34             ` Jason Gunthorpe
2020-09-25 12:34               ` Jason Gunthorpe
2020-09-25 12:34               ` Jason Gunthorpe
2020-09-25 12:13         ` Maor Gottlieb
2020-09-25 12:13           ` Maor Gottlieb
2020-09-25 12:13           ` Maor Gottlieb
2020-09-25 11:55       ` Jason Gunthorpe
2020-09-25 11:55         ` Jason Gunthorpe
2020-09-25 11:55         ` Jason Gunthorpe
2020-09-25 12:18         ` Maor Gottlieb
2020-09-25 12:18           ` Maor Gottlieb
2020-09-25 12:18           ` Maor Gottlieb
2020-09-25 12:33           ` Tvrtko Ursulin
2020-09-25 12:33             ` Tvrtko Ursulin
2020-09-25 12:33             ` Tvrtko Ursulin
2020-09-25 13:39             ` Maor Gottlieb
2020-09-25 13:39               ` Maor Gottlieb
2020-09-25 13:39               ` Maor Gottlieb
2020-09-25 13:54               ` Tvrtko Ursulin
2020-09-25 13:54                 ` Tvrtko Ursulin
2020-09-25 13:54                 ` Tvrtko Ursulin
2020-09-22  8:39 ` [PATCH rdma-next v3 2/2] RDMA/umem: Move to allocate " Leon Romanovsky
