Date: Fri, 25 Sep 2020 10:13:30 +0300
From: Leon Romanovsky
To: Tvrtko Ursulin
Cc: Christoph Hellwig, Doug Ledford, Jason Gunthorpe,
	linux-rdma@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	Roland Scheidegger, dri-devel@lists.freedesktop.org, David Airlie,
	VMware Graphics, Maor Gottlieb
Subject: Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages
Message-ID: <20200925071330.GA2280698@unreal>
References: <20200922083958.2150803-1-leon@kernel.org>
	<20200922083958.2150803-2-leon@kernel.org>
	<118a03ef-d160-e202-81cc-16c9c39359fc@linux.intel.com>
In-Reply-To: <118a03ef-d160-e202-81cc-16c9c39359fc@linux.intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote:
>
> On 22/09/2020 09:39, Leon Romanovsky wrote:
> > From: Maor Gottlieb
> >
> > Extend __sg_alloc_table_from_pages to support dynamic allocation of
> > SG table from pages. It should be used by drivers that can't supply
> > all the pages at one time.
> >
> > This function returns the last populated SGE in the table. Users should
> > pass it as an argument to the function from the second call and forward.
> > As before, nents will be equal to the number of populated SGEs (chunks).
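(Illustration only, not part of the original mail: a minimal sketch of the
call pattern described above, assuming a hypothetical BATCH_PAGES chunk size
and a caller-provided "pages" array. The first call passes prv = NULL; each
later call passes the SGE returned by the previous call plus the number of
pages still left to add.)

	struct sg_table sgt;
	struct scatterlist *last = NULL;	/* NULL on the first call */
	unsigned long left = total_pages;	/* pages not yet appended */

	while (left) {
		unsigned int batch = min_t(unsigned long, left, BATCH_PAGES);

		left -= batch;
		last = __sg_alloc_table_from_pages(&sgt, pages, batch, 0,
						   (unsigned long)batch << PAGE_SHIFT,
						   SCATTERLIST_MAX_SEGMENT,
						   last, left, GFP_KERNEL);
		if (IS_ERR(last)) {
			/* On failure the caller frees whatever was built so far. */
			sg_free_table(&sgt);
			return PTR_ERR(last);
		}
		pages += batch;
	}
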
> > So it's appending and growing the "list", did I get that right? Sounds handy > indeed. Some comments/questions below. Yes, we (RDMA) use this function to chain contiguous pages. > > > > > With this new extension, drivers can benefit the optimization of merging > > contiguous pages without a need to allocate all pages in advance and > > hold them in a large buffer. > > > > E.g. with the Infiniband driver that allocates a single page for hold > > the > > pages. For 1TB memory registration, the temporary buffer would consume > > only > > 4KB, instead of 2GB. > > > > Signed-off-by: Maor Gottlieb > > Signed-off-by: Leon Romanovsky > > --- > > drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 12 +- > > drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 15 +- > > include/linux/scatterlist.h | 43 +++--- > > lib/scatterlist.c | 158 +++++++++++++++----- > > lib/sg_pool.c | 3 +- > > tools/testing/scatterlist/main.c | 9 +- > > 6 files changed, 163 insertions(+), 77 deletions(-) > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c > > index 12b30075134a..f2eaed6aca3d 100644 > > --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c > > @@ -403,6 +403,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj, > > unsigned int max_segment = i915_sg_segment_size(); > > struct sg_table *st; > > unsigned int sg_page_sizes; > > + struct scatterlist *sg; > > int ret; > > > > st = kmalloc(sizeof(*st), GFP_KERNEL); > > @@ -410,13 +411,12 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj, > > return ERR_PTR(-ENOMEM); > > > > alloc_table: > > - ret = __sg_alloc_table_from_pages(st, pvec, num_pages, > > - 0, num_pages << PAGE_SHIFT, > > - max_segment, > > - GFP_KERNEL); > > - if (ret) { > > + sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0, > > + num_pages << PAGE_SHIFT, max_segment, > > + NULL, 0, GFP_KERNEL); > > + if (IS_ERR(sg)) { > > kfree(st); > > - return ERR_PTR(ret); > > + return ERR_CAST(sg); > > } > > > > ret = i915_gem_gtt_prepare_pages(obj, st); > > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c > > index ab524ab3b0b4..f22acd398b1f 100644 > > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c > > +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c > > @@ -419,6 +419,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt) > > int ret = 0; > > static size_t sgl_size; > > static size_t sgt_size; > > + struct scatterlist *sg; > > > > if (vmw_tt->mapped) > > return 0; > > @@ -441,13 +442,15 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt) > > if (unlikely(ret != 0)) > > return ret; > > > > - ret = __sg_alloc_table_from_pages > > - (&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0, > > - (unsigned long) vsgt->num_pages << PAGE_SHIFT, > > - dma_get_max_seg_size(dev_priv->dev->dev), > > - GFP_KERNEL); > > - if (unlikely(ret != 0)) > > + sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages, > > + vsgt->num_pages, 0, > > + (unsigned long) vsgt->num_pages << PAGE_SHIFT, > > + dma_get_max_seg_size(dev_priv->dev->dev), > > + NULL, 0, GFP_KERNEL); > > + if (IS_ERR(sg)) { > > + ret = PTR_ERR(sg); > > goto out_sg_alloc_fail; > > + } > > > > if (vsgt->num_pages > vmw_tt->sgt.nents) { > > uint64_t over_alloc = > > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h > > index 45cf7b69d852..c24cc667b56b 100644 > > --- a/include/linux/scatterlist.h > > +++ b/include/linux/scatterlist.h > > @@ -165,6 +165,22 
@@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf, > > #define for_each_sgtable_dma_sg(sgt, sg, i) \ > > for_each_sg((sgt)->sgl, sg, (sgt)->nents, i) > > > > +static inline void __sg_chain(struct scatterlist *chain_sg, > > + struct scatterlist *sgl) > > +{ > > + /* > > + * offset and length are unused for chain entry. Clear them. > > + */ > > + chain_sg->offset = 0; > > + chain_sg->length = 0; > > + > > + /* > > + * Set lowest bit to indicate a link pointer, and make sure to clear > > + * the termination bit if it happens to be set. > > + */ > > + chain_sg->page_link = ((unsigned long) sgl | SG_CHAIN) & ~SG_END; > > +} > > + > > /** > > * sg_chain - Chain two sglists together > > * @prv: First scatterlist > > @@ -178,18 +194,7 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf, > > static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents, > > struct scatterlist *sgl) > > { > > - /* > > - * offset and length are unused for chain entry. Clear them. > > - */ > > - prv[prv_nents - 1].offset = 0; > > - prv[prv_nents - 1].length = 0; > > - > > - /* > > - * Set lowest bit to indicate a link pointer, and make sure to clear > > - * the termination bit if it happens to be set. > > - */ > > - prv[prv_nents - 1].page_link = ((unsigned long) sgl | SG_CHAIN) > > - & ~SG_END; > > + __sg_chain(&prv[prv_nents - 1], sgl); > > } > > > > /** > > @@ -283,13 +288,15 @@ typedef void (sg_free_fn)(struct scatterlist *, unsigned int); > > void __sg_free_table(struct sg_table *, unsigned int, unsigned int, > > sg_free_fn *); > > void sg_free_table(struct sg_table *); > > -int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int, > > - struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *); > > +int __sg_alloc_table(struct sg_table *, struct scatterlist *, unsigned int, > > + unsigned int, struct scatterlist *, unsigned int, > > + gfp_t, sg_alloc_fn *); > > int sg_alloc_table(struct sg_table *, unsigned int, gfp_t); > > -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > - unsigned int n_pages, unsigned int offset, > > - unsigned long size, unsigned int max_segment, > > - gfp_t gfp_mask); > > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt, > > + struct page **pages, unsigned int n_pages, unsigned int offset, > > + unsigned long size, unsigned int max_segment, > > + struct scatterlist *prv, unsigned int left_pages, > > + gfp_t gfp_mask); > > int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > unsigned int n_pages, unsigned int offset, > > unsigned long size, gfp_t gfp_mask); > > diff --git a/lib/scatterlist.c b/lib/scatterlist.c > > index 5d63a8857f36..91587560497d 100644 > > --- a/lib/scatterlist.c > > +++ b/lib/scatterlist.c > > @@ -245,6 +245,7 @@ EXPORT_SYMBOL(sg_free_table); > > /** > > * __sg_alloc_table - Allocate and initialize an sg table with given allocator > > * @table: The sg table header to use > > + * @prv: Last populated sge in sgt > > * @nents: Number of entries in sg list > > * @max_ents: The maximum number of entries the allocator returns per call > > * @nents_first_chunk: Number of entries int the (preallocated) first > > @@ -263,17 +264,15 @@ EXPORT_SYMBOL(sg_free_table); > > * __sg_free_table() to cleanup any leftover allocations. 
> > * > > **/ > > -int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > - unsigned int max_ents, struct scatterlist *first_chunk, > > - unsigned int nents_first_chunk, gfp_t gfp_mask, > > - sg_alloc_fn *alloc_fn) > > +int __sg_alloc_table(struct sg_table *table, struct scatterlist *prv, > > + unsigned int nents, unsigned int max_ents, > > + struct scatterlist *first_chunk, > > + unsigned int nents_first_chunk, gfp_t gfp_mask, > > + sg_alloc_fn *alloc_fn) > > { > > - struct scatterlist *sg, *prv; > > - unsigned int left; > > - unsigned curr_max_ents = nents_first_chunk ?: max_ents; > > - unsigned prv_max_ents; > > - > > - memset(table, 0, sizeof(*table)); > > + unsigned int curr_max_ents = nents_first_chunk ?: max_ents; > > + unsigned int left, prv_max_ents = 0; > > + struct scatterlist *sg; > > > > if (nents == 0) > > return -EINVAL; > > @@ -283,7 +282,6 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > #endif > > > > left = nents; > > - prv = NULL; > > do { > > unsigned int sg_size, alloc_size = left; > > > > @@ -308,7 +306,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > * linkage. Without this, sg_kfree() may get > > * confused. > > */ > > - if (prv) > > + if (prv_max_ents) > > table->nents = ++table->orig_nents; > > > > return -ENOMEM; > > @@ -321,10 +319,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > * If this is the first mapping, assign the sg table header. > > * If this is not the first mapping, chain previous part. > > */ > > - if (prv) > > - sg_chain(prv, prv_max_ents, sg); > > - else > > + if (!prv) > > table->sgl = sg; > > + else if (prv_max_ents) > > + sg_chain(prv, prv_max_ents, sg); > > + else { > > + __sg_chain(prv, sg); > > + /* > > + * We decrease one since the prvious last sge in used to > > + * chain the chunks together. 
> > + */ > > + table->nents = table->orig_nents -= 1; > > + } > > > > /* > > * If no more entries after this one, mark the end > > @@ -356,7 +362,8 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask) > > { > > int ret; > > > > - ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC, > > + memset(table, 0, sizeof(*table)); > > + ret = __sg_alloc_table(table, NULL, nents, SG_MAX_SINGLE_ALLOC, > > NULL, 0, gfp_mask, sg_kmalloc); > > if (unlikely(ret)) > > __sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree); > > @@ -365,6 +372,30 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask) > > } > > EXPORT_SYMBOL(sg_alloc_table); > > > > +static struct scatterlist *get_next_sg(struct sg_table *table, > > + struct scatterlist *prv, unsigned long left_npages, > > + gfp_t gfp_mask) > > +{ > > + struct scatterlist *next_sg; > > + int ret; > > + > > + /* If table was just allocated */ > > + if (!prv) > > + return table->sgl; > > + > > + /* Check if last entry should be keeped for chainning */ > > + next_sg = sg_next(prv); > > + if (!sg_is_last(next_sg) || left_npages == 1) > > + return next_sg; > > + > > + ret = __sg_alloc_table(table, next_sg, > > + min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC), > > + SG_MAX_SINGLE_ALLOC, NULL, 0, gfp_mask, sg_kmalloc); > > + if (ret) > > + return ERR_PTR(ret); > > + return sg_next(prv); > > +} > > + > > /** > > * __sg_alloc_table_from_pages - Allocate and initialize an sg table from > > * an array of pages > > @@ -374,29 +405,47 @@ EXPORT_SYMBOL(sg_alloc_table); > > * @offset: Offset from start of the first page to the start of a buffer > > * @size: Number of valid bytes in the buffer (after offset) > > * @max_segment: Maximum size of a scatterlist node in bytes (page aligned) > > + * @prv: Last populated sge in sgt > > + * @left_pages: Left pages caller have to set after this call > > * @gfp_mask: GFP allocation mask > > * > > - * Description: > > - * Allocate and initialize an sg table from a list of pages. Contiguous > > - * ranges of the pages are squashed into a single scatterlist node up to the > > - * maximum size specified in @max_segment. An user may provide an offset at a > > - * start and a size of valid data in a buffer specified by the page array. > > - * The returned sg table is released by sg_free_table. > > + * Description: > > + * If @prv is NULL, allocate and initialize an sg table from a list of pages, > > + * else reuse the scatterlist passed in at @prv. > > + * Contiguous ranges of the pages are squashed into a single scatterlist > > + * entry up to the maximum size specified in @max_segment. A user may > > + * provide an offset at a start and a size of valid data in a buffer > > + * specified by the page array. > > * > > * Returns: > > - * 0 on success, negative error on failure > > + * Last SGE in sgt on success, PTR_ERR on otherwise. > > + * The allocation in @sgt must be released by sg_free_table. > > + * > > + * Notes: > > + * If this function returns non-0 (eg failure), the caller must call > > + * sg_free_table() to cleanup any leftover allocations. 
> > */ > > -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > - unsigned int n_pages, unsigned int offset, > > - unsigned long size, unsigned int max_segment, > > - gfp_t gfp_mask) > > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt, > > + struct page **pages, unsigned int n_pages, unsigned int offset, > > + unsigned long size, unsigned int max_segment, > > + struct scatterlist *prv, unsigned int left_pages, > > + gfp_t gfp_mask) > > { > > - unsigned int chunks, cur_page, seg_len, i; > > + unsigned int chunks, cur_page, seg_len, i, prv_len = 0; > > + unsigned int tmp_nents = sgt->nents; > > + struct scatterlist *s = prv; > > + unsigned int table_size; > > int ret; > > - struct scatterlist *s; > > > > if (WARN_ON(!max_segment || offset_in_page(max_segment))) > > - return -EINVAL; > > + return ERR_PTR(-EINVAL); > > + if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv) > > + return ERR_PTR(-EOPNOTSUPP); > > I would consider trying to make the failure caught at compile time. It would > probably need a static inline wrapper to BUILD_BUG_ON is prv is not compile > time constant. Because my gut feeling is runtime is a bit awkward. In second patch [1], priv is dynamic pointer that can't be checked at compile time. [1] https://lore.kernel.org/linux-rdma/20200923054251.GA15249@lst.de/T/#m19b0836f23db9d626309c3e70939ce884946e2f6 > > Hm, but also isn't the check too strict? It would be possible to append to > the last sgt as long as under max_ents, no? (Like the current check in > __sg_alloc_table.) It can be, but it is corner case that doesn't worth to code. Right now, RDMA is the single user of this append thing and our setups are !CONFIG_ARCH_NO_SG_CHAIN. > > > + > > + if (prv && > > + page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) == > > + page_to_pfn(pages[0])) > > + prv_len = prv->length; > > > > /* compute number of contiguous chunks */ > > chunks = 1; > > @@ -410,13 +459,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > } > > } > > > > - ret = sg_alloc_table(sgt, chunks, gfp_mask); > > - if (unlikely(ret)) > > - return ret; > > + if (!prv) { > > + /* Only the last allocation could be less than the maximum */ > > + table_size = left_pages ? 
SG_MAX_SINGLE_ALLOC : chunks; > > + ret = sg_alloc_table(sgt, table_size, gfp_mask); > > + if (unlikely(ret)) > > + return ERR_PTR(ret); > > + } > > > > /* merging chunks and putting them into the scatterlist */ > > cur_page = 0; > > - for_each_sg(sgt->sgl, s, sgt->orig_nents, i) { > > + for (i = 0; i < chunks; i++) { > > unsigned int j, chunk_size; > > > > /* look for the end of the current chunk */ > > @@ -425,19 +478,41 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > seg_len += PAGE_SIZE; > > if (seg_len >= max_segment || > > page_to_pfn(pages[j]) != > > - page_to_pfn(pages[j - 1]) + 1) > > + page_to_pfn(pages[j - 1]) + 1) > > break; > > } > > > > chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset; > > - sg_set_page(s, pages[cur_page], > > - min_t(unsigned long, size, chunk_size), offset); > > + chunk_size = min_t(unsigned long, size, chunk_size); > > + if (!i && prv_len) { > > + if (max_segment - prv->length >= chunk_size) { > > + sg_set_page(s, sg_page(s), > > + s->length + chunk_size, s->offset); > > + goto next; > > + } > > + } > > + > > + /* Pass how many chunks might left */ > > + s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask); > > + if (IS_ERR(s)) { > > + /* > > + * Adjust entry length to be as before function was > > + * called. > > + */ > > + if (prv_len) > > + prv->length = prv_len; > > + goto out; > > + } > > + sg_set_page(s, pages[cur_page], chunk_size, offset); > > + tmp_nents++; > > +next: > > size -= chunk_size; > > offset = 0; > > cur_page = j; > > } > > - > > - return 0; > > + sgt->nents = tmp_nents; > > +out: > > + return s; > > } > > EXPORT_SYMBOL(__sg_alloc_table_from_pages); > > > > @@ -465,8 +540,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > unsigned int n_pages, unsigned int offset, > > unsigned long size, gfp_t gfp_mask) > > { > > - return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size, > > - SCATTERLIST_MAX_SEGMENT, gfp_mask); > > + return PTR_ERR_OR_ZERO(__sg_alloc_table_from_pages(sgt, pages, n_pages, > > + offset, size, SCATTERLIST_MAX_SEGMENT, NULL, 0, > > + gfp_mask)); > > } > > EXPORT_SYMBOL(sg_alloc_table_from_pages); > > > > diff --git a/lib/sg_pool.c b/lib/sg_pool.c > > index db29e5c1f790..c449248bf5d5 100644 > > --- a/lib/sg_pool.c > > +++ b/lib/sg_pool.c > > @@ -129,7 +129,8 @@ int sg_alloc_table_chained(struct sg_table *table, int nents, > > nents_first_chunk = 0; > > } > > > > - ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE, > > + memset(table, 0, sizeof(*table)); > > + ret = __sg_alloc_table(table, NULL, nents, SG_CHUNK_SIZE, > > first_chunk, nents_first_chunk, > > GFP_ATOMIC, sg_pool_alloc); > > if (unlikely(ret)) > > diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c > > index 0a1464181226..4899359a31ac 100644 > > --- a/tools/testing/scatterlist/main.c > > +++ b/tools/testing/scatterlist/main.c > > @@ -55,14 +55,13 @@ int main(void) > > for (i = 0, test = tests; test->expected_segments; test++, i++) { > > struct page *pages[MAX_PAGES]; > > struct sg_table st; > > - int ret; > > + struct scatterlist *sg; > > > > set_pages(pages, test->pfn, test->num_pages); > > > > - ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages, > > - 0, test->size, test->max_seg, > > - GFP_KERNEL); > > - assert(ret == test->alloc_ret); > > + sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0, > > + test->size, test->max_seg, NULL, 0, GFP_KERNEL); > > + assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret); > > Some test 
coverage for relatively complex code would be very welcome. The
> testing framework is already there; even if it has bit-rotted a bit, it
> shouldn't be hard to fix.
>
> A few tests to check that append/grow works as expected, in terms of how the
> end table looks given the initial state and the different page patterns
> added to it. And both crossing and not crossing into sg chaining scenarios.

This function is basic for all RDMA devices, and we are pretty confident
that the old and new flows are tested thoroughly. We will add proper tests
in the next kernel cycle.

Thanks

>
> Regards,
>
> Tvrtko
>
> > 	if (test->alloc_ret)
> > 		continue;
> > --
> > 2.26.2
> >
> > _______________________________________________
> > Intel-gfx mailing list
> > Intel-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/intel-gfx
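(Illustration only, not part of the original mail: roughly what an
append/grow test could look like in tools/testing/scatterlist/main.c,
reusing the set_pages()/pfn() helpers and assert() style already present in
that harness. The pfn values and the expected nents are assumptions based on
the merging logic in this patch.)

	struct page *pages[MAX_PAGES];
	struct sg_table st;
	struct scatterlist *sg;

	/* First batch: two physically contiguous pages. */
	set_pages(pages, pfn(1, 2), 2);
	sg = __sg_alloc_table_from_pages(&st, pages, 2, 0, 2 * PAGE_SIZE,
					 SCATTERLIST_MAX_SEGMENT, NULL,
					 2 /* two more pages to come */,
					 GFP_KERNEL);
	assert(PTR_ERR_OR_ZERO(sg) == 0);

	/* Second batch continues contiguously, so it should merge into one SGE. */
	set_pages(pages, pfn(3, 4), 2);
	sg = __sg_alloc_table_from_pages(&st, pages, 2, 0, 2 * PAGE_SIZE,
					 SCATTERLIST_MAX_SEGMENT, sg,
					 0 /* last batch */, GFP_KERNEL);
	assert(PTR_ERR_OR_ZERO(sg) == 0);
	assert(st.nents == 1);

	sg_free_table(&st);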
<118a03ef-d160-e202-81cc-16c9c39359fc@linux.intel.com> X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-rdma@vger.kernel.org, intel-gfx@lists.freedesktop.org, Roland Scheidegger , dri-devel@lists.freedesktop.org, Maor Gottlieb , David Airlie , Doug Ledford , VMware Graphics , Jason Gunthorpe , Maor Gottlieb , Christoph Hellwig Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote: > > On 22/09/2020 09:39, Leon Romanovsky wrote: > > From: Maor Gottlieb > > > > Extend __sg_alloc_table_from_pages to support dynamic allocation of > > SG table from pages. It should be used by drivers that can't supply > > all the pages at one time. > > > > This function returns the last populated SGE in the table. Users should > > pass it as an argument to the function from the second call and forward. > > As before, nents will be equal to the number of populated SGEs (chunks). > > So it's appending and growing the "list", did I get that right? Sounds handy > indeed. Some comments/questions below. Yes, we (RDMA) use this function to chain contiguous pages. > > > > > With this new extension, drivers can benefit the optimization of merging > > contiguous pages without a need to allocate all pages in advance and > > hold them in a large buffer. > > > > E.g. with the Infiniband driver that allocates a single page for hold > > the > > pages. For 1TB memory registration, the temporary buffer would consume > > only > > 4KB, instead of 2GB. > > > > Signed-off-by: Maor Gottlieb > > Signed-off-by: Leon Romanovsky > > --- > > drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 12 +- > > drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 15 +- > > include/linux/scatterlist.h | 43 +++--- > > lib/scatterlist.c | 158 +++++++++++++++----- > > lib/sg_pool.c | 3 +- > > tools/testing/scatterlist/main.c | 9 +- > > 6 files changed, 163 insertions(+), 77 deletions(-) > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c > > index 12b30075134a..f2eaed6aca3d 100644 > > --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c > > @@ -403,6 +403,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj, > > unsigned int max_segment = i915_sg_segment_size(); > > struct sg_table *st; > > unsigned int sg_page_sizes; > > + struct scatterlist *sg; > > int ret; > > > > st = kmalloc(sizeof(*st), GFP_KERNEL); > > @@ -410,13 +411,12 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj, > > return ERR_PTR(-ENOMEM); > > > > alloc_table: > > - ret = __sg_alloc_table_from_pages(st, pvec, num_pages, > > - 0, num_pages << PAGE_SHIFT, > > - max_segment, > > - GFP_KERNEL); > > - if (ret) { > > + sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0, > > + num_pages << PAGE_SHIFT, max_segment, > > + NULL, 0, GFP_KERNEL); > > + if (IS_ERR(sg)) { > > kfree(st); > > - return ERR_PTR(ret); > > + return ERR_CAST(sg); > > } > > > > ret = i915_gem_gtt_prepare_pages(obj, st); > > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c > > index ab524ab3b0b4..f22acd398b1f 100644 > > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c > > +++ 
b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c > > @@ -419,6 +419,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt) > > int ret = 0; > > static size_t sgl_size; > > static size_t sgt_size; > > + struct scatterlist *sg; > > > > if (vmw_tt->mapped) > > return 0; > > @@ -441,13 +442,15 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt) > > if (unlikely(ret != 0)) > > return ret; > > > > - ret = __sg_alloc_table_from_pages > > - (&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0, > > - (unsigned long) vsgt->num_pages << PAGE_SHIFT, > > - dma_get_max_seg_size(dev_priv->dev->dev), > > - GFP_KERNEL); > > - if (unlikely(ret != 0)) > > + sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages, > > + vsgt->num_pages, 0, > > + (unsigned long) vsgt->num_pages << PAGE_SHIFT, > > + dma_get_max_seg_size(dev_priv->dev->dev), > > + NULL, 0, GFP_KERNEL); > > + if (IS_ERR(sg)) { > > + ret = PTR_ERR(sg); > > goto out_sg_alloc_fail; > > + } > > > > if (vsgt->num_pages > vmw_tt->sgt.nents) { > > uint64_t over_alloc = > > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h > > index 45cf7b69d852..c24cc667b56b 100644 > > --- a/include/linux/scatterlist.h > > +++ b/include/linux/scatterlist.h > > @@ -165,6 +165,22 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf, > > #define for_each_sgtable_dma_sg(sgt, sg, i) \ > > for_each_sg((sgt)->sgl, sg, (sgt)->nents, i) > > > > +static inline void __sg_chain(struct scatterlist *chain_sg, > > + struct scatterlist *sgl) > > +{ > > + /* > > + * offset and length are unused for chain entry. Clear them. > > + */ > > + chain_sg->offset = 0; > > + chain_sg->length = 0; > > + > > + /* > > + * Set lowest bit to indicate a link pointer, and make sure to clear > > + * the termination bit if it happens to be set. > > + */ > > + chain_sg->page_link = ((unsigned long) sgl | SG_CHAIN) & ~SG_END; > > +} > > + > > /** > > * sg_chain - Chain two sglists together > > * @prv: First scatterlist > > @@ -178,18 +194,7 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf, > > static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents, > > struct scatterlist *sgl) > > { > > - /* > > - * offset and length are unused for chain entry. Clear them. > > - */ > > - prv[prv_nents - 1].offset = 0; > > - prv[prv_nents - 1].length = 0; > > - > > - /* > > - * Set lowest bit to indicate a link pointer, and make sure to clear > > - * the termination bit if it happens to be set. 
> > - */ > > - prv[prv_nents - 1].page_link = ((unsigned long) sgl | SG_CHAIN) > > - & ~SG_END; > > + __sg_chain(&prv[prv_nents - 1], sgl); > > } > > > > /** > > @@ -283,13 +288,15 @@ typedef void (sg_free_fn)(struct scatterlist *, unsigned int); > > void __sg_free_table(struct sg_table *, unsigned int, unsigned int, > > sg_free_fn *); > > void sg_free_table(struct sg_table *); > > -int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int, > > - struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *); > > +int __sg_alloc_table(struct sg_table *, struct scatterlist *, unsigned int, > > + unsigned int, struct scatterlist *, unsigned int, > > + gfp_t, sg_alloc_fn *); > > int sg_alloc_table(struct sg_table *, unsigned int, gfp_t); > > -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > - unsigned int n_pages, unsigned int offset, > > - unsigned long size, unsigned int max_segment, > > - gfp_t gfp_mask); > > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt, > > + struct page **pages, unsigned int n_pages, unsigned int offset, > > + unsigned long size, unsigned int max_segment, > > + struct scatterlist *prv, unsigned int left_pages, > > + gfp_t gfp_mask); > > int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > unsigned int n_pages, unsigned int offset, > > unsigned long size, gfp_t gfp_mask); > > diff --git a/lib/scatterlist.c b/lib/scatterlist.c > > index 5d63a8857f36..91587560497d 100644 > > --- a/lib/scatterlist.c > > +++ b/lib/scatterlist.c > > @@ -245,6 +245,7 @@ EXPORT_SYMBOL(sg_free_table); > > /** > > * __sg_alloc_table - Allocate and initialize an sg table with given allocator > > * @table: The sg table header to use > > + * @prv: Last populated sge in sgt > > * @nents: Number of entries in sg list > > * @max_ents: The maximum number of entries the allocator returns per call > > * @nents_first_chunk: Number of entries int the (preallocated) first > > @@ -263,17 +264,15 @@ EXPORT_SYMBOL(sg_free_table); > > * __sg_free_table() to cleanup any leftover allocations. > > * > > **/ > > -int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > - unsigned int max_ents, struct scatterlist *first_chunk, > > - unsigned int nents_first_chunk, gfp_t gfp_mask, > > - sg_alloc_fn *alloc_fn) > > +int __sg_alloc_table(struct sg_table *table, struct scatterlist *prv, > > + unsigned int nents, unsigned int max_ents, > > + struct scatterlist *first_chunk, > > + unsigned int nents_first_chunk, gfp_t gfp_mask, > > + sg_alloc_fn *alloc_fn) > > { > > - struct scatterlist *sg, *prv; > > - unsigned int left; > > - unsigned curr_max_ents = nents_first_chunk ?: max_ents; > > - unsigned prv_max_ents; > > - > > - memset(table, 0, sizeof(*table)); > > + unsigned int curr_max_ents = nents_first_chunk ?: max_ents; > > + unsigned int left, prv_max_ents = 0; > > + struct scatterlist *sg; > > > > if (nents == 0) > > return -EINVAL; > > @@ -283,7 +282,6 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > #endif > > > > left = nents; > > - prv = NULL; > > do { > > unsigned int sg_size, alloc_size = left; > > > > @@ -308,7 +306,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > * linkage. Without this, sg_kfree() may get > > * confused. 
> > */ > > - if (prv) > > + if (prv_max_ents) > > table->nents = ++table->orig_nents; > > > > return -ENOMEM; > > @@ -321,10 +319,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > * If this is the first mapping, assign the sg table header. > > * If this is not the first mapping, chain previous part. > > */ > > - if (prv) > > - sg_chain(prv, prv_max_ents, sg); > > - else > > + if (!prv) > > table->sgl = sg; > > + else if (prv_max_ents) > > + sg_chain(prv, prv_max_ents, sg); > > + else { > > + __sg_chain(prv, sg); > > + /* > > + * We decrease one since the prvious last sge in used to > > + * chain the chunks together. > > + */ > > + table->nents = table->orig_nents -= 1; > > + } > > > > /* > > * If no more entries after this one, mark the end > > @@ -356,7 +362,8 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask) > > { > > int ret; > > > > - ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC, > > + memset(table, 0, sizeof(*table)); > > + ret = __sg_alloc_table(table, NULL, nents, SG_MAX_SINGLE_ALLOC, > > NULL, 0, gfp_mask, sg_kmalloc); > > if (unlikely(ret)) > > __sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree); > > @@ -365,6 +372,30 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask) > > } > > EXPORT_SYMBOL(sg_alloc_table); > > > > +static struct scatterlist *get_next_sg(struct sg_table *table, > > + struct scatterlist *prv, unsigned long left_npages, > > + gfp_t gfp_mask) > > +{ > > + struct scatterlist *next_sg; > > + int ret; > > + > > + /* If table was just allocated */ > > + if (!prv) > > + return table->sgl; > > + > > + /* Check if last entry should be keeped for chainning */ > > + next_sg = sg_next(prv); > > + if (!sg_is_last(next_sg) || left_npages == 1) > > + return next_sg; > > + > > + ret = __sg_alloc_table(table, next_sg, > > + min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC), > > + SG_MAX_SINGLE_ALLOC, NULL, 0, gfp_mask, sg_kmalloc); > > + if (ret) > > + return ERR_PTR(ret); > > + return sg_next(prv); > > +} > > + > > /** > > * __sg_alloc_table_from_pages - Allocate and initialize an sg table from > > * an array of pages > > @@ -374,29 +405,47 @@ EXPORT_SYMBOL(sg_alloc_table); > > * @offset: Offset from start of the first page to the start of a buffer > > * @size: Number of valid bytes in the buffer (after offset) > > * @max_segment: Maximum size of a scatterlist node in bytes (page aligned) > > + * @prv: Last populated sge in sgt > > + * @left_pages: Left pages caller have to set after this call > > * @gfp_mask: GFP allocation mask > > * > > - * Description: > > - * Allocate and initialize an sg table from a list of pages. Contiguous > > - * ranges of the pages are squashed into a single scatterlist node up to the > > - * maximum size specified in @max_segment. An user may provide an offset at a > > - * start and a size of valid data in a buffer specified by the page array. > > - * The returned sg table is released by sg_free_table. > > + * Description: > > + * If @prv is NULL, allocate and initialize an sg table from a list of pages, > > + * else reuse the scatterlist passed in at @prv. > > + * Contiguous ranges of the pages are squashed into a single scatterlist > > + * entry up to the maximum size specified in @max_segment. A user may > > + * provide an offset at a start and a size of valid data in a buffer > > + * specified by the page array. > > * > > * Returns: > > - * 0 on success, negative error on failure > > + * Last SGE in sgt on success, PTR_ERR on otherwise. 
> > + * The allocation in @sgt must be released by sg_free_table. > > + * > > + * Notes: > > + * If this function returns non-0 (eg failure), the caller must call > > + * sg_free_table() to cleanup any leftover allocations. > > */ > > -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > - unsigned int n_pages, unsigned int offset, > > - unsigned long size, unsigned int max_segment, > > - gfp_t gfp_mask) > > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt, > > + struct page **pages, unsigned int n_pages, unsigned int offset, > > + unsigned long size, unsigned int max_segment, > > + struct scatterlist *prv, unsigned int left_pages, > > + gfp_t gfp_mask) > > { > > - unsigned int chunks, cur_page, seg_len, i; > > + unsigned int chunks, cur_page, seg_len, i, prv_len = 0; > > + unsigned int tmp_nents = sgt->nents; > > + struct scatterlist *s = prv; > > + unsigned int table_size; > > int ret; > > - struct scatterlist *s; > > > > if (WARN_ON(!max_segment || offset_in_page(max_segment))) > > - return -EINVAL; > > + return ERR_PTR(-EINVAL); > > + if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv) > > + return ERR_PTR(-EOPNOTSUPP); > > I would consider trying to make the failure caught at compile time. It would > probably need a static inline wrapper to BUILD_BUG_ON is prv is not compile > time constant. Because my gut feeling is runtime is a bit awkward. In second patch [1], priv is dynamic pointer that can't be checked at compile time. [1] https://lore.kernel.org/linux-rdma/20200923054251.GA15249@lst.de/T/#m19b0836f23db9d626309c3e70939ce884946e2f6 > > Hm, but also isn't the check too strict? It would be possible to append to > the last sgt as long as under max_ents, no? (Like the current check in > __sg_alloc_table.) It can be, but it is corner case that doesn't worth to code. Right now, RDMA is the single user of this append thing and our setups are !CONFIG_ARCH_NO_SG_CHAIN. > > > + > > + if (prv && > > + page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) == > > + page_to_pfn(pages[0])) > > + prv_len = prv->length; > > > > /* compute number of contiguous chunks */ > > chunks = 1; > > @@ -410,13 +459,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > } > > } > > > > - ret = sg_alloc_table(sgt, chunks, gfp_mask); > > - if (unlikely(ret)) > > - return ret; > > + if (!prv) { > > + /* Only the last allocation could be less than the maximum */ > > + table_size = left_pages ? 
SG_MAX_SINGLE_ALLOC : chunks; > > + ret = sg_alloc_table(sgt, table_size, gfp_mask); > > + if (unlikely(ret)) > > + return ERR_PTR(ret); > > + } > > > > /* merging chunks and putting them into the scatterlist */ > > cur_page = 0; > > - for_each_sg(sgt->sgl, s, sgt->orig_nents, i) { > > + for (i = 0; i < chunks; i++) { > > unsigned int j, chunk_size; > > > > /* look for the end of the current chunk */ > > @@ -425,19 +478,41 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > seg_len += PAGE_SIZE; > > if (seg_len >= max_segment || > > page_to_pfn(pages[j]) != > > - page_to_pfn(pages[j - 1]) + 1) > > + page_to_pfn(pages[j - 1]) + 1) > > break; > > } > > > > chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset; > > - sg_set_page(s, pages[cur_page], > > - min_t(unsigned long, size, chunk_size), offset); > > + chunk_size = min_t(unsigned long, size, chunk_size); > > + if (!i && prv_len) { > > + if (max_segment - prv->length >= chunk_size) { > > + sg_set_page(s, sg_page(s), > > + s->length + chunk_size, s->offset); > > + goto next; > > + } > > + } > > + > > + /* Pass how many chunks might left */ > > + s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask); > > + if (IS_ERR(s)) { > > + /* > > + * Adjust entry length to be as before function was > > + * called. > > + */ > > + if (prv_len) > > + prv->length = prv_len; > > + goto out; > > + } > > + sg_set_page(s, pages[cur_page], chunk_size, offset); > > + tmp_nents++; > > +next: > > size -= chunk_size; > > offset = 0; > > cur_page = j; > > } > > - > > - return 0; > > + sgt->nents = tmp_nents; > > +out: > > + return s; > > } > > EXPORT_SYMBOL(__sg_alloc_table_from_pages); > > > > @@ -465,8 +540,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > unsigned int n_pages, unsigned int offset, > > unsigned long size, gfp_t gfp_mask) > > { > > - return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size, > > - SCATTERLIST_MAX_SEGMENT, gfp_mask); > > + return PTR_ERR_OR_ZERO(__sg_alloc_table_from_pages(sgt, pages, n_pages, > > + offset, size, SCATTERLIST_MAX_SEGMENT, NULL, 0, > > + gfp_mask)); > > } > > EXPORT_SYMBOL(sg_alloc_table_from_pages); > > > > diff --git a/lib/sg_pool.c b/lib/sg_pool.c > > index db29e5c1f790..c449248bf5d5 100644 > > --- a/lib/sg_pool.c > > +++ b/lib/sg_pool.c > > @@ -129,7 +129,8 @@ int sg_alloc_table_chained(struct sg_table *table, int nents, > > nents_first_chunk = 0; > > } > > > > - ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE, > > + memset(table, 0, sizeof(*table)); > > + ret = __sg_alloc_table(table, NULL, nents, SG_CHUNK_SIZE, > > first_chunk, nents_first_chunk, > > GFP_ATOMIC, sg_pool_alloc); > > if (unlikely(ret)) > > diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c > > index 0a1464181226..4899359a31ac 100644 > > --- a/tools/testing/scatterlist/main.c > > +++ b/tools/testing/scatterlist/main.c > > @@ -55,14 +55,13 @@ int main(void) > > for (i = 0, test = tests; test->expected_segments; test++, i++) { > > struct page *pages[MAX_PAGES]; > > struct sg_table st; > > - int ret; > > + struct scatterlist *sg; > > > > set_pages(pages, test->pfn, test->num_pages); > > > > - ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages, > > - 0, test->size, test->max_seg, > > - GFP_KERNEL); > > - assert(ret == test->alloc_ret); > > + sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0, > > + test->size, test->max_seg, NULL, 0, GFP_KERNEL); > > + assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret); > > Some test 
coverage for relatively complex code would be very welcomed. Since > the testing framework is already there, even if it bit-rotted a bit, but > shouldn't be hard to fix. > > A few tests to check append/grow works as expected, in terms of how the end > table looks like given the initial state and some different page patterns > added to it. And both crossing and not crossing into sg chaining scenarios. This function is basic for all RDMA devices and we are pretty confident that the old and new flows are tested thoroughly. We will add proper test in next kernel cycle. Thanks > > Regards, > > Tvrtko > > > > > if (test->alloc_ret) > > continue; > > -- > > 2.26.2 > > > > _______________________________________________ > > Intel-gfx mailing list > > Intel-gfx@lists.freedesktop.org > > https://lists.freedesktop.org/mailman/listinfo/intel-gfx > > _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE, SPF_PASS,URIBL_BLOCKED autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 12981C4727E for ; Fri, 25 Sep 2020 07:13:38 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 829AA235F8 for ; Fri, 25 Sep 2020 07:13:37 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="S+x2p8u5" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 829AA235F8 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0DFE56EC23; Fri, 25 Sep 2020 07:13:36 +0000 (UTC) Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by gabe.freedesktop.org (Postfix) with ESMTPS id C51266EC20; Fri, 25 Sep 2020 07:13:34 +0000 (UTC) Received: from localhost (unknown [213.57.247.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 9DB0D20759; Fri, 25 Sep 2020 07:13:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1601018014; bh=FLpqVWgSqTcNzNQkmPuNcN8MvbswI6zQzJ76hN4Pdkk=; h=Date:From:To:Cc:Subject:References:In-Reply-To:From; b=S+x2p8u58Y3GWUmjbB4NPnzVdmpBGbn8w9xNt/WkGjNQcCF94HrSKi03dMigcrhAS jirIqPJeQUe5nxg4TsGdB0ZAb+4B/1Zgkc/i9/SpmTWn6IXuM5N47Tef52jC1rRp3P MrXOXGBJh6thDfcHSP9Lyyd5abgwVQbfXX39ySN0= Date: Fri, 25 Sep 2020 10:13:30 +0300 From: Leon Romanovsky To: Tvrtko Ursulin Message-ID: <20200925071330.GA2280698@unreal> References: <20200922083958.2150803-1-leon@kernel.org> <20200922083958.2150803-2-leon@kernel.org> <118a03ef-d160-e202-81cc-16c9c39359fc@linux.intel.com> MIME-Version: 1.0 Content-Disposition: inline 
In-Reply-To: <118a03ef-d160-e202-81cc-16c9c39359fc@linux.intel.com> Subject: Re: [Intel-gfx] [PATCH rdma-next v3 1/2] lib/scatterlist: Add support in dynamic allocation of SG table from pages X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-rdma@vger.kernel.org, intel-gfx@lists.freedesktop.org, Roland Scheidegger , dri-devel@lists.freedesktop.org, Maor Gottlieb , David Airlie , Doug Ledford , VMware Graphics , Jason Gunthorpe , Maor Gottlieb , Christoph Hellwig Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote: > > On 22/09/2020 09:39, Leon Romanovsky wrote: > > From: Maor Gottlieb > > > > Extend __sg_alloc_table_from_pages to support dynamic allocation of > > SG table from pages. It should be used by drivers that can't supply > > all the pages at one time. > > > > This function returns the last populated SGE in the table. Users should > > pass it as an argument to the function from the second call and forward. > > As before, nents will be equal to the number of populated SGEs (chunks). > > So it's appending and growing the "list", did I get that right? Sounds handy > indeed. Some comments/questions below. Yes, we (RDMA) use this function to chain contiguous pages. > > > > > With this new extension, drivers can benefit the optimization of merging > > contiguous pages without a need to allocate all pages in advance and > > hold them in a large buffer. > > > > E.g. with the Infiniband driver that allocates a single page for hold > > the > > pages. For 1TB memory registration, the temporary buffer would consume > > only > > 4KB, instead of 2GB. 
> > > > Signed-off-by: Maor Gottlieb > > Signed-off-by: Leon Romanovsky > > --- > > drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 12 +- > > drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 15 +- > > include/linux/scatterlist.h | 43 +++--- > > lib/scatterlist.c | 158 +++++++++++++++----- > > lib/sg_pool.c | 3 +- > > tools/testing/scatterlist/main.c | 9 +- > > 6 files changed, 163 insertions(+), 77 deletions(-) > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c > > index 12b30075134a..f2eaed6aca3d 100644 > > --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c > > @@ -403,6 +403,7 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj, > > unsigned int max_segment = i915_sg_segment_size(); > > struct sg_table *st; > > unsigned int sg_page_sizes; > > + struct scatterlist *sg; > > int ret; > > > > st = kmalloc(sizeof(*st), GFP_KERNEL); > > @@ -410,13 +411,12 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj, > > return ERR_PTR(-ENOMEM); > > > > alloc_table: > > - ret = __sg_alloc_table_from_pages(st, pvec, num_pages, > > - 0, num_pages << PAGE_SHIFT, > > - max_segment, > > - GFP_KERNEL); > > - if (ret) { > > + sg = __sg_alloc_table_from_pages(st, pvec, num_pages, 0, > > + num_pages << PAGE_SHIFT, max_segment, > > + NULL, 0, GFP_KERNEL); > > + if (IS_ERR(sg)) { > > kfree(st); > > - return ERR_PTR(ret); > > + return ERR_CAST(sg); > > } > > > > ret = i915_gem_gtt_prepare_pages(obj, st); > > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c > > index ab524ab3b0b4..f22acd398b1f 100644 > > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c > > +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c > > @@ -419,6 +419,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt) > > int ret = 0; > > static size_t sgl_size; > > static size_t sgt_size; > > + struct scatterlist *sg; > > > > if (vmw_tt->mapped) > > return 0; > > @@ -441,13 +442,15 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt) > > if (unlikely(ret != 0)) > > return ret; > > > > - ret = __sg_alloc_table_from_pages > > - (&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0, > > - (unsigned long) vsgt->num_pages << PAGE_SHIFT, > > - dma_get_max_seg_size(dev_priv->dev->dev), > > - GFP_KERNEL); > > - if (unlikely(ret != 0)) > > + sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages, > > + vsgt->num_pages, 0, > > + (unsigned long) vsgt->num_pages << PAGE_SHIFT, > > + dma_get_max_seg_size(dev_priv->dev->dev), > > + NULL, 0, GFP_KERNEL); > > + if (IS_ERR(sg)) { > > + ret = PTR_ERR(sg); > > goto out_sg_alloc_fail; > > + } > > > > if (vsgt->num_pages > vmw_tt->sgt.nents) { > > uint64_t over_alloc = > > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h > > index 45cf7b69d852..c24cc667b56b 100644 > > --- a/include/linux/scatterlist.h > > +++ b/include/linux/scatterlist.h > > @@ -165,6 +165,22 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf, > > #define for_each_sgtable_dma_sg(sgt, sg, i) \ > > for_each_sg((sgt)->sgl, sg, (sgt)->nents, i) > > > > +static inline void __sg_chain(struct scatterlist *chain_sg, > > + struct scatterlist *sgl) > > +{ > > + /* > > + * offset and length are unused for chain entry. Clear them. > > + */ > > + chain_sg->offset = 0; > > + chain_sg->length = 0; > > + > > + /* > > + * Set lowest bit to indicate a link pointer, and make sure to clear > > + * the termination bit if it happens to be set. 
> > + */ > > + chain_sg->page_link = ((unsigned long) sgl | SG_CHAIN) & ~SG_END; > > +} > > + > > /** > > * sg_chain - Chain two sglists together > > * @prv: First scatterlist > > @@ -178,18 +194,7 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf, > > static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents, > > struct scatterlist *sgl) > > { > > - /* > > - * offset and length are unused for chain entry. Clear them. > > - */ > > - prv[prv_nents - 1].offset = 0; > > - prv[prv_nents - 1].length = 0; > > - > > - /* > > - * Set lowest bit to indicate a link pointer, and make sure to clear > > - * the termination bit if it happens to be set. > > - */ > > - prv[prv_nents - 1].page_link = ((unsigned long) sgl | SG_CHAIN) > > - & ~SG_END; > > + __sg_chain(&prv[prv_nents - 1], sgl); > > } > > > > /** > > @@ -283,13 +288,15 @@ typedef void (sg_free_fn)(struct scatterlist *, unsigned int); > > void __sg_free_table(struct sg_table *, unsigned int, unsigned int, > > sg_free_fn *); > > void sg_free_table(struct sg_table *); > > -int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int, > > - struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *); > > +int __sg_alloc_table(struct sg_table *, struct scatterlist *, unsigned int, > > + unsigned int, struct scatterlist *, unsigned int, > > + gfp_t, sg_alloc_fn *); > > int sg_alloc_table(struct sg_table *, unsigned int, gfp_t); > > -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > - unsigned int n_pages, unsigned int offset, > > - unsigned long size, unsigned int max_segment, > > - gfp_t gfp_mask); > > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt, > > + struct page **pages, unsigned int n_pages, unsigned int offset, > > + unsigned long size, unsigned int max_segment, > > + struct scatterlist *prv, unsigned int left_pages, > > + gfp_t gfp_mask); > > int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > unsigned int n_pages, unsigned int offset, > > unsigned long size, gfp_t gfp_mask); > > diff --git a/lib/scatterlist.c b/lib/scatterlist.c > > index 5d63a8857f36..91587560497d 100644 > > --- a/lib/scatterlist.c > > +++ b/lib/scatterlist.c > > @@ -245,6 +245,7 @@ EXPORT_SYMBOL(sg_free_table); > > /** > > * __sg_alloc_table - Allocate and initialize an sg table with given allocator > > * @table: The sg table header to use > > + * @prv: Last populated sge in sgt > > * @nents: Number of entries in sg list > > * @max_ents: The maximum number of entries the allocator returns per call > > * @nents_first_chunk: Number of entries int the (preallocated) first > > @@ -263,17 +264,15 @@ EXPORT_SYMBOL(sg_free_table); > > * __sg_free_table() to cleanup any leftover allocations. 
> > * > > **/ > > -int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > - unsigned int max_ents, struct scatterlist *first_chunk, > > - unsigned int nents_first_chunk, gfp_t gfp_mask, > > - sg_alloc_fn *alloc_fn) > > +int __sg_alloc_table(struct sg_table *table, struct scatterlist *prv, > > + unsigned int nents, unsigned int max_ents, > > + struct scatterlist *first_chunk, > > + unsigned int nents_first_chunk, gfp_t gfp_mask, > > + sg_alloc_fn *alloc_fn) > > { > > - struct scatterlist *sg, *prv; > > - unsigned int left; > > - unsigned curr_max_ents = nents_first_chunk ?: max_ents; > > - unsigned prv_max_ents; > > - > > - memset(table, 0, sizeof(*table)); > > + unsigned int curr_max_ents = nents_first_chunk ?: max_ents; > > + unsigned int left, prv_max_ents = 0; > > + struct scatterlist *sg; > > > > if (nents == 0) > > return -EINVAL; > > @@ -283,7 +282,6 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > #endif > > > > left = nents; > > - prv = NULL; > > do { > > unsigned int sg_size, alloc_size = left; > > > > @@ -308,7 +306,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > * linkage. Without this, sg_kfree() may get > > * confused. > > */ > > - if (prv) > > + if (prv_max_ents) > > table->nents = ++table->orig_nents; > > > > return -ENOMEM; > > @@ -321,10 +319,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents, > > * If this is the first mapping, assign the sg table header. > > * If this is not the first mapping, chain previous part. > > */ > > - if (prv) > > - sg_chain(prv, prv_max_ents, sg); > > - else > > + if (!prv) > > table->sgl = sg; > > + else if (prv_max_ents) > > + sg_chain(prv, prv_max_ents, sg); > > + else { > > + __sg_chain(prv, sg); > > + /* > > + * We decrease one since the prvious last sge in used to > > + * chain the chunks together. 
> > + */ > > + table->nents = table->orig_nents -= 1; > > + } > > > > /* > > * If no more entries after this one, mark the end > > @@ -356,7 +362,8 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask) > > { > > int ret; > > > > - ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC, > > + memset(table, 0, sizeof(*table)); > > + ret = __sg_alloc_table(table, NULL, nents, SG_MAX_SINGLE_ALLOC, > > NULL, 0, gfp_mask, sg_kmalloc); > > if (unlikely(ret)) > > __sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree); > > @@ -365,6 +372,30 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask) > > } > > EXPORT_SYMBOL(sg_alloc_table); > > > > +static struct scatterlist *get_next_sg(struct sg_table *table, > > + struct scatterlist *prv, unsigned long left_npages, > > + gfp_t gfp_mask) > > +{ > > + struct scatterlist *next_sg; > > + int ret; > > + > > + /* If table was just allocated */ > > + if (!prv) > > + return table->sgl; > > + > > + /* Check if the last entry should be kept for chaining */ > > + next_sg = sg_next(prv); > > + if (!sg_is_last(next_sg) || left_npages == 1) > > + return next_sg; > > + > > + ret = __sg_alloc_table(table, next_sg, > > + min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC), > > + SG_MAX_SINGLE_ALLOC, NULL, 0, gfp_mask, sg_kmalloc); > > + if (ret) > > + return ERR_PTR(ret); > > + return sg_next(prv); > > +} > > + > > /** > > * __sg_alloc_table_from_pages - Allocate and initialize an sg table from > > * an array of pages > > @@ -374,29 +405,47 @@ EXPORT_SYMBOL(sg_alloc_table); > > * @offset: Offset from start of the first page to the start of a buffer > > * @size: Number of valid bytes in the buffer (after offset) > > * @max_segment: Maximum size of a scatterlist node in bytes (page aligned) > > + * @prv: Last populated sge in sgt > > + * @left_pages: Number of pages left to be added after this call > > + * @gfp_mask: GFP allocation mask > > * > > - * Description: > > - * Allocate and initialize an sg table from a list of pages. Contiguous > > - * ranges of the pages are squashed into a single scatterlist node up to the > > - * maximum size specified in @max_segment. An user may provide an offset at a > > - * start and a size of valid data in a buffer specified by the page array. > > - * The returned sg table is released by sg_free_table. > > + * Description: > > + * If @prv is NULL, allocate and initialize an sg table from a list of pages, > > + * else reuse the scatterlist passed in at @prv. > > + * Contiguous ranges of the pages are squashed into a single scatterlist > > + * entry up to the maximum size specified in @max_segment. A user may > > + * provide an offset at a start and a size of valid data in a buffer > > + * specified by the page array. > > * > > * Returns: > > - * 0 on success, negative error on failure > > + * Last SGE in sgt on success, ERR_PTR() on error. > > + * The allocation in @sgt must be released by sg_free_table. > > + * > > + * Notes: > > + * If this function returns an error, the caller must call > > + * sg_free_table() to clean up any leftover allocations.
> > */ > > -int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > - unsigned int n_pages, unsigned int offset, > > - unsigned long size, unsigned int max_segment, > > - gfp_t gfp_mask) > > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt, > > + struct page **pages, unsigned int n_pages, unsigned int offset, > > + unsigned long size, unsigned int max_segment, > > + struct scatterlist *prv, unsigned int left_pages, > > + gfp_t gfp_mask) > > { > > - unsigned int chunks, cur_page, seg_len, i; > > + unsigned int chunks, cur_page, seg_len, i, prv_len = 0; > > + unsigned int tmp_nents = sgt->nents; > > + struct scatterlist *s = prv; > > + unsigned int table_size; > > int ret; > > - struct scatterlist *s; > > > > if (WARN_ON(!max_segment || offset_in_page(max_segment))) > > - return -EINVAL; > > + return ERR_PTR(-EINVAL); > > + if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv) > > + return ERR_PTR(-EOPNOTSUPP); > > I would consider trying to catch this failure at compile time. It would > probably need a static inline wrapper to BUILD_BUG_ON if prv is not a compile > time constant, because my gut feeling is that a runtime check is a bit awkward. In the second patch [1], prv is a dynamic pointer that can't be checked at compile time. [1] https://lore.kernel.org/linux-rdma/20200923054251.GA15249@lst.de/T/#m19b0836f23db9d626309c3e70939ce884946e2f6 > > Hm, but also isn't the check too strict? It would be possible to append to > the last sgt as long as it stays under max_ents, no? (Like the current check in > __sg_alloc_table.) It could be, but it is a corner case that isn't worth the extra code. Right now, RDMA is the only user of this append functionality and our setups are all !CONFIG_ARCH_NO_SG_CHAIN. > > > + > > + if (prv && > > + page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) == > > + page_to_pfn(pages[0])) > > + prv_len = prv->length; > > > > /* compute number of contiguous chunks */ > > chunks = 1; > > @@ -410,13 +459,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > } > > } > > > > - ret = sg_alloc_table(sgt, chunks, gfp_mask); > > - if (unlikely(ret)) > > - return ret; > > + if (!prv) { > > + /* Only the last allocation could be less than the maximum */ > > + table_size = left_pages ?
SG_MAX_SINGLE_ALLOC : chunks; > > + ret = sg_alloc_table(sgt, table_size, gfp_mask); > > + if (unlikely(ret)) > > + return ERR_PTR(ret); > > + } > > > > /* merging chunks and putting them into the scatterlist */ > > cur_page = 0; > > - for_each_sg(sgt->sgl, s, sgt->orig_nents, i) { > > + for (i = 0; i < chunks; i++) { > > unsigned int j, chunk_size; > > > > /* look for the end of the current chunk */ > > @@ -425,19 +478,41 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > seg_len += PAGE_SIZE; > > if (seg_len >= max_segment || > > page_to_pfn(pages[j]) != > > - page_to_pfn(pages[j - 1]) + 1) > > + page_to_pfn(pages[j - 1]) + 1) > > break; > > } > > > > chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset; > > - sg_set_page(s, pages[cur_page], > > - min_t(unsigned long, size, chunk_size), offset); > > + chunk_size = min_t(unsigned long, size, chunk_size); > > + if (!i && prv_len) { > > + if (max_segment - prv->length >= chunk_size) { > > + sg_set_page(s, sg_page(s), > > + s->length + chunk_size, s->offset); > > + goto next; > > + } > > + } > > + > > + /* Pass how many chunks might be left */ > > + s = get_next_sg(sgt, s, chunks - i + left_pages, gfp_mask); > > + if (IS_ERR(s)) { > > + /* > > + * Adjust the entry length back to what it was before > > + * this function was called. > > + */ > > + if (prv_len) > > + prv->length = prv_len; > > + goto out; > > + } > > + sg_set_page(s, pages[cur_page], chunk_size, offset); > > + tmp_nents++; > > +next: > > size -= chunk_size; > > offset = 0; > > cur_page = j; > > } > > - > > - return 0; > > + sgt->nents = tmp_nents; > > +out: > > + return s; > > } > > EXPORT_SYMBOL(__sg_alloc_table_from_pages); > > > > @@ -465,8 +540,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, > > unsigned int n_pages, unsigned int offset, > > unsigned long size, gfp_t gfp_mask) > > { > > - return __sg_alloc_table_from_pages(sgt, pages, n_pages, offset, size, > > - SCATTERLIST_MAX_SEGMENT, gfp_mask); > > + return PTR_ERR_OR_ZERO(__sg_alloc_table_from_pages(sgt, pages, n_pages, > > + offset, size, SCATTERLIST_MAX_SEGMENT, NULL, 0, > > + gfp_mask)); > > } > > EXPORT_SYMBOL(sg_alloc_table_from_pages); > > > > diff --git a/lib/sg_pool.c b/lib/sg_pool.c > > index db29e5c1f790..c449248bf5d5 100644 > > --- a/lib/sg_pool.c > > +++ b/lib/sg_pool.c > > @@ -129,7 +129,8 @@ int sg_alloc_table_chained(struct sg_table *table, int nents, > > nents_first_chunk = 0; > > } > > > > - ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE, > > + memset(table, 0, sizeof(*table)); > > + ret = __sg_alloc_table(table, NULL, nents, SG_CHUNK_SIZE, > > first_chunk, nents_first_chunk, > > GFP_ATOMIC, sg_pool_alloc); > > if (unlikely(ret)) > > diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c > > index 0a1464181226..4899359a31ac 100644 > > --- a/tools/testing/scatterlist/main.c > > +++ b/tools/testing/scatterlist/main.c > > @@ -55,14 +55,13 @@ int main(void) > > for (i = 0, test = tests; test->expected_segments; test++, i++) { > > struct page *pages[MAX_PAGES]; > > struct sg_table st; > > - int ret; > > + struct scatterlist *sg; > > > > set_pages(pages, test->pfn, test->num_pages); > > > > - ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages, > > - 0, test->size, test->max_seg, > > - GFP_KERNEL); > > - assert(ret == test->alloc_ret); > > + sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0, > > + test->size, test->max_seg, NULL, 0, GFP_KERNEL); > > + assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret); > > Some test
coverage for relatively complex code like this would be very welcome. Since > the testing framework is already there, even if it has bit-rotted a bit, it > shouldn't be hard to fix. > > A few tests to check that append/grow works as expected, in terms of what the > resulting table looks like given the initial state and the different page patterns > added to it, covering both scenarios that cross into sg chaining and ones that do not. This function is fundamental to all RDMA devices and we are pretty confident that both the old and the new flows are tested thoroughly. We will add proper tests in the next kernel cycle. Thanks > > Regards, > > Tvrtko > > > > > if (test->alloc_ret) > > continue; > > -- > > 2.26.2 > > > > _______________________________________________ > > Intel-gfx mailing list > > Intel-gfx@lists.freedesktop.org > > https://lists.freedesktop.org/mailman/listinfo/intel-gfx > > _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx
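Below is a minimal, illustrative sketch of the append-style calling convention added by this patch, based only on the new __sg_alloc_table_from_pages() signature and kernel-doc quoted above. build_sgt_in_batches(), get_page_batch() and BATCH are hypothetical stand-ins for a driver's own page-pinning loop and are not part of the patch.

#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

#define BATCH 128	/* hypothetical number of pages gathered per iteration */

/* Hypothetical helper that collects up to @want pages and returns how many. */
extern unsigned int get_page_batch(struct page **pages, unsigned long want);

static int build_sgt_in_batches(struct sg_table *sgt, unsigned long npages,
				unsigned int max_segment, gfp_t gfp_mask)
{
	struct scatterlist *last = NULL;	/* NULL on the first call */
	unsigned long done = 0;

	while (done < npages) {
		struct page *pages[BATCH];
		unsigned int got = get_page_batch(pages,
				min_t(unsigned long, BATCH, npages - done));

		/* Pass the previously returned SGE and how many pages remain. */
		last = __sg_alloc_table_from_pages(sgt, pages, got, 0,
					(unsigned long)got << PAGE_SHIFT,
					max_segment, last,
					npages - done - got, gfp_mask);
		if (IS_ERR(last)) {
			/* Per the kernel-doc above, free any partial table. */
			sg_free_table(sgt);
			return PTR_ERR(last);
		}
		done += got;
	}

	return 0;
}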