From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: linux-mm@kvack.org, Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
Marco Elver <elver@google.com>,
Matthew Wilcox <willy@infradead.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v1 09/15] mm/slab: kmalloc: pass requests larger than order-1 page to page allocator
Date: Fri, 22 Apr 2022 21:40:48 +0900
Message-ID: <YmKiUOVw9i1Uw1Gb@hyeyoo>
In-Reply-To: <9b808582-3a64-b626-14a4-55e7d9040261@suse.cz>

On Thu, Mar 24, 2022 at 07:08:27PM +0100, Vlastimil Babka wrote:
> On 3/8/22 12:41, Hyeonggon Yoo wrote:
> > There is not much benefit for serving large objects in kmalloc().
> > Let's pass large requests to page allocator like SLUB for better
> > maintenance of common code.
> >
> > [ vbabka@suse.cz: Enable and disable irq around free_large_kmalloc().
> > Do not lose NUMA locality in __do_kmalloc_node().
> > Use folio_slab(folio)->slab_cache instead of virt_to_cache().
> > Remove large sizes in __kmalloc_index(). ]
A bit late to reply, but better late than never...
>
> Thanks for the mention but that's generally only done like this if I took
> your patch and made those changes myself. But I just suggested them. Small
> suggested changes like this are usually just mentioned in e.g. v1->v2
> changelogs.
Ah, okay. I didn't know about the convention. Thanks for letting me
know!
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > ---
> > include/linux/slab.h | 23 +++++-----------------
> > mm/slab.c | 45 ++++++++++++++++++++++++++++++--------------
> > mm/slab.h | 3 +++
> > mm/slab_common.c | 25 +++++++++++++++++-------
> > mm/slub.c | 19 -------------------
> > 5 files changed, 57 insertions(+), 58 deletions(-)
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index dfcc8301d969..9ced225a3ea3 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -226,27 +226,17 @@ void kmem_dump_obj(void *object);
> >
> > #ifdef CONFIG_SLAB
> > /*
> > - * The largest kmalloc size supported by the SLAB allocators is
> > - * 32 megabyte (2^25) or the maximum allocatable page order if that is
> > - * less than 32 MB.
> > - *
> > - * WARNING: Its not easy to increase this value since the allocators have
> > - * to do various tricks to work around compiler limitations in order to
> > - * ensure proper constant folding.
> > + * SLAB and SLUB directly allocates requests fitting in to an order-1 page
> > + * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
> > */
> > -#define KMALLOC_SHIFT_HIGH ((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
> > - (MAX_ORDER + PAGE_SHIFT - 1) : 25)
> > -#define KMALLOC_SHIFT_MAX KMALLOC_SHIFT_HIGH
> > +#define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
> > +#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1)
> > #ifndef KMALLOC_SHIFT_LOW
> > #define KMALLOC_SHIFT_LOW 5
> > #endif
> > #endif
> >
> > #ifdef CONFIG_SLUB
> > -/*
> > - * SLUB directly allocates requests fitting in to an order-1 page
> > - * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
> > - */
> > #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
> > #define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1)
> > #ifndef KMALLOC_SHIFT_LOW
> > @@ -398,10 +388,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
> > if (size <= 512 * 1024) return 19;
> > if (size <= 1024 * 1024) return 20;
> > if (size <= 2 * 1024 * 1024) return 21;
> > - if (size <= 4 * 1024 * 1024) return 22;
> > - if (size <= 8 * 1024 * 1024) return 23;
> > - if (size <= 16 * 1024 * 1024) return 24;
> > - if (size <= 32 * 1024 * 1024) return 25;
> >
> > if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES) && size_is_constant)
> > BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
> > @@ -411,6 +397,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
> > /* Will never be reached. Needed because the compiler may complain */
> > return -1;
> > }
> > +static_assert(PAGE_SHIFT <= 20);
> > #define kmalloc_index(s) __kmalloc_index(s, true)
> > #endif /* !CONFIG_SLOB */
> >
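To make the new limits concrete, here is a standalone, compile-checkable
sketch with illustrative values of my own (assuming 4 KiB pages and
MAX_ORDER == 11; these numbers are not part of the patch):

/* Illustrative values only: 4 KiB pages and MAX_ORDER == 11 assumed. */
#define PAGE_SHIFT 12
#define MAX_ORDER  11

#define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)             /* 13 -> 8 KiB */
#define KMALLOC_SHIFT_MAX  (MAX_ORDER + PAGE_SHIFT - 1) /* 22 -> 4 MiB */

/* Requests up to 8 KiB (an order-1 page) are served from kmalloc
 * caches; anything larger, up to 4 MiB here, goes straight to the
 * page allocator. */
_Static_assert((1 << KMALLOC_SHIFT_HIGH) == 2 * 4096, "order-1 page");

/* The static_assert(PAGE_SHIFT <= 20) added above guarantees that the
 * kmalloc_index() table, which now ends at 2 MiB (index 21), still
 * covers KMALLOC_SHIFT_HIGH == PAGE_SHIFT + 1 <= 21. */
_Static_assert(PAGE_SHIFT <= 20, "kmalloc_index() table covers 2 MiB");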
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 6ebf509bf2de..f0041f0125ba 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -3568,7 +3568,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
> > void *ret;
> >
> > if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
> > - return NULL;
> > + return kmalloc_large_node(size, flags, node);
>
> Similar issue with caller not traced.
>
Actually I moved the tracepoint into kmalloc_large_node(), but I think
the real problem was that I wrote patches that were hard to review.
In v2 I split some patches to make them more reviewable. Thanks!
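To illustrate the v2 direction, here is a minimal sketch (not the
verbatim patch; kmalloc_large_node_notrace() is a hypothetical untraced
helper used only for illustration):

/*
 * Sketch: kmalloc_large_node() emits the tracepoint itself, so every
 * caller -- including __do_kmalloc_node() above -- is traced without
 * duplicating the trace call at each call site.
 */
void *kmalloc_large_node(size_t size, gfp_t flags, int node)
{
	void *ret = kmalloc_large_node_notrace(size, flags, node);

	trace_kmalloc_node(_RET_IP_, ret, size,
			   PAGE_SIZE << get_order(size), flags, node);
	return ret;
}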
> > cachep = kmalloc_slab(size, flags);
> > if (unlikely(ZERO_OR_NULL_PTR(cachep)))
> > return cachep;
> > @@ -3642,15 +3642,25 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
> > {
> > struct kmem_cache *s;
> > size_t i;
> > + struct folio *folio;
> >
> > local_irq_disable();
> > for (i = 0; i < size; i++) {
> > void *objp = p[i];
> >
> > - if (!orig_s) /* called via kfree_bulk */
> > - s = virt_to_cache(objp);
> > - else
> > + if (!orig_s) {
> > + folio = virt_to_folio(objp);
> > + /* called via kfree_bulk */
> > + if (!folio_test_slab(folio)) {
> > + local_irq_enable();
> > + free_large_kmalloc(folio, objp);
> > + local_irq_disable();
> > + continue;
> > + }
> > + s = folio_slab(folio)->slab_cache;
> > + } else
> > s = cache_from_obj(orig_s, objp);
> > +
> > if (!s)
> > continue;
> >
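The folio-based dispatch above, condensed into a sketch (illustrative
only; cache_or_free_large() is a hypothetical helper name):

/*
 * Return the object's kmem_cache, or NULL after freeing the object
 * directly when it is a large kmalloc() allocation. Note that
 * free_large_kmalloc() runs with irqs enabled, which is why the bulk
 * loop above wraps it in local_irq_enable()/local_irq_disable().
 */
static struct kmem_cache *cache_or_free_large(void *objp)
{
	struct folio *folio = virt_to_folio(objp);

	if (!folio_test_slab(folio)) {
		/* not a slab object: it came from the page allocator */
		free_large_kmalloc(folio, objp);
		return NULL;
	}
	return folio_slab(folio)->slab_cache;
}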
> > @@ -3679,20 +3689,25 @@ void kfree(const void *objp)
> > {
> > struct kmem_cache *c;
> > unsigned long flags;
> > + struct folio *folio;
> > + void *x = (void *) objp;
>
> I think you don't need to add 'x', just do the cast while calling
> free_large_kmalloc(), same as done for __cache_free().
>
In fact, SLUB's kfree() also defines x, but your suggestion sounds
better. Anyway, I did it in v2. Thanks!
> >
> > trace_kfree(_RET_IP_, objp);
> >
> > if (unlikely(ZERO_OR_NULL_PTR(objp)))
> > return;
> > - local_irq_save(flags);
> > - kfree_debugcheck(objp);
> > - c = virt_to_cache(objp);
> > - if (!c) {
> > - local_irq_restore(flags);
> > +
> > + folio = virt_to_folio(objp);
> > + if (!folio_test_slab(folio)) {
> > + free_large_kmalloc(folio, x);
> > return;
> > }
> > - debug_check_no_locks_freed(objp, c->object_size);
> >
> > + c = folio_slab(folio)->slab_cache;
> > +
> > + local_irq_save(flags);
> > + kfree_debugcheck(objp);
> > + debug_check_no_locks_freed(objp, c->object_size);
> > debug_check_no_obj_freed(objp, c->object_size);
> > __cache_free(c, (void *)objp, _RET_IP_);
> > local_irq_restore(flags);
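For completeness, a minimal sketch of the free_large_kmalloc() side of
this (hedged, from memory -- not the verbatim code from this series):

void free_large_kmalloc(struct folio *folio, void *object)
{
	unsigned int order = folio_order(folio);

	if (WARN_ON_ONCE(order == 0))
		pr_warn_once("object pointer: 0x%p\n", object);

	kmemleak_free(object);
	kasan_kfree_large(object);

	/* Undo the NR_SLAB_UNRECLAIMABLE_B accounting done at
	 * allocation time and give the pages back to the page
	 * allocator. */
	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
			      -(PAGE_SIZE << order));
	__free_pages(folio_page(folio, 0), order);
}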
--
Thanks,
Hyeonggon