From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Matthew Wilcox <willy@infradead.org>,
	 Suren Baghdasaryan <surenb@google.com>,
	Christoph Lameter <cl@linux.com>,
	David Rientjes <rientjes@google.com>,
	 Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	 Roman Gushchin <roman.gushchin@linux.dev>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	 patches@lists.linux.dev
Subject: Re: [RFC v2 2/7] mm, slub: add opt-in slub_percpu_array
Date: Mon, 21 Aug 2023 23:57:16 +0900
Message-ID: <CAB=+i9TSMVURktFvr7sAt4T2BdaUvsWFapAjTZNtk0AKS01O9A@mail.gmail.com>
In-Reply-To: <20230810163627.6206-11-vbabka@suse.cz>

Hi,

On Fri, Aug 11, 2023 at 1:36 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> kmem_cache_setup_percpu_array() will allocate a per-cpu array for
> caching alloc/free objects of given size for the cache. The cache
> has to be created with SLAB_NO_MERGE flag.
>
> The array is filled by freeing. When empty for alloc or full for
> freeing, it's simply bypassed by the operation, there's currently no
> batch freeing/allocations.
>
> The locking is copied from the page allocator's pcplists, based on
> embedded spin locks. Interrupts are not disabled, only preemption (cpu
> migration on RT). Trylock is attempted to avoid deadlock due to
> an intnerrupt, trylock failure means the array is bypassed.

nit: s/intnerrupt/interrupt/
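
Also, just to check my understanding of the description above: is the
intended setup and alloc fast path roughly the following? (Only a sketch
from the changelog, not the patch code; the setup signature and the pca
field/lock names are my guesses.)

/* the cache must be unmergeable before the array is attached */
s = kmem_cache_create("my_cache", size, 0, SLAB_NO_MERGE, NULL);
kmem_cache_setup_percpu_array(s, count);

/* alloc side: preemption disabled (cpu migration on RT), irqs left on */
object = NULL;
preempt_disable();
pca = this_cpu_ptr(s->cpu_array);
/* trylock: if we interrupted a holder on this CPU, bypass instead of deadlocking */
if (spin_trylock(&pca->lock)) {
        if (pca->used)
                object = pca->objects[--pca->used];
        spin_unlock(&pca->lock);
}
preempt_enable();
if (!object)
        object = __slab_alloc_node(...); /* empty array or trylock failure: bypass */

with the free side mirroring it, pushing the object into the array unless
it is already full?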

>  /*
>   * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
>   * have the fastpath folded into their functions. So no function call
> @@ -3465,7 +3564,11 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
>         if (unlikely(object))
>                 goto out;
>
> -       object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
> +       if (s->cpu_array)
> +               object = alloc_from_pca(s);
> +
> +       if (!object)
> +               object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
>
>         maybe_wipe_obj_freeptr(s, object);
>         init = slab_want_init_on_alloc(gfpflags, s);
> @@ -3715,6 +3818,34 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>         discard_slab(s, slab);
>  }

>  #ifndef CONFIG_SLUB_TINY
>  /*
>   * Fastpath with forced inlining to produce a kfree and kmem_cache_free that
> @@ -3740,6 +3871,11 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
>         unsigned long tid;
>         void **freelist;
>
> +       if (s->cpu_array && cnt == 1) {
> +               if (free_to_pca(s, head))
> +                       return;
> +       }
> +
>  redo:
>         /*
>          * Determine the currently cpus per cpu slab.
> @@ -3793,6 +3929,11 @@ static void do_slab_free(struct kmem_cache *s,
>  {
>         void *tail_obj = tail ? : head;
>
> +       if (s->cpu_array && cnt == 1) {
> +               if (free_to_pca(s, head))
> +                       return;
> +       }
> +
>         __slab_free(s, slab, head, tail_obj, cnt, addr);
>  }
>  #endif /* CONFIG_SLUB_TINY */

Is this functionality needed for SLUB_TINY?

> @@ -4060,6 +4201,45 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_bulk);
>
> +int kmem_cache_prefill_percpu_array(struct kmem_cache *s, unsigned int count,
> +               gfp_t gfp)
> +{
> +       struct slub_percpu_array *pca;
> +       void *objects[32];
> +       unsigned int used;
> +       unsigned int allocated;
> +
> +       if (!s->cpu_array)
> +               return -EINVAL;
> +
> +       /* racy but we don't care */
> +       pca = raw_cpu_ptr(s->cpu_array);
> +
> +       used = READ_ONCE(pca->used);

Hmm, for the prefill to be meaningful, remote allocation (taking objects
from another CPU's array) would have to be possible, right?

Otherwise it only prefills the array of the CPU that requested it.
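
For example, with a hypothetical caller that prefills and then allocates
a bit later (names and gfp flags made up for illustration):

kmem_cache_prefill_percpu_array(s, 5, GFP_KERNEL);
/* nothing pins the task here, so it may migrate to another CPU ... */
obj = kmem_cache_alloc(s, GFP_NOWAIT); /* ... and find that CPU's array empty */

Unless the caller stays on the same CPU, or allocation can dip into remote
arrays, the prefilled objects might not be there when they are needed.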

> +       if (used >= count)
> +               return 0;
> +
> +       if (pca->count < count)
> +               return -EINVAL;
> +
> +       count -= used;
> +
> +       /* TODO fix later */
> +       if (count > 32)
> +               count = 32;
> +
> +       for (int i = 0; i < count; i++)
> +               objects[i] = NULL;
> +       allocated = kmem_cache_alloc_bulk(s, gfp, count, &objects[0]);
> +
> +       for (int i = 0; i < count; i++) {
> +               if (objects[i]) {
> +                       kmem_cache_free(s, objects[i]);
> +               }
> +       }

nit: why not

for (int i = 0; i < allocated; i++) {
    kmem_cache_free(s, objects[i]);
}

and drop the objects[i] = NULL initialization loop?
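
IIUC SLUB's kmem_cache_alloc_bulk() either allocates all 'count' objects or
frees the partial allocation and returns 0, so the tail of the function
could then shrink to (my sketch):

allocated = kmem_cache_alloc_bulk(s, gfp, count, &objects[0]);

/* freeing is what fills the percpu array, so push everything back */
for (int i = 0; i < allocated; i++)
        kmem_cache_free(s, objects[i]);

return allocated;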

> +       return allocated;
> +}

And a question:
Does SLUB still need to maintain per-cpu partial slab lists even when
an opt-in percpu array is used?


Thread overview: 13+ messages
2023-08-10 16:36 [RFC v2 0/7] SLUB percpu array caches and maple tree nodes Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 1/7] mm, slub: fix bulk alloc and free stats Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 2/7] mm, slub: add opt-in slub_percpu_array Vlastimil Babka
2023-08-21 14:57   ` Hyeonggon Yoo [this message]
2023-11-28 17:37     ` Vlastimil Babka
2023-11-29  0:46       ` Hyeonggon Yoo
2023-11-29 13:25         ` Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 3/7] maple_tree: use slub percpu array Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 4/7] maple_tree: avoid bulk alloc/free to use percpu array more Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 5/7] maple_tree: Remove MA_STATE_PREALLOC Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 6/7] maple_tree: replace preallocation with slub percpu array prefill Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 7/7] tools: Add SLUB percpu array functions for testing Vlastimil Babka
2023-08-18 16:44 ` [RFC v2 0/7] SLUB percpu array caches and maple tree nodes Suren Baghdasaryan
