linux-rt-users.vger.kernel.org archive mirror
From: Uladzislau Rezki <urezki@gmail.com>
To: Uladzislau Rezki <urezki@gmail.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Daniel Wagner <dwagner@suse.de>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-rt-users@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH] mm: vmalloc: Use the vmap_area_lock to protect ne_fit_preload_node
Date: Tue, 8 Oct 2019 18:04:59 +0200	[thread overview]
Message-ID: <20191008160459.GA5487@pc636> (raw)
In-Reply-To: <20191007214420.GA3212@pc636>

On Mon, Oct 07, 2019 at 11:44:20PM +0200, Uladzislau Rezki wrote:
> On Mon, Oct 07, 2019 at 07:36:44PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2019-10-07 18:56:11 [+0200], Uladzislau Rezki wrote:
> > > Actually there is a high lock contention on vmap_area_lock, because it
> > > is still global. You can have a look at last slide:
> > > 
> > > https://linuxplumbersconf.org/event/4/contributions/547/attachments/287/479/Reworking_of_KVA_allocator_in_Linux_kernel.pdf
> > > 
> > > so this change will make it a bit higher. On the other hand, I agree
> > > that for RT it should be fixed; it could probably be done like:
> > > 
> > > #ifdef CONFIG_PREEMPT_RT
> > >     migrate_disable()
> > > #else
> > >     preempt_disable()
> > > #endif
> > > ...
> > > 
> > > but i am not sure it is good either.
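The quoted suggestion can be sketched in userspace with stub primitives. This is a minimal illustration only: `pcpu_section_enter`/`pcpu_section_exit` and the stub counters are names invented here, and the real kernel primitives are very different under the hood; the point is just that the same call site compiles to migration-disable on PREEMPT_RT and preemption-disable otherwise.

```c
/* Userspace stubs standing in for the kernel primitives so the
 * selection logic can be exercised; counters only track balance. */
static int preempt_cnt, migrate_cnt;

static void preempt_disable(void) { preempt_cnt++; }
static void preempt_enable(void)  { preempt_cnt--; }
static void migrate_disable(void) { migrate_cnt++; }
static void migrate_enable(void)  { migrate_cnt--; }

/* The proposed selection: on PREEMPT_RT only migration is disabled,
 * keeping the section preemptible; otherwise preemption is disabled. */
#ifdef CONFIG_PREEMPT_RT
#define pcpu_section_enter() migrate_disable()
#define pcpu_section_exit()  migrate_enable()
#else
#define pcpu_section_enter() preempt_disable()
#define pcpu_section_exit()  preempt_enable()
#endif

static int balanced_section(void)
{
	pcpu_section_enter();
	/* ... per-CPU data would be touched here ... */
	pcpu_section_exit();
	return preempt_cnt + migrate_cnt; /* 0 when enter/exit balance */
}
```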
> > 
> > What is to be expected on average? Is the lock acquired and then
> > released again because the slot is empty and memory needs to be
> > allocated, or can it be assumed that this hardly happens?
> > 
> The lock is not released (we are not allowed to); instead we just try
> to allocate with the GFP_NOWAIT flag. That can happen if the earlier
> preallocation with GFP_KERNEL failed:
> 
> <snip>
> ...
>  } else if (type == NE_FIT_TYPE) {
>   /*
>    * Split no edge of fit VA.
>    *
>    *     |       |
>    *   L V  NVA  V R
>    * |---|-------|---|
>    */
>   lva = __this_cpu_xchg(ne_fit_preload_node, NULL);
>   if (unlikely(!lva)) {
>       ...
>       lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
>       ...
>   }
> ...
> <snip>
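The take-or-fallback step in the quoted snippet can be modeled in userspace. This is a toy sketch, not the kernel's code: a single atomic slot stands in for the per-CPU ne_fit_preload_node, `take_for_split`/`refill_slot` are names invented here, and malloc stands in for kmem_cache_alloc.

```c
#include <stdatomic.h>
#include <stdlib.h>

struct vmap_area { int dummy; };

/* Single slot standing in for the per-CPU ne_fit_preload_node. */
static _Atomic(struct vmap_area *) preload_slot;
static int nowait_allocs; /* how often the emergency path ran */

/* Mirrors __this_cpu_xchg(ne_fit_preload_node, NULL) followed by the
 * GFP_NOWAIT fallback when the slot turned out to be empty. */
static struct vmap_area *take_for_split(void)
{
	struct vmap_area *lva = atomic_exchange(&preload_slot, NULL);

	if (!lva) {
		/* Slot empty: allocate in place (GFP_NOWAIT in the kernel,
		 * since the lock is held and we must not sleep). */
		nowait_allocs++;
		lva = malloc(sizeof(*lva));
	}
	return lva;
}

/* Refill the slot, as the preload step does with GFP_KERNEL. */
static void refill_slot(void)
{
	atomic_store(&preload_slot, malloc(sizeof(struct vmap_area)));
}
```

A taker that finds the slot refilled never touches the emergency path, which is exactly why the preload keeps GFP_NOWAIT allocations rare.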
> 
> How often we need an extra object for the split purpose depends on
> the workload; for example, the fork() path falls into that pattern.
> 
> I think we can assume that migration hardly ever happens, so it
> should be considered a rare case. Thus we can do the preloading
> without worrying much about when it occurs:
> 
> <snip>
> urezki@pc636:~/data/ssd/coding/linux-stable$ git diff
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index e92ff5f7dd8b..bc782edcd1fd 100644 
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1089,20 +1089,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>          * Even if it fails we do not really care about that. Just proceed
>          * as it is. "overflow" path will refill the cache we allocate from.
>          */
> -       preempt_disable();
> -       if (!__this_cpu_read(ne_fit_preload_node)) {
> -               preempt_enable();
> +       if (!this_cpu_read(ne_fit_preload_node)) {
>                 pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> -               preempt_disable();
> 
> -               if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
> +               if (this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
>                         if (pva)
>                                 kmem_cache_free(vmap_area_cachep, pva);
>                 }
>         }
>  
>         spin_lock(&vmap_area_lock);
> -       preempt_enable();
> 
>         /*
>          * If an allocation fails, the "vend" address is
> urezki@pc636:~/data/ssd/coding/linux-stable$
> <snip>
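The publish-or-free step in the diff can also be modeled in userspace. Again a toy sketch under stated assumptions: one atomic slot stands in for the per-CPU variable, `preload_this_cpu` is a name invented here, and malloc/free stand in for the kmem_cache calls.

```c
#include <stdatomic.h>
#include <stdlib.h>

struct vmap_area { int dummy; };

/* Single slot standing in for the per-CPU ne_fit_preload_node. */
static _Atomic(struct vmap_area *) ne_fit_slot;
static int lost_races; /* objects freed because another context won */

static void preload_this_cpu(void)
{
	if (atomic_load(&ne_fit_slot))
		return; /* already preloaded, nothing to do */

	/* In the kernel this is kmem_cache_alloc_node(GFP_KERNEL), which
	 * may sleep; that is why it now runs without preempt_disable(). */
	struct vmap_area *pva = malloc(sizeof(*pva));
	struct vmap_area *expected = NULL;

	/* this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva) in the diff:
	 * publish only if the slot is still empty, else drop our copy. */
	if (!atomic_compare_exchange_strong(&ne_fit_slot, &expected, pva)) {
		free(pva);
		lost_races++;
	}
}
```

Because we may have migrated between the read and the cmpxchg, losing the race is harmless: the extra object is simply freed, and correctness never depends on staying on the same CPU.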
> 
> So we do not guarantee a preloaded object; instead we minimize the
> number of allocations done with the GFP_NOWAIT flag. For example, on
> my 4-CPU system I am not able to even trigger the case where a CPU
> is not preloaded.
> 
> I can test it tomorrow on my 12-CPU system to see its behavior there.
> 
Tested it on different systems. For example, on my 8-CPU system that
runs a PREEMPT kernel I see only a few GFP_NOWAIT allocations, i.e. it
happens when we land on another CPU that was not preloaded.

I ran a special test case that follows the preload pattern and path:
20 unbound threads each do 1000000 allocations. As a result, on average
only 3.5 times per 1000000 allocations was the CPU not preloaded during
splitting, so GFP_NOWAIT was used to obtain an extra object.

So the slightly modified approach still minimizes allocations in atomic
context; they can still happen, but their number is negligible and can
be ignored, I think.

--
Vlad Rezki


Thread overview: 19+ messages
2019-10-03  9:09 [PATCH] mm: vmalloc: Use the vmap_area_lock to protect ne_fit_preload_node Daniel Wagner
2019-10-03 11:55 ` Uladzislau Rezki
2019-10-04 15:37 ` Sebastian Andrzej Siewior
2019-10-04 16:20   ` Uladzislau Rezki
2019-10-04 16:30     ` Sebastian Andrzej Siewior
2019-10-04 17:04       ` Uladzislau Rezki
2019-10-04 17:45         ` Sebastian Andrzej Siewior
2019-10-07  8:30       ` Daniel Wagner
2019-10-07 10:56         ` Sebastian Andrzej Siewior
2019-10-07 16:23           ` Uladzislau Rezki
2019-10-07 16:34             ` Daniel Wagner
2019-10-07 16:56               ` Uladzislau Rezki
2019-10-07 17:22                 ` Daniel Wagner
2019-10-07 17:36                 ` Sebastian Andrzej Siewior
2019-10-07 21:44                   ` Uladzislau Rezki
2019-10-08 16:04                     ` Uladzislau Rezki [this message]
2019-10-09  6:05                       ` Daniel Wagner
2019-10-09  9:47                         ` Uladzislau Rezki
2019-10-07  8:27   ` Daniel Wagner
