From: Jisheng Zhang <jszhang@marvell.com>
To: Chris Wilson <chris@chris-wilson.co.uk>
Cc: <akpm@linux-foundation.org>, <mgorman@techsingularity.net>,
	<rientjes@google.com>, <iamjoonsoo.kim@lge.com>,
	<agnel.joel@gmail.com>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH] mm/vmalloc: reduce the number of lazy_max_pages to reduce latency
Date: Thu, 29 Sep 2016 19:18:06 +0800	[thread overview]
Message-ID: <20160929191806.25da2700@xhacker> (raw)
In-Reply-To: <20160929110714.GF28107@nuc-i3427.alporthouse.com>

On Thu, 29 Sep 2016 12:07:14 +0100 Chris Wilson wrote:

> On Thu, Sep 29, 2016 at 04:28:08PM +0800, Jisheng Zhang wrote:
> > On Thu, 29 Sep 2016 09:18:18 +0100 Chris Wilson wrote:
> >   
> > > On Thu, Sep 29, 2016 at 03:34:11PM +0800, Jisheng Zhang wrote:  
> > > > On Marvell Berlin arm64 platforms, I see the preemptoff tracer report
> > > > a max 26543 us latency at __purge_vmap_area_lazy; this latency is
> > > > awfully bad for STB (set-top box) workloads. The ftrace log also shows
> > > > that __free_vmap_area contributes most of the latency. I noticed that
> > > > Joel mentioned the same issue[1] on x86 and gave two solutions, but it
> > > > seems no patch was sent out for this purpose.
> > > > 
> > > > This patch adopts Joel's first solution, but I use 16MB per core
> > > > rather than 8MB per core for lazy_max_pages. After this patch, the
> > > > preemptoff tracer reports a max 6455 us latency, reduced to 1/4 of
> > > > the original result.
> > > 
> > > My understanding is that
> > > 
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 91f44e78c516..3f7c6d6969ac 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -626,7 +626,6 @@ void set_iounmap_nonlazy(void)
> > >  static void __purge_vmap_area_lazy(unsigned long *start, unsigned long *end,
> > >                                         int sync, int force_flush)
> > >  {
> > > -       static DEFINE_SPINLOCK(purge_lock);
> > >         struct llist_node *valist;
> > >         struct vmap_area *va;
> > >         struct vmap_area *n_va;
> > > @@ -637,12 +636,6 @@ static void __purge_vmap_area_lazy(unsigned long *start, unsigned long *end,
> > >          * should not expect such behaviour. This just simplifies locking for
> > >          * the case that isn't actually used at the moment anyway.
> > >          */
> > > -       if (!sync && !force_flush) {
> > > -               if (!spin_trylock(&purge_lock))
> > > -                       return;
> > > -       } else
> > > -               spin_lock(&purge_lock);
> > > -
> > >         if (sync)
> > >                 purge_fragmented_blocks_allcpus();
> > >  
> > > @@ -667,7 +660,6 @@ static void __purge_vmap_area_lazy(unsigned long *start, unsigned long *end,
> > >                         __free_vmap_area(va);
> > >                 spin_unlock(&vmap_area_lock);  
> > 
> > Hi Chris,
> > 
> > Per my test, the bottleneck now is the __free_vmap_area() iteration over
> > the valist, which is protected by the vmap_area_lock spinlock. So the
> > larger lazy_max_pages is, the longer the valist and the bigger the
> > latency.
> > 
> > So besides the above patch, we still need to remove vmap_area_lock or
> > replace it with a mutex.
> 
> Or follow up with
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 3f7c6d6969ac..67b5475f0b0a 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -656,8 +656,10 @@ static void __purge_vmap_area_lazy(unsigned long *start, unsigned long *end,
>  
>         if (nr) {
>                 spin_lock(&vmap_area_lock);
> -               llist_for_each_entry_safe(va, n_va, valist, purge_list)
> +               llist_for_each_entry_safe(va, n_va, valist, purge_list) {
>                         __free_vmap_area(va);
> +                       cond_resched_lock(&vmap_area_lock);

Oh, great! This seems to work fine. I'm not sure whether there's any side
effect or performance regression, but this patch plus the previous
purge_lock removal does address my problem.

Thanks,
Jisheng

> +               }
>                 spin_unlock(&vmap_area_lock);
>         }
>  }
> 
> ?
> -Chris
> 
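
For context, the knob the original patch tunes is lazy_max_pages() in
mm/vmalloc.c. The sketch below is a rough reconstruction of that function
as it looked around v4.8, not the posted patch itself; the patch under
discussion lowers this per-core budget (to 16MB per core, per the commit
message quoted above), which shortens the lazy-free list and therefore the
purge latency:

    /* Rough v4.8-era reconstruction of mm/vmalloc.c; not the posted diff. */
    static unsigned long lazy_max_pages(void)
    {
            unsigned int log;

            log = fls(num_online_cpus());

            return log * (32UL * 1024 * 1024 / PAGE_SIZE);
    }

The follow-up diff above relies on the generic cond_resched_lock() idiom:
when walking a long list under a spinlock, periodically drop the lock,
reschedule if another task needs the CPU, and re-take the lock, so that the
preempt-off window is bounded by one iteration rather than by the whole
list. Below is a minimal, self-contained sketch of that idiom using
hypothetical names (struct item, items_lock, purge_items), not the real
vmalloc code:

    #include <linux/list.h>
    #include <linux/sched.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct item {
            struct list_head node;
    };

    static DEFINE_SPINLOCK(items_lock);
    static LIST_HEAD(items);

    /*
     * Free everything on the list without keeping preemption disabled for
     * the whole walk: cond_resched_lock() may drop items_lock, schedule,
     * and re-take it between iterations.  Assumes a single purger (much
     * like __purge_vmap_area_lazy(), where each caller detaches its own
     * private valist), so 'next' cannot vanish while the lock is dropped.
     */
    static void purge_items(void)
    {
            struct item *it, *next;

            spin_lock(&items_lock);
            list_for_each_entry_safe(it, next, &items, node) {
                    list_del(&it->node);
                    kfree(it);
                    cond_resched_lock(&items_lock);
            }
            spin_unlock(&items_lock);
    }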

  reply	other threads:[~2016-09-29 11:23 UTC|newest]

Thread overview: 33+ messages
2016-09-29  7:34 [PATCH] mm/vmalloc: reduce the number of lazy_max_pages to reduce latency Jisheng Zhang
2016-09-29  8:18 ` Chris Wilson
2016-09-29  8:28   ` Jisheng Zhang
2016-09-29 11:07     ` Chris Wilson
2016-09-29 11:18       ` Jisheng Zhang [this message]
2016-10-09  3:43   ` Joel Fernandes
2016-10-09 12:42     ` Chris Wilson
2016-10-09 19:00       ` Joel Fernandes
2016-10-09 19:26         ` Chris Wilson
2016-10-11  5:06           ` Joel Fernandes
2016-10-11  5:34             ` Joel Fernandes