From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Vladimir Davydov <vdavydov@parallels.com>
Cc: akpm@linux-foundation.org, cl@linux.com, rientjes@google.com,
	penberg@kernel.org, hannes@cmpxchg.org, mhocko@suse.cz,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH -mm v2 7/8] slub: make dead memcg caches discard free slabs immediately
Date: Tue, 10 Jun 2014 17:09:35 +0900	[thread overview]
Message-ID: <20140610080935.GG19036@js1304-P5Q-DELUXE> (raw)
In-Reply-To: <3b53266b76556dd042bbf6147207c70473572a7e.1402060096.git.vdavydov@parallels.com>

On Fri, Jun 06, 2014 at 05:22:44PM +0400, Vladimir Davydov wrote:
> Since a dead memcg cache is destroyed only after the last slab allocated
> to it is freed, we must disable caching of empty slabs for such caches,
> otherwise they will be hanging around forever.
> 
> This patch makes SLUB discard dead memcg caches' slabs as soon as they
> become empty. To achieve that, it disables per cpu partial lists for
> dead caches (see put_cpu_partial) and forbids keeping empty slabs on per
> node partial lists by setting cache's min_partial to 0 on
> kmem_cache_shrink, which is always called on memcg offline (see
> memcg_unregister_all_caches).
> 
> Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
> Thanks-to: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  mm/slub.c |   20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index e46d6abe8a68..1dad7e2c586a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2015,6 +2015,8 @@ static void unfreeze_partials(struct kmem_cache *s,
>  #endif
>  }
>  
> +static void flush_all(struct kmem_cache *s);
> +
>  /*
>   * Put a page that was just frozen (in __slab_free) into a partial page
>   * slot if available. This is done without interrupts disabled and without
> @@ -2064,6 +2066,21 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>  
>  	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
>  								!= oldpage);
> +
> +	if (memcg_cache_dead(s)) {
> +		bool done = false;
> +		unsigned long flags;
> +
> +		local_irq_save(flags);
> +		if (this_cpu_read(s->cpu_slab->partial) == page) {
> +			unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
> +			done = true;
> +		}
> +		local_irq_restore(flags);
> +
> +		if (!done)
> +			flush_all(s);
> +	}

Now that slab_free() is non-preemptable (patch 5/8 in this series), flush_all() isn't needed here.
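
To illustrate (just a rough sketch on top of this hunk, not a tested change): since put_cpu_partial() can no longer migrate to another cpu between the cmpxchg above and the check below, the newly installed page always ends up on this cpu's partial list, so the done/flush_all() fallback could simply be dropped:

	if (memcg_cache_dead(s)) {
		unsigned long flags;

		/* Drain this cpu's partial list so empty slabs are freed right away. */
		local_irq_save(flags);
		unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
		local_irq_restore(flags);
	}

The min_partial = 0 change in kmem_cache_shrink() still keeps empty slabs off the per-node partial lists.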

Thanks.


Thread overview: 60+ messages

2014-06-06 13:22 [PATCH -mm v2 0/8] memcg/slab: reintroduce dead cache self-destruction Vladimir Davydov
2014-06-06 13:22 ` [PATCH -mm v2 1/8] memcg: cleanup memcg_cache_params refcnt usage Vladimir Davydov
2014-06-06 13:22 ` [PATCH -mm v2 2/8] memcg: destroy kmem caches when last slab is freed Vladimir Davydov
2014-06-06 13:22 ` [PATCH -mm v2 3/8] memcg: mark caches that belong to offline memcgs as dead Vladimir Davydov
2014-06-10  7:48   ` Joonsoo Kim
2014-06-10 10:06     ` Vladimir Davydov
2014-06-06 13:22 ` [PATCH -mm v2 4/8] slub: don't fail kmem_cache_shrink if slab placement optimization fails Vladimir Davydov
2014-06-06 13:22 ` [PATCH -mm v2 5/8] slub: make slab_free non-preemptable Vladimir Davydov
2014-06-06 14:46   ` Christoph Lameter
2014-06-09 12:52     ` Vladimir Davydov
2014-06-09 13:52       ` Christoph Lameter
2014-06-12  6:58   ` Joonsoo Kim
2014-06-12 10:03     ` Vladimir Davydov
2014-06-06 13:22 ` [PATCH -mm v2 6/8] memcg: wait for kfree's to finish before destroying cache Vladimir Davydov
2014-06-06 13:22 ` [PATCH -mm v2 7/8] slub: make dead memcg caches discard free slabs immediately Vladimir Davydov
2014-06-06 14:48   ` Christoph Lameter
2014-06-10  8:09   ` Joonsoo Kim [this message]
2014-06-10 10:09     ` Vladimir Davydov
2014-06-06 13:22 ` [PATCH -mm v2 8/8] slab: make dead memcg caches discard free slabs immediately Vladimir Davydov
2014-06-06 14:52   ` Christoph Lameter
2014-06-09 13:04     ` Vladimir Davydov
2014-06-10  7:43   ` Joonsoo Kim
2014-06-10 10:03     ` Vladimir Davydov
2014-06-10 14:26       ` Christoph Lameter
2014-06-10 15:18         ` Vladimir Davydov
2014-06-11  8:11           ` Joonsoo Kim
2014-06-11 21:24           ` Vladimir Davydov
2014-06-12  6:53             ` Joonsoo Kim
2014-06-12 10:02               ` Vladimir Davydov
2014-06-13 16:34               ` Christoph Lameter
