From: Vladimir Davydov <vdavydov.dev@gmail.com>
To: Michal Hocko <mhocko@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: memcontrol: use special workqueue for creating per-memcg caches
Date: Tue, 4 Oct 2016 16:14:17 +0300
Message-ID: <20161004131417.GC1862@esperanza>
In-Reply-To: <20161003131930.GE26768@dhcp22.suse.cz>

On Mon, Oct 03, 2016 at 03:19:31PM +0200, Michal Hocko wrote:
> On Mon 03-10-16 15:35:06, Vladimir Davydov wrote:
> > On Mon, Oct 03, 2016 at 02:06:42PM +0200, Michal Hocko wrote:
> > > On Sat 01-10-16 16:56:47, Vladimir Davydov wrote:
> > > > Creating a lot of cgroups at the same time might stall all worker
> > > > threads with kmem cache creation works, because kmem cache creation is
> > > > done with the slab_mutex held. To prevent that from happening, let's use
> > > > a special workqueue for kmem cache creation with max in-flight work
> > > > items equal to 1.
> > > > 
> > > > Link: https://bugzilla.kernel.org/show_bug.cgi?id=172981
> > > 
> > > This looks like a regression, but I am not really sure I understand
> > > what has caused it. We have had WQ-based cache creation since kmem was
> > > introduced, more or less. So is it 801faf0db894 ("mm/slab: lockless
> > > decision to grow cache"), which was pointed out by bisection, that
> > > changed the timing, or rather relaxed the cache creation to the point
> > > that would allow this runaway?
> > 
> > It is in case of SLAB. For SLUB the issue was caused by commit
> > 81ae6d03952c ("mm/slub.c: replace kick_all_cpus_sync() with
> > synchronize_sched() in kmem_cache_shrink()").
> 
> OK, thanks for the confirmation. This would be useful in the changelog
> imho.
> 
> > > This would be really useful for the stable backport
> > > consideration.
> > > 
> > > Also, if I understand the fix correctly, we now limit the number of
> > > workers to 1 thread. Is this really what we want? Wouldn't it be
> > > possible that a few memcgs could starve others from having their cache
> > > created? What would be the result, missed charges?
> > 
> > Now kmem caches are created in FIFO order, i.e. if one memcg called
> > kmem_cache_alloc on a non-existent cache before another, it will be
> > served first.
> 
> I do not see where this FIFO is guaranteed.
> __memcg_schedule_kmem_cache_create doesn't seem to be using ordered WQ.

Yeah, you're right - I thought max_active implied ordering, but it
doesn't, so let's use an ordered workqueue instead (see the sketch
below; the updated patch follows it):
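
To illustrate the distinction, here is a minimal sketch - the names are
made up and none of this is part of the patch: a bound workqueue with
max_active = 1 only limits how many work items run per CPU pool, while
an ordered workqueue runs items strictly one at a time in queueing
order.

#include <linux/workqueue.h>

static struct workqueue_struct *limited_wq, *ordered_wq;

static int __init wq_example_init(void)
{
	/*
	 * max_active = 1 on a bound workqueue: at most one work item
	 * executes per CPU pool, but items queued on different CPUs may
	 * still run concurrently and complete out of order.
	 */
	limited_wq = alloc_workqueue("example_limited", 0, 1);

	/*
	 * Ordered workqueue: work items execute strictly one at a time,
	 * in the order they were queued (FIFO), system-wide.
	 */
	ordered_wq = alloc_ordered_workqueue("example_ordered", 0);

	if (!limited_wq || !ordered_wq)
		return -ENOMEM;
	return 0;
}

With a dedicated ordered workqueue, queue_work() therefore serves memcgs
in the order they first hit a missing cache.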

From 10f5f126800912c6a4b78a8b615138c1322694ad Mon Sep 17 00:00:00 2001
From: Vladimir Davydov <vdavydov.dev@gmail.com>
Date: Sat, 1 Oct 2016 16:39:09 +0300
Subject: [PATCH] mm: memcontrol: use special workqueue for creating per-memcg
 caches

Creating a lot of cgroups at the same time might stall all worker
threads with kmem cache creation works, because kmem cache creation is
done with the slab_mutex held. The problem was amplified by commits
801faf0db894 ("mm/slab: lockless decision to grow cache") in case of
SLAB and 81ae6d03952c ("mm/slub.c: replace kick_all_cpus_sync() with
synchronize_sched() in kmem_cache_shrink()") in case of SLUB, which
increased the maximal time the slab_mutex can be held.

To prevent that from happening, let's use a special ordered,
single-threaded workqueue for kmem cache creation. This shouldn't
introduce any functional changes regarding how kmem caches are created,
as the work function holds the global slab_mutex during its whole
runtime anyway, making it impossible to run more than one work at a
time. By using a single-threaded workqueue, we just avoid creating a
thread per work item. Ordering is required to avoid a situation where a
cgroup's work is put off indefinitely because there are other cgroups
to serve; in other words, to guarantee fairness.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=172981
Signed-off-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Reported-by: Doug Smythies <dsmythies@telus.net>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4be518d4e68a..8d753d87ca37 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2175,6 +2175,8 @@ struct memcg_kmem_cache_create_work {
 	struct work_struct work;
 };
 
+static struct workqueue_struct *memcg_kmem_cache_create_wq;
+
 static void memcg_kmem_cache_create_func(struct work_struct *w)
 {
 	struct memcg_kmem_cache_create_work *cw =
@@ -2206,7 +2208,7 @@ static void __memcg_schedule_kmem_cache_create(struct mem_cgroup *memcg,
 	cw->cachep = cachep;
 	INIT_WORK(&cw->work, memcg_kmem_cache_create_func);
 
-	schedule_work(&cw->work);
+	queue_work(memcg_kmem_cache_create_wq, &cw->work);
 }
 
 static void memcg_schedule_kmem_cache_create(struct mem_cgroup *memcg,
@@ -5794,6 +5796,17 @@ static int __init mem_cgroup_init(void)
 {
 	int cpu, node;
 
+#ifndef CONFIG_SLOB
+	/*
+	 * Kmem cache creation is mostly done with the slab_mutex held,
+	 * so use a special workqueue to avoid stalling all worker
+	 * threads in case lots of cgroups are created simultaneously.
+	 */
+	memcg_kmem_cache_create_wq =
+		alloc_ordered_workqueue("memcg_kmem_cache_create", 0);
+	BUG_ON(!memcg_kmem_cache_create_wq);
+#endif
+
 	hotcpu_notifier(memcg_cpu_hotplug_callback, 0);
 
 	for_each_possible_cpu(cpu)

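
For context, the changelog's point that this does not reduce concurrency
can be seen from the shape of the work function: it spends essentially
its whole runtime under the global slab_mutex. A simplified sketch (the
real function delegates to memcg_create_kmem_cache(), which takes the
mutex in mm/slab_common.c, and does a bit more bookkeeping):

static void memcg_kmem_cache_create_func(struct work_struct *w)
{
	struct memcg_kmem_cache_create_work *cw =
		container_of(w, struct memcg_kmem_cache_create_work, work);

	/*
	 * Cache creation serializes on the global slab_mutex, so even on
	 * a non-ordered workqueue at most one of these works can make
	 * progress at a time.  The ordered workqueue therefore costs no
	 * concurrency; it only adds FIFO fairness and avoids tying up a
	 * worker thread per pending work item.
	 */
	mutex_lock(&slab_mutex);
	/* ... create and register the per-memcg cache for cw->cachep ... */
	mutex_unlock(&slab_mutex);

	kfree(cw);
}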