* [PATCHSET] slab: make memcg slab destruction scalable
@ 2017-01-14  5:54 Tejun Heo
From: Tejun Heo @ 2017-01-14  5:54 UTC (permalink / raw)
  To: vdavydov.dev, cl, penberg, rientjes, iamjoonsoo.kim, akpm
  Cc: jsvana, hannes, linux-kernel, linux-mm, cgroups, kernel-team

With kmem cgroup support enabled, kmem_caches can be created and
destroyed frequently, and a great number of near-empty kmem_caches can
accumulate if there are a lot of transient cgroups and the system is
not under memory pressure.  When memory reclaim starts under such
conditions, it can lead to consecutive deactivation and destruction of
many kmem_caches, easily hundreds of thousands on moderately large
systems, exposing scalability issues in the current slab management
code.

I've seen machines which end up with hundreds of thousands of caches
and many millions of kernfs_nodes.  The current code is O(N^2) on the
total number of caches and has synchronous rcu_barrier() and
synchronize_sched() calls in the cgroup offline / release path, which
is executed while holding cgroup_mutex.  Combined, this leads to very
expensive and slow cache destruction operations which can easily keep
running for half a day.
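
To make the cost concrete, below is a rough sketch of the problematic
pattern (illustrative only; is_dying_memcg_cache() is a made-up
predicate and the loop body is condensed, not the actual
mm/slab_common.c code).  Every destruction blocks for full grace
periods while the global slab_mutex is held, and the per-cache work
itself rescans cache lists, which is where the O(N^2) comes from.

  /* sketch only */
  struct kmem_cache *s, *s2;

  mutex_lock(&slab_mutex);
  list_for_each_entry_safe(s, s2, &slab_caches, list) { /* O(N) walk */
          if (!is_dying_memcg_cache(s))   /* hypothetical predicate */
                  continue;
          rcu_barrier();          /* waits out a full RCU grace period */
          synchronize_sched();    /* and a full sched-RCU grace period */
          shutdown_cache(s);      /* per-cache work over cache lists */
  }
  mutex_unlock(&slab_mutex);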

This also messes up /proc/slabinfo and other cache-iterating
operations.  seq_file operates in 4k chunks and, on each 4k boundary,
re-seeks to the saved position in the list.  With a huge number of
caches on the list, this becomes very slow and very prone to the list
contents changing underneath it, leading to a lot of missing and/or
duplicate entries.
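
For reference, the list-backed seq_file pattern in question looks
roughly like this (a condensed sketch using the stock seq_list_start()
/ seq_list_next() helpers from fs/seq_file.c, close to what
mm/slab_common.c does):

  static void *slab_start(struct seq_file *m, loff_t *pos)
  {
          mutex_lock(&slab_mutex);
          return seq_list_start(&slab_caches, *pos); /* O(*pos) re-walk */
  }

  static void *slab_next(struct seq_file *m, void *p, loff_t *pos)
  {
          return seq_list_next(p, &slab_caches, pos);
  }

  static void slab_stop(struct seq_file *m, void *p)
  {
          mutex_unlock(&slab_mutex);
  }

Because the saved position is just an index into the list, any cache
created or destroyed between two chunks shifts the remaining entries,
which is what produces the missing and duplicate lines.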

This patchset addresses these scalability problems.

* Separate out the root and memcg cache lists and add a per-memcg
  list.  Update each user to use the appropriate list (a sketch of
  the resulting structure follows this list).

* Replace rcu_barrier() and synchronize_sched() with call_rcu() and
  call_rcu_sched().

* For dying empty slub caches, remove the sysfs files after
  deactivation so that we don't end up with millions of sysfs files
  carrying no useful information.
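
For the list separation, the data-structure direction might look like
the following (field and list names here are illustrative assumptions,
not necessarily the exact ones used in the patches):

  /* sketch: keep root and memcg caches on separate lists */
  struct memcg_cache_params {
          struct kmem_cache *root_cache;  /* NULL for root caches */
          union {
                  /* root cache: linked on a global root-only list */
                  struct list_head __root_caches_node;
                  /* memcg cache: linked on its memory cgroup */
                  struct {
                          struct mem_cgroup *memcg;
                          struct list_head kmem_caches_node;
                  };
          };
  };

With such a split, /proc/slabinfo and friends walk only the root list,
and cgroup offline walks only that cgroup's own caches.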

This patchset contains the following nine patches.

 0001-Revert-slub-move-synchronize_sched-out-of-slab_mutex.patch
 0002-slab-remove-synchronous-rcu_barrier-call-in-memcg-ca.patch
 0003-slab-simplify-shutdown_memcg_caches.patch
 0004-slab-reorganize-memcg_cache_params.patch
 0005-slab-link-memcg-kmem_caches-on-their-associated-memo.patch
 0006-slab-don-t-put-memcg-caches-on-slab_caches-list.patch
 0007-slab-introduce-__kmemcg_cache_deactivate.patch
 0008-slab-remove-synchronous-synchronize_sched-from-memcg.patch
 0009-slab-remove-slub-sysfs-interface-files-early-for-emp.patch

0001 reverts an existing optimization to prepare for the following
changes.  0002 replaces rcu_barrier() in the release path with
call_rcu().  0003-0006 separate out the lists.  0007-0008 replace
synchronize_sched() in the slub destruction path with
call_rcu_sched().  0009 removes the sysfs files early for empty dying
caches.
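
To illustrate what the 0008 conversion amounts to, a synchronous wait
turns into a callback chained off a sched-RCU grace period (the names
below are hypothetical, and the real series may additionally punt to a
workqueue, since RCU callbacks run in softirq context):

  /* sketch: run the second deactivation stage after a sched-RCU
   * grace period instead of blocking for it */
  static void kmemcg_deactivate_rcufn(struct rcu_head *head)
  {
          struct kmem_cache *s = container_of(head, struct kmem_cache,
                                              memcg_params.rcu_head);

          kmemcg_deactivate_after_gp(s);  /* hypothetical 2nd stage */
  }

  /* before: synchronize_sched(); kmemcg_deactivate_after_gp(s); */
  call_rcu_sched(&s->memcg_params.rcu_head, kmemcg_deactivate_rcufn);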

This patchset is on top of the current linus#master, a121103c9228,
and is also available in the following git branch.

 git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git review-kmemcg-scalability

diffstat follows.  Thanks.

 include/linux/memcontrol.h |    1 
 include/linux/slab.h       |   39 ++++-
 include/linux/slab_def.h   |    5 
 include/linux/slub_def.h   |    9 -
 mm/memcontrol.c            |    7 -
 mm/slab.c                  |    7 +
 mm/slab.h                  |   21 ++-
 mm/slab_common.c           |  306 ++++++++++++++++++++++++---------------------
 mm/slub.c                  |   54 +++++++
 9 files changed, 283 insertions(+), 166 deletions(-)

--
tejun



Thread overview: 26+ messages
2017-01-14  5:54 [PATCHSET] slab: make memcg slab destruction scalable Tejun Heo
2017-01-14  5:54 ` [PATCH 1/9] Revert "slub: move synchronize_sched out of slab_mutex on shrink" Tejun Heo
2017-01-14  5:54 ` [PATCH 2/9] slab: remove synchronous rcu_barrier() call in memcg cache release path Tejun Heo
2017-01-14 13:19   ` Vladimir Davydov
2017-01-14 15:19     ` Tejun Heo
2017-01-17  0:07       ` Joonsoo Kim
2017-01-17 16:37         ` Tejun Heo
2017-01-17 17:02           ` Tejun Heo
2017-01-14  5:54 ` [PATCH 3/9] slab: simplify shutdown_memcg_caches() Tejun Heo
2017-01-14 13:27   ` Vladimir Davydov
2017-01-14 15:38     ` Tejun Heo
2017-01-14 15:53       ` Tejun Heo
2017-01-14  5:54 ` [PATCH 4/9] slab: reorganize memcg_cache_params Tejun Heo
2017-01-14 13:30   ` Vladimir Davydov
2017-01-14  5:54 ` [PATCH 5/9] slab: link memcg kmem_caches on their associated memory cgroup Tejun Heo
2017-01-14 13:33   ` Vladimir Davydov
2017-01-14  5:54 ` [PATCH 6/9] slab: don't put memcg caches on slab_caches list Tejun Heo
2017-01-14 13:39   ` Vladimir Davydov
2017-01-14 15:39     ` Tejun Heo
2017-01-14  5:54 ` [PATCH 7/9] slab: introduce __kmemcg_cache_deactivate() Tejun Heo
2017-01-14 13:42   ` Vladimir Davydov
2017-01-14 15:39     ` Tejun Heo
2017-01-14  5:54 ` [PATCH 8/9] slab: remove synchronous synchronize_sched() from memcg cache deactivation path Tejun Heo
2017-01-14 13:57   ` Vladimir Davydov
2017-01-14  5:54 ` [PATCH 9/9] slab: remove slub sysfs interface files early for empty memcg caches Tejun Heo
2017-01-14 14:00   ` Vladimir Davydov
