From: Kirill Tkhai <ktkhai@virtuozzo.com>
To: akpm@linux-foundation.org, vdavydov.dev@gmail.com,
	shakeelb@google.com, viro@zeniv.linux.org.uk, hannes@cmpxchg.org,
	mhocko@kernel.org, ktkhai@virtuozzo.com, tglx@linutronix.de,
	pombredanne@nexb.com, stummala@codeaurora.org,
	gregkh@linuxfoundation.org, sfr@canb.auug.org.au, guro@fb.com,
	mka@chromium.org, penguin-kernel@I-love.SAKURA.ne.jp,
	chris@chris-wilson.co.uk, longman@redhat.com, minchan@kernel.org,
	ying.huang@intel.com, mgorman@techsingularity.net, jbacik@fb.com,
	linux@roeck-us.net, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, willy@infradead.org, lirongqing@baidu.com,
	aryabinin@virtuozzo.com
Subject: [PATCH v5 13/13] mm: Clear shrinker bit if there are no objects related to memcg
Date: Thu, 10 May 2018 12:54:15 +0300
Message-ID: <152594605549.22949.16491037134168999424.stgit@localhost.localdomain>
In-Reply-To: <152594582808.22949.8353313986092337675.stgit@localhost.localdomain>

To avoid further unneeded calls of do_shrink_slab()
for shrinkers that no longer have any charged objects
in a memcg, their bits have to be cleared.

This patch introduces a lockless mechanism to do that
without racing with a parallel list_lru add. After
do_shrink_slab() returns SHRINK_EMPTY the first time,
we clear the bit and call it once again. Then we restore
the bit, if the new return value is different.

Note that the single smp_mb__after_atomic() in shrink_slab_memcg()
covers two situations:

1)list_lru_add()     shrink_slab_memcg()
    list_add_tail()    for_each_set_bit() <--- read bit
                         do_shrink_slab() <--- missed list update (no barrier)
    <MB>                 <MB>
    set_bit()            do_shrink_slab() <--- seen list update

This situation, when the first do_shrink_slab() sees the set
bit but does not see the list update (i.e., a race with the
first element queueing), is rare. So we do not add an <MB>
before the first call of do_shrink_slab(), to avoid slowing
down the common case. Also, the second call is needed anyway,
as shown in (2) below.

2)list_lru_add()      shrink_slab_memcg()
    list_add_tail()     ...
    set_bit()           ...
  ...                   for_each_set_bit()
  do_shrink_slab()        do_shrink_slab()
    clear_bit()           ...
  ...                     ...
  list_lru_add()          ...
    list_add_tail()       clear_bit()
    <MB>                  <MB>
    set_bit()             do_shrink_slab()

The barriers guarantee that the second do_shrink_slab()
in the right-hand task sees the list update if it really
cleared the bit. This case is drawn in the code comment.
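
For reference, the set_bit() + <MB> on the left-hand side of both
diagrams comes from the list_lru_add() path, which is taught to set
the memcg shrinker bit earlier in this series (patch 10/13). Below is
a simplified sketch of that side; the helper names (list_lru_from_kmem()
with a memcg argument, lru_shrinker_id()) follow the earlier patches
and are approximate, not the exact final code:

bool list_lru_add(struct list_lru *lru, struct list_head *item)
{
	int nid = page_to_nid(virt_to_page(item));
	struct list_lru_node *nlru = &lru->node[nid];
	struct mem_cgroup *memcg;
	struct list_lru_one *l;

	spin_lock(&nlru->lock);
	if (list_empty(item)) {
		l = list_lru_from_kmem(nlru, item, &memcg);
		list_add_tail(item, &l->list);
		/*
		 * The lru went from empty to non-empty: make the shrinker
		 * visible to memcg reclaim. memcg_set_shrinker_bit() does
		 * smp_mb__before_atomic() + set_bit(), i.e. the <MB> and
		 * set_bit() steps in the diagrams above.
		 */
		if (memcg && !l->nr_items++)
			memcg_set_shrinker_bit(memcg, nid,
					       lru_shrinker_id(lru));
		nlru->nr_items++;
		spin_unlock(&nlru->lock);
		return true;
	}
	spin_unlock(&nlru->lock);
	return false;
}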

[Results/performance of the patchset]

With the whole patchset applied, the test below shows a
significant performance increase:

$echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
$mkdir /sys/fs/cgroup/memory/ct
$echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
$for i in `seq 0 4000`; do
	mkdir /sys/fs/cgroup/memory/ct/$i
	echo $$ > /sys/fs/cgroup/memory/ct/$i/cgroup.procs
	mkdir -p s/$i
	mount -t tmpfs $i s/$i
	touch s/$i/file
done

Then, 5 sequential calls of drop caches:
$time echo 3 > /proc/sys/vm/drop_caches

1)Before:
0.00user 13.78system 0:13.78elapsed 99%CPU
0.00user 5.59system 0:05.60elapsed 99%CPU
0.00user 5.48system 0:05.48elapsed 99%CPU
0.00user 8.35system 0:08.35elapsed 99%CPU
0.00user 8.34system 0:08.35elapsed 99%CPU

2)After:
0.00user 1.10system 0:01.10elapsed 99%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU

The results show the performance increases by at least 548 times.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/linux/memcontrol.h |    2 ++
 mm/vmscan.c                |   19 +++++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 436691a66500..82c0bf2d0579 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1283,6 +1283,8 @@ static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int
 
 		rcu_read_lock();
 		map = MEMCG_SHRINKER_MAP(memcg, nid);
+		/* Pairs with smp mb in shrink_slab() */
+		smp_mb__before_atomic();
 		set_bit(nr, map->map);
 		rcu_read_unlock();
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7b0075612d73..189b163bef4a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -586,8 +586,23 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 			continue;
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
-		if (ret == SHRINK_EMPTY)
-			ret = 0;
+		if (ret == SHRINK_EMPTY) {
+			clear_bit(i, map->map);
+			/*
+			 * Pairs with mb in memcg_set_shrinker_bit():
+			 *
+			 * list_lru_add()     shrink_slab_memcg()
+			 *   list_add_tail()    clear_bit()
+			 *   <MB>               <MB>
+			 *   set_bit()          do_shrink_slab()
+			 */
+			smp_mb__after_atomic();
+			ret = do_shrink_slab(&sc, shrinker, priority);
+			if (ret == SHRINK_EMPTY)
+				ret = 0;
+			else
+				memcg_set_shrinker_bit(memcg, nid, i);
+		}
 		freed += ret;
 
 		if (rwsem_is_contended(&shrinker_rwsem)) {
