linux-mm.kvack.org archive mirror
* [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n))
@ 2018-05-09 11:56 Kirill Tkhai
  2018-05-09 11:56 ` [PATCH v4 01/13] mm: Assign id to every memcg-aware shrinker Kirill Tkhai
                   ` (12 more replies)
  0 siblings, 13 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:56 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

Hi,

this patchset solves the problem of slow shrink_slab() occurring
on machines with many shrinkers and memory cgroups (i.e.,
with many containers). The problem is that the complexity of
shrink_slab() is O(n^2), and it grows too fast with the number
of containers.

Suppose we have 200 containers, and every container has 10 mounts
and 10 cgroups. All container tasks are isolated, and they don't
touch other containers' mounts.

In the case of global reclaim, a task has to iterate over all the
memcgs and call all the memcg-aware shrinkers for each of them.
This means the task has to visit 200 * 10 = 2000 shrinkers for
every memcg, and since there are 2000 memcgs, the total number of
do_shrink_slab() calls is 2000 * 2000 = 4000000.

4 million calls are not trivial operations that take a single cpu
cycle each. E.g., super_cache_count() accesses at least two lists and
performs arithmetic calculations. Even if there are no charged objects,
we do these calculations and displace cpu caches with memory reads.
I observed nodes spending almost 100% of their time in the kernel
under intensive writing and global reclaim. The writer consumes pages
fast, but shrink_slab() has to finish before the reclaimer reaches the
page-shrinking functions (and frees SWAP_CLUSTER_MAX pages). Even if
there is no writing, the iterations just waste time and slow reclaim
down.

Let's see the small test below:

$echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
$mkdir /sys/fs/cgroup/memory/ct
$echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
$for i in `seq 0 4000`;
	do mkdir /sys/fs/cgroup/memory/ct/$i;
	echo $$ > /sys/fs/cgroup/memory/ct/$i/cgroup.procs;
	mkdir -p s/$i; mount -t tmpfs $i s/$i; touch s/$i/file;
done

Then, let's see drop caches time (5 sequential calls):
$time echo 3 > /proc/sys/vm/drop_caches

0.00user 13.78system 0:13.78elapsed 99%CPU
0.00user 5.59system 0:05.60elapsed 99%CPU
0.00user 5.48system 0:05.48elapsed 99%CPU
0.00user 8.35system 0:08.35elapsed 99%CPU
0.00user 8.34system 0:08.35elapsed 99%CPU


The last four calls don't actually shrink anything. So, the iterations
over the slab shrinkers alone take 5.48 seconds. Not so good for
scalability.

The patchset solves the problem by making shrink_slab() O(n) in
complexity. The functional changes are:

1) Assign an id to every registered memcg-aware shrinker.
2) Maintain a per-memcg bitmap of memcg-aware shrinkers, and set
   the shrinker-related bit after the first element is added to
   the lru list (also when removed child memcg elements are
   reparented).
3) Split memcg-aware and !memcg-aware shrinkers, and call a
   shrinker only if its bit is set in the memcg's shrinker bitmap.
   (There is also functionality to clear the bit after the last
   element is shrunk.)

This gives a significant performance increase. The result after the patchset is applied:

$time echo 3 > /proc/sys/vm/drop_caches

0.00user 1.10system 0:01.10elapsed 99%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU

The results show a performance increase of at least 548 times.

So, the patchset reduces the complexity of shrink_slab() and improves
performance under the type of load described above. It will also give
a benefit in the !global reclaim case, since there will be fewer
do_shrink_slab() calls there as well.

This patchset is made against linux-next.git tree.

v4: Do not use memcg mem_cgroup_idr for iteration over mem cgroups

v3: Many changes requested in commentaries to v2:

1) rebased on the prealloc_shrinker() code base
2) root_mem_cgroup is excluded from the memcg maps
3) the rwsem is replaced with shrinkers_nr_max_mutex
4) changes around the assignment of shrinker ids to list_lru
5) everything renamed

v2: Many changes requested in commentaries to v1:

1) the code mostly moved to mm/memcontrol.c;
2) using IDR instead of an array of shrinkers;
3) added the possibility to assign a list_lru shrinker id
   at shrinker registration time;
4) reorganized locking and renamed functions and variables.

---

Kirill Tkhai (13):
      mm: Assign id to every memcg-aware shrinker
      memcg: Move up for_each_mem_cgroup{,_tree} defines
      mm: Assign memcg-aware shrinkers bitmap to memcg
      mm: Refactoring in workingset_init()
      fs: Refactoring in alloc_super()
      fs: Propagate shrinker::id to list_lru
      list_lru: Add memcg argument to list_lru_from_kmem()
      list_lru: Pass dst_memcg argument to memcg_drain_list_lru_node()
      list_lru: Pass lru argument to memcg_drain_list_lru_node()
      mm: Set bit in memcg shrinker bitmap on first list_lru item appearance
      mm: Iterate only over charged shrinkers during memcg shrink_slab()
      mm: Add SHRINK_EMPTY shrinker methods return value
      mm: Clear shrinker bit if there are no objects related to memcg


 fs/super.c                 |   18 ++++-
 include/linux/list_lru.h   |    3 +
 include/linux/memcontrol.h |   32 ++++++++
 include/linux/shrinker.h   |   11 ++-
 mm/list_lru.c              |   65 +++++++++++++----
 mm/memcontrol.c            |  146 ++++++++++++++++++++++++++++++++++----
 mm/vmscan.c                |  170 +++++++++++++++++++++++++++++++++++++++++---
 mm/workingset.c            |   13 +++
 8 files changed, 405 insertions(+), 53 deletions(-)

--
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>


* [PATCH v4 01/13] mm: Assign id to every memcg-aware shrinker
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
@ 2018-05-09 11:56 ` Kirill Tkhai
  2018-05-09 22:55   ` Andrew Morton
  2018-05-09 11:57 ` [PATCH v4 02/13] memcg: Move up for_each_mem_cgroup{, _tree} defines Kirill Tkhai
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:56 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

This patch introduces the shrinker::id number, which is used to
enumerate memcg-aware shrinkers. The ids start from 0, and the code
tries to keep them as small as possible.

This will be used to represent memcg-aware shrinkers in the memcg
shrinkers map.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 fs/super.c               |    3 ++
 include/linux/shrinker.h |    4 +++
 mm/vmscan.c              |   59 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 66 insertions(+)

diff --git a/fs/super.c b/fs/super.c
index 122c402049a2..036a5522f9d0 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -248,6 +248,9 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags,
 	s->s_time_gran = 1000000000;
 	s->cleancache_poolid = CLEANCACHE_NO_POOL;
 
+#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
+	s->s_shrink.id = -1;
+#endif
 	s->s_shrink.seeks = DEFAULT_SEEKS;
 	s->s_shrink.scan_objects = super_cache_scan;
 	s->s_shrink.count_objects = super_cache_count;
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 6794490f25b2..a9ec364e1b0b 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -66,6 +66,10 @@ struct shrinker {
 
 	/* These are for internal use */
 	struct list_head list;
+#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
+	/* ID in shrinker_idr */
+	int id;
+#endif
 	/* objs pending delete, per node */
 	atomic_long_t *nr_deferred;
 };
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 10c8a38c5eef..36808bdf02ae 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -169,6 +169,47 @@ unsigned long vm_total_pages;
 static LIST_HEAD(shrinker_list);
 static DECLARE_RWSEM(shrinker_rwsem);
 
+#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
+static DEFINE_IDR(shrinker_idr);
+
+static int prealloc_memcg_shrinker(struct shrinker *shrinker)
+{
+	int id, ret;
+
+	down_write(&shrinker_rwsem);
+	ret = id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
+	if (ret < 0)
+		goto unlock;
+	shrinker->id = id;
+	ret = 0;
+unlock:
+	up_write(&shrinker_rwsem);
+	return ret;
+}
+
+static void del_memcg_shrinker(struct shrinker *shrinker)
+{
+	int id = shrinker->id;
+
+	if (id < 0)
+		return;
+
+	down_write(&shrinker_rwsem);
+	idr_remove(&shrinker_idr, id);
+	up_write(&shrinker_rwsem);
+	shrinker->id = -1;
+}
+#else /* CONFIG_MEMCG && !CONFIG_SLOB */
+static int prealloc_memcg_shrinker(struct shrinker *shrinker)
+{
+	return 0;
+}
+
+static void del_memcg_shrinker(struct shrinker *shrinker)
+{
+}
+#endif /* CONFIG_MEMCG && !CONFIG_SLOB */
+
 #ifdef CONFIG_MEMCG
 static bool global_reclaim(struct scan_control *sc)
 {
@@ -306,6 +347,7 @@ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone
 int prealloc_shrinker(struct shrinker *shrinker)
 {
 	size_t size = sizeof(*shrinker->nr_deferred);
+	int ret;
 
 	if (shrinker->flags & SHRINKER_NUMA_AWARE)
 		size *= nr_node_ids;
@@ -313,11 +355,26 @@ int prealloc_shrinker(struct shrinker *shrinker)
 	shrinker->nr_deferred = kzalloc(size, GFP_KERNEL);
 	if (!shrinker->nr_deferred)
 		return -ENOMEM;
+
+	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
+		ret = prealloc_memcg_shrinker(shrinker);
+		if (ret)
+			goto free_deferred;
+	}
+
 	return 0;
+
+free_deferred:
+	kfree(shrinker->nr_deferred);
+	shrinker->nr_deferred = NULL;
+	return -ENOMEM;
 }
 
 void free_prealloced_shrinker(struct shrinker *shrinker)
 {
+	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
+		del_memcg_shrinker(shrinker);
+
 	kfree(shrinker->nr_deferred);
 	shrinker->nr_deferred = NULL;
 }
@@ -347,6 +404,8 @@ void unregister_shrinker(struct shrinker *shrinker)
 {
 	if (!shrinker->nr_deferred)
 		return;
+	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
+		del_memcg_shrinker(shrinker);
 	down_write(&shrinker_rwsem);
 	list_del(&shrinker->list);
 	up_write(&shrinker_rwsem);


* [PATCH v4 02/13] memcg: Move up for_each_mem_cgroup{, _tree} defines
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
  2018-05-09 11:56 ` [PATCH v4 01/13] mm: Assign id to every memcg-aware shrinker Kirill Tkhai
@ 2018-05-09 11:57 ` Kirill Tkhai
  2018-05-09 11:57 ` [PATCH v4 03/13] mm: Assign memcg-aware shrinkers bitmap to memcg Kirill Tkhai
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:57 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

The next patch requires these defines to be above their current
position, so move them up to the declarations.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/memcontrol.c |   30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index bde5819be340..3df3efa7ff40 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -233,6 +233,21 @@ enum res_type {
 /* Used for OOM nofiier */
 #define OOM_CONTROL		(0)
 
+/*
+ * Iteration constructs for visiting all cgroups (under a tree).  If
+ * loops are exited prematurely (break), mem_cgroup_iter_break() must
+ * be used for reference counting.
+ */
+#define for_each_mem_cgroup_tree(iter, root)		\
+	for (iter = mem_cgroup_iter(root, NULL, NULL);	\
+	     iter != NULL;				\
+	     iter = mem_cgroup_iter(root, iter, NULL))
+
+#define for_each_mem_cgroup(iter)			\
+	for (iter = mem_cgroup_iter(NULL, NULL, NULL);	\
+	     iter != NULL;				\
+	     iter = mem_cgroup_iter(NULL, iter, NULL))
+
 /* Some nice accessors for the vmpressure. */
 struct vmpressure *memcg_to_vmpressure(struct mem_cgroup *memcg)
 {
@@ -867,21 +882,6 @@ static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
 	}
 }
 
-/*
- * Iteration constructs for visiting all cgroups (under a tree).  If
- * loops are exited prematurely (break), mem_cgroup_iter_break() must
- * be used for reference counting.
- */
-#define for_each_mem_cgroup_tree(iter, root)		\
-	for (iter = mem_cgroup_iter(root, NULL, NULL);	\
-	     iter != NULL;				\
-	     iter = mem_cgroup_iter(root, iter, NULL))
-
-#define for_each_mem_cgroup(iter)			\
-	for (iter = mem_cgroup_iter(NULL, NULL, NULL);	\
-	     iter != NULL;				\
-	     iter = mem_cgroup_iter(NULL, iter, NULL))
-
 /**
  * mem_cgroup_scan_tasks - iterate over tasks of a memory cgroup hierarchy
  * @memcg: hierarchy root


* [PATCH v4 03/13] mm: Assign memcg-aware shrinkers bitmap to memcg
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
  2018-05-09 11:56 ` [PATCH v4 01/13] mm: Assign id to every memcg-aware shrinker Kirill Tkhai
  2018-05-09 11:57 ` [PATCH v4 02/13] memcg: Move up for_each_mem_cgroup{, _tree} defines Kirill Tkhai
@ 2018-05-09 11:57 ` Kirill Tkhai
  2018-05-09 11:57 ` [PATCH v4 04/13] mm: Refactoring in workingset_init() Kirill Tkhai
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:57 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

Imagine a big node with many cpus, memory cgroups and containers.
Suppose we have 200 containers, every container has 10 mounts
and 10 cgroups, and container tasks don't touch other containers'
mounts. If there is intensive page writing and global reclaim
happens, a writing task has to iterate over all memcgs to shrink
slab before it's able to get to shrink_page_list().

Iteration over all the memcg slabs is very expensive:
the task has to visit 200 * 10 = 2000 shrinkers
for every memcg, and since there are 2000 memcgs,
the total number of calls is 2000 * 2000 = 4000000.

So, the shrinker makes 4 million do_shrink_slab() calls
just to try to isolate SWAP_CLUSTER_MAX pages in one
of the actively writing memcgs via shrink_page_list().
I've observed a node spending almost 100% of its time
in the kernel, making useless iterations over already
shrunk slabs.

This patch adds a bitmap of memcg-aware shrinkers to the memcg.
The size of the bitmap depends on bitmap_nr_ids, and during the
memcg's lifetime it is kept large enough to fit bitmap_nr_ids
shrinkers. Every bit in the map corresponds to a shrinker id.

Later patches will keep a bit set only for memcgs with really
charged objects. This will allow shrink_slab() to improve its
performance significantly. See the last patch for the numbers.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/linux/memcontrol.h |   16 ++++++
 mm/memcontrol.c            |  114 ++++++++++++++++++++++++++++++++++++++++++++
 mm/vmscan.c                |   16 ++++++
 3 files changed, 145 insertions(+), 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6cbea2f25a87..c159f3abe168 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -105,6 +105,15 @@ struct lruvec_stat {
 	long count[NR_VM_NODE_STAT_ITEMS];
 };
 
+/*
+ * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
+ * which have elements charged to this memcg.
+ */
+struct memcg_shrinker_map {
+	struct rcu_head rcu;
+	unsigned long map[0];
+};
+
 /*
  * per-zone information in memory controller.
  */
@@ -117,6 +126,7 @@ struct mem_cgroup_per_node {
 	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
 
 	struct mem_cgroup_reclaim_iter	iter[DEF_PRIORITY + 1];
+	struct memcg_shrinker_map __rcu	*shrinker_map;
 
 	struct rb_node		tree_node;	/* RB tree node */
 	unsigned long		usage_in_excess;/* Set to the value by which */
@@ -1208,6 +1218,8 @@ extern int memcg_nr_cache_ids;
 void memcg_get_cache_ids(void);
 void memcg_put_cache_ids(void);
 
+extern int memcg_shrinker_nr_max;
+
 /*
  * Helper macro to loop through all memcg-specific caches. Callers must still
  * check if the cache is valid (it is either valid or NULL).
@@ -1231,6 +1243,10 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
 	return memcg ? memcg->kmemcg_id : -1;
 }
 
+#define MEMCG_SHRINKER_MAP(memcg, nid) (memcg->nodeinfo[nid]->shrinker_map)
+
+extern int memcg_expand_shrinker_maps(int old_id, int id);
+
 #else
 #define for_each_memcg_cache_index(_idx)	\
 	for (; NULL; )
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3df3efa7ff40..7b224a76ac68 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -320,6 +320,114 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
 
 struct workqueue_struct *memcg_kmem_cache_wq;
 
+int memcg_shrinker_nr_max;
+static DEFINE_MUTEX(shrinkers_nr_max_mutex);
+
+static void memcg_free_shrinker_map_rcu(struct rcu_head *head)
+{
+	kvfree(container_of(head, struct memcg_shrinker_map, rcu));
+}
+
+static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
+					 int size, int old_size)
+{
+	struct memcg_shrinker_map *new, *old;
+	int nid;
+
+	lockdep_assert_held(&shrinkers_nr_max_mutex);
+
+	for_each_node(nid) {
+		old = rcu_dereference_protected(MEMCG_SHRINKER_MAP(memcg, nid), true);
+		/* Not yet online memcg */
+		if (old_size && !old)
+			return 0;
+
+		new = kvmalloc(sizeof(*new) + size, GFP_KERNEL);
+		if (!new)
+			return -ENOMEM;
+
+		/* Set all old bits, clear all new bits */
+		memset(new->map, (int)0xff, old_size);
+		memset((void *)new->map + old_size, 0, size - old_size);
+
+		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new);
+		if (old)
+			call_rcu(&old->rcu, memcg_free_shrinker_map_rcu);
+	}
+
+	return 0;
+}
+
+static void memcg_free_shrinker_maps(struct mem_cgroup *memcg)
+{
+	struct mem_cgroup_per_node *pn;
+	struct memcg_shrinker_map *map;
+	int nid;
+
+	if (memcg == root_mem_cgroup)
+		return;
+
+	mutex_lock(&shrinkers_nr_max_mutex);
+	for_each_node(nid) {
+		pn = mem_cgroup_nodeinfo(memcg, nid);
+		map = rcu_dereference_protected(pn->shrinker_map, true);
+		if (map)
+			call_rcu(&map->rcu, memcg_free_shrinker_map_rcu);
+		rcu_assign_pointer(pn->shrinker_map, NULL);
+	}
+	mutex_unlock(&shrinkers_nr_max_mutex);
+}
+
+static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
+{
+	int ret, size = memcg_shrinker_nr_max/BITS_PER_BYTE;
+
+	if (memcg == root_mem_cgroup)
+		return 0;
+
+	mutex_lock(&shrinkers_nr_max_mutex);
+	ret = memcg_expand_one_shrinker_map(memcg, size, 0);
+	mutex_unlock(&shrinkers_nr_max_mutex);
+
+	if (ret)
+		memcg_free_shrinker_maps(memcg);
+
+	return ret;
+}
+
+static struct idr mem_cgroup_idr;
+
+int memcg_expand_shrinker_maps(int old_nr, int nr)
+{
+	int size, old_size, ret = 0;
+	struct mem_cgroup *memcg;
+
+	old_size = old_nr / BITS_PER_BYTE;
+	size = nr / BITS_PER_BYTE;
+
+	mutex_lock(&shrinkers_nr_max_mutex);
+
+	if (!root_mem_cgroup)
+		goto unlock;
+
+	for_each_mem_cgroup(memcg) {
+		if (memcg == root_mem_cgroup)
+			continue;
+		ret = memcg_expand_one_shrinker_map(memcg, size, old_size);
+		if (ret)
+			goto unlock;
+	}
+unlock:
+	mutex_unlock(&shrinkers_nr_max_mutex);
+	return ret;
+}
+#else /* CONFIG_SLOB */
+static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
+{
+	return 0;
+}
+static void memcg_free_shrinker_maps(struct mem_cgroup *memcg) { }
+
 #endif /* !CONFIG_SLOB */
 
 /**
@@ -4471,6 +4579,11 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
 
+	if (memcg_alloc_shrinker_maps(memcg)) {
+		mem_cgroup_id_remove(memcg);
+		return -ENOMEM;
+	}
+
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	atomic_set(&memcg->id.ref, 1);
 	css_get(css);
@@ -4522,6 +4635,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
 	vmpressure_cleanup(&memcg->vmpressure);
 	cancel_work_sync(&memcg->high_work);
 	mem_cgroup_remove_from_trees(memcg);
+	memcg_free_shrinker_maps(memcg);
 	memcg_free_kmem(memcg);
 	mem_cgroup_free(memcg);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 36808bdf02ae..d1940244d0bf 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -174,12 +174,26 @@ static DEFINE_IDR(shrinker_idr);
 
 static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 {
-	int id, ret;
+	int id, nr, ret;
 
 	down_write(&shrinker_rwsem);
 	ret = id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
 	if (ret < 0)
 		goto unlock;
+
+	if (id >= memcg_shrinker_nr_max) {
+		nr = memcg_shrinker_nr_max * 2;
+		if (nr == 0)
+			nr = BITS_PER_BYTE;
+		BUG_ON(id >= nr);
+
+		if (memcg_expand_shrinker_maps(memcg_shrinker_nr_max, nr)) {
+			idr_remove(&shrinker_idr, id);
+			goto unlock;
+		}
+		memcg_shrinker_nr_max = nr;
+	}
+
 	shrinker->id = id;
 	ret = 0;
 unlock:


* [PATCH v4 04/13] mm: Refactoring in workingset_init()
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (2 preceding siblings ...)
  2018-05-09 11:57 ` [PATCH v4 03/13] mm: Assign memcg-aware shrinkers bitmap to memcg Kirill Tkhai
@ 2018-05-09 11:57 ` Kirill Tkhai
  2018-05-09 11:57 ` [PATCH v4 05/13] fs: Refactoring in alloc_super() Kirill Tkhai
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:57 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

Use prealloc_shrinker()/register_shrinker_prepared()
instead of register_shrinker(). This split will be
used in the next patch.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/workingset.c |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index 40ee02c83978..c3a4fe145bb7 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -528,15 +528,16 @@ static int __init workingset_init(void)
 	pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
 	       timestamp_bits, max_order, bucket_order);
 
-	ret = __list_lru_init(&shadow_nodes, true, &shadow_nodes_key);
+	ret = prealloc_shrinker(&workingset_shadow_shrinker);
 	if (ret)
 		goto err;
-	ret = register_shrinker(&workingset_shadow_shrinker);
+	ret = __list_lru_init(&shadow_nodes, true, &shadow_nodes_key);
 	if (ret)
 		goto err_list_lru;
+	register_shrinker_prepared(&workingset_shadow_shrinker);
 	return 0;
 err_list_lru:
-	list_lru_destroy(&shadow_nodes);
+	free_prealloced_shrinker(&workingset_shadow_shrinker);
 err:
 	return ret;
 }


* [PATCH v4 05/13] fs: Refactoring in alloc_super()
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (3 preceding siblings ...)
  2018-05-09 11:57 ` [PATCH v4 04/13] mm: Refactoring in workingset_init() Kirill Tkhai
@ 2018-05-09 11:57 ` Kirill Tkhai
  2018-05-09 11:57 ` [PATCH v4 06/13] fs: Propagate shrinker::id to list_lru Kirill Tkhai
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:57 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

Move the two list_lru_init_memcg() calls to after prealloc_shrinker().
destroy_unused_super() in the failure path is fine with this.
The next patch needs this ordering.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 fs/super.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/super.c b/fs/super.c
index 036a5522f9d0..d95fa174edab 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -234,10 +234,6 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags,
 	INIT_LIST_HEAD(&s->s_inodes_wb);
 	spin_lock_init(&s->s_inode_wblist_lock);
 
-	if (list_lru_init_memcg(&s->s_dentry_lru))
-		goto fail;
-	if (list_lru_init_memcg(&s->s_inode_lru))
-		goto fail;
 	s->s_count = 1;
 	atomic_set(&s->s_active, 1);
 	mutex_init(&s->s_vfs_rename_mutex);
@@ -258,6 +254,10 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags,
 	s->s_shrink.flags = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE;
 	if (prealloc_shrinker(&s->s_shrink))
 		goto fail;
+	if (list_lru_init_memcg(&s->s_dentry_lru))
+		goto fail;
+	if (list_lru_init_memcg(&s->s_inode_lru))
+		goto fail;
 	return s;
 
 fail:


* [PATCH v4 06/13] fs: Propagate shrinker::id to list_lru
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (4 preceding siblings ...)
  2018-05-09 11:57 ` [PATCH v4 05/13] fs: Refactoring in alloc_super() Kirill Tkhai
@ 2018-05-09 11:57 ` Kirill Tkhai
  2018-05-09 11:57 ` [PATCH v4 07/13] list_lru: Add memcg argument to list_lru_from_kmem() Kirill Tkhai
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:57 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

This patch adds a list_lru::shrinker_id field and populates
it with the registered shrinker's id.

This will be used by the lru code in later patches to set the
correct bit in the memcg shrinkers map once the first
memcg-related element appears in the list_lru.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 fs/super.c               |    4 ++++
 include/linux/list_lru.h |    1 +
 mm/list_lru.c            |    6 ++++++
 mm/workingset.c          |    3 +++
 4 files changed, 14 insertions(+)

diff --git a/fs/super.c b/fs/super.c
index d95fa174edab..c9a6ef33a98b 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -258,6 +258,10 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags,
 		goto fail;
 	if (list_lru_init_memcg(&s->s_inode_lru))
 		goto fail;
+#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
+	s->s_dentry_lru.shrinker_id = s->s_shrink.id;
+	s->s_inode_lru.shrinker_id = s->s_shrink.id;
+#endif
 	return s;
 
 fail:
diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index 96def9d15b1b..f5b6bb7a8670 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -53,6 +53,7 @@ struct list_lru {
 	struct list_lru_node	*node;
 #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
 	struct list_head	list;
+	int			shrinker_id;
 #endif
 };
 
diff --git a/mm/list_lru.c b/mm/list_lru.c
index d9c84c5bda1d..2a4d29491947 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -567,6 +567,9 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware,
 	size_t size = sizeof(*lru->node) * nr_node_ids;
 	int err = -ENOMEM;
 
+#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
+	lru->shrinker_id = -1;
+#endif
 	memcg_get_cache_ids();
 
 	lru->node = kzalloc(size, GFP_KERNEL);
@@ -609,6 +612,9 @@ void list_lru_destroy(struct list_lru *lru)
 	kfree(lru->node);
 	lru->node = NULL;
 
+#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
+	lru->shrinker_id = -1;
+#endif
 	memcg_put_cache_ids();
 }
 EXPORT_SYMBOL_GPL(list_lru_destroy);
diff --git a/mm/workingset.c b/mm/workingset.c
index c3a4fe145bb7..b8900573db25 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -534,6 +534,9 @@ static int __init workingset_init(void)
 	ret = __list_lru_init(&shadow_nodes, true, &shadow_nodes_key);
 	if (ret)
 		goto err_list_lru;
+#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
+	shadow_nodes.shrinker_id = workingset_shadow_shrinker.id;
+#endif
 	register_shrinker_prepared(&workingset_shadow_shrinker);
 	return 0;
 err_list_lru:


* [PATCH v4 07/13] list_lru: Add memcg argument to list_lru_from_kmem()
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (5 preceding siblings ...)
  2018-05-09 11:57 ` [PATCH v4 06/13] fs: Propagate shrinker::id to list_lru Kirill Tkhai
@ 2018-05-09 11:57 ` Kirill Tkhai
  2018-05-09 11:58 ` [PATCH v4 08/13] list_lru: Pass dst_memcg argument to memcg_drain_list_lru_node() Kirill Tkhai
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:57 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

This is just a refactoring to allow later patches to obtain a
memcg pointer in list_lru_from_kmem().

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/list_lru.c |   25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 2a4d29491947..437f854eac44 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -76,18 +76,24 @@ static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
 }
 
 static inline struct list_lru_one *
-list_lru_from_kmem(struct list_lru_node *nlru, void *ptr)
+list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
+		   struct mem_cgroup **memcg_ptr)
 {
-	struct mem_cgroup *memcg;
+	struct list_lru_one *l = &nlru->lru;
+	struct mem_cgroup *memcg = NULL;
 
 	if (!nlru->memcg_lrus)
-		return &nlru->lru;
+		goto out;
 
 	memcg = mem_cgroup_from_kmem(ptr);
 	if (!memcg)
-		return &nlru->lru;
+		goto out;
 
-	return list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
+	l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
+out:
+	if (memcg_ptr)
+		*memcg_ptr = memcg;
+	return l;
 }
 #else
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
@@ -102,8 +108,11 @@ list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
 }
 
 static inline struct list_lru_one *
-list_lru_from_kmem(struct list_lru_node *nlru, void *ptr)
+list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
+		   struct mem_cgroup **memcg_ptr)
 {
+	if (memcg_ptr)
+		*memcg_ptr = NULL;
 	return &nlru->lru;
 }
 #endif /* CONFIG_MEMCG && !CONFIG_SLOB */
@@ -116,7 +125,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
 
 	spin_lock(&nlru->lock);
 	if (list_empty(item)) {
-		l = list_lru_from_kmem(nlru, item);
+		l = list_lru_from_kmem(nlru, item, NULL);
 		list_add_tail(item, &l->list);
 		l->nr_items++;
 		nlru->nr_items++;
@@ -142,7 +151,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
 
 	spin_lock(&nlru->lock);
 	if (!list_empty(item)) {
-		l = list_lru_from_kmem(nlru, item);
+		l = list_lru_from_kmem(nlru, item, NULL);
 		list_del_init(item);
 		l->nr_items--;
 		nlru->nr_items--;

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v4 08/13] list_lru: Pass dst_memcg argument to memcg_drain_list_lru_node()
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (6 preceding siblings ...)
  2018-05-09 11:57 ` [PATCH v4 07/13] list_lru: Add memcg argument to list_lru_from_kmem() Kirill Tkhai
@ 2018-05-09 11:58 ` Kirill Tkhai
  2018-05-09 11:58 ` [PATCH v4 09/13] list_lru: Pass lru " Kirill Tkhai
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:58 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

This is just a refactoring to allow the next patches to obtain
a dst_memcg pointer in memcg_drain_list_lru_node().

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/linux/list_lru.h |    2 +-
 mm/list_lru.c            |   11 ++++++-----
 mm/memcontrol.c          |    2 +-
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index f5b6bb7a8670..5c7db0022ce6 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -66,7 +66,7 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware,
 #define list_lru_init_memcg(lru)	__list_lru_init((lru), true, NULL)
 
 int memcg_update_all_list_lrus(int num_memcgs);
-void memcg_drain_all_list_lrus(int src_idx, int dst_idx);
+void memcg_drain_all_list_lrus(int src_idx, struct mem_cgroup *dst_memcg);
 
 /**
  * list_lru_add: add an element to the lru list's tail
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 437f854eac44..a92850bc209f 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -517,8 +517,9 @@ int memcg_update_all_list_lrus(int new_size)
 }
 
 static void memcg_drain_list_lru_node(struct list_lru_node *nlru,
-				      int src_idx, int dst_idx)
+				      int src_idx, struct mem_cgroup *dst_memcg)
 {
+	int dst_idx = dst_memcg->kmemcg_id;
 	struct list_lru_one *src, *dst;
 
 	/*
@@ -538,7 +539,7 @@ static void memcg_drain_list_lru_node(struct list_lru_node *nlru,
 }
 
 static void memcg_drain_list_lru(struct list_lru *lru,
-				 int src_idx, int dst_idx)
+				 int src_idx, struct mem_cgroup *dst_memcg)
 {
 	int i;
 
@@ -546,16 +547,16 @@ static void memcg_drain_list_lru(struct list_lru *lru,
 		return;
 
 	for_each_node(i)
-		memcg_drain_list_lru_node(&lru->node[i], src_idx, dst_idx);
+		memcg_drain_list_lru_node(&lru->node[i], src_idx, dst_memcg);
 }
 
-void memcg_drain_all_list_lrus(int src_idx, int dst_idx)
+void memcg_drain_all_list_lrus(int src_idx, struct mem_cgroup *dst_memcg)
 {
 	struct list_lru *lru;
 
 	mutex_lock(&list_lrus_mutex);
 	list_for_each_entry(lru, &list_lrus, list)
-		memcg_drain_list_lru(lru, src_idx, dst_idx);
+		memcg_drain_list_lru(lru, src_idx, dst_memcg);
 	mutex_unlock(&list_lrus_mutex);
 }
 #else
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7b224a76ac68..494cefd29590 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3171,7 +3171,7 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 	}
 	rcu_read_unlock();
 
-	memcg_drain_all_list_lrus(kmemcg_id, parent->kmemcg_id);
+	memcg_drain_all_list_lrus(kmemcg_id, parent);
 
 	memcg_free_cache_id(kmemcg_id);
 }


* [PATCH v4 09/13] list_lru: Pass lru argument to memcg_drain_list_lru_node()
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (7 preceding siblings ...)
  2018-05-09 11:58 ` [PATCH v4 08/13] list_lru: Pass dst_memcg argument to memcg_drain_list_lru_node() Kirill Tkhai
@ 2018-05-09 11:58 ` Kirill Tkhai
  2018-05-09 11:58 ` [PATCH v4 10/13] mm: Set bit in memcg shrinker bitmap on first list_lru item appearance Kirill Tkhai
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:58 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

This is just a refactoring to allow the next patches to obtain
an lru pointer in memcg_drain_list_lru_node().

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/list_lru.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index a92850bc209f..ed0f97b0c087 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -516,9 +516,10 @@ int memcg_update_all_list_lrus(int new_size)
 	goto out;
 }
 
-static void memcg_drain_list_lru_node(struct list_lru_node *nlru,
+static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
 				      int src_idx, struct mem_cgroup *dst_memcg)
 {
+	struct list_lru_node *nlru = &lru->node[nid];
 	int dst_idx = dst_memcg->kmemcg_id;
 	struct list_lru_one *src, *dst;
 
@@ -547,7 +548,7 @@ static void memcg_drain_list_lru(struct list_lru *lru,
 		return;
 
 	for_each_node(i)
-		memcg_drain_list_lru_node(&lru->node[i], src_idx, dst_memcg);
+		memcg_drain_list_lru_node(lru, i, src_idx, dst_memcg);
 }
 
 void memcg_drain_all_list_lrus(int src_idx, struct mem_cgroup *dst_memcg)


* [PATCH v4 10/13] mm: Set bit in memcg shrinker bitmap on first list_lru item appearance
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (8 preceding siblings ...)
  2018-05-09 11:58 ` [PATCH v4 09/13] list_lru: Pass lru " Kirill Tkhai
@ 2018-05-09 11:58 ` Kirill Tkhai
  2018-05-09 11:58 ` [PATCH v4 11/13] mm: Iterate only over charged shrinkers during memcg shrink_slab() Kirill Tkhai
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:58 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

Introduce the memcg_set_shrinker_bit() function to set a
shrinker-related bit in the memcg shrinker bitmap. The bit is set
after the first item is added to a list_lru, and also when a
destroyed memcg's items are reparented.

This will allow the next patch to call shrinkers only when they
have charged objects at the moment, and thus to improve
shrink_slab() performance.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/linux/memcontrol.h |   13 +++++++++++++
 mm/list_lru.c              |   22 ++++++++++++++++++++--
 2 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c159f3abe168..1fb2f96dc2f6 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1247,6 +1247,17 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
 
 extern int memcg_expand_shrinker_maps(int old_id, int id);
 
+static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int nr)
+{
+	if (nr >= 0 && memcg && memcg != root_mem_cgroup) {
+		struct memcg_shrinker_map *map;
+
+		rcu_read_lock();
+		map = MEMCG_SHRINKER_MAP(memcg, nid);
+		set_bit(nr, map->map);
+		rcu_read_unlock();
+	}
+}
 #else
 #define for_each_memcg_cache_index(_idx)	\
 	for (; NULL; )
@@ -1269,6 +1280,8 @@ static inline void memcg_put_cache_ids(void)
 {
 }
 
+static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
+					  int node, int id) { }
 #endif /* CONFIG_MEMCG && !CONFIG_SLOB */
 
 #endif /* _LINUX_MEMCONTROL_H */
diff --git a/mm/list_lru.c b/mm/list_lru.c
index ed0f97b0c087..478567332746 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -30,6 +30,11 @@ static void list_lru_unregister(struct list_lru *lru)
 	list_del(&lru->list);
 	mutex_unlock(&list_lrus_mutex);
 }
+
+static int lru_shrinker_id(struct list_lru *lru)
+{
+	return lru->shrinker_id;
+}
 #else
 static void list_lru_register(struct list_lru *lru)
 {
@@ -38,6 +43,11 @@ static void list_lru_register(struct list_lru *lru)
 static void list_lru_unregister(struct list_lru *lru)
 {
 }
+
+static int lru_shrinker_id(struct list_lru *lru)
+{
+	return -1;
+}
 #endif /* CONFIG_MEMCG && !CONFIG_SLOB */
 
 #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
@@ -121,13 +131,17 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
 {
 	int nid = page_to_nid(virt_to_page(item));
 	struct list_lru_node *nlru = &lru->node[nid];
+	struct mem_cgroup *memcg;
 	struct list_lru_one *l;
 
 	spin_lock(&nlru->lock);
 	if (list_empty(item)) {
-		l = list_lru_from_kmem(nlru, item, NULL);
+		l = list_lru_from_kmem(nlru, item, &memcg);
 		list_add_tail(item, &l->list);
-		l->nr_items++;
+		/* Set shrinker bit if the first element was added */
+		if (!l->nr_items++)
+			memcg_set_shrinker_bit(memcg, nid,
+					       lru_shrinker_id(lru));
 		nlru->nr_items++;
 		spin_unlock(&nlru->lock);
 		return true;
@@ -522,6 +536,7 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
 	struct list_lru_node *nlru = &lru->node[nid];
 	int dst_idx = dst_memcg->kmemcg_id;
 	struct list_lru_one *src, *dst;
+	bool set;
 
 	/*
 	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
@@ -533,7 +548,10 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
 	dst = list_lru_from_memcg_idx(nlru, dst_idx);
 
 	list_splice_init(&src->list, &dst->list);
+	set = (!dst->nr_items && src->nr_items);
 	dst->nr_items += src->nr_items;
+	if (set)
+		memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
 	src->nr_items = 0;
 
 	spin_unlock_irq(&nlru->lock);


* [PATCH v4 11/13] mm: Iterate only over charged shrinkers during memcg shrink_slab()
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (9 preceding siblings ...)
  2018-05-09 11:58 ` [PATCH v4 10/13] mm: Set bit in memcg shrinker bitmap on first list_lru item appearance Kirill Tkhai
@ 2018-05-09 11:58 ` Kirill Tkhai
  2018-05-09 11:58 ` [PATCH v4 12/13] mm: Add SHRINK_EMPTY shrinker methods return value Kirill Tkhai
  2018-05-09 11:58 ` [PATCH v4 13/13] mm: Clear shrinker bit if there are no objects related to memcg Kirill Tkhai
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:58 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

Using the preparations made in the previous patches, in the case of
memcg shrink we may now skip shrinkers whose bits are not set in the
memcg's shrinker bitmap. To do that, we separate the iterations over
memcg-aware and !memcg-aware shrinkers, and memcg-aware shrinkers are
chosen via for_each_set_bit() from the bitmap. On big nodes with many
isolated environments, this gives significant performance growth. See
the next patches for the details.

Note that this patch does not yet handle empty memcg shrinkers, since
we never clear a bitmap bit after it has been set. Such shrinkers
will be called again, with no shrunk objects as a result. That
functionality is provided by the next patches.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/linux/memcontrol.h |    1 +
 mm/vmscan.c                |   70 ++++++++++++++++++++++++++++++++++++++------
 2 files changed, 62 insertions(+), 9 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 1fb2f96dc2f6..4548b09e44a7 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -756,6 +756,7 @@ void mem_cgroup_split_huge_fixup(struct page *head);
 #define MEM_CGROUP_ID_MAX	0
 
 struct mem_cgroup;
+#define root_mem_cgroup NULL
 
 static inline bool mem_cgroup_disabled(void)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d1940244d0bf..648b08621334 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -376,6 +376,7 @@ int prealloc_shrinker(struct shrinker *shrinker)
 			goto free_deferred;
 	}
 
+	INIT_LIST_HEAD(&shrinker->list);
 	return 0;
 
 free_deferred:
@@ -547,6 +548,63 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	return freed;
 }
 
+#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
+static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
+			struct mem_cgroup *memcg, int priority)
+{
+	struct memcg_shrinker_map *map;
+	unsigned long freed = 0;
+	int ret, i;
+
+	if (!memcg_kmem_enabled() || !mem_cgroup_online(memcg))
+		return 0;
+
+	if (!down_read_trylock(&shrinker_rwsem))
+		return 0;
+
+	/*
+	 * 1)Caller passes only alive memcg, so map can't be NULL.
+	 * 2)shrinker_rwsem protects from maps expanding.
+	 */
+	map = rcu_dereference_protected(MEMCG_SHRINKER_MAP(memcg, nid), true);
+	BUG_ON(!map);
+
+	for_each_set_bit(i, map->map, memcg_shrinker_nr_max) {
+		struct shrink_control sc = {
+			.gfp_mask = gfp_mask,
+			.nid = nid,
+			.memcg = memcg,
+		};
+		struct shrinker *shrinker;
+
+		shrinker = idr_find(&shrinker_idr, i);
+		if (!shrinker) {
+			clear_bit(i, map->map);
+			continue;
+		}
+		if (list_empty(&shrinker->list))
+			continue;
+
+		ret = do_shrink_slab(&sc, shrinker, priority);
+		freed += ret;
+
+		if (rwsem_is_contended(&shrinker_rwsem)) {
+			freed = freed ? : 1;
+			break;
+		}
+	}
+
+	up_read(&shrinker_rwsem);
+	return freed;
+}
+#else
+static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
+			struct mem_cgroup *memcg, int priority)
+{
+	return 0;
+}
+#endif
+
 /**
  * shrink_slab - shrink slab caches
  * @gfp_mask: allocation context
@@ -576,8 +634,8 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 	struct shrinker *shrinker;
 	unsigned long freed = 0;
 
-	if (memcg && (!memcg_kmem_enabled() || !mem_cgroup_online(memcg)))
-		return 0;
+	if (memcg && memcg != root_mem_cgroup)
+		return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
 
 	if (!down_read_trylock(&shrinker_rwsem))
 		goto out;
@@ -589,13 +647,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 			.memcg = memcg,
 		};
 
-		/*
-		 * If kernel memory accounting is disabled, we ignore
-		 * SHRINKER_MEMCG_AWARE flag and call all shrinkers
-		 * passing NULL for memcg.
-		 */
-		if (memcg_kmem_enabled() &&
-		    !!memcg != !!(shrinker->flags & SHRINKER_MEMCG_AWARE))
+		if (!!memcg != !!(shrinker->flags & SHRINKER_MEMCG_AWARE))
 			continue;
 
 		if (!(shrinker->flags & SHRINKER_NUMA_AWARE))


* [PATCH v4 12/13] mm: Add SHRINK_EMPTY shrinker methods return value
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (10 preceding siblings ...)
  2018-05-09 11:58 ` [PATCH v4 11/13] mm: Iterate only over charged shrinkers during memcg shrink_slab() Kirill Tkhai
@ 2018-05-09 11:58 ` Kirill Tkhai
  2018-05-09 11:58 ` [PATCH v4 13/13] mm: Clear shrinker bit if there are no objects related to memcg Kirill Tkhai
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:58 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

We need to distinguish the situation when a shrinker has
a very small number of objects (see vfs_pressure_ratio()
called from super_cache_count()) from the situation when it
has no objects at all. Currently, in both of these cases,
shrinker::count_objects() returns 0.

This patch introduces a new SHRINK_EMPTY return value,
which will be used for the "no objects at all" case.
It is mostly a refactoring, as SHRINK_EMPTY is replaced
by 0 by all callers of do_shrink_slab() in this patch;
the real magic happens in the further patches.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 fs/super.c               |    3 +++
 include/linux/shrinker.h |    7 +++++--
 mm/vmscan.c              |   12 +++++++++---
 mm/workingset.c          |    3 +++
 4 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/fs/super.c b/fs/super.c
index c9a6ef33a98b..9a10e44d866b 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -134,6 +134,9 @@ static unsigned long super_cache_count(struct shrinker *shrink,
 	total_objects += list_lru_shrink_count(&sb->s_dentry_lru, sc);
 	total_objects += list_lru_shrink_count(&sb->s_inode_lru, sc);
 
+	if (!total_objects)
+		return SHRINK_EMPTY;
+
 	total_objects = vfs_pressure_ratio(total_objects);
 	return total_objects;
 }
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index a9ec364e1b0b..4081016540bf 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -34,12 +34,15 @@ struct shrink_control {
 };
 
 #define SHRINK_STOP (~0UL)
+#define SHRINK_EMPTY (~0UL - 1)
 /*
  * A callback you can register to apply pressure to ageable caches.
  *
  * @count_objects should return the number of freeable items in the cache. If
- * there are no objects to free or the number of freeable items cannot be
- * determined, it should return 0. No deadlock checks should be done during the
+ * there are no objects to free, it should return SHRINK_EMPTY, while 0 is
+ * returned in cases of the number of freeable items cannot be determined
+ * or shrinker should skip this cache for this time (e.g., their number
+ * is below shrinkable limit). No deadlock checks should be done during the
  * count callback - the shrinker relies on aggregating scan counts that couldn't
  * be executed due to potential deadlocks to be run at a later call when the
  * deadlock condition is no longer pending.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 648b08621334..80743878576c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -446,8 +446,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	long scanned = 0, next_deferred;
 
 	freeable = shrinker->count_objects(shrinker, shrinkctl);
-	if (freeable == 0)
-		return 0;
+	if (freeable == 0 || freeable == SHRINK_EMPTY)
+		return freeable;
 
 	/*
 	 * copy the current shrinker scan count into a local variable
@@ -586,6 +586,8 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 			continue;
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
+		if (ret == SHRINK_EMPTY)
+			ret = 0;
 		freed += ret;
 
 		if (rwsem_is_contended(&shrinker_rwsem)) {
@@ -633,6 +635,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 {
 	struct shrinker *shrinker;
 	unsigned long freed = 0;
+	int ret;
 
 	if (memcg && memcg != root_mem_cgroup)
 		return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
@@ -653,7 +656,10 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 		if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
 			sc.nid = 0;
 
-		freed += do_shrink_slab(&sc, shrinker, priority);
+		ret = do_shrink_slab(&sc, shrinker, priority);
+		if (ret == SHRINK_EMPTY)
+			ret = 0;
+		freed += ret;
 		/*
 		 * Bail out if someone want to register a new shrinker to
 		 * prevent the regsitration from being stalled for long periods
diff --git a/mm/workingset.c b/mm/workingset.c
index b8900573db25..ae0555169b22 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -402,6 +402,9 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 	}
 	max_nodes = cache >> (RADIX_TREE_MAP_SHIFT - 3);
 
+	if (!nodes)
+		return SHRINK_EMPTY;
+
 	if (nodes <= max_nodes)
 		return 0;
 	return nodes - max_nodes;


* [PATCH v4 13/13] mm: Clear shrinker bit if there are no objects related to memcg
  2018-05-09 11:56 [PATCH v4 00/13] Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n)) Kirill Tkhai
                   ` (11 preceding siblings ...)
  2018-05-09 11:58 ` [PATCH v4 12/13] mm: Add SHRINK_EMPTY shrinker methods return value Kirill Tkhai
@ 2018-05-09 11:58 ` Kirill Tkhai
  12 siblings, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-09 11:58 UTC (permalink / raw)
  To: akpm, vdavydov.dev, shakeelb, viro, hannes, mhocko, ktkhai, tglx,
	pombredanne, stummala, gregkh, sfr, guro, mka, penguin-kernel,
	chris, longman, minchan, ying.huang, mgorman, jbacik, linux,
	linux-kernel, linux-mm, willy, lirongqing, aryabinin

To avoid further unneeded calls of do_shrink_slab()
for shrinkers that no longer have any charged
objects in a memcg, their bits have to be cleared.

This patch introduces a lockless mechanism to do that
without racing with a parallel list_lru add. After
do_shrink_slab() returns SHRINK_EMPTY the first time,
we clear the bit and call it once again. Then we restore
the bit if the new return value is different.

Note that a single smp_mb__after_atomic() in shrink_slab_memcg()
covers two situations:

1)list_lru_add()     shrink_slab_memcg
    list_add_tail()    for_each_set_bit() <--- read bit
                         do_shrink_slab() <--- missed list update (no barrier)
    <MB>                 <MB>
    set_bit()            do_shrink_slab() <--- seen list update

This situation, when the first do_shrink_slab() sees the set bit
but does not see the list update (i.e., a race with the first element
queueing), is rare. So, instead of adding a <MB> before the first
call of do_shrink_slab(), which would slow down the generic case,
we rely on the second call; it is also needed for case (2) below.

2)list_lru_add()      shrink_slab_memcg()
    list_add_tail()     ...
    set_bit()           ...
  ...                   for_each_set_bit()
  do_shrink_slab()        do_shrink_slab()
    clear_bit()           ...
  ...                     ...
  list_lru_add()          ...
    list_add_tail()       clear_bit()
    <MB>                  <MB>
    set_bit()             do_shrink_slab()

The barriers guarantee that the second do_shrink_slab()
in the right-hand task sees the list update if it really
cleared the bit. This case is drawn in the code comment.

[Results/performance of the patchset]

With the whole patchset applied, the test below shows a significant
increase in performance:

$echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
$mkdir /sys/fs/cgroup/memory/ct
$echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
    $for i in `seq 0 4000`; do mkdir /sys/fs/cgroup/memory/ct/$i; echo $$ > /sys/fs/cgroup/memory/ct/$i/cgroup.procs; mkdir -p s/$i; mount -t tmpfs $i s/$i; touch s/$i/file; done

Then, 5 sequential calls of drop caches:
$time echo 3 > /proc/sys/vm/drop_caches

1)Before:
0.00user 13.78system 0:13.78elapsed 99%CPU
0.00user 5.59system 0:05.60elapsed 99%CPU
0.00user 5.48system 0:05.48elapsed 99%CPU
0.00user 8.35system 0:08.35elapsed 99%CPU
0.00user 8.34system 0:08.35elapsed 99%CPU

2)After
0.00user 1.10system 0:01.10elapsed 99%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU
0.00user 0.00system 0:00.01elapsed 64%CPU
0.00user 0.01system 0:00.01elapsed 82%CPU

The results show that performance increases at least 548 times
(5.48s vs 0.01s for a warm drop_caches call).

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/linux/memcontrol.h |    2 ++
 mm/vmscan.c                |   19 +++++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4548b09e44a7..9f00554aa59c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1255,6 +1255,8 @@ static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int
 
 		rcu_read_lock();
 		map = MEMCG_SHRINKER_MAP(memcg, nid);
+		/* Pairs with smp mb in shrink_slab() */
+		smp_mb__before_atomic();
 		set_bit(nr, map->map);
 		rcu_read_unlock();
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 80743878576c..49cdf9a17d6f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -586,8 +586,23 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 			continue;
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
-		if (ret == SHRINK_EMPTY)
-			ret = 0;
+		if (ret == SHRINK_EMPTY) {
+			clear_bit(i, map->map);
+			/*
+			 * Pairs with mb in memcg_set_shrinker_bit():
+			 *
+			 * list_lru_add()     shrink_slab_memcg()
+			 *   list_add_tail()    clear_bit()
+			 *   <MB>               <MB>
+			 *   set_bit()          do_shrink_slab()
+			 */
+			smp_mb__after_atomic();
+			ret = do_shrink_slab(&sc, shrinker, priority);
+			if (ret == SHRINK_EMPTY)
+				ret = 0;
+			else
+				memcg_set_shrinker_bit(memcg, nid, i);
+		}
 		freed += ret;
 
 		if (rwsem_is_contended(&shrinker_rwsem)) {


* Re: [PATCH v4 01/13] mm: Assign id to every memcg-aware shrinker
  2018-05-09 11:56 ` [PATCH v4 01/13] mm: Assign id to every memcg-aware shrinker Kirill Tkhai
@ 2018-05-09 22:55   ` Andrew Morton
  2018-05-10  2:47     ` Shakeel Butt
  2018-05-10  9:42     ` Kirill Tkhai
  0 siblings, 2 replies; 17+ messages in thread
From: Andrew Morton @ 2018-05-09 22:55 UTC (permalink / raw)
  To: Kirill Tkhai
  Cc: vdavydov.dev, shakeelb, viro, hannes, mhocko, tglx, pombredanne,
	stummala, gregkh, sfr, guro, mka, penguin-kernel, chris, longman,
	minchan, ying.huang, mgorman, jbacik, linux, linux-kernel,
	linux-mm, willy, lirongqing, aryabinin

On Wed, 09 May 2018 14:56:55 +0300 Kirill Tkhai <ktkhai@virtuozzo.com> wrote:

> The patch introduces shrinker::id number, which is used to enumerate
> memcg-aware shrinkers. The number start from 0, and the code tries
> to maintain it as small as possible.
> 
> This will be used as to represent a memcg-aware shrinkers in memcg
> shrinkers map.
> 
> ...
>
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -248,6 +248,9 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags,
>  	s->s_time_gran = 1000000000;
>  	s->cleancache_poolid = CLEANCACHE_NO_POOL;
>  
> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)

It would be more conventional to do this logic in Kconfig - define a
new MEMCG_SHRINKER which equals MEMCG && !SLOB.

This ifdef occurs a distressing number of times in the patchset :( I
wonder if there's something we can do about that.

Also, why doesn't it work with slob?  Please describe the issue in the
changelogs somewhere.

It's a pretty big patchset.  I *could* merge it up in the hope that
someone is planning to do a review soon.  But is there such a person?


* Re: [PATCH v4 01/13] mm: Assign id to every memcg-aware shrinker
  2018-05-09 22:55   ` Andrew Morton
@ 2018-05-10  2:47     ` Shakeel Butt
  2018-05-10  9:42     ` Kirill Tkhai
  1 sibling, 0 replies; 17+ messages in thread
From: Shakeel Butt @ 2018-05-10  2:47 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kirill Tkhai, Vladimir Davydov, Alexander Viro, Johannes Weiner,
	Michal Hocko, Thomas Gleixner, pombredanne, stummala, gregkh,
	Stephen Rothwell, Roman Gushchin, mka, Tetsuo Handa, chris,
	longman, Minchan Kim, Huang Ying, Mel Gorman, jbacik, linux,
	LKML, Linux MM, Matthew Wilcox, lirongqing, Andrey Ryabinin

On Wed, May 9, 2018 at 3:55 PM Andrew Morton <akpm@linux-foundation.org> wrote:

> On Wed, 09 May 2018 14:56:55 +0300 Kirill Tkhai <ktkhai@virtuozzo.com> wrote:

> > The patch introduces shrinker::id number, which is used to enumerate
> > memcg-aware shrinkers. The number start from 0, and the code tries
> > to maintain it as small as possible.
> >
> > This will be used as to represent a memcg-aware shrinkers in memcg
> > shrinkers map.
> >
> > ...
> >
> > --- a/fs/super.c
> > +++ b/fs/super.c
> > @@ -248,6 +248,9 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags,
> >       s->s_time_gran = 1000000000;
> >       s->cleancache_poolid = CLEANCACHE_NO_POOL;
> >
> > +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)

> It would be more conventional to do this logic in Kconfig - define a
> new MEMCG_SHRINKER which equals MEMCG && !SLOB.

> This ifdef occurs a distressing number of times in the patchset :( I
> wonder if there's something we can do about that.

> Also, why doesn't it work with slob?  Please describe the issue in the
> changelogs somewhere.

> It's a pretty big patchset.  I *could* merge it up in the hope that
> someone is planning to do a review soon.  But is there such a person?


Hi Andrew, a couple of these patches are being reviewed by Vladimir, and I
plan to review them too by next week. I think we can merge them into the mm
tree for more testing, and I will also test this patch series internally
(though I have to backport it to our kernel for more extensive testing).

thanks,
Shakeel


* Re: [PATCH v4 01/13] mm: Assign id to every memcg-aware shrinker
  2018-05-09 22:55   ` Andrew Morton
  2018-05-10  2:47     ` Shakeel Butt
@ 2018-05-10  9:42     ` Kirill Tkhai
  1 sibling, 0 replies; 17+ messages in thread
From: Kirill Tkhai @ 2018-05-10  9:42 UTC (permalink / raw)
  To: Andrew Morton
  Cc: vdavydov.dev, shakeelb, viro, hannes, mhocko, tglx, pombredanne,
	stummala, gregkh, sfr, guro, mka, penguin-kernel, chris, longman,
	minchan, ying.huang, mgorman, jbacik, linux, linux-kernel,
	linux-mm, willy, lirongqing, aryabinin

On 10.05.2018 01:55, Andrew Morton wrote:
> On Wed, 09 May 2018 14:56:55 +0300 Kirill Tkhai <ktkhai@virtuozzo.com> wrote:
> 
>> The patch introduces shrinker::id number, which is used to enumerate
>> memcg-aware shrinkers. The number start from 0, and the code tries
>> to maintain it as small as possible.
>>
>> This will be used as to represent a memcg-aware shrinkers in memcg
>> shrinkers map.
>>
>> ...
>>
>> --- a/fs/super.c
>> +++ b/fs/super.c
>> @@ -248,6 +248,9 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags,
>>  	s->s_time_gran = 1000000000;
>>  	s->cleancache_poolid = CLEANCACHE_NO_POOL;
>>  
>> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
> 
> It would be more conventional to do this logic in Kconfig - define a
> new MEMCG_SHRINKER which equals MEMCG && !SLOB.
> 
> This ifdef occurs a distressing number of times in the patchset :( I
> wonder if there's something we can do about that.
> 
> Also, why doesn't it work with slob?  Please describe the issue in the
> changelogs somewhere.

All currently existing memcg-aware shrinkers are based on list_lru, which
does not introduce separate memcg lists in the SLOB case. So, the optimization
made by this patchset is not needed there.

I'll add MEMCG_SHRINKER in the next version as you suggested. Even if we have
no such shrinkers at the moment, we may have them in the future, and this will
be useful anyway.
 
> It's a pretty big patchset.  I *could* merge it up in the hope that
> someone is planning to do a review soon.  But is there such a person?

Thanks,
Kirill


