* [PATCH -mm v2 0/6] memcg/kmem: cleanup naming and callflows
From: Vladimir Davydov @ 2014-04-27  9:04 UTC
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

Hi,

In reply to "[PATCH RFC -mm v2 3/3] memcg, slab: simplify
synchronization scheme" Johannes wrote:

> I like this patch, but the API names are confusing. Could we fix up
> that whole thing by any chance?

(see https://lkml.org/lkml/2014/4/18/317)

So this patch set is about cleaning up memcg/kmem naming.

While preparing it I found that some of the ugly-named functions
constituting the interface between memcontrol.c and slab_common.c can be
neatly got rid of without complicating the code. Quite the contrary:
without them the call-flows look much simpler, IMO. So the first four
patches do not actually rename anything - they just rework the call-flows
on the kmem cache creation/destruction and memcg_caches array relocation
paths. Finally, patches 5 and 6 clean up the naming.

v1: http://lkml.org/lkml/2014/4/25/254

Changes in v2:
 - move memcg_params allocation/freeing for per memcg caches to
   slab_common.c, because it looks clearer this way (patch 4)
 - minor changes in function names and comments

Reviews are appreciated.

Thanks,

Vladimir Davydov (6):
  memcg: get rid of memcg_create_cache_name
  memcg: allocate memcg_caches array on first per memcg cache creation
  memcg: cleanup memcg_caches arrays relocation path
  memcg: get rid of memcg_{alloc,free}_cache_params
  memcg: cleanup kmem cache creation/destruction functions naming
  memcg: cleanup kmem_id-related naming

 include/linux/memcontrol.h |   40 +----
 include/linux/slab.h       |   11 +-
 mm/memcontrol.c            |  396 +++++++++++++++++++-------------------------
 mm/slab.c                  |    4 +-
 mm/slab.h                  |   24 ++-
 mm/slab_common.c           |  148 ++++++++++++-----
 mm/slub.c                  |   10 +-
 7 files changed, 314 insertions(+), 319 deletions(-)

-- 
1.7.10.4



* [PATCH -mm v2 1/6] memcg: get rid of memcg_create_cache_name
From: Vladimir Davydov @ 2014-04-27  9:04 UTC
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

Instead of calling back to memcontrol.c from kmem_cache_create_memcg
just to create the name of a per memcg cache, let's allocate it in
place. We only need to pass the memcg name to kmem_cache_create_memcg
for that - everything else can be done in slab_common.c.

This patch also makes the buffer used to store memcg names statically
allocated, whereas before it was kmalloc'ed on first use. Saving those
256 bytes isn't worth complicating the code. If someone is so tight on
memory, they'd better disable CONFIG_MEMCG_KMEM altogether.
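
Just for illustration (not part of the patch): with this change the name
of a per memcg cache is assembled directly in kmem_cache_create_memcg,
so for a hypothetical root cache "dentry" and a memcg named "foo" whose
kmem id is 3 we would end up with:

	cache_name = kasprintf(GFP_KERNEL, "%s(%d:%s)", root_cache->name,
			       memcg_cache_id(memcg), memcg_name);
	/* -> "dentry(3:foo)" */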

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 include/linux/memcontrol.h |    2 --
 include/linux/slab.h       |    3 ++-
 mm/memcontrol.c            |   27 +++------------------------
 mm/slab_common.c           |    7 +++++--
 4 files changed, 10 insertions(+), 29 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 1fa23244fe37..dfc2929a3877 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -492,8 +492,6 @@ void __memcg_kmem_uncharge_pages(struct page *page, int order);
 
 int memcg_cache_id(struct mem_cgroup *memcg);
 
-char *memcg_create_cache_name(struct mem_cgroup *memcg,
-			      struct kmem_cache *root_cache);
 int memcg_alloc_cache_params(struct mem_cgroup *memcg, struct kmem_cache *s,
 			     struct kmem_cache *root_cache);
 void memcg_free_cache_params(struct kmem_cache *s);
diff --git a/include/linux/slab.h b/include/linux/slab.h
index ecbec9ccb80d..86e5b26fbdab 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -117,7 +117,8 @@ struct kmem_cache *kmem_cache_create(const char *, size_t, size_t,
 			void (*)(void *));
 #ifdef CONFIG_MEMCG_KMEM
 struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *,
-					   struct kmem_cache *);
+					   struct kmem_cache *,
+					   const char *);
 #endif
 void kmem_cache_destroy(struct kmem_cache *);
 int kmem_cache_shrink(struct kmem_cache *);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 19d620b3d69c..605d72044533 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3105,29 +3105,6 @@ int memcg_update_cache_size(struct kmem_cache *s, int num_groups)
 	return 0;
 }
 
-char *memcg_create_cache_name(struct mem_cgroup *memcg,
-			      struct kmem_cache *root_cache)
-{
-	static char *buf = NULL;
-
-	/*
-	 * We need a mutex here to protect the shared buffer. Since this is
-	 * expected to be called only on cache creation, we can employ the
-	 * slab_mutex for that purpose.
-	 */
-	lockdep_assert_held(&slab_mutex);
-
-	if (!buf) {
-		buf = kmalloc(NAME_MAX + 1, GFP_KERNEL);
-		if (!buf)
-			return NULL;
-	}
-
-	cgroup_name(memcg->css.cgroup, buf, NAME_MAX + 1);
-	return kasprintf(GFP_KERNEL, "%s(%d:%s)", root_cache->name,
-			 memcg_cache_id(memcg), buf);
-}
-
 int memcg_alloc_cache_params(struct mem_cgroup *memcg, struct kmem_cache *s,
 			     struct kmem_cache *root_cache)
 {
@@ -3168,6 +3145,7 @@ void memcg_free_cache_params(struct kmem_cache *s)
 static void memcg_kmem_create_cache(struct mem_cgroup *memcg,
 				    struct kmem_cache *root_cache)
 {
+	static char memcg_name[NAME_MAX + 1];
 	struct kmem_cache *cachep;
 	int id;
 
@@ -3183,7 +3161,8 @@ static void memcg_kmem_create_cache(struct mem_cgroup *memcg,
 	if (cache_from_memcg_idx(root_cache, id))
 		return;
 
-	cachep = kmem_cache_create_memcg(memcg, root_cache);
+	cgroup_name(memcg->css.cgroup, memcg_name, NAME_MAX + 1);
+	cachep = kmem_cache_create_memcg(memcg, root_cache, memcg_name);
 	/*
 	 * If we could not create a memcg cache, do not complain, because
 	 * that's not critical at all as we can always proceed with the root
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 4f16f374392a..6b6397d2e05e 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -264,13 +264,15 @@ EXPORT_SYMBOL(kmem_cache_create);
  * kmem_cache_create_memcg - Create a cache for a memory cgroup.
  * @memcg: The memory cgroup the new cache is for.
  * @root_cache: The parent of the new cache.
+ * @memcg_name: The name of the memory cgroup.
  *
  * This function attempts to create a kmem cache that will serve allocation
  * requests going from @memcg to @root_cache. The new cache inherits properties
  * from its parent.
  */
 struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *memcg,
-					   struct kmem_cache *root_cache)
+					   struct kmem_cache *root_cache,
+					   const char *memcg_name)
 {
 	struct kmem_cache *s = NULL;
 	char *cache_name;
@@ -280,7 +282,8 @@ struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *memcg,
 
 	mutex_lock(&slab_mutex);
 
-	cache_name = memcg_create_cache_name(memcg, root_cache);
+	cache_name = kasprintf(GFP_KERNEL, "%s(%d:%s)", root_cache->name,
+			       memcg_cache_id(memcg), memcg_name);
 	if (!cache_name)
 		goto out_unlock;
 
-- 
1.7.10.4



* [PATCH -mm v2 2/6] memcg: allocate memcg_caches array on first per memcg cache creation
From: Vladimir Davydov @ 2014-04-27  9:04 UTC
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

There is no need to allocate the memcg_caches array on kmem cache
creation, because root caches without memcg_params are already handled
perfectly well everywhere. Since not all caches will necessarily be used
by memory cgroups, allocating it up front rather looks like a waste of
memory. So let's allocate the memcg_caches array on the first per memcg
cache creation.
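
For reference, here is an abridged sketch of memcg_cache_params (see
include/linux/slab.h; only the fields used in this series are shown,
layout slightly simplified). A root cache carries the memcg_caches[]
array indexed by kmemcg id, while a per memcg cache points back to its
owner memcg and root cache:

	struct memcg_cache_params {
		bool is_root_cache;
		union {
			struct {		/* root cache */
				struct rcu_head rcu_head;
				struct kmem_cache *memcg_caches[0];
			};
			struct {		/* per memcg cache */
				struct mem_cgroup *memcg;
				struct list_head list;
				struct kmem_cache *root_cache;
			};
		};
	};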

A couple of things to note about this patch:

 - after it is applied memcg_alloc_cache_params does nothing for root
   caches so that it can be inlined into memcg_kmem_create_cache;

 - since memcg_limited_groups_array_size is now accessed under
   activate_kmem_mutex, we can move memcg_limited_groups_array_size
   update (memcg_update_array_size) out of slab_mutex, which is nice,
   because it isn't something specific to kmem caches - it will be used
   in per memcg list_lru in the future;

 - the memcg_caches allocation scheme is now spread between slab_common.c
   and memcontrol.c, but that will be fixed by the next patches, which
   move it completely to slab_common.c.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 include/linux/slab.h |    1 +
 mm/memcontrol.c      |   59 ++++++++++++++++++++++++++++++++------------------
 mm/slab_common.c     |   33 ++++++++++++++++++++++++++++
 3 files changed, 72 insertions(+), 21 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 86e5b26fbdab..095ce8e47a36 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -119,6 +119,7 @@ struct kmem_cache *kmem_cache_create(const char *, size_t, size_t,
 struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *,
 					   struct kmem_cache *,
 					   const char *);
+int kmem_cache_init_memcg_array(struct kmem_cache *, int);
 #endif
 void kmem_cache_destroy(struct kmem_cache *);
 int kmem_cache_shrink(struct kmem_cache *);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 605d72044533..08937040ed75 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3059,6 +3059,9 @@ int memcg_update_cache_size(struct kmem_cache *s, int num_groups)
 
 	VM_BUG_ON(!is_root_cache(s));
 
+	if (!cur_params)
+		return 0;
+
 	if (num_groups > memcg_limited_groups_array_size) {
 		int i;
 		struct memcg_cache_params *new_params;
@@ -3099,8 +3102,7 @@ int memcg_update_cache_size(struct kmem_cache *s, int num_groups)
 		 * anyway.
 		 */
 		rcu_assign_pointer(s->memcg_params, new_params);
-		if (cur_params)
-			kfree_rcu(cur_params, rcu_head);
+		kfree_rcu(cur_params, rcu_head);
 	}
 	return 0;
 }
@@ -3108,27 +3110,16 @@ int memcg_update_cache_size(struct kmem_cache *s, int num_groups)
 int memcg_alloc_cache_params(struct mem_cgroup *memcg, struct kmem_cache *s,
 			     struct kmem_cache *root_cache)
 {
-	size_t size;
-
-	if (!memcg_kmem_enabled())
+	if (!memcg)
 		return 0;
 
-	if (!memcg) {
-		size = offsetof(struct memcg_cache_params, memcg_caches);
-		size += memcg_limited_groups_array_size * sizeof(void *);
-	} else
-		size = sizeof(struct memcg_cache_params);
-
-	s->memcg_params = kzalloc(size, GFP_KERNEL);
+	s->memcg_params = kzalloc(sizeof(*s->memcg_params), GFP_KERNEL);
 	if (!s->memcg_params)
 		return -ENOMEM;
 
-	if (memcg) {
-		s->memcg_params->memcg = memcg;
-		s->memcg_params->root_cache = root_cache;
-		css_get(&memcg->css);
-	} else
-		s->memcg_params->is_root_cache = true;
+	s->memcg_params->memcg = memcg;
+	s->memcg_params->root_cache = root_cache;
+	css_get(&memcg->css);
 
 	return 0;
 }
@@ -3142,6 +3133,30 @@ void memcg_free_cache_params(struct kmem_cache *s)
 	kfree(s->memcg_params);
 }
 
+/*
+ * Prepares the memcg_caches array of the given kmem cache for disposing
+ * memcgs' copies.
+ */
+static int memcg_prepare_kmem_cache(struct kmem_cache *cachep)
+{
+	int ret;
+
+	BUG_ON(!is_root_cache(cachep));
+
+	if (cachep->memcg_params)
+		return 0;
+
+	/* activate_kmem_mutex guarantees a stable value of
+	 * memcg_limited_groups_array_size */
+	mutex_lock(&activate_kmem_mutex);
+	mutex_lock(&memcg_slab_mutex);
+	ret = kmem_cache_init_memcg_array(cachep,
+			memcg_limited_groups_array_size);
+	mutex_unlock(&memcg_slab_mutex);
+	mutex_unlock(&activate_kmem_mutex);
+	return ret;
+}
+
 static void memcg_kmem_create_cache(struct mem_cgroup *memcg,
 				    struct kmem_cache *root_cache)
 {
@@ -3287,10 +3302,13 @@ static void memcg_create_cache_work_func(struct work_struct *w)
 	struct mem_cgroup *memcg = cw->memcg;
 	struct kmem_cache *cachep = cw->cachep;
 
+	if (memcg_prepare_kmem_cache(cachep) != 0)
+		goto out;
+
 	mutex_lock(&memcg_slab_mutex);
 	memcg_kmem_create_cache(memcg, cachep);
 	mutex_unlock(&memcg_slab_mutex);
-
+out:
 	css_put(&memcg->css);
 	kfree(cw);
 }
@@ -3371,8 +3389,7 @@ struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep,
 	struct mem_cgroup *memcg;
 	struct kmem_cache *memcg_cachep;
 
-	VM_BUG_ON(!cachep->memcg_params);
-	VM_BUG_ON(!cachep->memcg_params->is_root_cache);
+	VM_BUG_ON(!is_root_cache(cachep));
 
 	if (!current->mm || current->memcg_kmem_skip_account)
 		return cachep;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 6b6397d2e05e..ce5e96aff4e6 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -77,6 +77,39 @@ static inline int kmem_cache_sanity_check(const char *name, size_t size)
 #endif
 
 #ifdef CONFIG_MEMCG_KMEM
+static int __kmem_cache_init_memcg_array(struct kmem_cache *s, int nr_entries)
+{
+	size_t size;
+	struct memcg_cache_params *new;
+
+	size = offsetof(struct memcg_cache_params, memcg_caches);
+	size += nr_entries * sizeof(void *);
+
+	new = kzalloc(size, GFP_KERNEL);
+	if (!new)
+		return -ENOMEM;
+
+	new->is_root_cache = true;
+
+	/* matching rcu_dereference is in cache_from_memcg_idx */
+	rcu_assign_pointer(s->memcg_params, new);
+
+	return 0;
+}
+
+int kmem_cache_init_memcg_array(struct kmem_cache *s, int nr_entries)
+{
+	int ret = 0;
+
+	BUG_ON(!is_root_cache(s));
+
+	mutex_lock(&slab_mutex);
+	if (!s->memcg_params)
+		ret = __kmem_cache_init_memcg_array(s, nr_entries);
+	mutex_unlock(&slab_mutex);
+	return ret;
+}
+
 int memcg_update_all_caches(int num_memcgs)
 {
 	struct kmem_cache *s;
-- 
1.7.10.4



* [PATCH -mm v2 3/6] memcg: cleanup memcg_caches arrays relocation path
From: Vladimir Davydov @ 2014-04-27  9:04 UTC
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

The current memcg_caches arrays relocation path looks rather awkward: on
setting the kmem limit we call memcg_update_all_caches, located on the
slab side, which walks over all root caches and calls back to
memcontrol.c through memcg_update_cache_size and memcg_update_array_size,
which in turn reallocate the memcg_caches array for a particular cache
and update the arrays' size, respectively.

The first call from memcontrol.c to slab_common.c, which is
memcg_update_all_caches, is justified, because to iterate over root
caches we have to take the slab_mutex. However, I don't see any reason
why we can't reallocate a memcg_caches array on the slab side or why
we should call memcg_update_cache_size under the slab_mutex. So let's
clean up this mess.
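
To illustrate, the call flow before and after this patch looks roughly
as follows (function placement per the diff below):

	Before:
	  __memcg_activate_kmem                 [memcontrol.c]
	    memcg_update_all_caches             [slab_common.c]
	      memcg_update_cache_size           [memcontrol.c]
	      memcg_update_array_size           [memcontrol.c]

	After:
	  __memcg_activate_kmem                 [memcontrol.c]
	    memcg_init_cache_id                 [memcontrol.c]
	      kmem_cache_memcg_arrays_grow      [slab_common.c]
	        __kmem_cache_init_memcg_array   [slab_common.c]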

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 include/linux/memcontrol.h |    3 --
 include/linux/slab.h       |    3 +-
 mm/memcontrol.c            |  125 +++++++++++++-------------------------------
 mm/slab_common.c           |   36 ++++++++-----
 4 files changed, 61 insertions(+), 106 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index dfc2929a3877..6e59393e03f9 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -496,9 +496,6 @@ int memcg_alloc_cache_params(struct mem_cgroup *memcg, struct kmem_cache *s,
 			     struct kmem_cache *root_cache);
 void memcg_free_cache_params(struct kmem_cache *s);
 
-int memcg_update_cache_size(struct kmem_cache *s, int num_groups);
-void memcg_update_array_size(int num_groups);
-
 struct kmem_cache *
 __memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp);
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 095ce8e47a36..c437be67917b 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -120,6 +120,7 @@ struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *,
 					   struct kmem_cache *,
 					   const char *);
 int kmem_cache_init_memcg_array(struct kmem_cache *, int);
+int kmem_cache_memcg_arrays_grow(int);
 #endif
 void kmem_cache_destroy(struct kmem_cache *);
 int kmem_cache_shrink(struct kmem_cache *);
@@ -545,8 +546,6 @@ struct memcg_cache_params {
 	};
 };
 
-int memcg_update_all_caches(int num_memcgs);
-
 struct seq_file;
 int cache_show(struct kmem_cache *s, struct seq_file *m);
 void print_slabinfo_header(struct seq_file *m);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 08937040ed75..714d7bd7f140 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3027,84 +3027,52 @@ int memcg_cache_id(struct mem_cgroup *memcg)
 	return memcg ? memcg->kmemcg_id : -1;
 }
 
-static size_t memcg_caches_array_size(int num_groups)
+static int memcg_init_cache_id(struct mem_cgroup *memcg)
 {
-	ssize_t size;
-	if (num_groups <= 0)
-		return 0;
+	int err = 0;
+	int id, size;
+
+	lockdep_assert_held(&activate_kmem_mutex);
+
+	id = ida_simple_get(&kmem_limited_groups,
+			    0, MEMCG_CACHES_MAX_SIZE, GFP_KERNEL);
+	if (id < 0)
+		return id;
 
-	size = 2 * num_groups;
+	if (id < memcg_limited_groups_array_size)
+		goto out_setid;
+
+	/*
+	 * We don't have enough space for the new id in the arrays that store
+	 * per memcg data. Let's try to grow them then.
+	 */
+	size = id * 2;
 	if (size < MEMCG_CACHES_MIN_SIZE)
 		size = MEMCG_CACHES_MIN_SIZE;
 	else if (size > MEMCG_CACHES_MAX_SIZE)
 		size = MEMCG_CACHES_MAX_SIZE;
 
-	return size;
-}
-
-/*
- * We should update the current array size iff all caches updates succeed. This
- * can only be done from the slab side. The slab mutex needs to be held when
- * calling this.
- */
-void memcg_update_array_size(int num)
-{
-	if (num > memcg_limited_groups_array_size)
-		memcg_limited_groups_array_size = memcg_caches_array_size(num);
-}
-
-int memcg_update_cache_size(struct kmem_cache *s, int num_groups)
-{
-	struct memcg_cache_params *cur_params = s->memcg_params;
-
-	VM_BUG_ON(!is_root_cache(s));
-
-	if (!cur_params)
-		return 0;
-
-	if (num_groups > memcg_limited_groups_array_size) {
-		int i;
-		struct memcg_cache_params *new_params;
-		ssize_t size = memcg_caches_array_size(num_groups);
-
-		size *= sizeof(void *);
-		size += offsetof(struct memcg_cache_params, memcg_caches);
-
-		new_params = kzalloc(size, GFP_KERNEL);
-		if (!new_params)
-			return -ENOMEM;
+	mutex_lock(&memcg_slab_mutex);
+	err = kmem_cache_memcg_arrays_grow(size);
+	mutex_unlock(&memcg_slab_mutex);
 
-		new_params->is_root_cache = true;
+	if (err)
+		goto out_rmid;
 
-		/*
-		 * There is the chance it will be bigger than
-		 * memcg_limited_groups_array_size, if we failed an allocation
-		 * in a cache, in which case all caches updated before it, will
-		 * have a bigger array.
-		 *
-		 * But if that is the case, the data after
-		 * memcg_limited_groups_array_size is certainly unused
-		 */
-		for (i = 0; i < memcg_limited_groups_array_size; i++) {
-			if (!cur_params->memcg_caches[i])
-				continue;
-			new_params->memcg_caches[i] =
-						cur_params->memcg_caches[i];
-		}
+	/*
+	 * Update the arrays' size only after we grew them so that readers
+	 * walking over such an array won't get an index out of range provided
+	 * they use an appropriate mutex to protect the array's elements.
+	 */
+	memcg_limited_groups_array_size = size;
 
-		/*
-		 * Ideally, we would wait until all caches succeed, and only
-		 * then free the old one. But this is not worth the extra
-		 * pointer per-cache we'd have to have for this.
-		 *
-		 * It is not a big deal if some caches are left with a size
-		 * bigger than the others. And all updates will reset this
-		 * anyway.
-		 */
-		rcu_assign_pointer(s->memcg_params, new_params);
-		kfree_rcu(cur_params, rcu_head);
-	}
+out_setid:
+	memcg->kmemcg_id = id;
 	return 0;
+
+out_rmid:
+	ida_simple_remove(&kmem_limited_groups, id);
+	return err;
 }
 
 int memcg_alloc_cache_params(struct mem_cgroup *memcg, struct kmem_cache *s,
@@ -4950,7 +4918,6 @@ static int __memcg_activate_kmem(struct mem_cgroup *memcg,
 				 unsigned long long limit)
 {
 	int err = 0;
-	int memcg_id;
 
 	if (memcg_kmem_is_active(memcg))
 		return 0;
@@ -4980,24 +4947,10 @@ static int __memcg_activate_kmem(struct mem_cgroup *memcg,
 	if (err)
 		goto out;
 
-	memcg_id = ida_simple_get(&kmem_limited_groups,
-				  0, MEMCG_CACHES_MAX_SIZE, GFP_KERNEL);
-	if (memcg_id < 0) {
-		err = memcg_id;
-		goto out;
-	}
-
-	/*
-	 * Make sure we have enough space for this cgroup in each root cache's
-	 * memcg_params.
-	 */
-	mutex_lock(&memcg_slab_mutex);
-	err = memcg_update_all_caches(memcg_id + 1);
-	mutex_unlock(&memcg_slab_mutex);
+	err = memcg_init_cache_id(memcg);
 	if (err)
-		goto out_rmid;
+		goto out;
 
-	memcg->kmemcg_id = memcg_id;
 	INIT_LIST_HEAD(&memcg->memcg_slab_caches);
 
 	/*
@@ -5017,10 +4970,6 @@ static int __memcg_activate_kmem(struct mem_cgroup *memcg,
 out:
 	memcg_resume_kmem_account();
 	return err;
-
-out_rmid:
-	ida_simple_remove(&kmem_limited_groups, memcg_id);
-	goto out;
 }
 
 static int memcg_activate_kmem(struct mem_cgroup *memcg,
diff --git a/mm/slab_common.c b/mm/slab_common.c
index ce5e96aff4e6..801999247619 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -77,10 +77,17 @@ static inline int kmem_cache_sanity_check(const char *name, size_t size)
 #endif
 
 #ifdef CONFIG_MEMCG_KMEM
+/*
+ * (Re)allocates the memcg_caches array of the given kmem cache so that it
+ * can hold up to nr_entries. The caller must hold slab_mutex.
+ */
 static int __kmem_cache_init_memcg_array(struct kmem_cache *s, int nr_entries)
 {
+	int i;
 	size_t size;
-	struct memcg_cache_params *new;
+	struct memcg_cache_params *new, *old;
+
+	old = s->memcg_params;
 
 	size = offsetof(struct memcg_cache_params, memcg_caches);
 	size += nr_entries * sizeof(void *);
@@ -90,10 +97,15 @@ static int __kmem_cache_init_memcg_array(struct kmem_cache *s, int nr_entries)
 		return -ENOMEM;
 
 	new->is_root_cache = true;
+	if (old) {
+		for_each_memcg_cache_index(i)
+			new->memcg_caches[i] = old->memcg_caches[i];
+	}
 
 	/* matching rcu_dereference is in cache_from_memcg_idx */
 	rcu_assign_pointer(s->memcg_params, new);
-
+	if (old)
+		kfree_rcu(old, rcu_head);
 	return 0;
 }
 
@@ -110,32 +122,30 @@ int kmem_cache_init_memcg_array(struct kmem_cache *s, int nr_entries)
 	return ret;
 }
 
-int memcg_update_all_caches(int num_memcgs)
+int kmem_cache_memcg_arrays_grow(int nr_entries)
 {
 	struct kmem_cache *s;
 	int ret = 0;
-	mutex_lock(&slab_mutex);
 
+	mutex_lock(&slab_mutex);
 	list_for_each_entry(s, &slab_caches, list) {
 		if (!is_root_cache(s))
 			continue;
 
-		ret = memcg_update_cache_size(s, num_memcgs);
+		ret = __kmem_cache_init_memcg_array(s, nr_entries);
 		/*
-		 * See comment in memcontrol.c, memcg_update_cache_size:
-		 * Instead of freeing the memory, we'll just leave the caches
-		 * up to this point in an updated state.
+		 * We won't shrink the arrays back to the initial size on
+		 * failure, because it isn't a big deal if some caches are left
+		 * with a size greater than others. Further updates will reset
+		 * this anyway.
 		 */
 		if (ret)
-			goto out;
+			break;
 	}
-
-	memcg_update_array_size(num_memcgs);
-out:
 	mutex_unlock(&slab_mutex);
 	return ret;
 }
-#endif
+#endif /* CONFIG_MEMCG_KMEM */
 
 /*
  * Figure out what the alignment of the objects will be given a set of
-- 
1.7.10.4



* [PATCH -mm v2 4/6] memcg: get rid of memcg_{alloc,free}_cache_params
From: Vladimir Davydov @ 2014-04-27  9:04 UTC
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

The only reason why we have special functions for initializing
memcg_cache_params - memcg_{alloc,free}_cache_params, defined in
memcontrol.c - is that we can't do css_{get,put} in slab_common.c.
However, we can move css_{get,put} to memcg_kmem_{create,destroy}_cache,
because they are only called when creating/destroying per memcg caches.
Then the rest of memcg_{alloc,free}_cache_params can be inlined into
slab_common.c, making the code clearer.

Note: to properly bail out in memcg_kmem_destroy_cache, I need to know
whether the cache was actually destroyed (it won't be if it still has
objects). That's why I add a return value to kmem_cache_destroy.
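
To illustrate the new semantics (the cache name and message below are
made up, not from this patch): a caller that needs to know whether the
cache is really gone can now check the return value, e.g.

	if (kmem_cache_destroy(my_cache) != 0)
		pr_warn("my_cache still has objects, not destroyed\n");

while callers that keep ignoring the return value are not affected.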

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 include/linux/memcontrol.h |   14 ------------
 include/linux/slab.h       |    2 +-
 mm/memcontrol.c            |   48 +++++++++++++++--------------------------
 mm/slab_common.c           |   51 ++++++++++++++++++++++++++++++++------------
 4 files changed, 55 insertions(+), 60 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6e59393e03f9..3aee79fc7876 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -492,10 +492,6 @@ void __memcg_kmem_uncharge_pages(struct page *page, int order);
 
 int memcg_cache_id(struct mem_cgroup *memcg);
 
-int memcg_alloc_cache_params(struct mem_cgroup *memcg, struct kmem_cache *s,
-			     struct kmem_cache *root_cache);
-void memcg_free_cache_params(struct kmem_cache *s);
-
 struct kmem_cache *
 __memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp);
 
@@ -623,16 +619,6 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
 	return -1;
 }
 
-static inline int memcg_alloc_cache_params(struct mem_cgroup *memcg,
-		struct kmem_cache *s, struct kmem_cache *root_cache)
-{
-	return 0;
-}
-
-static inline void memcg_free_cache_params(struct kmem_cache *s)
-{
-}
-
 static inline struct kmem_cache *
 memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
 {
diff --git a/include/linux/slab.h b/include/linux/slab.h
index c437be67917b..d041539b2bfb 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -122,7 +122,7 @@ struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *,
 int kmem_cache_init_memcg_array(struct kmem_cache *, int);
 int kmem_cache_memcg_arrays_grow(int);
 #endif
-void kmem_cache_destroy(struct kmem_cache *);
+int kmem_cache_destroy(struct kmem_cache *);
 int kmem_cache_shrink(struct kmem_cache *);
 void kmem_cache_free(struct kmem_cache *, void *);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 714d7bd7f140..415c81c2710a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3075,32 +3075,6 @@ out_rmid:
 	return err;
 }
 
-int memcg_alloc_cache_params(struct mem_cgroup *memcg, struct kmem_cache *s,
-			     struct kmem_cache *root_cache)
-{
-	if (!memcg)
-		return 0;
-
-	s->memcg_params = kzalloc(sizeof(*s->memcg_params), GFP_KERNEL);
-	if (!s->memcg_params)
-		return -ENOMEM;
-
-	s->memcg_params->memcg = memcg;
-	s->memcg_params->root_cache = root_cache;
-	css_get(&memcg->css);
-
-	return 0;
-}
-
-void memcg_free_cache_params(struct kmem_cache *s)
-{
-	if (!s->memcg_params)
-		return;
-	if (!s->memcg_params->is_root_cache)
-		css_put(&s->memcg_params->memcg->css);
-	kfree(s->memcg_params);
-}
-
 /*
  * Prepares the memcg_caches array of the given kmem cache for disposing
  * memcgs' copies.
@@ -3154,6 +3128,7 @@ static void memcg_kmem_create_cache(struct mem_cgroup *memcg,
 	if (!cachep)
 		return;
 
+	css_get(&memcg->css);
 	list_add(&cachep->memcg_params->list, &memcg->memcg_slab_caches);
 
 	/*
@@ -3167,7 +3142,7 @@ static void memcg_kmem_create_cache(struct mem_cgroup *memcg,
 	root_cache->memcg_params->memcg_caches[id] = cachep;
 }
 
-static void memcg_kmem_destroy_cache(struct kmem_cache *cachep)
+static int memcg_kmem_destroy_cache(struct kmem_cache *cachep)
 {
 	struct kmem_cache *root_cache;
 	struct mem_cgroup *memcg;
@@ -3181,12 +3156,25 @@ static void memcg_kmem_destroy_cache(struct kmem_cache *cachep)
 	memcg = cachep->memcg_params->memcg;
 	id = memcg_cache_id(memcg);
 
+	/*
+	 * Since memcg_caches arrays can be accessed using only slab_mutex for
+	 * protection (e.g. by slabinfo readers), we must clear the cache's
+	 * entry before trying to destroy it.
+	 */
 	BUG_ON(root_cache->memcg_params->memcg_caches[id] != cachep);
 	root_cache->memcg_params->memcg_caches[id] = NULL;
 
 	list_del(&cachep->memcg_params->list);
 
-	kmem_cache_destroy(cachep);
+	if (kmem_cache_destroy(cachep) != 0) {
+		root_cache->memcg_params->memcg_caches[id] = cachep;
+		list_add(&cachep->memcg_params->list,
+			 &memcg->memcg_slab_caches);
+		return -EBUSY;
+	}
+
+	css_put(&memcg->css);
+	return 0;
 }
 
 /*
@@ -3231,9 +3219,7 @@ int __kmem_cache_destroy_memcg_children(struct kmem_cache *s)
 		if (!c)
 			continue;
 
-		memcg_kmem_destroy_cache(c);
-
-		if (cache_from_memcg_idx(s, i))
+		if (memcg_kmem_destroy_cache(c) != 0)
 			failed++;
 	}
 	mutex_unlock(&memcg_slab_mutex);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 801999247619..055506ba6d37 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -177,7 +177,7 @@ unsigned long calculate_alignment(unsigned long flags,
 static struct kmem_cache *
 do_kmem_cache_create(char *name, size_t object_size, size_t size, size_t align,
 		     unsigned long flags, void (*ctor)(void *),
-		     struct mem_cgroup *memcg, struct kmem_cache *root_cache)
+		     struct memcg_cache_params *memcg_params)
 {
 	struct kmem_cache *s;
 	int err;
@@ -192,10 +192,9 @@ do_kmem_cache_create(char *name, size_t object_size, size_t size, size_t align,
 	s->size = size;
 	s->align = align;
 	s->ctor = ctor;
-
-	err = memcg_alloc_cache_params(memcg, s, root_cache);
-	if (err)
-		goto out_free_cache;
+#ifdef CONFIG_MEMCG_KMEM
+	s->memcg_params = memcg_params;
+#endif
 
 	err = __kmem_cache_create(s, flags);
 	if (err)
@@ -209,7 +208,6 @@ out:
 	return s;
 
 out_free_cache:
-	memcg_free_cache_params(s);
 	kfree(s);
 	goto out;
 }
@@ -275,7 +273,7 @@ kmem_cache_create(const char *name, size_t size, size_t align,
 
 	s = do_kmem_cache_create(cache_name, size, size,
 				 calculate_alignment(flags, align, size),
-				 flags, ctor, NULL, NULL);
+				 flags, ctor, NULL);
 	if (IS_ERR(s)) {
 		err = PTR_ERR(s);
 		kfree(cache_name);
@@ -318,13 +316,21 @@ struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *memcg,
 					   const char *memcg_name)
 {
 	struct kmem_cache *s = NULL;
-	char *cache_name;
+	char *cache_name = NULL;
+	struct memcg_cache_params *memcg_params = NULL;
 
 	get_online_cpus();
 	get_online_mems();
 
 	mutex_lock(&slab_mutex);
 
+	memcg_params = kzalloc(sizeof(*memcg_params), GFP_KERNEL);
+	if (!memcg_params)
+		goto out_unlock;
+
+	memcg_params->memcg = memcg;
+	memcg_params->root_cache = root_cache;
+
 	cache_name = kasprintf(GFP_KERNEL, "%s(%d:%s)", root_cache->name,
 			       memcg_cache_id(memcg), memcg_name);
 	if (!cache_name)
@@ -333,11 +339,9 @@ struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *memcg,
 	s = do_kmem_cache_create(cache_name, root_cache->object_size,
 				 root_cache->size, root_cache->align,
 				 root_cache->flags, root_cache->ctor,
-				 memcg, root_cache);
-	if (IS_ERR(s)) {
-		kfree(cache_name);
+				 memcg_params);
+	if (IS_ERR(s))
 		s = NULL;
-	}
 
 out_unlock:
 	mutex_unlock(&slab_mutex);
@@ -345,6 +349,10 @@ out_unlock:
 	put_online_mems();
 	put_online_cpus();
 
+	if (!s) {
+		kfree(memcg_params);
+		kfree(cache_name);
+	}
 	return s;
 }
 
@@ -369,8 +377,17 @@ static int kmem_cache_destroy_memcg_children(struct kmem_cache *s)
 }
 #endif /* CONFIG_MEMCG_KMEM */
 
-void kmem_cache_destroy(struct kmem_cache *s)
+/*
+ * kmem_cache_destroy - Destroy a cache.
+ * @s: The cache to destroy.
+ *
+ * Returns 0 on success. If the cache still has objects, -EBUSY is returned and
+ * a warning is printed to the log.
+ */
+int kmem_cache_destroy(struct kmem_cache *s)
 {
+	int ret = 0;
+
 	get_online_cpus();
 	get_online_mems();
 
@@ -380,6 +397,7 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	if (s->refcount)
 		goto out_unlock;
 
+	ret = -EBUSY;
 	if (kmem_cache_destroy_memcg_children(s) != 0)
 		goto out_unlock;
 
@@ -396,9 +414,12 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	if (s->flags & SLAB_DESTROY_BY_RCU)
 		rcu_barrier();
 
-	memcg_free_cache_params(s);
+#ifdef CONFIG_MEMCG_KMEM
+	kfree(s->memcg_params);
+#endif
 	kfree(s->name);
 	kmem_cache_free(kmem_cache, s);
+	ret = 0;
 	goto out;
 
 out_unlock:
@@ -406,6 +427,8 @@ out_unlock:
 out:
 	put_online_mems();
 	put_online_cpus();
+
+	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_destroy);
 
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH -mm v2 5/6] memcg: cleanup kmem cache creation/destruction functions naming
  2014-04-27  9:04 ` Vladimir Davydov
@ 2014-04-27  9:04   ` Vladimir Davydov
  -1 siblings, 0 replies; 16+ messages in thread
From: Vladimir Davydov @ 2014-04-27  9:04 UTC (permalink / raw)
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

Current names are rather confusing. Let's try to improve them.

Brief change log:

** old name **                          ** new name **

kmem_cache_create_memcg                 kmem_cache_request_memcg_copy
memcg_kmem_create_cache                 memcg_copy_kmem_cache
memcg_kmem_destroy_cache                memcg_destroy_kmem_cache_copy

__kmem_cache_destroy_memcg_children     __kmem_cache_destroy_memcg_copies
kmem_cache_destroy_memcg_children       kmem_cache_destroy_memcg_copies
mem_cgroup_destroy_all_caches           memcg_destroy_kmem_cache_copies

create_work                             memcg_kmem_cache_copy_work
memcg_create_cache_work_func            memcg_kmem_cache_copy_work_func
memcg_create_cache_enqueue              memcg_schedule_kmem_cache_copy
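
Note that kmem_cache_request_memcg_copy() also has its arguments
reordered: the root cache now comes first. A minimal sketch of an
updated call site (hypothetical caller, for illustration only; the real
in-tree caller is memcg_copy_kmem_cache()):

static struct kmem_cache *
request_copy_example(struct kmem_cache *root_cache, struct mem_cgroup *memcg,
		     const char *memcg_name)
{
	/* was: kmem_cache_create_memcg(memcg, root_cache, memcg_name) */
	return kmem_cache_request_memcg_copy(root_cache, memcg, memcg_name);
}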

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 include/linux/memcontrol.h |    2 +-
 include/linux/slab.h       |    6 +--
 mm/memcontrol.c            |   90 +++++++++++++++++++++++++-------------------
 mm/slab_common.c           |   29 +++++++-------
 4 files changed, 69 insertions(+), 58 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3aee79fc7876..b20f533a92ca 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -498,7 +498,7 @@ __memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp);
 int __memcg_charge_slab(struct kmem_cache *cachep, gfp_t gfp, int order);
 void __memcg_uncharge_slab(struct kmem_cache *cachep, int order);
 
-int __kmem_cache_destroy_memcg_children(struct kmem_cache *s);
+int __kmem_cache_destroy_memcg_copies(struct kmem_cache *cachep);
 
 /**
  * memcg_kmem_newpage_charge: verify if a new kmem allocation is allowed.
diff --git a/include/linux/slab.h b/include/linux/slab.h
index d041539b2bfb..ecb26a4547fe 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -116,9 +116,9 @@ struct kmem_cache *kmem_cache_create(const char *, size_t, size_t,
 			unsigned long,
 			void (*)(void *));
 #ifdef CONFIG_MEMCG_KMEM
-struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *,
-					   struct kmem_cache *,
-					   const char *);
+struct kmem_cache *kmem_cache_request_memcg_copy(struct kmem_cache *,
+						 struct mem_cgroup *,
+						 const char *);
 int kmem_cache_init_memcg_array(struct kmem_cache *, int);
 int kmem_cache_memcg_arrays_grow(int);
 #endif
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 415c81c2710a..c795c3e388dc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3099,8 +3099,12 @@ static int memcg_prepare_kmem_cache(struct kmem_cache *cachep)
 	return ret;
 }
 
-static void memcg_kmem_create_cache(struct mem_cgroup *memcg,
-				    struct kmem_cache *root_cache)
+/*
+ * Creates a copy of the given kmem cache to be used for fulfilling allocation
+ * requests coming from the memcg to the cache.
+ */
+static void memcg_copy_kmem_cache(struct mem_cgroup *memcg,
+				  struct kmem_cache *root_cache)
 {
 	static char memcg_name[NAME_MAX + 1];
 	struct kmem_cache *cachep;
@@ -3119,7 +3123,7 @@ static void memcg_kmem_create_cache(struct mem_cgroup *memcg,
 		return;
 
 	cgroup_name(memcg->css.cgroup, memcg_name, NAME_MAX + 1);
-	cachep = kmem_cache_create_memcg(memcg, root_cache, memcg_name);
+	cachep = kmem_cache_request_memcg_copy(root_cache, memcg, memcg_name);
 	/*
 	 * If we could not create a memcg cache, do not complain, because
 	 * that's not critical at all as we can always proceed with the root
@@ -3142,7 +3146,8 @@ static void memcg_kmem_create_cache(struct mem_cgroup *memcg,
 	root_cache->memcg_params->memcg_caches[id] = cachep;
 }
 
-static int memcg_kmem_destroy_cache(struct kmem_cache *cachep)
+/* Attempts to destroy a per memcg kmem cache copy. Returns 0 on success. */
+static int memcg_destroy_kmem_cache_copy(struct kmem_cache *cachep)
 {
 	struct kmem_cache *root_cache;
 	struct mem_cgroup *memcg;
@@ -3208,25 +3213,39 @@ static inline void memcg_resume_kmem_account(void)
 	current->memcg_kmem_skip_account--;
 }
 
-int __kmem_cache_destroy_memcg_children(struct kmem_cache *s)
+/*
+ * Attempts to destroy all per memcg copies of the given kmem cache. Called on
+ * kmem cache destruction. Returns 0 on success.
+ */
+int __kmem_cache_destroy_memcg_copies(struct kmem_cache *cachep)
 {
 	struct kmem_cache *c;
-	int i, failed = 0;
+	int i;
+	int ret = 0;
+
+	BUG_ON(!is_root_cache(cachep));
 
 	mutex_lock(&memcg_slab_mutex);
 	for_each_memcg_cache_index(i) {
-		c = cache_from_memcg_idx(s, i);
+		c = cache_from_memcg_idx(cachep, i);
 		if (!c)
 			continue;
 
-		if (memcg_kmem_destroy_cache(c) != 0)
-			failed++;
+		if (memcg_destroy_kmem_cache_copy(c) != 0)
+			ret = -EBUSY;
 	}
 	mutex_unlock(&memcg_slab_mutex);
-	return failed;
+	return ret;
 }
 
-static void mem_cgroup_destroy_all_caches(struct mem_cgroup *memcg)
+/*
+ * Attempts to destroy all kmem cache copies corresponding to the given memcg.
+ * Called on css offline.
+ *
+ * XXX: Caches that still have objects on css offline will be leaked. Need to
+ * reparent them instead.
+ */
+static void memcg_destroy_kmem_cache_copies(struct mem_cgroup *memcg)
 {
 	struct kmem_cache *cachep;
 	struct memcg_cache_params *params, *tmp;
@@ -3239,20 +3258,21 @@ static void mem_cgroup_destroy_all_caches(struct mem_cgroup *memcg)
 		cachep = memcg_params_to_cache(params);
 		kmem_cache_shrink(cachep);
 		if (atomic_read(&cachep->memcg_params->nr_pages) == 0)
-			memcg_kmem_destroy_cache(cachep);
+			memcg_destroy_kmem_cache_copy(cachep);
 	}
 	mutex_unlock(&memcg_slab_mutex);
 }
 
-struct create_work {
+struct memcg_kmem_cache_copy_work {
 	struct mem_cgroup *memcg;
 	struct kmem_cache *cachep;
 	struct work_struct work;
 };
 
-static void memcg_create_cache_work_func(struct work_struct *w)
+static void memcg_kmem_cache_copy_work_func(struct work_struct *w)
 {
-	struct create_work *cw = container_of(w, struct create_work, work);
+	struct memcg_kmem_cache_copy_work *cw = container_of(w,
+			struct memcg_kmem_cache_copy_work, work);
 	struct mem_cgroup *memcg = cw->memcg;
 	struct kmem_cache *cachep = cw->cachep;
 
@@ -3260,22 +3280,19 @@ static void memcg_create_cache_work_func(struct work_struct *w)
 		goto out;
 
 	mutex_lock(&memcg_slab_mutex);
-	memcg_kmem_create_cache(memcg, cachep);
+	memcg_copy_kmem_cache(memcg, cachep);
 	mutex_unlock(&memcg_slab_mutex);
 out:
 	css_put(&memcg->css);
 	kfree(cw);
 }
 
-/*
- * Enqueue the creation of a per-memcg kmem_cache.
- */
-static void __memcg_create_cache_enqueue(struct mem_cgroup *memcg,
-					 struct kmem_cache *cachep)
+static void __memcg_schedule_kmem_cache_copy(struct mem_cgroup *memcg,
+					     struct kmem_cache *cachep)
 {
-	struct create_work *cw;
+	struct memcg_kmem_cache_copy_work *cw;
 
-	cw = kmalloc(sizeof(struct create_work), GFP_NOWAIT);
+	cw = kmalloc(sizeof(*cw), GFP_NOWAIT);
 	if (cw == NULL) {
 		css_put(&memcg->css);
 		return;
@@ -3283,18 +3300,18 @@ static void __memcg_create_cache_enqueue(struct mem_cgroup *memcg,
 
 	cw->memcg = memcg;
 	cw->cachep = cachep;
+	INIT_WORK(&cw->work, memcg_kmem_cache_copy_work_func);
 
-	INIT_WORK(&cw->work, memcg_create_cache_work_func);
 	schedule_work(&cw->work);
 }
 
-static void memcg_create_cache_enqueue(struct mem_cgroup *memcg,
-				       struct kmem_cache *cachep)
+static void memcg_schedule_kmem_cache_copy(struct mem_cgroup *memcg,
+					   struct kmem_cache *cachep)
 {
 	/*
 	 * We need to stop accounting when we kmalloc, because if the
 	 * corresponding kmalloc cache is not yet created, the first allocation
-	 * in __memcg_create_cache_enqueue will recurse.
+	 * in __memcg_schedule_kmem_cache_copy will recurse.
 	 *
 	 * However, it is better to enclose the whole function. Depending on
 	 * the debugging options enabled, INIT_WORK(), for instance, can
@@ -3303,7 +3320,7 @@ static void memcg_create_cache_enqueue(struct mem_cgroup *memcg,
 	 * the safest choice is to do it like this, wrapping the whole function.
 	 */
 	memcg_stop_kmem_account();
-	__memcg_create_cache_enqueue(memcg, cachep);
+	__memcg_schedule_kmem_cache_copy(memcg, cachep);
 	memcg_resume_kmem_account();
 }
 
@@ -3367,22 +3384,17 @@ struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep,
 
 	/*
 	 * If we are in a safe context (can wait, and not in interrupt
-	 * context), we could be be predictable and return right away.
+	 * context), we could be predictable and return right away.
 	 * This would guarantee that the allocation being performed
 	 * already belongs in the new cache.
 	 *
 	 * However, there are some clashes that can arrive from locking.
 	 * For instance, because we acquire the slab_mutex while doing
-	 * kmem_cache_dup, this means no further allocation could happen
-	 * with the slab_mutex held.
-	 *
-	 * Also, because cache creation issue get_online_cpus(), this
-	 * creates a lock chain: memcg_slab_mutex -> cpu_hotplug_mutex,
-	 * that ends up reversed during cpu hotplug. (cpuset allocates
-	 * a bunch of GFP_KERNEL memory during cpuup). Due to all that,
+	 * kmem_cache_request_memcg_copy, this means no further
+	 * allocation could happen with the slab_mutex held. So it's
 	 * better to defer everything.
 	 */
-	memcg_create_cache_enqueue(memcg, cachep);
+	memcg_schedule_kmem_cache_copy(memcg, cachep);
 	return cachep;
 out:
 	rcu_read_unlock();
@@ -3506,7 +3518,7 @@ void __memcg_kmem_uncharge_pages(struct page *page, int order)
 	memcg_uncharge_kmem(memcg, PAGE_SIZE << order);
 }
 #else
-static inline void mem_cgroup_destroy_all_caches(struct mem_cgroup *memcg)
+static inline void memcg_destroy_kmem_cache_copies(struct mem_cgroup *memcg)
 {
 }
 #endif /* CONFIG_MEMCG_KMEM */
@@ -6341,7 +6353,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 	css_for_each_descendant_post(iter, css)
 		mem_cgroup_reparent_charges(mem_cgroup_from_css(iter));
 
-	mem_cgroup_destroy_all_caches(memcg);
+	memcg_destroy_kmem_cache_copies(memcg);
 	vmpressure_cleanup(&memcg->vmpressure);
 }
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 055506ba6d37..36c7d32a6f97 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -302,18 +302,18 @@ EXPORT_SYMBOL(kmem_cache_create);
 
 #ifdef CONFIG_MEMCG_KMEM
 /*
- * kmem_cache_create_memcg - Create a cache for a memory cgroup.
+ * kmem_cache_request_memcg_copy - Create a cache for a memory cgroup.
+ * @cachep: The kmem cache to make a copy of.
  * @memcg: The memory cgroup the new cache is for.
- * @root_cache: The parent of the new cache.
  * @memcg_name: The name of the memory cgroup.
  *
  * This function attempts to create a kmem cache that will serve allocation
- * requests going from @memcg to @root_cache. The new cache inherits properties
+ * requests going from @memcg to @cachep. The new cache inherits properties
  * from its parent.
  */
-struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *memcg,
-					   struct kmem_cache *root_cache,
-					   const char *memcg_name)
+struct kmem_cache *kmem_cache_request_memcg_copy(struct kmem_cache *cachep,
+						 struct mem_cgroup *memcg,
+						 const char *memcg_name)
 {
 	struct kmem_cache *s = NULL;
 	char *cache_name = NULL;
@@ -329,16 +329,15 @@ struct kmem_cache *kmem_cache_create_memcg(struct mem_cgroup *memcg,
 		goto out_unlock;
 
 	memcg_params->memcg = memcg;
-	memcg_params->root_cache = root_cache;
+	memcg_params->root_cache = cachep;
 
-	cache_name = kasprintf(GFP_KERNEL, "%s(%d:%s)", root_cache->name,
+	cache_name = kasprintf(GFP_KERNEL, "%s(%d:%s)", cachep->name,
 			       memcg_cache_id(memcg), memcg_name);
 	if (!cache_name)
 		goto out_unlock;
 
-	s = do_kmem_cache_create(cache_name, root_cache->object_size,
-				 root_cache->size, root_cache->align,
-				 root_cache->flags, root_cache->ctor,
+	s = do_kmem_cache_create(cache_name, cachep->object_size, cachep->size,
+				 cachep->align, cachep->flags, cachep->ctor,
 				 memcg_params);
 	if (IS_ERR(s))
 		s = NULL;
@@ -356,7 +355,7 @@ out_unlock:
 	return s;
 }
 
-static int kmem_cache_destroy_memcg_children(struct kmem_cache *s)
+static int kmem_cache_destroy_memcg_copies(struct kmem_cache *s)
 {
 	int rc;
 
@@ -365,13 +364,13 @@ static int kmem_cache_destroy_memcg_children(struct kmem_cache *s)
 		return 0;
 
 	mutex_unlock(&slab_mutex);
-	rc = __kmem_cache_destroy_memcg_children(s);
+	rc = __kmem_cache_destroy_memcg_copies(s);
 	mutex_lock(&slab_mutex);
 
 	return rc;
 }
 #else
-static int kmem_cache_destroy_memcg_children(struct kmem_cache *s)
+static int kmem_cache_destroy_memcg_copies(struct kmem_cache *s)
 {
 	return 0;
 }
@@ -398,7 +397,7 @@ int kmem_cache_destroy(struct kmem_cache *s)
 		goto out_unlock;
 
 	ret = -EBUSY;
-	if (kmem_cache_destroy_memcg_children(s) != 0)
+	if (kmem_cache_destroy_memcg_copies(s) != 0)
 		goto out_unlock;
 
 	list_del(&s->list);
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH -mm v2 6/6] memcg: cleanup kmem_id-related naming
  2014-04-27  9:04 ` Vladimir Davydov
@ 2014-04-27  9:04   ` Vladimir Davydov
  -1 siblings, 0 replies; 16+ messages in thread
From: Vladimir Davydov @ 2014-04-27  9:04 UTC (permalink / raw)
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

The naming of mem_cgroup->kmemcg_id-related functions is rather
inconsistent. We tend to use cache_id as part of their names, but that
isn't quite right: the id is not specific to kmem caches - it can be
used to index any array that stores per memcg data. For instance, we
will use it to make list_lru per memcg in the future. So let's clean up
the names and comments related to kmem_id.

Brief change log:

** old name **                          ** new name **

mem_cgroup->kmemcg_id                   mem_cgroup->kmem_id

memcg_init_cache_id()                   memcg_init_kmem_id()
memcg_cache_id()                        memcg_kmem_id()
cache_from_memcg_idx()                  kmem_cache_of_memcg_by_id()
cache_from_memcg_idx(memcg_cache_id())  kmem_cache_of_memcg()

for_each_memcg_cache_index()            for_each_possible_memcg_kmem_id()

memcg_limited_groups                    memcg_kmem_ida
memcg_limited_groups_array_size         memcg_nr_kmem_ids_max

MEMCG_CACHES_MIN_SIZE                   <constant inlined>
MEMCG_CACHES_MAX_SIZE                   MEMCG_KMEM_ID_MAX + 1
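
To illustrate that the id is not tied to kmem caches, here is a minimal
sketch of indexing some other per memcg array by it (foo_stat_of() is
made up for this example and is not part of the series):

static unsigned long foo_stat_of(unsigned long *per_memcg_stats,
				 struct mem_cgroup *memcg)
{
	/* the array is assumed to have memcg_nr_kmem_ids_max entries */
	int id = memcg_kmem_id(memcg);

	/* -1 means the cgroup is not kmem-limited and has no slot */
	return id < 0 ? 0 : per_memcg_stats[id];
}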

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 include/linux/memcontrol.h |   19 +++----
 mm/memcontrol.c            |  117 +++++++++++++++++++++-----------------------
 mm/slab.c                  |    4 +-
 mm/slab.h                  |   24 ++++++---
 mm/slab_common.c           |   10 ++--
 mm/slub.c                  |   10 ++--
 6 files changed, 93 insertions(+), 91 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b20f533a92ca..1a5c33fd40a4 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -458,15 +458,10 @@ static inline void sock_release_memcg(struct sock *sk)
 #ifdef CONFIG_MEMCG_KMEM
 extern struct static_key memcg_kmem_enabled_key;
 
-extern int memcg_limited_groups_array_size;
+extern int memcg_nr_kmem_ids_max;
 
-/*
- * Helper macro to loop through all memcg-specific caches. Callers must still
- * check if the cache is valid (it is either valid or NULL).
- * the slab_mutex must be held when looping through those caches
- */
-#define for_each_memcg_cache_index(_idx)	\
-	for ((_idx) = 0; (_idx) < memcg_limited_groups_array_size; (_idx)++)
+#define for_each_possible_memcg_kmem_id(id) \
+	for ((id) = 0; (id) < memcg_nr_kmem_ids_max; (id)++)
 
 static inline bool memcg_kmem_enabled(void)
 {
@@ -490,7 +485,7 @@ void __memcg_kmem_commit_charge(struct page *page,
 				       struct mem_cgroup *memcg, int order);
 void __memcg_kmem_uncharge_pages(struct page *page, int order);
 
-int memcg_cache_id(struct mem_cgroup *memcg);
+int memcg_kmem_id(struct mem_cgroup *memcg);
 
 struct kmem_cache *
 __memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp);
@@ -591,8 +586,8 @@ memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
 	return __memcg_kmem_get_cache(cachep, gfp);
 }
 #else
-#define for_each_memcg_cache_index(_idx)	\
-	for (; NULL; )
+#define for_each_possible_memcg_kmem_id(id) \
+	for ((id) = 0; 0; )
 
 static inline bool memcg_kmem_enabled(void)
 {
@@ -614,7 +609,7 @@ memcg_kmem_commit_charge(struct page *page, struct mem_cgroup *memcg, int order)
 {
 }
 
-static inline int memcg_cache_id(struct mem_cgroup *memcg)
+static inline int memcg_kmem_id(struct mem_cgroup *memcg)
 {
 	return -1;
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c795c3e388dc..1077091e995f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -357,11 +357,22 @@ struct mem_cgroup {
 	struct cg_proto tcp_mem;
 #endif
 #if defined(CONFIG_MEMCG_KMEM)
+	/*
+	 * Each kmem-limited memory cgroup has a unique id. We use it for
+	 * indexing the arrays that store per cgroup data. An example of such
+	 * an array is kmem_cache->memcg_params->memcg_caches.
+	 *
+	 * We introduce a separate id instead of using cgroup->id to avoid
+	 * waste of memory in sparse environments, where we have a lot of
+	 * memory cgroups, but only a few of them are kmem-limited.
+	 *
+	 * For unlimited cgroups kmem_id equals -1.
+	 */
+	int kmem_id;
+
 	/* analogous to slab_common's slab_caches list, but per-memcg;
 	 * protected by memcg_slab_mutex */
 	struct list_head memcg_slab_caches;
-        /* Index in the kmem_cache->memcg_params->memcg_caches array */
-	int kmemcg_id;
 #endif
 
 	int last_scanned_node;
@@ -610,35 +621,28 @@ static void disarm_sock_keys(struct mem_cgroup *memcg)
 #endif
 
 #ifdef CONFIG_MEMCG_KMEM
+/* used for mem_cgroup->kmem_id allocations */
+static DEFINE_IDA(memcg_kmem_ida);
+
 /*
- * This will be the memcg's index in each cache's ->memcg_params->memcg_caches.
- * The main reason for not using cgroup id for this:
- *  this works better in sparse environments, where we have a lot of memcgs,
- *  but only a few kmem-limited. Or also, if we have, for instance, 200
- *  memcgs, and none but the 200th is kmem-limited, we'd have to have a
- *  200 entry array for that.
- *
- * The current size of the caches array is stored in
- * memcg_limited_groups_array_size.  It will double each time we have to
- * increase it.
+ * Max kmem id should be as large as max cgroup id so that we could enable
+ * kmem-accounting for each memory cgroup.
  */
-static DEFINE_IDA(kmem_limited_groups);
-int memcg_limited_groups_array_size;
+#define MEMCG_KMEM_ID_MAX	MEM_CGROUP_ID_MAX
 
 /*
- * MIN_SIZE is different than 1, because we would like to avoid going through
- * the alloc/free process all the time. In a small machine, 4 kmem-limited
- * cgroups is a reasonable guess. In the future, it could be a parameter or
- * tunable, but that is strictly not necessary.
+ * We keep the maximal number of kmem ids that may exist in the system in the
+ * memcg_nr_kmem_ids_max variable. We use it for the size of the arrays indexed
+ * by kmem id (see the mem_cgroup->kmem_id definition).
+ *
+ * If a newly allocated kmem id is greater or equal to memcg_nr_kmem_ids_max,
+ * we double it and reallocate the arrays so that they have enough space to
+ * store data for the new cgroup.
  *
- * MAX_SIZE should be as large as the number of cgrp_ids. Ideally, we could get
- * this constant directly from cgroup, but it is understandable that this is
- * better kept as an internal representation in cgroup.c. In any case, the
- * cgrp_id space is not getting any smaller, and we don't have to necessarily
- * increase ours as well if it increases.
+ * The updates are done with activate_kmem_mutex held, so one must take it to
+ * guarantee a stable value of memcg_nr_kmem_ids_max.
  */
-#define MEMCG_CACHES_MIN_SIZE 4
-#define MEMCG_CACHES_MAX_SIZE MEM_CGROUP_ID_MAX
+int memcg_nr_kmem_ids_max;
 
 /*
  * A lot of the calls to the cache allocation functions are expected to be
@@ -653,7 +657,7 @@ static void disarm_kmem_keys(struct mem_cgroup *memcg)
 {
 	if (memcg_kmem_is_active(memcg)) {
 		static_key_slow_dec(&memcg_kmem_enabled_key);
-		ida_simple_remove(&kmem_limited_groups, memcg->kmemcg_id);
+		ida_simple_remove(&memcg_kmem_ida, memcg->kmem_id);
 	}
 	/*
 	 * This check can't live in kmem destruction function,
@@ -2930,11 +2934,8 @@ static inline bool memcg_can_account_kmem(struct mem_cgroup *memcg)
  */
 static struct kmem_cache *memcg_params_to_cache(struct memcg_cache_params *p)
 {
-	struct kmem_cache *cachep;
-
 	VM_BUG_ON(p->is_root_cache);
-	cachep = p->root_cache;
-	return cache_from_memcg_idx(cachep, memcg_cache_id(p->memcg));
+	return kmem_cache_of_memcg(p->root_cache, p->memcg);
 }
 
 #ifdef CONFIG_SLABINFO
@@ -3017,29 +3018,24 @@ static void memcg_uncharge_kmem(struct mem_cgroup *memcg, u64 size)
 		css_put(&memcg->css);
 }
 
-/*
- * helper for acessing a memcg's index. It will be used as an index in the
- * child cache array in kmem_cache, and also to derive its name. This function
- * will return -1 when this is not a kmem-limited memcg.
- */
-int memcg_cache_id(struct mem_cgroup *memcg)
+int memcg_kmem_id(struct mem_cgroup *memcg)
 {
-	return memcg ? memcg->kmemcg_id : -1;
+	return memcg ? memcg->kmem_id : -1;
 }
 
-static int memcg_init_cache_id(struct mem_cgroup *memcg)
+static int memcg_init_kmem_id(struct mem_cgroup *memcg)
 {
 	int err = 0;
 	int id, size;
 
 	lockdep_assert_held(&activate_kmem_mutex);
 
-	id = ida_simple_get(&kmem_limited_groups,
-			    0, MEMCG_CACHES_MAX_SIZE, GFP_KERNEL);
+	id = ida_simple_get(&memcg_kmem_ida,
+			    0, MEMCG_KMEM_ID_MAX + 1, GFP_KERNEL);
 	if (id < 0)
 		return id;
 
-	if (id < memcg_limited_groups_array_size)
+	if (id < memcg_nr_kmem_ids_max)
 		goto out_setid;
 
 	/*
@@ -3047,10 +3043,10 @@ static int memcg_init_cache_id(struct mem_cgroup *memcg)
 	 * per memcg data. Let's try to grow them then.
 	 */
 	size = id * 2;
-	if (size < MEMCG_CACHES_MIN_SIZE)
-		size = MEMCG_CACHES_MIN_SIZE;
-	else if (size > MEMCG_CACHES_MAX_SIZE)
-		size = MEMCG_CACHES_MAX_SIZE;
+	if (size < 4)
+		size = 4; /* a good number to start with */
+	if (size > MEMCG_KMEM_ID_MAX + 1)
+		size = MEMCG_KMEM_ID_MAX + 1;
 
 	mutex_lock(&memcg_slab_mutex);
 	err = kmem_cache_memcg_arrays_grow(size);
@@ -3064,14 +3060,14 @@ static int memcg_init_cache_id(struct mem_cgroup *memcg)
 	 * walking over such an array won't get an index out of range provided
 	 * they use an appropriate mutex to protect the array's elements.
 	 */
-	memcg_limited_groups_array_size = size;
+	memcg_nr_kmem_ids_max = size;
 
 out_setid:
-	memcg->kmemcg_id = id;
+	memcg->kmem_id = id;
 	return 0;
 
 out_rmid:
-	ida_simple_remove(&kmem_limited_groups, id);
+	ida_simple_remove(&memcg_kmem_ida, id);
 	return err;
 }
 
@@ -3089,11 +3085,10 @@ static int memcg_prepare_kmem_cache(struct kmem_cache *cachep)
 		return 0;
 
 	/* activate_kmem_mutex guarantees a stable value of
-	 * memcg_limited_groups_array_size */
+	 * memcg_nr_kmem_ids_max */
 	mutex_lock(&activate_kmem_mutex);
 	mutex_lock(&memcg_slab_mutex);
-	ret = kmem_cache_init_memcg_array(cachep,
-			memcg_limited_groups_array_size);
+	ret = kmem_cache_init_memcg_array(cachep, memcg_nr_kmem_ids_max);
 	mutex_unlock(&memcg_slab_mutex);
 	mutex_unlock(&activate_kmem_mutex);
 	return ret;
@@ -3112,14 +3107,14 @@ static void memcg_copy_kmem_cache(struct mem_cgroup *memcg,
 
 	lockdep_assert_held(&memcg_slab_mutex);
 
-	id = memcg_cache_id(memcg);
+	id = memcg_kmem_id(memcg);
 
 	/*
 	 * Since per-memcg caches are created asynchronously on first
 	 * allocation (see memcg_kmem_get_cache()), several threads can try to
 	 * create the same cache, but only one of them may succeed.
 	 */
-	if (cache_from_memcg_idx(root_cache, id))
+	if (kmem_cache_of_memcg_by_id(root_cache, id))
 		return;
 
 	cgroup_name(memcg->css.cgroup, memcg_name, NAME_MAX + 1);
@@ -3136,8 +3131,8 @@ static void memcg_copy_kmem_cache(struct mem_cgroup *memcg,
 	list_add(&cachep->memcg_params->list, &memcg->memcg_slab_caches);
 
 	/*
-	 * Since readers won't lock (see cache_from_memcg_idx()), we need a
-	 * barrier here to ensure nobody will see the kmem_cache partially
+	 * Since readers won't lock (see kmem_cache_of_memcg_by_id()), we need
+	 * a barrier here to ensure nobody will see the kmem_cache partially
 	 * initialized.
 	 */
 	smp_wmb();
@@ -3159,7 +3154,7 @@ static int memcg_destroy_kmem_cache_copy(struct kmem_cache *cachep)
 
 	root_cache = cachep->memcg_params->root_cache;
 	memcg = cachep->memcg_params->memcg;
-	id = memcg_cache_id(memcg);
+	id = memcg_kmem_id(memcg);
 
 	/*
 	 * Since memcg_caches arrays can be accessed using only slab_mutex for
@@ -3226,8 +3221,8 @@ int __kmem_cache_destroy_memcg_copies(struct kmem_cache *cachep)
 	BUG_ON(!is_root_cache(cachep));
 
 	mutex_lock(&memcg_slab_mutex);
-	for_each_memcg_cache_index(i) {
-		c = cache_from_memcg_idx(cachep, i);
+	for_each_possible_memcg_kmem_id(i) {
+		c = kmem_cache_of_memcg_by_id(cachep, i);
 		if (!c)
 			continue;
 
@@ -3371,7 +3366,7 @@ struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep,
 	if (!memcg_can_account_kmem(memcg))
 		goto out;
 
-	memcg_cachep = cache_from_memcg_idx(cachep, memcg_cache_id(memcg));
+	memcg_cachep = kmem_cache_of_memcg(cachep, memcg);
 	if (likely(memcg_cachep)) {
 		cachep = memcg_cachep;
 		goto out;
@@ -4945,7 +4940,7 @@ static int __memcg_activate_kmem(struct mem_cgroup *memcg,
 	if (err)
 		goto out;
 
-	err = memcg_init_cache_id(memcg);
+	err = memcg_init_kmem_id(memcg);
 	if (err)
 		goto out;
 
@@ -5676,7 +5671,7 @@ static int memcg_init_kmem(struct mem_cgroup *memcg, struct cgroup_subsys *ss)
 {
 	int ret;
 
-	memcg->kmemcg_id = -1;
+	memcg->kmem_id = -1;
 	ret = memcg_propagate_kmem(memcg);
 	if (ret)
 		return ret;
diff --git a/mm/slab.c b/mm/slab.c
index 25317fd1daa2..194322a634e2 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3844,8 +3844,8 @@ static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
 		return ret;
 
 	VM_BUG_ON(!mutex_is_locked(&slab_mutex));
-	for_each_memcg_cache_index(i) {
-		c = cache_from_memcg_idx(cachep, i);
+	for_each_possible_memcg_kmem_id(i) {
+		c = kmem_cache_of_memcg_by_id(cachep, i);
 		if (c)
 			/* return value determined by the parent cache only */
 			__do_tune_cpucache(c, limit, batchcount, shared, gfp);
diff --git a/mm/slab.h b/mm/slab.h
index ba834860fbfd..61f833c569e7 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -145,11 +145,11 @@ static inline const char *cache_name(struct kmem_cache *s)
  * created a memcg's cache is destroyed only along with the root cache, it is
  * true if we are going to allocate from the cache or hold a reference to the
  * root cache by other means. Otherwise, we should hold either the slab_mutex
- * or the memcg's slab_caches_mutex while calling this function and accessing
- * the returned value.
+ * or the memcg_slab_mutex while calling this function and accessing the
+ * returned value.
  */
 static inline struct kmem_cache *
-cache_from_memcg_idx(struct kmem_cache *s, int idx)
+kmem_cache_of_memcg_by_id(struct kmem_cache *s, int id)
 {
 	struct kmem_cache *cachep;
 	struct memcg_cache_params *params;
@@ -159,18 +159,24 @@ cache_from_memcg_idx(struct kmem_cache *s, int idx)
 
 	rcu_read_lock();
 	params = rcu_dereference(s->memcg_params);
-	cachep = params->memcg_caches[idx];
+	cachep = params->memcg_caches[id];
 	rcu_read_unlock();
 
 	/*
 	 * Make sure we will access the up-to-date value. The code updating
 	 * memcg_caches issues a write barrier to match this (see
-	 * memcg_register_cache()).
+	 * memcg_copy_kmem_cache()).
 	 */
 	smp_read_barrier_depends();
 	return cachep;
 }
 
+static inline struct kmem_cache *
+kmem_cache_of_memcg(struct kmem_cache *s, struct mem_cgroup *memcg)
+{
+	return kmem_cache_of_memcg_by_id(s, memcg_kmem_id(memcg));
+}
+
 static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
 {
 	if (is_root_cache(s))
@@ -214,7 +220,13 @@ static inline const char *cache_name(struct kmem_cache *s)
 }
 
 static inline struct kmem_cache *
-cache_from_memcg_idx(struct kmem_cache *s, int idx)
+kmem_cache_of_memcg_by_id(struct kmem_cache *s, int id)
+{
+	return NULL;
+}
+
+static inline struct kmem_cache *
+kmem_cache_of_memcg(struct kmem_cache *s, struct mem_cgroup *memcg)
 {
 	return NULL;
 }
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 36c7d32a6f97..876b40bfe360 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -98,11 +98,11 @@ static int __kmem_cache_init_memcg_array(struct kmem_cache *s, int nr_entries)
 
 	new->is_root_cache = true;
 	if (old) {
-		for_each_memcg_cache_index(i)
+		for_each_possible_memcg_kmem_id(i)
 			new->memcg_caches[i] = old->memcg_caches[i];
 	}
 
-	/* matching rcu_dereference is in cache_from_memcg_idx */
+	/* matching rcu_dereference is in kmem_cache_of_memcg_by_id */
 	rcu_assign_pointer(s->memcg_params, new);
 	if (old)
 		kfree_rcu(old, rcu_head);
@@ -332,7 +332,7 @@ struct kmem_cache *kmem_cache_request_memcg_copy(struct kmem_cache *cachep,
 	memcg_params->root_cache = cachep;
 
 	cache_name = kasprintf(GFP_KERNEL, "%s(%d:%s)", cachep->name,
-			       memcg_cache_id(memcg), memcg_name);
+			       memcg_kmem_id(memcg), memcg_name);
 	if (!cache_name)
 		goto out_unlock;
 
@@ -755,8 +755,8 @@ memcg_accumulate_slabinfo(struct kmem_cache *s, struct slabinfo *info)
 	if (!is_root_cache(s))
 		return;
 
-	for_each_memcg_cache_index(i) {
-		c = cache_from_memcg_idx(s, i);
+	for_each_possible_memcg_kmem_id(i) {
+		c = kmem_cache_of_memcg_by_id(s, i);
 		if (!c)
 			continue;
 
diff --git a/mm/slub.c b/mm/slub.c
index aa30932c5190..006e6bfe257c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3772,8 +3772,8 @@ __kmem_cache_alias(const char *name, size_t size, size_t align,
 		s->object_size = max(s->object_size, (int)size);
 		s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
 
-		for_each_memcg_cache_index(i) {
-			c = cache_from_memcg_idx(s, i);
+		for_each_possible_memcg_kmem_id(i) {
+			c = kmem_cache_of_memcg_by_id(s, i);
 			if (!c)
 				continue;
 			c->object_size = s->object_size;
@@ -5062,8 +5062,8 @@ static ssize_t slab_attr_store(struct kobject *kobj,
 		 * directly either failed or succeeded, in which case we loop
 		 * through the descendants with best-effort propagation.
 		 */
-		for_each_memcg_cache_index(i) {
-			struct kmem_cache *c = cache_from_memcg_idx(s, i);
+		for_each_possible_memcg_kmem_id(i) {
+			struct kmem_cache *c = kmem_cache_of_memcg_by_id(s, i);
 			if (c)
 				attribute->store(c, buf, len);
 		}
@@ -5198,7 +5198,7 @@ static char *create_unique_id(struct kmem_cache *s)
 #ifdef CONFIG_MEMCG_KMEM
 	if (!is_root_cache(s))
 		p += sprintf(p, "-%08d",
-				memcg_cache_id(s->memcg_params->memcg));
+			     memcg_kmem_id(s->memcg_params->memcg));
 #endif
 
 	BUG_ON(p > name + ID_STR_LENGTH - 1);
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH -mm v2 6/6] memcg: cleanup kmem_id-related naming
@ 2014-04-27  9:04   ` Vladimir Davydov
  0 siblings, 0 replies; 16+ messages in thread
From: Vladimir Davydov @ 2014-04-27  9:04 UTC (permalink / raw)
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

The naming of mem_cgroup->kmemcg_id-related functions is rather
inconsistent. We tend to use cache_id as part of their names, but it
isn't quite right, because kmem_id isn't something specific to kmem
caches. It can be used for indexing any array that stores per memcg
data. For instance, we will use it to make list_lru per memcg in the
future. So let's clean up the names and comments related to kmem_id.

Brief change log:

** old name **                          ** new name **

mem_cgroup->kmemcg_id                   mem_cgroup->kmem_id

memcg_init_cache_id()                   memcg_init_kmem_id()
memcg_cache_id()                        memcg_kmem_id()
cache_from_memcg_idx()                  kmem_cache_of_memcg_by_id()
cache_from_memcg_idx(memcg_cache_id())  kmem_cache_of_memcg()

for_each_memcg_cache_index()            for_each_possible_memcg_kmem_id()

memcg_limited_groups                    memcg_kmem_ida
memcg_limited_groups_array_size         memcg_nr_kmem_ids_max

MEMCG_CACHES_MIN_SIZE                   <constant inlined>
MEMCG_CACHES_MAX_SIZE                   MEMCG_KMEM_ID_MAX + 1

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 include/linux/memcontrol.h |   19 +++----
 mm/memcontrol.c            |  117 +++++++++++++++++++++-----------------------
 mm/slab.c                  |    4 +-
 mm/slab.h                  |   24 ++++++---
 mm/slab_common.c           |   10 ++--
 mm/slub.c                  |   10 ++--
 6 files changed, 93 insertions(+), 91 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b20f533a92ca..1a5c33fd40a4 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -458,15 +458,10 @@ static inline void sock_release_memcg(struct sock *sk)
 #ifdef CONFIG_MEMCG_KMEM
 extern struct static_key memcg_kmem_enabled_key;
 
-extern int memcg_limited_groups_array_size;
+extern int memcg_nr_kmem_ids_max;
 
-/*
- * Helper macro to loop through all memcg-specific caches. Callers must still
- * check if the cache is valid (it is either valid or NULL).
- * the slab_mutex must be held when looping through those caches
- */
-#define for_each_memcg_cache_index(_idx)	\
-	for ((_idx) = 0; (_idx) < memcg_limited_groups_array_size; (_idx)++)
+#define for_each_possible_memcg_kmem_id(id) \
+	for ((id) = 0; (id) < memcg_nr_kmem_ids_max; (id)++)
 
 static inline bool memcg_kmem_enabled(void)
 {
@@ -490,7 +485,7 @@ void __memcg_kmem_commit_charge(struct page *page,
 				       struct mem_cgroup *memcg, int order);
 void __memcg_kmem_uncharge_pages(struct page *page, int order);
 
-int memcg_cache_id(struct mem_cgroup *memcg);
+int memcg_kmem_id(struct mem_cgroup *memcg);
 
 struct kmem_cache *
 __memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp);
@@ -591,8 +586,8 @@ memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
 	return __memcg_kmem_get_cache(cachep, gfp);
 }
 #else
-#define for_each_memcg_cache_index(_idx)	\
-	for (; NULL; )
+#define for_each_possible_memcg_kmem_id(id) \
+	for ((id) = 0; 0; )
 
 static inline bool memcg_kmem_enabled(void)
 {
@@ -614,7 +609,7 @@ memcg_kmem_commit_charge(struct page *page, struct mem_cgroup *memcg, int order)
 {
 }
 
-static inline int memcg_cache_id(struct mem_cgroup *memcg)
+static inline int memcg_kmem_id(struct mem_cgroup *memcg)
 {
 	return -1;
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c795c3e388dc..1077091e995f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -357,11 +357,22 @@ struct mem_cgroup {
 	struct cg_proto tcp_mem;
 #endif
 #if defined(CONFIG_MEMCG_KMEM)
+	/*
+	 * Each kmem-limited memory cgroup has a unique id. We use it for
+	 * indexing the arrays that store per cgroup data. An example of such
+	 * an array is kmem_cache->memcg_params->memcg_caches.
+	 *
+	 * We introduce a separate id instead of using cgroup->id to avoid
+	 * waste of memory in sparse environments, where we have a lot of
+	 * memory cgroups, but only a few of them are kmem-limited.
+	 *
+	 * For unlimited cgroups kmem_id equals -1.
+	 */
+	int kmem_id;
+
 	/* analogous to slab_common's slab_caches list, but per-memcg;
 	 * protected by memcg_slab_mutex */
 	struct list_head memcg_slab_caches;
-        /* Index in the kmem_cache->memcg_params->memcg_caches array */
-	int kmemcg_id;
 #endif
 
 	int last_scanned_node;
@@ -610,35 +621,28 @@ static void disarm_sock_keys(struct mem_cgroup *memcg)
 #endif
 
 #ifdef CONFIG_MEMCG_KMEM
+/* used for mem_cgroup->kmem_id allocations */
+static DEFINE_IDA(memcg_kmem_ida);
+
 /*
- * This will be the memcg's index in each cache's ->memcg_params->memcg_caches.
- * The main reason for not using cgroup id for this:
- *  this works better in sparse environments, where we have a lot of memcgs,
- *  but only a few kmem-limited. Or also, if we have, for instance, 200
- *  memcgs, and none but the 200th is kmem-limited, we'd have to have a
- *  200 entry array for that.
- *
- * The current size of the caches array is stored in
- * memcg_limited_groups_array_size.  It will double each time we have to
- * increase it.
+ * Max kmem id should be as large as max cgroup id so that we could enable
+ * kmem-accounting for each memory cgroup.
  */
-static DEFINE_IDA(kmem_limited_groups);
-int memcg_limited_groups_array_size;
+#define MEMCG_KMEM_ID_MAX	MEM_CGROUP_ID_MAX
 
 /*
- * MIN_SIZE is different than 1, because we would like to avoid going through
- * the alloc/free process all the time. In a small machine, 4 kmem-limited
- * cgroups is a reasonable guess. In the future, it could be a parameter or
- * tunable, but that is strictly not necessary.
+ * We keep the maximal number of kmem ids that may exist in the system in the
+ * memcg_nr_kmem_ids_max variable. We use it for the size of the arrays indexed
+ * by kmem id (see the mem_cgroup->kmem_id definition).
+ *
+ * If a newly allocated kmem id is greater or equal to memcg_nr_kmem_ids_max,
+ * we double it and reallocate the arrays so that they have enough space to
+ * store data for the new cgroup.
  *
- * MAX_SIZE should be as large as the number of cgrp_ids. Ideally, we could get
- * this constant directly from cgroup, but it is understandable that this is
- * better kept as an internal representation in cgroup.c. In any case, the
- * cgrp_id space is not getting any smaller, and we don't have to necessarily
- * increase ours as well if it increases.
+ * The updates are done with activate_kmem_mutex held, so one must take it to
+ * guarantee a stable value of memcg_nr_kmem_ids_max.
  */
-#define MEMCG_CACHES_MIN_SIZE 4
-#define MEMCG_CACHES_MAX_SIZE MEM_CGROUP_ID_MAX
+int memcg_nr_kmem_ids_max;
 
 /*
  * A lot of the calls to the cache allocation functions are expected to be
@@ -653,7 +657,7 @@ static void disarm_kmem_keys(struct mem_cgroup *memcg)
 {
 	if (memcg_kmem_is_active(memcg)) {
 		static_key_slow_dec(&memcg_kmem_enabled_key);
-		ida_simple_remove(&kmem_limited_groups, memcg->kmemcg_id);
+		ida_simple_remove(&memcg_kmem_ida, memcg->kmem_id);
 	}
 	/*
 	 * This check can't live in kmem destruction function,
@@ -2930,11 +2934,8 @@ static inline bool memcg_can_account_kmem(struct mem_cgroup *memcg)
  */
 static struct kmem_cache *memcg_params_to_cache(struct memcg_cache_params *p)
 {
-	struct kmem_cache *cachep;
-
 	VM_BUG_ON(p->is_root_cache);
-	cachep = p->root_cache;
-	return cache_from_memcg_idx(cachep, memcg_cache_id(p->memcg));
+	return kmem_cache_of_memcg(p->root_cache, p->memcg);
 }
 
 #ifdef CONFIG_SLABINFO
@@ -3017,29 +3018,24 @@ static void memcg_uncharge_kmem(struct mem_cgroup *memcg, u64 size)
 		css_put(&memcg->css);
 }
 
-/*
- * helper for acessing a memcg's index. It will be used as an index in the
- * child cache array in kmem_cache, and also to derive its name. This function
- * will return -1 when this is not a kmem-limited memcg.
- */
-int memcg_cache_id(struct mem_cgroup *memcg)
+int memcg_kmem_id(struct mem_cgroup *memcg)
 {
-	return memcg ? memcg->kmemcg_id : -1;
+	return memcg ? memcg->kmem_id : -1;
 }
 
-static int memcg_init_cache_id(struct mem_cgroup *memcg)
+static int memcg_init_kmem_id(struct mem_cgroup *memcg)
 {
 	int err = 0;
 	int id, size;
 
 	lockdep_assert_held(&activate_kmem_mutex);
 
-	id = ida_simple_get(&kmem_limited_groups,
-			    0, MEMCG_CACHES_MAX_SIZE, GFP_KERNEL);
+	id = ida_simple_get(&memcg_kmem_ida,
+			    0, MEMCG_KMEM_ID_MAX + 1, GFP_KERNEL);
 	if (id < 0)
 		return id;
 
-	if (id < memcg_limited_groups_array_size)
+	if (id < memcg_nr_kmem_ids_max)
 		goto out_setid;
 
 	/*
@@ -3047,10 +3043,10 @@ static int memcg_init_cache_id(struct mem_cgroup *memcg)
 	 * per memcg data. Let's try to grow them then.
 	 */
 	size = id * 2;
-	if (size < MEMCG_CACHES_MIN_SIZE)
-		size = MEMCG_CACHES_MIN_SIZE;
-	else if (size > MEMCG_CACHES_MAX_SIZE)
-		size = MEMCG_CACHES_MAX_SIZE;
+	if (size < 4)
+		size = 4; /* a good number to start with */
+	if (size > MEMCG_KMEM_ID_MAX + 1)
+		size = MEMCG_KMEM_ID_MAX + 1;
 
 	mutex_lock(&memcg_slab_mutex);
 	err = kmem_cache_memcg_arrays_grow(size);
@@ -3064,14 +3060,14 @@ static int memcg_init_cache_id(struct mem_cgroup *memcg)
 	 * walking over such an array won't get an index out of range provided
 	 * they use an appropriate mutex to protect the array's elements.
 	 */
-	memcg_limited_groups_array_size = size;
+	memcg_nr_kmem_ids_max = size;
 
 out_setid:
-	memcg->kmemcg_id = id;
+	memcg->kmem_id = id;
 	return 0;
 
 out_rmid:
-	ida_simple_remove(&kmem_limited_groups, id);
+	ida_simple_remove(&memcg_kmem_ida, id);
 	return err;
 }
 
@@ -3089,11 +3085,10 @@ static int memcg_prepare_kmem_cache(struct kmem_cache *cachep)
 		return 0;
 
 	/* activate_kmem_mutex guarantees a stable value of
-	 * memcg_limited_groups_array_size */
+	 * memcg_nr_kmem_ids_max */
 	mutex_lock(&activate_kmem_mutex);
 	mutex_lock(&memcg_slab_mutex);
-	ret = kmem_cache_init_memcg_array(cachep,
-			memcg_limited_groups_array_size);
+	ret = kmem_cache_init_memcg_array(cachep, memcg_nr_kmem_ids_max);
 	mutex_unlock(&memcg_slab_mutex);
 	mutex_unlock(&activate_kmem_mutex);
 	return ret;
@@ -3112,14 +3107,14 @@ static void memcg_copy_kmem_cache(struct mem_cgroup *memcg,
 
 	lockdep_assert_held(&memcg_slab_mutex);
 
-	id = memcg_cache_id(memcg);
+	id = memcg_kmem_id(memcg);
 
 	/*
 	 * Since per-memcg caches are created asynchronously on first
 	 * allocation (see memcg_kmem_get_cache()), several threads can try to
 	 * create the same cache, but only one of them may succeed.
 	 */
-	if (cache_from_memcg_idx(root_cache, id))
+	if (kmem_cache_of_memcg_by_id(root_cache, id))
 		return;
 
 	cgroup_name(memcg->css.cgroup, memcg_name, NAME_MAX + 1);
@@ -3136,8 +3131,8 @@ static void memcg_copy_kmem_cache(struct mem_cgroup *memcg,
 	list_add(&cachep->memcg_params->list, &memcg->memcg_slab_caches);
 
 	/*
-	 * Since readers won't lock (see cache_from_memcg_idx()), we need a
-	 * barrier here to ensure nobody will see the kmem_cache partially
+	 * Since readers won't lock (see kmem_cache_of_memcg_by_id()), we need
+	 * a barrier here to ensure nobody will see the kmem_cache partially
 	 * initialized.
 	 */
 	smp_wmb();
@@ -3159,7 +3154,7 @@ static int memcg_destroy_kmem_cache_copy(struct kmem_cache *cachep)
 
 	root_cache = cachep->memcg_params->root_cache;
 	memcg = cachep->memcg_params->memcg;
-	id = memcg_cache_id(memcg);
+	id = memcg_kmem_id(memcg);
 
 	/*
 	 * Since memcg_caches arrays can be accessed using only slab_mutex for
@@ -3226,8 +3221,8 @@ int __kmem_cache_destroy_memcg_copies(struct kmem_cache *cachep)
 	BUG_ON(!is_root_cache(cachep));
 
 	mutex_lock(&memcg_slab_mutex);
-	for_each_memcg_cache_index(i) {
-		c = cache_from_memcg_idx(cachep, i);
+	for_each_possible_memcg_kmem_id(i) {
+		c = kmem_cache_of_memcg_by_id(cachep, i);
 		if (!c)
 			continue;
 
@@ -3371,7 +3366,7 @@ struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep,
 	if (!memcg_can_account_kmem(memcg))
 		goto out;
 
-	memcg_cachep = cache_from_memcg_idx(cachep, memcg_cache_id(memcg));
+	memcg_cachep = kmem_cache_of_memcg(cachep, memcg);
 	if (likely(memcg_cachep)) {
 		cachep = memcg_cachep;
 		goto out;
@@ -4945,7 +4940,7 @@ static int __memcg_activate_kmem(struct mem_cgroup *memcg,
 	if (err)
 		goto out;
 
-	err = memcg_init_cache_id(memcg);
+	err = memcg_init_kmem_id(memcg);
 	if (err)
 		goto out;
 
@@ -5676,7 +5671,7 @@ static int memcg_init_kmem(struct mem_cgroup *memcg, struct cgroup_subsys *ss)
 {
 	int ret;
 
-	memcg->kmemcg_id = -1;
+	memcg->kmem_id = -1;
 	ret = memcg_propagate_kmem(memcg);
 	if (ret)
 		return ret;
diff --git a/mm/slab.c b/mm/slab.c
index 25317fd1daa2..194322a634e2 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3844,8 +3844,8 @@ static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
 		return ret;
 
 	VM_BUG_ON(!mutex_is_locked(&slab_mutex));
-	for_each_memcg_cache_index(i) {
-		c = cache_from_memcg_idx(cachep, i);
+	for_each_possible_memcg_kmem_id(i) {
+		c = kmem_cache_of_memcg_by_id(cachep, i);
 		if (c)
 			/* return value determined by the parent cache only */
 			__do_tune_cpucache(c, limit, batchcount, shared, gfp);
diff --git a/mm/slab.h b/mm/slab.h
index ba834860fbfd..61f833c569e7 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -145,11 +145,11 @@ static inline const char *cache_name(struct kmem_cache *s)
  * created a memcg's cache is destroyed only along with the root cache, it is
  * true if we are going to allocate from the cache or hold a reference to the
  * root cache by other means. Otherwise, we should hold either the slab_mutex
- * or the memcg's slab_caches_mutex while calling this function and accessing
- * the returned value.
+ * or the memcg_slab_mutex while calling this function and accessing the
+ * returned value.
  */
 static inline struct kmem_cache *
-cache_from_memcg_idx(struct kmem_cache *s, int idx)
+kmem_cache_of_memcg_by_id(struct kmem_cache *s, int id)
 {
 	struct kmem_cache *cachep;
 	struct memcg_cache_params *params;
@@ -159,18 +159,24 @@ cache_from_memcg_idx(struct kmem_cache *s, int idx)
 
 	rcu_read_lock();
 	params = rcu_dereference(s->memcg_params);
-	cachep = params->memcg_caches[idx];
+	cachep = params->memcg_caches[id];
 	rcu_read_unlock();
 
 	/*
 	 * Make sure we will access the up-to-date value. The code updating
 	 * memcg_caches issues a write barrier to match this (see
-	 * memcg_register_cache()).
+	 * memcg_copy_kmem_cache()).
 	 */
 	smp_read_barrier_depends();
 	return cachep;
 }
 
+static inline struct kmem_cache *
+kmem_cache_of_memcg(struct kmem_cache *s, struct mem_cgroup *memcg)
+{
+	return kmem_cache_of_memcg_by_id(s, memcg_kmem_id(memcg));
+}
+
 static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
 {
 	if (is_root_cache(s))
@@ -214,7 +220,13 @@ static inline const char *cache_name(struct kmem_cache *s)
 }
 
 static inline struct kmem_cache *
-cache_from_memcg_idx(struct kmem_cache *s, int idx)
+kmem_cache_of_memcg_by_id(struct kmem_cache *s, int id)
+{
+	return NULL;
+}
+
+static inline struct kmem_cache *
+kmem_cache_of_memcg(struct kmem_cache *s, struct mem_cgroup *memcg)
 {
 	return NULL;
 }
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 36c7d32a6f97..876b40bfe360 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -98,11 +98,11 @@ static int __kmem_cache_init_memcg_array(struct kmem_cache *s, int nr_entries)
 
 	new->is_root_cache = true;
 	if (old) {
-		for_each_memcg_cache_index(i)
+		for_each_possible_memcg_kmem_id(i)
 			new->memcg_caches[i] = old->memcg_caches[i];
 	}
 
-	/* matching rcu_dereference is in cache_from_memcg_idx */
+	/* matching rcu_dereference is in kmem_cache_of_memcg_by_id */
 	rcu_assign_pointer(s->memcg_params, new);
 	if (old)
 		kfree_rcu(old, rcu_head);
@@ -332,7 +332,7 @@ struct kmem_cache *kmem_cache_request_memcg_copy(struct kmem_cache *cachep,
 	memcg_params->root_cache = cachep;
 
 	cache_name = kasprintf(GFP_KERNEL, "%s(%d:%s)", cachep->name,
-			       memcg_cache_id(memcg), memcg_name);
+			       memcg_kmem_id(memcg), memcg_name);
 	if (!cache_name)
 		goto out_unlock;
 
@@ -755,8 +755,8 @@ memcg_accumulate_slabinfo(struct kmem_cache *s, struct slabinfo *info)
 	if (!is_root_cache(s))
 		return;
 
-	for_each_memcg_cache_index(i) {
-		c = cache_from_memcg_idx(s, i);
+	for_each_possible_memcg_kmem_id(i) {
+		c = kmem_cache_of_memcg_by_id(s, i);
 		if (!c)
 			continue;
 
diff --git a/mm/slub.c b/mm/slub.c
index aa30932c5190..006e6bfe257c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3772,8 +3772,8 @@ __kmem_cache_alias(const char *name, size_t size, size_t align,
 		s->object_size = max(s->object_size, (int)size);
 		s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
 
-		for_each_memcg_cache_index(i) {
-			c = cache_from_memcg_idx(s, i);
+		for_each_possible_memcg_kmem_id(i) {
+			c = kmem_cache_of_memcg_by_id(s, i);
 			if (!c)
 				continue;
 			c->object_size = s->object_size;
@@ -5062,8 +5062,8 @@ static ssize_t slab_attr_store(struct kobject *kobj,
 		 * directly either failed or succeeded, in which case we loop
 		 * through the descendants with best-effort propagation.
 		 */
-		for_each_memcg_cache_index(i) {
-			struct kmem_cache *c = cache_from_memcg_idx(s, i);
+		for_each_possible_memcg_kmem_id(i) {
+			struct kmem_cache *c = kmem_cache_of_memcg_by_id(s, i);
 			if (c)
 				attribute->store(c, buf, len);
 		}
@@ -5198,7 +5198,7 @@ static char *create_unique_id(struct kmem_cache *s)
 #ifdef CONFIG_MEMCG_KMEM
 	if (!is_root_cache(s))
 		p += sprintf(p, "-%08d",
-				memcg_cache_id(s->memcg_params->memcg));
+			     memcg_kmem_id(s->memcg_params->memcg));
 #endif
 
 	BUG_ON(p > name + ID_STR_LENGTH - 1);
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH -mm v2 0/6] memcg/kmem: cleanup naming and callflows
  2014-04-27  9:04 ` Vladimir Davydov
@ 2014-05-08  6:40   ` Vladimir Davydov
  -1 siblings, 0 replies; 16+ messages in thread
From: Vladimir Davydov @ 2014-05-08  6:40 UTC (permalink / raw)
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

It seems this set is too big for anybody to spend time reviewing, so
I'm going to split it up and resend it (I've actually already started)
to ease review. Please ignore this one, and sorry for the noise.

Thanks.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH -mm v2 0/6] memcg/kmem: cleanup naming and callflows
@ 2014-05-08  6:40   ` Vladimir Davydov
  0 siblings, 0 replies; 16+ messages in thread
From: Vladimir Davydov @ 2014-05-08  6:40 UTC (permalink / raw)
  To: akpm; +Cc: mhocko, hannes, glommer, cl, penberg, linux-kernel, linux-mm, devel

It seems this set is too big for anybody to spend time reviewing, so
I'm going to split it up and resend it (I've actually already started)
to ease review. Please ignore this one, and sorry for the noise.

Thanks.


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2014-05-08  6:40 UTC | newest]

Thread overview: 16+ messages
2014-04-27  9:04 [PATCH -mm v2 0/6] memcg/kmem: cleanup naming and callflows Vladimir Davydov
2014-04-27  9:04 ` Vladimir Davydov
2014-04-27  9:04 ` [PATCH -mm v2 1/6] memcg: get rid of memcg_create_cache_name Vladimir Davydov
2014-04-27  9:04   ` Vladimir Davydov
2014-04-27  9:04 ` [PATCH -mm v2 2/6] memcg: allocate memcg_caches array on first per memcg cache creation Vladimir Davydov
2014-04-27  9:04   ` Vladimir Davydov
2014-04-27  9:04 ` [PATCH -mm v2 3/6] memcg: cleanup memcg_caches arrays relocation path Vladimir Davydov
2014-04-27  9:04   ` Vladimir Davydov
2014-04-27  9:04 ` [PATCH -mm v2 4/6] memcg: get rid of memcg_{alloc,free}_cache_params Vladimir Davydov
2014-04-27  9:04   ` Vladimir Davydov
2014-04-27  9:04 ` [PATCH -mm v2 5/6] memcg: cleanup kmem cache creation/destruction functions naming Vladimir Davydov
2014-04-27  9:04   ` Vladimir Davydov
2014-04-27  9:04 ` [PATCH -mm v2 6/6] memcg: cleanup kmem_id-related naming Vladimir Davydov
2014-04-27  9:04   ` Vladimir Davydov
2014-05-08  6:40 ` [PATCH -mm v2 0/6] memcg/kmem: cleanup naming and callflows Vladimir Davydov
2014-05-08  6:40   ` Vladimir Davydov
