* Common [0/9] Sl[auo]b: Common code rework V7
@ 2012-07-31 17:36 Christoph Lameter
  2012-07-31 17:36 ` Common [1/9] slub: Add debugging to verify correct cache use on kmem_cache_free() Christoph Lameter
                   ` (8 more replies)
  0 siblings, 9 replies; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim


V6->V7:
- Omit pieces that were merged for 3.6
- Fix issues pointed out by Glauber.
- Include the patches up to the point at which
  the slab name handling is unified

V5->V6:
- Patches against Pekka's for-next tree.
- Go slow and cut down to just patches that are safe
  (there will likely be some churn already due to the
  mutex unification between slabs)
- More to come next week when I have more time (it
  took me almost the whole week to catch up after
  being gone for a while).

V4->V5
- Rediff against current upstream + Pekka's cleanup branch.

V3->V4:
- Do not use the COMMON macro anymore.
- Fixup various issues
- No general sysfs support yet due to lockdep issues with
  keys in kmalloc'ed memory.

V2->V3:
- Incorporate more feedback from Joonsoo Kim and Glauber Costa
- And a couple more patches to deal with slab duping and move
  more code to slab_common.c

V1->V2:
- Incorporate glommer's feedback.
- Add 2 more patches dealing with common code in kmem_cache_destroy

This is a series of patches that extracts common functionality from
slab allocators into a common code base. The intent is to standardize
as much of the allocator behavior as possible while keeping the
distinctive features of each allocator, which are mostly due to their
storage formats and serialization approaches.

This patchset makes a beginning by extracting the common functionality
of kmem_cache_create() and kmem_cache_destroy(). However, there are
numerous other areas where such work could be beneficial:

1. Extract the sysfs support from SLUB and make it common. That way
   all allocators have a common sysfs API and can be handled in the
   same way regardless of the allocator chosen.

2. Extract the error reporting and checking from SLUB and make
   it available for all allocators. This means that all allocators
   will gain its resiliency and error handling capabilities.

3. Extract the memory hotplug and cpu hotplug handling. SLAB seems
   to be more sophisticated here. Having common code will make it
   easier to maintain the allocator-specific parts.

4. Extract the aliasing capability of SLUB. This enables fast
   slab creation without creating too many additional slab caches:
   the arrays of caches of varying sizes in numerous subsystems
   then no longer cause the creation of numerous slab caches. Storage
   density is increased and the cache footprint is reduced.
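
The aliasing idea in point 4 can be sketched as a small user-space model
(a simplified illustration, not the kernel code; the names here are
hypothetical, and the real merge check also compares flags, alignment
and constructors):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* User-space model of slab aliasing: a create request whose size is
 * close enough to an existing cache reuses that cache instead of
 * instantiating a new one. */

struct cache {
	char name[32];
	size_t size;
	int refcount;		/* > 1 means the cache is aliased */
};

#define MAX_CACHES 16
static struct cache caches[MAX_CACHES];
static int nr_caches;

/* Look for an existing cache whose object size is compatible. */
static struct cache *find_mergeable_model(size_t size)
{
	for (int i = 0; i < nr_caches; i++)
		if (caches[i].size >= size &&
		    caches[i].size < size + sizeof(void *))
			return &caches[i];
	return NULL;
}

struct cache *cache_create_model(const char *name, size_t size)
{
	struct cache *s = find_mergeable_model(size);

	if (s) {
		s->refcount++;	/* alias: reuse storage, no new cache */
		return s;
	}
	s = &caches[nr_caches++];
	strncpy(s->name, name, sizeof(s->name) - 1);
	s->size = size;
	s->refcount = 1;	/* a real instantiation */
	return s;
}
```

Two caches of the same size end up sharing one instantiation, which is
exactly how storage density improves.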

Ultimately it is to be hoped that the special code for each allocator
shrinks to a minimum. This will also make it easier to make modifications
to allocators.

In the far future one could envision that the current allocators become
just storage algorithms that can be chosen based on the needs of the
subsystem, e.g.:

CPU-cache-dependent performance		= Bonwick allocator (SLAB)
Minimal cycle count and cache footprint	= SLUB
Maximum storage density			= K&R allocator (SLOB)


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>


* Common [1/9] slub: Add debugging to verify correct cache use on kmem_cache_free()
  2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
@ 2012-07-31 17:36 ` Christoph Lameter
  2012-07-31 17:36 ` Common [2/9] slub: Use kmem_cache for the kmem_cache structure Christoph Lameter
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

[-- Attachment #1: slub_new_debug --]
[-- Type: text/plain, Size: 1026 bytes --]

Add additional debugging to check that the object actually comes from the
cache the caller claims it does. Currently such a mismatch trips up some
other debugging code, and it takes a lot of effort to infer from that
what was actually happening.

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 11:46:01.544493395 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 11:53:21.832078581 -0500
@@ -2583,6 +2583,13 @@
 
 	page = virt_to_head_page(x);
 
+	if (kmem_cache_debug(s) && page->slab != s) {
+		printk("kmem_cache_free: Wrong slab cache. %s but object"
+			" is from  %s\n", page->slab->name, s->name);
+		WARN_ON(1);
+		return;
+	}
+
 	slab_free(s, page, x, _RET_IP_);
 
 	trace_kmem_cache_free(_RET_IP_, x);
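
For illustration, the effect of the new check can be modeled in
user-space C (all names here are hypothetical; the kernel finds the
owning cache via virt_to_head_page(x)->slab, while this model stores it
in an object header instead):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Model: every object remembers which cache it came from, and freeing
 * it through the wrong cache is reported explicitly instead of quietly
 * tripping up unrelated debug code later. */

struct cache { const char *name; };

struct obj_header {
	struct cache *owner;
};

void *cache_alloc_model(struct cache *s, size_t size)
{
	struct obj_header *h = malloc(sizeof(*h) + size);

	h->owner = s;
	return h + 1;
}

/* Returns 0 on success, -1 if the object belongs to another cache. */
int cache_free_model(struct cache *s, void *x)
{
	struct obj_header *h = (struct obj_header *)x - 1;

	if (h->owner != s) {
		fprintf(stderr,
			"cache_free: Wrong slab cache. %s but object is from %s\n",
			s->name, h->owner->name);
		return -1;
	}
	free(h);
	return 0;
}
```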


* Common [2/9] slub: Use kmem_cache for the kmem_cache structure
  2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
  2012-07-31 17:36 ` Common [1/9] slub: Add debugging to verify correct cache use on kmem_cache_free() Christoph Lameter
@ 2012-07-31 17:36 ` Christoph Lameter
  2012-08-01  8:42   ` Glauber Costa
  2012-07-31 17:36 ` Common [3/9] Move list_add() to slab_common.c Christoph Lameter
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

[-- Attachment #1: slub_use_kmem_cache --]
[-- Type: text/plain, Size: 1198 bytes --]

Do not use kmalloc() but kmem_cache_alloc() for the allocation
of the kmem_cache structures in slub.

This is the way it is supposed to be. Recent merges lost
the freeing of the kmem_cache structure, so this patch also
fixes a memory leak on kmem_cache_destroy() by adding
the missing free action to sysfs_slab_remove().

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 11:58:40.617457553 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 11:58:47.529574942 -0500
@@ -3938,7 +3938,7 @@
 	if (!n)
 		return NULL;
 
-	s = kmalloc(kmem_size, GFP_KERNEL);
+	s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
 	if (s) {
 		if (kmem_cache_open(s, n,
 				size, align, flags, ctor)) {
@@ -5318,6 +5318,8 @@
 	kobject_uevent(&s->kobj, KOBJ_REMOVE);
 	kobject_del(&s->kobj);
 	kobject_put(&s->kobj);
+	kfree(s->name);
+	kmem_cache_free(kmem_cache, s);
 }
 
 /*


* Common [3/9] Move list_add() to slab_common.c
  2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
  2012-07-31 17:36 ` Common [1/9] slub: Add debugging to verify correct cache use on kmem_cache_free() Christoph Lameter
  2012-07-31 17:36 ` Common [2/9] slub: Use kmem_cache for the kmem_cache structure Christoph Lameter
@ 2012-07-31 17:36 ` Christoph Lameter
  2012-07-31 17:36 ` Common [4/9] Extract a common function for kmem_cache_destroy Christoph Lameter
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

[-- Attachment #1: move_list_add --]
[-- Type: text/plain, Size: 3247 bytes --]

Move the code that appends the new kmem_cache to the list of slab caches
into kmem_cache_create() in the shared code.

This is possible now since the acquisition of the mutex was moved into
kmem_cache_create().

Reviewed-by: Glauber Costa <glommer@parallels.com>
Reviewed-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>


---
 mm/slab.c        |    7 +++++--
 mm/slab_common.c |    7 +++++++
 mm/slub.c        |    2 --
 3 files changed, 12 insertions(+), 4 deletions(-)

Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c	2012-07-31 11:57:06.959870010 -0500
+++ linux-2.6/mm/slab_common.c	2012-07-31 11:59:47.226589653 -0500
@@ -98,6 +98,13 @@
 
 	s = __kmem_cache_create(name, size, align, flags, ctor);
 
+	/*
+	 * Check if the slab has actually been created and if it was a
+	 * real instantiation. Aliases do not belong on the list
+	 */
+	if (s && s->refcount == 1)
+		list_add(&s->list, &slab_caches);
+
 #ifdef CONFIG_DEBUG_VM
 oops:
 #endif
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c	2012-07-31 11:46:01.524493044 -0500
+++ linux-2.6/mm/slab.c	2012-07-31 11:59:47.226589653 -0500
@@ -1538,6 +1538,7 @@
 					ARCH_KMALLOC_FLAGS|SLAB_PANIC,
 					NULL);
 
+	list_add(&sizes[INDEX_AC].cs_cachep->list, &slab_caches);
 	if (INDEX_AC != INDEX_L3) {
 		sizes[INDEX_L3].cs_cachep =
 			__kmem_cache_create(names[INDEX_L3].name,
@@ -1545,6 +1546,7 @@
 				ARCH_KMALLOC_MINALIGN,
 				ARCH_KMALLOC_FLAGS|SLAB_PANIC,
 				NULL);
+		list_add(&sizes[INDEX_L3].cs_cachep->list, &slab_caches);
 	}
 
 	slab_early_init = 0;
@@ -1563,6 +1565,7 @@
 					ARCH_KMALLOC_MINALIGN,
 					ARCH_KMALLOC_FLAGS|SLAB_PANIC,
 					NULL);
+			list_add(&sizes->cs_cachep->list, &slab_caches);
 		}
 #ifdef CONFIG_ZONE_DMA
 		sizes->cs_dmacachep = __kmem_cache_create(
@@ -1572,6 +1575,7 @@
 					ARCH_KMALLOC_FLAGS|SLAB_CACHE_DMA|
 						SLAB_PANIC,
 					NULL);
+		list_add(&sizes->cs_dmacachep->list, &slab_caches);
 #endif
 		sizes++;
 		names++;
@@ -2432,6 +2436,7 @@
 	}
 	cachep->ctor = ctor;
 	cachep->name = name;
+	cachep->refcount = 1;
 
 	if (setup_cpu_cache(cachep, gfp)) {
 		__kmem_cache_destroy(cachep);
@@ -2448,8 +2453,6 @@
 		slab_set_debugobj_lock_classes(cachep);
 	}
 
-	/* cache setup completed, link it into the list */
-	list_add(&cachep->list, &slab_caches);
 	return cachep;
 }
 
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 11:58:47.529574942 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 11:59:47.226589653 -0500
@@ -3944,7 +3944,6 @@
 				size, align, flags, ctor)) {
 			int r;
 
-			list_add(&s->list, &slab_caches);
 			mutex_unlock(&slab_mutex);
 			r = sysfs_slab_add(s);
 			mutex_lock(&slab_mutex);
@@ -3952,7 +3951,6 @@
 			if (!r)
 				return s;
 
-			list_del(&s->list);
 			kmem_cache_close(s);
 		}
 		kfree(s);


* Common [4/9] Extract a common function for kmem_cache_destroy
  2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
                   ` (2 preceding siblings ...)
  2012-07-31 17:36 ` Common [3/9] Move list_add() to slab_common.c Christoph Lameter
@ 2012-07-31 17:36 ` Christoph Lameter
  2012-07-31 17:36 ` Common [5/9] Always use the name "kmem_cache" for the slab cache with the kmem_cache structure Christoph Lameter
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

[-- Attachment #1: kmem_cache_destroy --]
[-- Type: text/plain, Size: 7170 bytes --]

kmem_cache_destroy() does basically the same thing in all allocators.

Extract the common code. This is easy since we already have common
mutex handling.

Signed-off-by: Christoph Lameter <cl@linux.com>


---
 mm/slab.c        |   55 +++----------------------------------------------------
 mm/slab.h        |    3 +++
 mm/slab_common.c |   25 +++++++++++++++++++++++++
 mm/slob.c        |   11 +++++++----
 mm/slub.c        |   35 +++++++++++------------------------
 5 files changed, 49 insertions(+), 80 deletions(-)

Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c	2012-07-31 12:15:59.691331459 -0500
+++ linux-2.6/mm/slab_common.c	2012-07-31 12:16:00.531346043 -0500
@@ -121,6 +121,31 @@
 }
 EXPORT_SYMBOL(kmem_cache_create);
 
+void kmem_cache_destroy(struct kmem_cache *s)
+{
+	get_online_cpus();
+	mutex_lock(&slab_mutex);
+	s->refcount--;
+	if (!s->refcount) {
+		list_del(&s->list);
+
+		if (!__kmem_cache_shutdown(s)) {
+			if (s->flags & SLAB_DESTROY_BY_RCU)
+				rcu_barrier();
+
+			__kmem_cache_destroy(s);
+		} else {
+			list_add(&s->list, &slab_caches);
+			printk(KERN_ERR "kmem_cache_destroy %s: Slab cache still has objects\n",
+				s->name);
+			dump_stack();
+		}
+	}
+	mutex_unlock(&slab_mutex);
+	put_online_cpus();
+}
+EXPORT_SYMBOL(kmem_cache_destroy);
+
 int slab_is_available(void)
 {
 	return slab_state >= UP;
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c	2012-07-31 12:15:59.691331459 -0500
+++ linux-2.6/mm/slab.c	2012-07-31 12:16:00.531346043 -0500
@@ -779,16 +779,6 @@
 	*left_over = slab_size - nr_objs*buffer_size - mgmt_size;
 }
 
-#define slab_error(cachep, msg) __slab_error(__func__, cachep, msg)
-
-static void __slab_error(const char *function, struct kmem_cache *cachep,
-			char *msg)
-{
-	printk(KERN_ERR "slab error in %s(): cache `%s': %s\n",
-	       function, cachep->name, msg);
-	dump_stack();
-}
-
 /*
  * By default on NUMA we use alien caches to stage the freeing of
  * objects allocated from other nodes. This causes massive memory
@@ -2055,7 +2045,7 @@
 	}
 }
 
-static void __kmem_cache_destroy(struct kmem_cache *cachep)
+void __kmem_cache_destroy(struct kmem_cache *cachep)
 {
 	int i;
 	struct kmem_list3 *l3;
@@ -2612,49 +2602,10 @@
 }
 EXPORT_SYMBOL(kmem_cache_shrink);
 
-/**
- * kmem_cache_destroy - delete a cache
- * @cachep: the cache to destroy
- *
- * Remove a &struct kmem_cache object from the slab cache.
- *
- * It is expected this function will be called by a module when it is
- * unloaded.  This will remove the cache completely, and avoid a duplicate
- * cache being allocated each time a module is loaded and unloaded, if the
- * module doesn't have persistent in-kernel storage across loads and unloads.
- *
- * The cache must be empty before calling this function.
- *
- * The caller must guarantee that no one will allocate memory from the cache
- * during the kmem_cache_destroy().
- */
-void kmem_cache_destroy(struct kmem_cache *cachep)
+int __kmem_cache_shutdown(struct kmem_cache *cachep)
 {
-	BUG_ON(!cachep || in_interrupt());
-
-	/* Find the cache in the chain of caches. */
-	get_online_cpus();
-	mutex_lock(&slab_mutex);
-	/*
-	 * the chain is never empty, cache_cache is never destroyed
-	 */
-	list_del(&cachep->list);
-	if (__cache_shrink(cachep)) {
-		slab_error(cachep, "Can't free all objects");
-		list_add(&cachep->list, &slab_caches);
-		mutex_unlock(&slab_mutex);
-		put_online_cpus();
-		return;
-	}
-
-	if (unlikely(cachep->flags & SLAB_DESTROY_BY_RCU))
-		rcu_barrier();
-
-	__kmem_cache_destroy(cachep);
-	mutex_unlock(&slab_mutex);
-	put_online_cpus();
+	return __cache_shrink(cachep);
 }
-EXPORT_SYMBOL(kmem_cache_destroy);
 
 /*
  * Get the memory for a slab management obj.
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h	2012-07-31 12:15:50.027163690 -0500
+++ linux-2.6/mm/slab.h	2012-07-31 12:16:00.531346043 -0500
@@ -30,4 +30,7 @@
 struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
 	size_t align, unsigned long flags, void (*ctor)(void *));
 
+int __kmem_cache_shutdown(struct kmem_cache *);
+void __kmem_cache_destroy(struct kmem_cache *);
+
 #endif
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c	2012-07-31 12:15:50.047164037 -0500
+++ linux-2.6/mm/slob.c	2012-07-31 12:16:00.531346043 -0500
@@ -538,14 +538,11 @@
 	return c;
 }
 
-void kmem_cache_destroy(struct kmem_cache *c)
+void __kmem_cache_destroy(struct kmem_cache *c)
 {
 	kmemleak_free(c);
-	if (c->flags & SLAB_DESTROY_BY_RCU)
-		rcu_barrier();
 	slob_free(c, sizeof(struct kmem_cache));
 }
-EXPORT_SYMBOL(kmem_cache_destroy);
 
 void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 {
@@ -613,6 +610,12 @@
 }
 EXPORT_SYMBOL(kmem_cache_size);
 
+int __kmem_cache_shutdown(struct kmem_cache *c)
+{
+	/* No way to check for remaining objects */
+	return 0;
+}
+
 int kmem_cache_shrink(struct kmem_cache *d)
 {
 	return 0;
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 12:15:59.691331459 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 12:16:18.271653725 -0500
@@ -622,7 +622,7 @@
 	print_trailer(s, page, object);
 }
 
-static void slab_err(struct kmem_cache *s, struct page *page, char *fmt, ...)
+static void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...)
 {
 	va_list args;
 	char buf[100];
@@ -3115,7 +3115,7 @@
 				     sizeof(long), GFP_ATOMIC);
 	if (!map)
 		return;
-	slab_err(s, page, "%s", text);
+	slab_err(s, page, text, s->name);
 	slab_lock(page);
 
 	get_map(s, page, map);
@@ -3147,7 +3147,7 @@
 			discard_slab(s, page);
 		} else {
 			list_slab_objects(s, page,
-				"Objects remaining on kmem_cache_close()");
+			"Objects remaining in %s on kmem_cache_close()");
 		}
 	}
 }
@@ -3173,29 +3173,15 @@
 	return 0;
 }
 
-/*
- * Close a cache and release the kmem_cache structure
- * (must be used for caches created using kmem_cache_create)
- */
-void kmem_cache_destroy(struct kmem_cache *s)
+int __kmem_cache_shutdown(struct kmem_cache *s)
 {
-	mutex_lock(&slab_mutex);
-	s->refcount--;
-	if (!s->refcount) {
-		list_del(&s->list);
-		mutex_unlock(&slab_mutex);
-		if (kmem_cache_close(s)) {
-			printk(KERN_ERR "SLUB %s: %s called for cache that "
-				"still has objects.\n", s->name, __func__);
-			dump_stack();
-		}
-		if (s->flags & SLAB_DESTROY_BY_RCU)
-			rcu_barrier();
-		sysfs_slab_remove(s);
-	} else
-		mutex_unlock(&slab_mutex);
+	return kmem_cache_close(s);
+}
+
+void __kmem_cache_destroy(struct kmem_cache *s)
+{
+	sysfs_slab_remove(s);
 }
-EXPORT_SYMBOL(kmem_cache_destroy);
 
 /********************************************************************
  *		Kmalloc subsystem
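
The refcount-and-shutdown logic of the common kmem_cache_destroy() above
can be modeled in a few lines of user-space C (a sketch with hypothetical
names; locking, RCU and hotplug handling are omitted):

```c
#include <assert.h>

/* Model: only the last reference tears the cache down, and only if the
 * allocator-specific shutdown step reports no live objects; otherwise
 * the cache is put back on the list and destruction is refused. */

struct cache {
	int refcount;
	int live_objects;
	int on_list;
	int destroyed;
};

void cache_destroy_model(struct cache *s)
{
	if (--s->refcount)
		return;			/* aliases still reference this cache */
	s->on_list = 0;			/* list_del(&s->list) */
	if (s->live_objects == 0)
		s->destroyed = 1;	/* __kmem_cache_destroy() and free */
	else
		s->on_list = 1;		/* put it back: still has objects */
}
```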


* Common [5/9] Always use the name "kmem_cache" for the slab cache with the kmem_cache structure.
  2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
                   ` (3 preceding siblings ...)
  2012-07-31 17:36 ` Common [4/9] Extract a common function for kmem_cache_destroy Christoph Lameter
@ 2012-07-31 17:36 ` Christoph Lameter
  2012-07-31 17:36 ` Common [6/9] Move freeing of kmem_cache structure to common code Christoph Lameter
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

[-- Attachment #1: common_kmem_cache_name --]
[-- Type: text/plain, Size: 9153 bytes --]

Make all allocators use the name "kmem_cache" for the slab cache that holds the kmem_cache structures.

Reviewed-by: Glauber Costa <glommer@parallels.com>
Reviewed-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>

---
 mm/slab.c        |   72 ++++++++++++++++++++++++++++---------------------------
 mm/slab.h        |    6 ++++
 mm/slab_common.c |    1 
 mm/slob.c        |    9 ++++++
 mm/slub.c        |    2 -
 5 files changed, 52 insertions(+), 38 deletions(-)

Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c	2012-07-31 12:16:00.531346043 -0500
+++ linux-2.6/mm/slab.c	2012-07-31 12:17:55.869349816 -0500
@@ -554,9 +554,9 @@
     { {0, BOOT_CPUCACHE_ENTRIES, 1, 0} };
 
 /* internal cache of cache description objs */
-static struct kmem_list3 *cache_cache_nodelists[MAX_NUMNODES];
-static struct kmem_cache cache_cache = {
-	.nodelists = cache_cache_nodelists,
+static struct kmem_list3 *kmem_cache_nodelists[MAX_NUMNODES];
+static struct kmem_cache kmem_cache_boot = {
+	.nodelists = kmem_cache_nodelists,
 	.batchcount = 1,
 	.limit = BOOT_CPUCACHE_ENTRIES,
 	.shared = 1,
@@ -1442,15 +1442,17 @@
 	int order;
 	int node;
 
+	kmem_cache = &kmem_cache_boot;
+
 	if (num_possible_nodes() == 1)
 		use_alien_caches = 0;
 
 	for (i = 0; i < NUM_INIT_LISTS; i++) {
 		kmem_list3_init(&initkmem_list3[i]);
 		if (i < MAX_NUMNODES)
-			cache_cache.nodelists[i] = NULL;
+			kmem_cache->nodelists[i] = NULL;
 	}
-	set_up_list3s(&cache_cache, CACHE_CACHE);
+	set_up_list3s(kmem_cache, CACHE_CACHE);
 
 	/*
 	 * Fragmentation resistance on low memory - only use bigger
@@ -1462,9 +1464,9 @@
 
 	/* Bootstrap is tricky, because several objects are allocated
 	 * from caches that do not exist yet:
-	 * 1) initialize the cache_cache cache: it contains the struct
-	 *    kmem_cache structures of all caches, except cache_cache itself:
-	 *    cache_cache is statically allocated.
+	 * 1) initialize the kmem_cache cache: it contains the struct
+	 *    kmem_cache structures of all caches, except kmem_cache itself:
+	 *    kmem_cache is statically allocated.
 	 *    Initially an __init data area is used for the head array and the
 	 *    kmem_list3 structures, it's replaced with a kmalloc allocated
 	 *    array at the end of the bootstrap.
@@ -1473,43 +1475,43 @@
 	 *    An __init data area is used for the head array.
 	 * 3) Create the remaining kmalloc caches, with minimally sized
 	 *    head arrays.
-	 * 4) Replace the __init data head arrays for cache_cache and the first
+	 * 4) Replace the __init data head arrays for kmem_cache and the first
 	 *    kmalloc cache with kmalloc allocated arrays.
-	 * 5) Replace the __init data for kmem_list3 for cache_cache and
+	 * 5) Replace the __init data for kmem_list3 for kmem_cache and
 	 *    the other cache's with kmalloc allocated memory.
 	 * 6) Resize the head arrays of the kmalloc caches to their final sizes.
 	 */
 
 	node = numa_mem_id();
 
-	/* 1) create the cache_cache */
+	/* 1) create the kmem_cache */
 	INIT_LIST_HEAD(&slab_caches);
-	list_add(&cache_cache.list, &slab_caches);
-	cache_cache.colour_off = cache_line_size();
-	cache_cache.array[smp_processor_id()] = &initarray_cache.cache;
-	cache_cache.nodelists[node] = &initkmem_list3[CACHE_CACHE + node];
+	list_add(&kmem_cache->list, &slab_caches);
+	kmem_cache->colour_off = cache_line_size();
+	kmem_cache->array[smp_processor_id()] = &initarray_cache.cache;
+	kmem_cache->nodelists[node] = &initkmem_list3[CACHE_CACHE + node];
 
 	/*
 	 * struct kmem_cache size depends on nr_node_ids & nr_cpu_ids
 	 */
-	cache_cache.size = offsetof(struct kmem_cache, array[nr_cpu_ids]) +
+	kmem_cache->size = offsetof(struct kmem_cache, array[nr_cpu_ids]) +
 				  nr_node_ids * sizeof(struct kmem_list3 *);
-	cache_cache.object_size = cache_cache.size;
-	cache_cache.size = ALIGN(cache_cache.size,
+	kmem_cache->object_size = kmem_cache->size;
+	kmem_cache->size = ALIGN(kmem_cache->object_size,
 					cache_line_size());
-	cache_cache.reciprocal_buffer_size =
-		reciprocal_value(cache_cache.size);
+	kmem_cache->reciprocal_buffer_size =
+		reciprocal_value(kmem_cache->size);
 
 	for (order = 0; order < MAX_ORDER; order++) {
-		cache_estimate(order, cache_cache.size,
-			cache_line_size(), 0, &left_over, &cache_cache.num);
-		if (cache_cache.num)
+		cache_estimate(order, kmem_cache->size,
+			cache_line_size(), 0, &left_over, &kmem_cache->num);
+		if (kmem_cache->num)
 			break;
 	}
-	BUG_ON(!cache_cache.num);
-	cache_cache.gfporder = order;
-	cache_cache.colour = left_over / cache_cache.colour_off;
-	cache_cache.slab_size = ALIGN(cache_cache.num * sizeof(kmem_bufctl_t) +
+	BUG_ON(!kmem_cache->num);
+	kmem_cache->gfporder = order;
+	kmem_cache->colour = left_over / kmem_cache->colour_off;
+	kmem_cache->slab_size = ALIGN(kmem_cache->num * sizeof(kmem_bufctl_t) +
 				      sizeof(struct slab), cache_line_size());
 
 	/* 2+3) create the kmalloc caches */
@@ -1576,15 +1578,15 @@
 
 		ptr = kmalloc(sizeof(struct arraycache_init), GFP_NOWAIT);
 
-		BUG_ON(cpu_cache_get(&cache_cache) != &initarray_cache.cache);
-		memcpy(ptr, cpu_cache_get(&cache_cache),
+		BUG_ON(cpu_cache_get(kmem_cache) != &initarray_cache.cache);
+		memcpy(ptr, cpu_cache_get(kmem_cache),
 		       sizeof(struct arraycache_init));
 		/*
 		 * Do not assume that spinlocks can be initialized via memcpy:
 		 */
 		spin_lock_init(&ptr->lock);
 
-		cache_cache.array[smp_processor_id()] = ptr;
+		kmem_cache->array[smp_processor_id()] = ptr;
 
 		ptr = kmalloc(sizeof(struct arraycache_init), GFP_NOWAIT);
 
@@ -1605,7 +1607,7 @@
 		int nid;
 
 		for_each_online_node(nid) {
-			init_list(&cache_cache, &initkmem_list3[CACHE_CACHE + nid], nid);
+			init_list(kmem_cache, &initkmem_list3[CACHE_CACHE + nid], nid);
 
 			init_list(malloc_sizes[INDEX_AC].cs_cachep,
 				  &initkmem_list3[SIZE_AC + nid], nid);
@@ -2062,7 +2064,7 @@
 			kfree(l3);
 		}
 	}
-	kmem_cache_free(&cache_cache, cachep);
+	kmem_cache_free(kmem_cache, cachep);
 }
 
 
@@ -2312,7 +2314,7 @@
 		gfp = GFP_NOWAIT;
 
 	/* Get cache's description obj. */
-	cachep = kmem_cache_zalloc(&cache_cache, gfp);
+	cachep = kmem_cache_zalloc(kmem_cache, gfp);
 	if (!cachep)
 		return NULL;
 
@@ -2370,7 +2372,7 @@
 	if (!cachep->num) {
 		printk(KERN_ERR
 		       "kmem_cache_create: couldn't create cache %s.\n", name);
-		kmem_cache_free(&cache_cache, cachep);
+		kmem_cache_free(kmem_cache, cachep);
 		return NULL;
 	}
 	slab_size = ALIGN(cachep->num * sizeof(kmem_bufctl_t)
@@ -3131,7 +3133,7 @@
 
 static bool slab_should_failslab(struct kmem_cache *cachep, gfp_t flags)
 {
-	if (cachep == &cache_cache)
+	if (cachep == kmem_cache)
 		return false;
 
 	return should_failslab(cachep->object_size, flags, cachep->flags);
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h	2012-07-31 12:16:00.531346043 -0500
+++ linux-2.6/mm/slab.h	2012-07-31 12:17:55.869349816 -0500
@@ -25,8 +25,14 @@
 
 /* The slab cache mutex protects the management structures during changes */
 extern struct mutex slab_mutex;
+
+/* The list of all slab caches on the system */
 extern struct list_head slab_caches;
 
+/* The slab cache that manages slab cache information */
+extern struct kmem_cache *kmem_cache;
+
+/* Functions provided by the slab allocators */
 struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
 	size_t align, unsigned long flags, void (*ctor)(void *));
 
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c	2012-07-31 12:16:00.531346043 -0500
+++ linux-2.6/mm/slab_common.c	2012-07-31 12:17:55.869349816 -0500
@@ -22,6 +22,7 @@
 enum slab_state slab_state;
 LIST_HEAD(slab_caches);
 DEFINE_MUTEX(slab_mutex);
+struct kmem_cache *kmem_cache;
 
 /*
  * kmem_cache_create - Create a cache.
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 12:16:18.271653725 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 12:17:55.873349885 -0500
@@ -3190,8 +3190,6 @@
 struct kmem_cache *kmalloc_caches[SLUB_PAGE_SHIFT];
 EXPORT_SYMBOL(kmalloc_caches);
 
-static struct kmem_cache *kmem_cache;
-
 #ifdef CONFIG_ZONE_DMA
 static struct kmem_cache *kmalloc_dma_caches[SLUB_PAGE_SHIFT];
 #endif
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c	2012-07-31 12:16:00.531346043 -0500
+++ linux-2.6/mm/slob.c	2012-07-31 12:17:55.873349885 -0500
@@ -622,8 +622,16 @@
 }
 EXPORT_SYMBOL(kmem_cache_shrink);
 
+struct kmem_cache kmem_cache_boot = {
+	.name = "kmem_cache",
+	.size = sizeof(struct kmem_cache),
+	.flags = SLAB_PANIC,
+	.align = ARCH_KMALLOC_MINALIGN,
+};
+
 void __init kmem_cache_init(void)
 {
+	kmem_cache = &kmem_cache_boot;
 	slab_state = UP;
 }
 
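
The bootstrap trick above (a statically allocated kmem_cache_boot that
stands in until the allocator can serve its own descriptor allocations)
can be sketched in user-space C; names are hypothetical and the logic is
heavily simplified:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Model of the chicken-and-egg problem: the cache that holds cache
 * descriptors cannot allocate its own descriptor, so a statically
 * allocated boot instance is used, like kmem_cache_boot in the patch. */

struct cache {
	const char *name;
	size_t size;
};

static struct cache cache_boot = {
	.name = "kmem_cache",
	.size = sizeof(struct cache),
};

static struct cache *cache_cache;	/* like the shared kmem_cache pointer */

void cache_init_model(void)
{
	cache_cache = &cache_boot;	/* no dynamic allocation needed */
}

/* Once cache_cache exists, further descriptors come from it
 * (malloc() here stands in for kmem_cache_alloc(kmem_cache, ...)). */
struct cache *cache_create_model(const char *name, size_t size)
{
	struct cache *s = malloc(cache_cache->size);

	s->name = name;
	s->size = size;
	return s;
}
```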


* Common [6/9] Move freeing of kmem_cache structure to common code
  2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
                   ` (4 preceding siblings ...)
  2012-07-31 17:36 ` Common [5/9] Always use the name "kmem_cache" for the slab cache with the kmem_cache structure Christoph Lameter
@ 2012-07-31 17:36 ` Christoph Lameter
  2012-07-31 17:36 ` Common [7/9] Get rid of __kmem_cache_destroy Christoph Lameter
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

[-- Attachment #1: common_kmem_cache_destroy --]
[-- Type: text/plain, Size: 2300 bytes --]

The freeing action is basically the same in all slab allocators.
Move it into the common kmem_cache_destroy() function.

Reviewed-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>

---
 mm/slab.c        |    1 -
 mm/slab_common.c |    1 +
 mm/slob.c        |    2 --
 mm/slub.c        |    1 -
 4 files changed, 1 insertion(+), 4 deletions(-)

Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c	2012-07-31 12:20:08.035649028 -0500
+++ linux-2.6/mm/slab_common.c	2012-07-31 12:20:10.139685656 -0500
@@ -135,6 +135,7 @@
 				rcu_barrier();
 
 			__kmem_cache_destroy(s);
+			kmem_cache_free(kmem_cache, s);
 		} else {
 			list_add(&s->list, &slab_caches);
 			printk(KERN_ERR "kmem_cache_destroy %s: Slab cache still has objects\n",
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c	2012-07-31 12:20:08.027648887 -0500
+++ linux-2.6/mm/slob.c	2012-07-31 12:20:10.139685656 -0500
@@ -540,8 +540,6 @@
 
 void __kmem_cache_destroy(struct kmem_cache *c)
 {
-	kmemleak_free(c);
-	slob_free(c, sizeof(struct kmem_cache));
 }
 
 void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c	2012-07-31 12:20:08.019648748 -0500
+++ linux-2.6/mm/slab.c	2012-07-31 12:20:10.143685717 -0500
@@ -2064,7 +2064,6 @@
 			kfree(l3);
 		}
 	}
-	kmem_cache_free(kmem_cache, cachep);
 }
 
 
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 12:20:08.047649236 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 12:20:37.228157295 -0500
@@ -211,7 +211,6 @@
 static inline void sysfs_slab_remove(struct kmem_cache *s)
 {
 	kfree(s->name);
-	kfree(s);
 }
 
 #endif
@@ -5301,7 +5300,6 @@
 	kobject_del(&s->kobj);
 	kobject_put(&s->kobj);
 	kfree(s->name);
-	kmem_cache_free(kmem_cache, s);
 }
 
 /*

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Common [7/9] Get rid of __kmem_cache_destroy
  2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
                   ` (5 preceding siblings ...)
  2012-07-31 17:36 ` Common [6/9] Move freeing of kmem_cache structure to common code Christoph Lameter
@ 2012-07-31 17:36 ` Christoph Lameter
  2012-07-31 17:36 ` Common [8/9] Move duping of slab name to slab_common.c Christoph Lameter
  2012-07-31 17:36 ` Common [9/9] Do slab aliasing call from common code Christoph Lameter
  8 siblings, 0 replies; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

[-- Attachment #1: no_slab_specific_kmem_cache_destroy --]
[-- Type: text/plain, Size: 4072 bytes --]

Everything done in __kmem_cache_destroy() can be done in
__kmem_cache_shutdown() instead.

This affects RCU handling somewhat: on an RCU free, none of the slab
allocators refer to management structures other than the kmem_cache
structure itself. Therefore those other structures can be freed before
the RCU-deferred free to the page allocator occurs.

Reviewed-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
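The merge follows one pattern across the allocators: shutdown first shrinks the cache, and only on success tears down the per-allocator management structures. A hedged userspace sketch (the shrink test and the teardown body are stand-ins, not the real __cache_shrink()/teardown code):

```c
#include <assert.h>

/* Model of folding __kmem_cache_destroy() into __kmem_cache_shutdown(). */
struct cache_model {
	int live_objects;
	int torn_down;
};

/* Like __cache_shrink(): nonzero return means objects remain. */
static int model_shrink(struct cache_model *c)
{
	return c->live_objects ? 1 : 0;
}

static int model_shutdown(struct cache_model *c)
{
	int rc = model_shrink(c);

	if (rc)
		return rc;	/* cache still has objects: no teardown */
	c->torn_down = 1;	/* formerly __kmem_cache_destroy() work */
	return 0;
}
```

A busy cache fails shutdown and keeps its structures; an idle one is shrunk and torn down in one call.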

---
 mm/slab.c        |   43 +++++++++++++++++++++----------------------
 mm/slab.h        |    1 -
 mm/slab_common.c |    1 -
 mm/slob.c        |    4 ----
 mm/slub.c        |   10 +++++-----
 5 files changed, 26 insertions(+), 33 deletions(-)

Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c	2012-07-31 12:20:10.139685656 -0500
+++ linux-2.6/mm/slob.c	2012-07-31 12:20:56.976501176 -0500
@@ -538,10 +538,6 @@
 	return c;
 }
 
-void __kmem_cache_destroy(struct kmem_cache *c)
-{
-}
-
 void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 {
 	void *b;
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 12:20:37.228157295 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 12:20:56.976501176 -0500
@@ -3174,12 +3174,12 @@
 
 int __kmem_cache_shutdown(struct kmem_cache *s)
 {
-	return kmem_cache_close(s);
-}
+	int rc = kmem_cache_close(s);
 
-void __kmem_cache_destroy(struct kmem_cache *s)
-{
-	sysfs_slab_remove(s);
+	if (!rc)
+		sysfs_slab_remove(s);
+
+	return rc;
 }
 
 /********************************************************************
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c	2012-07-31 12:20:10.143685717 -0500
+++ linux-2.6/mm/slab.c	2012-07-31 12:20:56.976501176 -0500
@@ -2047,26 +2047,6 @@
 	}
 }
 
-void __kmem_cache_destroy(struct kmem_cache *cachep)
-{
-	int i;
-	struct kmem_list3 *l3;
-
-	for_each_online_cpu(i)
-	    kfree(cachep->array[i]);
-
-	/* NUMA: free the list3 structures */
-	for_each_online_node(i) {
-		l3 = cachep->nodelists[i];
-		if (l3) {
-			kfree(l3->shared);
-			free_alien_cache(l3->alien);
-			kfree(l3);
-		}
-	}
-}
-
-
 /**
  * calculate_slab_order - calculate size (page order) of slabs
  * @cachep: pointer to the cache that is being created
@@ -2430,7 +2410,7 @@
 	cachep->refcount = 1;
 
 	if (setup_cpu_cache(cachep, gfp)) {
-		__kmem_cache_destroy(cachep);
+		__kmem_cache_shutdown(cachep);
 		return NULL;
 	}
 
@@ -2605,7 +2585,26 @@
 
 int __kmem_cache_shutdown(struct kmem_cache *cachep)
 {
-	return __cache_shrink(cachep);
+	int i;
+	struct kmem_list3 *l3;
+	int rc = __cache_shrink(cachep);
+
+	if (rc)
+		return rc;
+
+	for_each_online_cpu(i)
+	    kfree(cachep->array[i]);
+
+	/* NUMA: free the list3 structures */
+	for_each_online_node(i) {
+		l3 = cachep->nodelists[i];
+		if (l3) {
+			kfree(l3->shared);
+			free_alien_cache(l3->alien);
+			kfree(l3);
+		}
+	}
+	return 0;
 }
 
 /*
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h	2012-07-31 12:17:55.869349816 -0500
+++ linux-2.6/mm/slab.h	2012-07-31 12:20:56.976501176 -0500
@@ -37,6 +37,5 @@
 	size_t align, unsigned long flags, void (*ctor)(void *));
 
 int __kmem_cache_shutdown(struct kmem_cache *);
-void __kmem_cache_destroy(struct kmem_cache *);
 
 #endif
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c	2012-07-31 12:20:10.139685656 -0500
+++ linux-2.6/mm/slab_common.c	2012-07-31 12:20:56.976501176 -0500
@@ -134,7 +134,6 @@
 			if (s->flags & SLAB_DESTROY_BY_RCU)
 				rcu_barrier();
 
-			__kmem_cache_destroy(s);
 			kmem_cache_free(kmem_cache, s);
 		} else {
 			list_add(&s->list, &slab_caches);


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Common [8/9] Move duping of slab name to slab_common.c
  2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
                   ` (6 preceding siblings ...)
  2012-07-31 17:36 ` Common [7/9] Get rid of __kmem_cache_destroy Christoph Lameter
@ 2012-07-31 17:36 ` Christoph Lameter
  2012-07-31 18:02   ` Christoph Lameter
  2012-07-31 17:36 ` Common [9/9] Do slab aliasing call from common code Christoph Lameter
  8 siblings, 1 reply; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

[-- Attachment #1: dup_name_in_common --]
[-- Type: text/plain, Size: 3278 bytes --]

Duplicating the slab name currently has to be done by each slab
allocator. Moving this code to slab_common.c avoids duplicate
implementations.

With this patch we have common string handling for all slab allocators:
strings passed to kmem_cache_create() are copied internally, so
subsystems can use temporary strings to create slab caches.

Slab caches allocated in the early stages of bootstrap will never be
freed (nor can they be, since they are essential to slab allocator
operation). During bootstrap we therefore do not have to worry about
duplicating names.

Signed-off-by: Christoph Lameter <cl@linux.com>

---
 mm/slab_common.c |   24 +++++++++++++++++-------
 mm/slub.c        |    5 -----
 2 files changed, 17 insertions(+), 12 deletions(-)

Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c	2012-07-31 12:20:56.976501176 -0500
+++ linux-2.6/mm/slab_common.c	2012-07-31 12:21:37.481206730 -0500
@@ -53,6 +53,7 @@
 		unsigned long flags, void (*ctor)(void *))
 {
 	struct kmem_cache *s = NULL;
+	char *n;
 
 #ifdef CONFIG_DEBUG_VM
 	if (!name || in_interrupt() || size < sizeof(void *) ||
@@ -97,14 +98,22 @@
 	WARN_ON(strchr(name, ' '));	/* It confuses parsers */
 #endif
 
-	s = __kmem_cache_create(name, size, align, flags, ctor);
+	n = kstrdup(name, GFP_KERNEL);
+	if (!n)
+		goto oops;
 
-	/*
-	 * Check if the slab has actually been created and if it was a
-	 * real instatiation. Aliases do not belong on the list
-	 */
-	if (s && s->refcount == 1)
-		list_add(&s->list, &slab_caches);
+	s = __kmem_cache_create(n, size, align, flags, ctor);
+
+	if (s) {
+		/*
+		 * Check if the slab has actually been created and if it was a
+		 * real instatiation. Aliases do not belong on the list
+		 */
+		if (s->refcount == 1)
+			list_add(&s->list, &slab_caches);
+
+	} else
+		kfree(n);
 
 #ifdef CONFIG_DEBUG_VM
 oops:
@@ -134,6 +143,7 @@
 			if (s->flags & SLAB_DESTROY_BY_RCU)
 				rcu_barrier();
 
+			kfree(s->name);
 			kmem_cache_free(kmem_cache, s);
 		} else {
 			list_add(&s->list, &slab_caches);
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 12:20:56.976501176 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 12:22:50.362476901 -0500
@@ -208,10 +208,7 @@
 static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
 static inline int sysfs_slab_alias(struct kmem_cache *s, const char *p)
 							{ return 0; }
-static inline void sysfs_slab_remove(struct kmem_cache *s)
-{
-	kfree(s->name);
-}
+static inline void sysfs_slab_remove(struct kmem_cache *s) { }
 
 #endif
 
@@ -3917,10 +3914,6 @@
 		return s;
 	}
 
-	n = kstrdup(name, GFP_KERNEL);
-	if (!n)
-		return NULL;
-
 	s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
 	if (s) {
 		if (kmem_cache_open(s, n,
@@ -3938,7 +3931,6 @@
 		}
 		kfree(s);
 	}
-	kfree(n);
 	return NULL;
 }
 
@@ -5299,7 +5291,6 @@
 	kobject_uevent(&s->kobj, KOBJ_REMOVE);
 	kobject_del(&s->kobj);
 	kobject_put(&s->kobj);
-	kfree(s->name);
 }
 
 /*


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Common [9/9] Do slab aliasing call from common code
  2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
                   ` (7 preceding siblings ...)
  2012-07-31 17:36 ` Common [8/9] Move duping of slab name to slab_common.c Christoph Lameter
@ 2012-07-31 17:36 ` Christoph Lameter
  2012-08-02  7:24   ` Glauber Costa
  8 siblings, 1 reply; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 17:36 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

[-- Attachment #1: slab_alias_common --]
[-- Type: text/plain, Size: 3752 bytes --]

The slab aliasing logic causes some strange contortions in
slub. So add a call to deal with aliases to slab_common.c,
but disable it for the other slab allocators by providing
stubs that fail to create aliases.

Full general support for aliases will require additional
cleanup passes and more standardization of fields in
kmem_cache.

Signed-off-by: Christoph Lameter <cl@linux.com>
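The call sequence added to common code — try to alias first, fall back to a real create — reduces to this shape (userspace sketch; the SLUB find_mergeable() lookup is modeled by a size-only table search, and the non-SLUB stub behaves as if the lookup always missed):

```c
#include <assert.h>
#include <stddef.h>

/* Model of the aliasing call from common code. */
struct cache {
	size_t size;
	int refcount;
};

#define NCACHES 4
static struct cache caches[NCACHES];
static int ncaches;

/* SLUB-style alias: return an existing compatible cache with its
 * refcount bumped, or NULL. The stub for other allocators simply
 * always returns NULL. */
static struct cache *model_alias(size_t size)
{
	for (int i = 0; i < ncaches; i++)
		if (caches[i].size == size) {	/* "mergeable", simplified */
			caches[i].refcount++;
			return &caches[i];
		}
	return NULL;
}

static struct cache *model_create(size_t size)
{
	struct cache *s = model_alias(size);

	if (s)
		return s;		/* alias found: no new cache */
	if (ncaches == NCACHES)
		return NULL;
	s = &caches[ncaches++];
	s->size = size;
	s->refcount = 1;
	return s;
}
```

Creating two compatible caches yields one structure with refcount 2; an incompatible request gets a fresh cache.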


---
 mm/slab.h        |   10 ++++++++++
 mm/slab_common.c |   16 +++++++---------
 mm/slub.c        |   18 ++++++++++++------
 3 files changed, 29 insertions(+), 15 deletions(-)

Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h	2012-07-31 12:00:10.000000000 -0500
+++ linux-2.6/mm/slab.h	2012-07-31 12:01:17.832133769 -0500
@@ -36,6 +36,16 @@
 struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
 	size_t align, unsigned long flags, void (*ctor)(void *));
 
+#ifdef CONFIG_SLUB
+struct kmem_cache *__kmem_cache_alias(const char *name, size_t size,
+	size_t align, unsigned long flags, void (*ctor)(void *));
+#else
+static inline struct kmem_cache *__kmem_cache_alias(const char *name, size_t size,
+	size_t align, unsigned long flags, void (*ctor)(void *))
+{ return NULL; }
+#endif
+
+
 int __kmem_cache_shutdown(struct kmem_cache *);
 
 #endif
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c	2012-07-31 12:00:18.000000000 -0500
+++ linux-2.6/mm/slab_common.c	2012-07-31 12:01:17.836133832 -0500
@@ -98,21 +98,19 @@
 	WARN_ON(strchr(name, ' '));	/* It confuses parsers */
 #endif
 
+	s = __kmem_cache_alias(name, size, align, flags, ctor);
+	if (s)
+		goto oops;
+
 	n = kstrdup(name, GFP_KERNEL);
 	if (!n)
 		goto oops;
 
 	s = __kmem_cache_create(n, size, align, flags, ctor);
 
-	if (s) {
-		/*
-		 * Check if the slab has actually been created and if it was a
-		 * real instatiation. Aliases do not belong on the list
-		 */
-		if (s->refcount == 1)
-			list_add(&s->list, &slab_caches);
-
-	} else
+	if (s)
+		list_add(&s->list, &slab_caches);
+	else
 		kfree(n);
 
 #ifdef CONFIG_DEBUG_VM
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 12:01:05.000000000 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 12:01:54.836765585 -0500
@@ -3681,7 +3681,7 @@
 		slub_max_order = 0;
 
 	kmem_size = offsetof(struct kmem_cache, node) +
-				nr_node_ids * sizeof(struct kmem_cache_node *);
+			nr_node_ids * sizeof(struct kmem_cache_node *);
 
 	/* Allocate two kmem_caches from the page allocator */
 	kmalloc_size = ALIGN(kmem_size, cache_line_size());
@@ -3895,11 +3895,10 @@
 	return NULL;
 }
 
-struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
+struct kmem_cache *__kmem_cache_alias(const char *name, size_t size,
 		size_t align, unsigned long flags, void (*ctor)(void *))
 {
 	struct kmem_cache *s;
-	char *n;
 
 	s = find_mergeable(size, align, flags, name, ctor);
 	if (s) {
@@ -3913,14 +3912,21 @@
 
 		if (sysfs_slab_alias(s, name)) {
 			s->refcount--;
-			return NULL;
+			s = NULL;
 		}
-		return s;
 	}
 
+	return s;
+}
+
+struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
+		size_t align, unsigned long flags, void (*ctor)(void *))
+{
+	struct kmem_cache *s;
+
 	s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
 	if (s) {
-		if (kmem_cache_open(s, n,
+		if (kmem_cache_open(s, name,
 				size, align, flags, ctor)) {
 			int r;
 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Common [8/9] Move duping of slab name to slab_common.c
  2012-07-31 17:36 ` Common [8/9] Move duping of slab name to slab_common.c Christoph Lameter
@ 2012-07-31 18:02   ` Christoph Lameter
  0 siblings, 0 replies; 16+ messages in thread
From: Christoph Lameter @ 2012-07-31 18:02 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-mm, David Rientjes, Matt Mackall, Glauber Costa, Joonsoo Kim

Patch not refreshed.... Sigh... use this one:

Subject: Move duping of slab name to slab_common.c

Duplicating the slab name currently has to be done by each slab
allocator. Moving this code to slab_common.c avoids duplicate
implementations.

With this patch we have common string handling for all slab allocators:
strings passed to kmem_cache_create() are copied internally, so
subsystems can use temporary strings to create slab caches.

Slab caches allocated in the early stages of bootstrap will never be
freed (nor can they be, since they are essential to slab allocator
operation). During bootstrap we therefore do not have to worry about
duplicating names.

Signed-off-by: Christoph Lameter <cl@linux.com>

---
 mm/slab_common.c |   24 +++++++++++++++++-------
 mm/slub.c        |    5 -----
 2 files changed, 17 insertions(+), 12 deletions(-)

Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c	2012-07-31 12:20:56.976501176 -0500
+++ linux-2.6/mm/slab_common.c	2012-07-31 13:01:46.763397315 -0500
@@ -53,6 +53,7 @@
 		unsigned long flags, void (*ctor)(void *))
 {
 	struct kmem_cache *s = NULL;
+	char *n;

 #ifdef CONFIG_DEBUG_VM
 	if (!name || in_interrupt() || size < sizeof(void *) ||
@@ -97,14 +98,22 @@
 	WARN_ON(strchr(name, ' '));	/* It confuses parsers */
 #endif

-	s = __kmem_cache_create(name, size, align, flags, ctor);
+	n = kstrdup(name, GFP_KERNEL);
+	if (!n)
+		goto oops;

-	/*
-	 * Check if the slab has actually been created and if it was a
-	 * real instatiation. Aliases do not belong on the list
-	 */
-	if (s && s->refcount == 1)
-		list_add(&s->list, &slab_caches);
+	s = __kmem_cache_create(n, size, align, flags, ctor);
+
+	if (s) {
+		/*
+		 * Check if the slab has actually been created and if it was a
+		 * real instatiation. Aliases do not belong on the list
+		 */
+		if (s->refcount == 1)
+			list_add(&s->list, &slab_caches);
+
+	} else
+		kfree(n);

 #ifdef CONFIG_DEBUG_VM
 oops:
@@ -134,6 +143,7 @@
 			if (s->flags & SLAB_DESTROY_BY_RCU)
 				rcu_barrier();

+			kfree(s->name);
 			kmem_cache_free(kmem_cache, s);
 		} else {
 			list_add(&s->list, &slab_caches);
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-07-31 12:20:56.976501176 -0500
+++ linux-2.6/mm/slub.c	2012-07-31 13:01:46.775397529 -0500
@@ -208,10 +208,7 @@
 static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
 static inline int sysfs_slab_alias(struct kmem_cache *s, const char *p)
 							{ return 0; }
-static inline void sysfs_slab_remove(struct kmem_cache *s)
-{
-	kfree(s->name);
-}
+static inline void sysfs_slab_remove(struct kmem_cache *s) { }

 #endif

@@ -3898,7 +3895,6 @@
 		size_t align, unsigned long flags, void (*ctor)(void *))
 {
 	struct kmem_cache *s;
-	char *n;

 	s = find_mergeable(size, align, flags, name, ctor);
 	if (s) {
@@ -3917,13 +3913,9 @@
 		return s;
 	}

-	n = kstrdup(name, GFP_KERNEL);
-	if (!n)
-		return NULL;
-
 	s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
 	if (s) {
-		if (kmem_cache_open(s, n,
+		if (kmem_cache_open(s, name,
 				size, align, flags, ctor)) {
 			int r;

@@ -3938,7 +3930,6 @@
 		}
 		kfree(s);
 	}
-	kfree(n);
 	return NULL;
 }

@@ -5299,7 +5290,6 @@
 	kobject_uevent(&s->kobj, KOBJ_REMOVE);
 	kobject_del(&s->kobj);
 	kobject_put(&s->kobj);
-	kfree(s->name);
 }

 /*


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Common [2/9] slub: Use kmem_cache for the kmem_cache structure
  2012-07-31 17:36 ` Common [2/9] slub: Use kmem_cache for the kmem_cache structure Christoph Lameter
@ 2012-08-01  8:42   ` Glauber Costa
  2012-08-01 18:05     ` Christoph Lameter
  0 siblings, 1 reply; 16+ messages in thread
From: Glauber Costa @ 2012-08-01  8:42 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Pekka Enberg, linux-mm, David Rientjes, Matt Mackall, Joonsoo Kim

On 07/31/2012 09:36 PM, Christoph Lameter wrote:
> Do not use kmalloc() but kmem_cache_alloc() for the allocation
> of the kmem_cache structures in slub.
> 
> This is the way its supposed to be. Recent merges lost
> the freeing of the kmem_cache structure and so this is also
> fixing memory leak on kmem_cache_destroy() by adding
> the missing free action to sysfs_slab_remove().

This patch seems incomplete to say the least.

1) You are still not touching the !SYSFS version of the function,
that still reads:

static inline void sysfs_slab_remove(struct kmem_cache *s)
{
        kfree(s->name);
        kfree(s);
}

and it is then inconsistent with its SYSFS version.

2) kmem_cache_release still reads:

static void kmem_cache_release(struct kobject *kobj)
{
        struct kmem_cache *s = to_slab(kobj);

        kfree(s->name);
        kfree(s);
}

Since IIRC both kmem_cache_release and sysfs_slab_remove are called
during cache destruction, you now have a double-double-free (a quadruple
free?)


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Common [2/9] slub: Use kmem_cache for the kmem_cache structure
  2012-08-01  8:42   ` Glauber Costa
@ 2012-08-01 18:05     ` Christoph Lameter
  0 siblings, 0 replies; 16+ messages in thread
From: Christoph Lameter @ 2012-08-01 18:05 UTC (permalink / raw)
  To: Glauber Costa
  Cc: Pekka Enberg, linux-mm, David Rientjes, Matt Mackall, Joonsoo Kim

On Wed, 1 Aug 2012, Glauber Costa wrote:

> On 07/31/2012 09:36 PM, Christoph Lameter wrote:
> > Do not use kmalloc() but kmem_cache_alloc() for the allocation
> > of the kmem_cache structures in slub.
> >
> > This is the way its supposed to be. Recent merges lost
> > the freeing of the kmem_cache structure and so this is also
> > fixing memory leak on kmem_cache_destroy() by adding
> > the missing free action to sysfs_slab_remove().
>
> This patch seems incomplete to say the least.

Well, ok, we could have also converted those, but these statements will
be removed later anyway. And you can legitimately release a kmem_cache
allocation with kfree(), so this works just fine. The problem was that
the release in slab_common did a kmem_cache_free(), which must be given
an object from the correct cache.

Will update those; the next patchset will include the conversion of
those as well.
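The asymmetry described here — kfree() accepts any slab allocation, while kmem_cache_free() must be handed an object that actually came from the cache passed in — can be illustrated with a small model (userspace sketch; ownership is tracked in an explicit field, where the kernel derives the owning cache from the object's page):

```c
#include <assert.h>
#include <stdlib.h>

struct cache {
	const char *name;
};

/* Each object records which cache it came from. */
struct obj {
	struct cache *owner;
	char payload[32];
};

static struct obj *model_alloc(struct cache *c)
{
	struct obj *o = malloc(sizeof(*o));

	if (o)
		o->owner = c;
	return o;
}

/* Returns 0 on success; fails (as a debugging slab allocator would
 * warn) when the object does not belong to the cache it is being
 * freed to -- the mismatch behind the bug discussed above. */
static int model_free(struct cache *c, struct obj *o)
{
	if (o->owner != c)
		return -1;	/* wrong cache */
	free(o);
	return 0;
}
```

Freeing to the wrong cache is rejected; freeing to the owning cache succeeds.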


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Common [9/9] Do slab aliasing call from common code
  2012-07-31 17:36 ` Common [9/9] Do slab aliasing call from common code Christoph Lameter
@ 2012-08-02  7:24   ` Glauber Costa
  2012-08-02 13:57     ` Christoph Lameter
  0 siblings, 1 reply; 16+ messages in thread
From: Glauber Costa @ 2012-08-02  7:24 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Pekka Enberg, linux-mm, David Rientjes, Matt Mackall, Joonsoo Kim

On 07/31/2012 09:36 PM, Christoph Lameter wrote:
> The slab aliasing logic causes some strange contortions in
> slub. So add a call to deal with aliases to slab_common.c
> but disable it for other slab allocators by providng stubs
> that fail to create aliases.
> 
> Full general support for aliases will require additional
> cleanup passes and more standardization of fields in
> kmem_cache.

This one didn't apply for me. I used pekka's tree + your other 8 patches
(being careful about the 8th one). Maybe you need to refresh this as
well as your 8th patch ?

I could go see where it conflicts, but I'd like to make sure I am
reviewing/testing the code exactly as you intended it to be.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Common [9/9] Do slab aliasing call from common code
  2012-08-02  7:24   ` Glauber Costa
@ 2012-08-02 13:57     ` Christoph Lameter
  2012-08-02 13:58       ` Glauber Costa
  0 siblings, 1 reply; 16+ messages in thread
From: Christoph Lameter @ 2012-08-02 13:57 UTC (permalink / raw)
  To: Glauber Costa
  Cc: Pekka Enberg, linux-mm, David Rientjes, Matt Mackall, Joonsoo Kim

On Thu, 2 Aug 2012, Glauber Costa wrote:

> This one didn't apply for me. I used pekka's tree + your other 8 patches
> (being careful about the 8th one). Maybe you need to refresh this as
> well as your 8th patch ?
>
> I could go see where it conflicts, but I'd like to make sure I am
> reviewing/testing the code exactly as you intended it to be.

Yea well it may be better to use yesterday's patchset instead.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Common [9/9] Do slab aliasing call from common code
  2012-08-02 13:57     ` Christoph Lameter
@ 2012-08-02 13:58       ` Glauber Costa
  0 siblings, 0 replies; 16+ messages in thread
From: Glauber Costa @ 2012-08-02 13:58 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Pekka Enberg, linux-mm, David Rientjes, Matt Mackall, Joonsoo Kim

On 08/02/2012 05:57 PM, Christoph Lameter wrote:
> On Thu, 2 Aug 2012, Glauber Costa wrote:
> 
>> This one didn't apply for me. I used pekka's tree + your other 8 patches
>> (being careful about the 8th one). Maybe you need to refresh this as
>> well as your 8th patch ?
>>
>> I could go see where it conflicts, but I'd like to make sure I am
>> reviewing/testing the code exactly as you intended it to be.
> 
> Yea well it may be better to use yesterdays patchset instead.
> 
I've already seen it, and commented on it. Thanks Christoph!


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2012-08-02 13:58 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-07-31 17:36 Common [0/9] Sl[auo]b: Common code rework V7 Christoph Lameter
2012-07-31 17:36 ` Common [1/9] slub: Add debugging to verify correct cache use on kmem_cache_free() Christoph Lameter
2012-07-31 17:36 ` Common [2/9] slub: Use kmem_cache for the kmem_cache structure Christoph Lameter
2012-08-01  8:42   ` Glauber Costa
2012-08-01 18:05     ` Christoph Lameter
2012-07-31 17:36 ` Common [3/9] Move list_add() to slab_common.c Christoph Lameter
2012-07-31 17:36 ` Common [4/9] Extract a common function for kmem_cache_destroy Christoph Lameter
2012-07-31 17:36 ` Common [5/9] Always use the name "kmem_cache" for the slab cache with the kmem_cache structure Christoph Lameter
2012-07-31 17:36 ` Common [6/9] Move freeing of kmem_cache structure to common code Christoph Lameter
2012-07-31 17:36 ` Common [7/9] Get rid of __kmem_cache_destroy Christoph Lameter
2012-07-31 17:36 ` Common [8/9] Move duping of slab name to slab_common.c Christoph Lameter
2012-07-31 18:02   ` Christoph Lameter
2012-07-31 17:36 ` Common [9/9] Do slab aliasing call from common code Christoph Lameter
2012-08-02  7:24   ` Glauber Costa
2012-08-02 13:57     ` Christoph Lameter
2012-08-02 13:58       ` Glauber Costa
