linux-mm.kvack.org archive mirror
* [RFC PATCH 0/6] add kmalloc_align()
@ 2012-03-20 10:21 Lai Jiangshan
  2012-03-20 10:21 ` [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT() Lai Jiangshan
                   ` (5 more replies)
  0 siblings, 6 replies; 28+ messages in thread
From: Lai Jiangshan @ 2012-03-20 10:21 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton
  Cc: linux-kernel, linux-mm, Lai Jiangshan

Add kmalloc_align() for callers with an alignment requirement.
Almost no behavior is changed and no overhead is added.

Lai Jiangshan (6):
  kernel.h: add ALIGN_OF_LAST_BIT()
  slub: add kmalloc_align()
  slab: add kmalloc_align()
  slob: don't couple the header size with the alignment
  slob: add kmalloc_align()
  workqueue: use kmalloc_align() instead of hacking

 include/linux/kernel.h   |    2 ++
 include/linux/slab_def.h |    6 ++++++
 include/linux/slob_def.h |   14 +++++++++++++-
 include/linux/slub_def.h |    6 ++++++
 init/Kconfig             |    1 -
 kernel/workqueue.c       |   23 +++++------------------
 mm/slab.c                |    8 ++++----
 mm/slob.c                |   38 +++++++++++++++++++++-----------------
 mm/slub.c                |    2 +-
 9 files changed, 58 insertions(+), 41 deletions(-)

-- 
1.7.4.4


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT()
  2012-03-20 10:21 [RFC PATCH 0/6] add kmalloc_align() Lai Jiangshan
@ 2012-03-20 10:21 ` Lai Jiangshan
  2012-03-20 11:32   ` Michal Nazarewicz
  2012-03-20 10:21 ` [RFC PATCH 2/6] slub: add kmalloc_align() Lai Jiangshan
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Lai Jiangshan @ 2012-03-20 10:21 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton
  Cc: linux-kernel, linux-mm, Lai Jiangshan

Get the biggest 2**y such that x % (2**y) == 0, to be used as the alignment value.
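
As a quick illustration, the expression picks out the lowest set bit of x,
i.e. the largest power of two dividing x (equivalent to x & -x); a standalone
userspace sketch:

#include <assert.h>

#define ALIGN_OF_LAST_BIT(x)	((((x)^((x) - 1))>>1) + 1)

int main(void)
{
	assert(ALIGN_OF_LAST_BIT(96UL)  == 32);	/* 96  = 32 * 3 */
	assert(ALIGN_OF_LAST_BIT(192UL) == 64);	/* 192 = 64 * 3 */
	assert(ALIGN_OF_LAST_BIT(64UL)  == 64);	/* already a power of two */
	assert(ALIGN_OF_LAST_BIT(7UL)   == 1);	/* odd sizes give no extra alignment */
	return 0;
}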

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 include/linux/kernel.h |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 5113462..2c439dc 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -44,6 +44,8 @@
 #define PTR_ALIGN(p, a)		((typeof(p))ALIGN((unsigned long)(p), (a)))
 #define IS_ALIGNED(x, a)		(((x) & ((typeof(x))(a) - 1)) == 0)
 
+#define ALIGN_OF_LAST_BIT(x)	((((x)^((x) - 1))>>1) + 1)
+
 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
 
 /*
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 2/6] slub: add kmalloc_align()
  2012-03-20 10:21 [RFC PATCH 0/6] add kmalloc_align() Lai Jiangshan
  2012-03-20 10:21 ` [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT() Lai Jiangshan
@ 2012-03-20 10:21 ` Lai Jiangshan
  2012-03-20 14:14   ` Christoph Lameter
  2012-03-20 10:21 ` [RFC PATCH 3/6] slab: " Lai Jiangshan
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Lai Jiangshan @ 2012-03-20 10:21 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton
  Cc: linux-kernel, linux-mm, Lai Jiangshan

ALIGN_OF_LAST_BIT(size) is used instead of ARCH_KMALLOC_MINALIGN
when the kmalloc kmem_caches are created.

No behavior changes, except when debugging is enabled.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 include/linux/slub_def.h |    6 ++++++
 mm/slub.c                |    2 +-
 2 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index a32bcfd..67ac6b4 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -280,6 +280,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
 	return __kmalloc(size, flags);
 }
 
+static __always_inline
+void *kmalloc_align(size_t size, gfp_t flags, size_t align)
+{
+	return kmalloc(ALIGN(size, align), flags);
+}
+
 #ifdef CONFIG_NUMA
 void *__kmalloc_node(size_t size, gfp_t flags, int node);
 void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node);
diff --git a/mm/slub.c b/mm/slub.c
index 4907563..01cf99d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3238,7 +3238,7 @@ static struct kmem_cache *__init create_kmalloc_cache(const char *name,
 	 * This function is called with IRQs disabled during early-boot on
 	 * single CPU so there's no need to take slub_lock here.
 	 */
-	if (!kmem_cache_open(s, name, size, ARCH_KMALLOC_MINALIGN,
+	if (!kmem_cache_open(s, name, size, ALIGN_OF_LAST_BIT(size),
 								flags, NULL))
 		goto panic;
 
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 3/6] slab: add kmalloc_align()
  2012-03-20 10:21 [RFC PATCH 0/6] add kmalloc_align() Lai Jiangshan
  2012-03-20 10:21 ` [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT() Lai Jiangshan
  2012-03-20 10:21 ` [RFC PATCH 2/6] slub: add kmalloc_align() Lai Jiangshan
@ 2012-03-20 10:21 ` Lai Jiangshan
  2012-03-20 10:21 ` [RFC PATCH 4/6] slob: don't couple the header size with the alignment Lai Jiangshan
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Lai Jiangshan @ 2012-03-20 10:21 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton
  Cc: linux-kernel, linux-mm, Lai Jiangshan

ALIGN_OF_LAST_BIT(sizes[INDEX_AC].cs_size) is used instead of
ARCH_KMALLOC_MINALIGN when the kmalloc kmem_caches are created.

No behavior changes, except when debugging is enabled.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 include/linux/slab_def.h |    6 ++++++
 mm/slab.c                |    8 ++++----
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index fbd1117..fb0c8ab 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -159,6 +159,12 @@ found:
 	return __kmalloc(size, flags);
 }
 
+static __always_inline
+void *kmalloc_align(size_t size, gfp_t flags, size_t align)
+{
+	return kmalloc(ALIGN(size, align), flags);
+}
+
 #ifdef CONFIG_NUMA
 extern void *__kmalloc_node(size_t size, gfp_t flags, int node);
 extern void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node);
diff --git a/mm/slab.c b/mm/slab.c
index f0bd785..df8edbe 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1587,7 +1587,7 @@ void __init kmem_cache_init(void)
 
 	sizes[INDEX_AC].cs_cachep = kmem_cache_create(names[INDEX_AC].name,
 					sizes[INDEX_AC].cs_size,
-					ARCH_KMALLOC_MINALIGN,
+					ALIGN_OF_LAST_BIT(sizes[INDEX_AC].cs_size),
 					ARCH_KMALLOC_FLAGS|SLAB_PANIC,
 					NULL);
 
@@ -1595,7 +1595,7 @@ void __init kmem_cache_init(void)
 		sizes[INDEX_L3].cs_cachep =
 			kmem_cache_create(names[INDEX_L3].name,
 				sizes[INDEX_L3].cs_size,
-				ARCH_KMALLOC_MINALIGN,
+				ALIGN_OF_LAST_BIT(sizes[INDEX_L3].cs_size),
 				ARCH_KMALLOC_FLAGS|SLAB_PANIC,
 				NULL);
 	}
@@ -1613,7 +1613,7 @@ void __init kmem_cache_init(void)
 		if (!sizes->cs_cachep) {
 			sizes->cs_cachep = kmem_cache_create(names->name,
 					sizes->cs_size,
-					ARCH_KMALLOC_MINALIGN,
+					ALIGN_OF_LAST_BIT(sizes->cs_size),
 					ARCH_KMALLOC_FLAGS|SLAB_PANIC,
 					NULL);
 		}
@@ -1621,7 +1621,7 @@ void __init kmem_cache_init(void)
 		sizes->cs_dmacachep = kmem_cache_create(
 					names->name_dma,
 					sizes->cs_size,
-					ARCH_KMALLOC_MINALIGN,
+					ALIGN_OF_LAST_BIT(sizes->cs_size),
 					ARCH_KMALLOC_FLAGS|SLAB_CACHE_DMA|
 						SLAB_PANIC,
 					NULL);
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 4/6] slob: don't couple the header size with the alignment
  2012-03-20 10:21 [RFC PATCH 0/6] add kmalloc_align() Lai Jiangshan
                   ` (2 preceding siblings ...)
  2012-03-20 10:21 ` [RFC PATCH 3/6] slab: " Lai Jiangshan
@ 2012-03-20 10:21 ` Lai Jiangshan
  2012-03-20 10:21 ` [RFC PATCH 5/6] slob: add kmalloc_align() Lai Jiangshan
  2012-03-20 10:21 ` [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking Lai Jiangshan
  5 siblings, 0 replies; 28+ messages in thread
From: Lai Jiangshan @ 2012-03-20 10:21 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton
  Cc: linux-kernel, linux-mm, Lai Jiangshan

kmalloc-ed objects are prepended with a 4-byte header that records the
kmalloc size, but the code handling this header is coupled with the
alignment code, so separate the two.

The argument "int align" in slob_page_alloc() and slob_alloc() is split
into "size_t hsize" and "int align" for the decoupling.

Before this patch: the prepended header size is always the same as the alignment.
After this patch: the prepended header size is always
		max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN).
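
A minimal sketch of the pointer arithmetic behind this (userspace, with
hypothetical hsize/align values): the header is now placed so that the
address handed back to the caller, not the header itself, ends up aligned.

#include <assert.h>
#include <stdint.h>

#define ALIGN(x, a)	(((x) + ((a) - 1)) & ~((uintptr_t)(a) - 1))

int main(void)
{
	uintptr_t cur   = 0x1008;	/* hypothetical free block in a slob page */
	uintptr_t hsize = 8;		/* hypothetical prepended header size */
	uintptr_t align = 64;		/* hypothetical requested alignment */

	/* old placement: the header itself is aligned */
	uintptr_t old_hdr = ALIGN(cur, align);
	/* new placement: header + hsize (what the caller sees) is aligned */
	uintptr_t new_hdr = ALIGN(cur + hsize, align) - hsize;

	assert(old_hdr % align == 0);
	assert((new_hdr + hsize) % align == 0);
	return 0;
}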

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 mm/slob.c |   34 +++++++++++++++++++---------------
 1 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 8105be4..266e518 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -267,7 +267,8 @@ static void slob_free_pages(void *b, int order)
 /*
  * Allocate a slob block within a given slob_page sp.
  */
-static void *slob_page_alloc(struct slob_page *sp, size_t size, int align)
+static void *slob_page_alloc(struct slob_page *sp, size_t size, size_t hsize,
+		int align)
 {
 	slob_t *prev, *cur, *aligned = NULL;
 	int delta = 0, units = SLOB_UNITS(size);
@@ -276,7 +277,8 @@ static void *slob_page_alloc(struct slob_page *sp, size_t size, int align)
 		slobidx_t avail = slob_units(cur);
 
 		if (align) {
-			aligned = (slob_t *)ALIGN((unsigned long)cur, align);
+			aligned = (slob_t *)(ALIGN((unsigned long)cur + hsize,
+					align) - hsize);
 			delta = aligned - cur;
 		}
 		if (avail >= units + delta) { /* room enough? */
@@ -318,7 +320,7 @@ static void *slob_page_alloc(struct slob_page *sp, size_t size, int align)
 /*
  * slob_alloc: entry point into the slob allocator.
  */
-static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
+static void *slob_alloc(size_t size, gfp_t gfp, size_t hsize, int align, int node)
 {
 	struct slob_page *sp;
 	struct list_head *prev;
@@ -350,7 +352,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 
 		/* Attempt to alloc */
 		prev = sp->list.prev;
-		b = slob_page_alloc(sp, size, align);
+		b = slob_page_alloc(sp, size, hsize, align);
 		if (!b)
 			continue;
 
@@ -378,7 +380,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 		INIT_LIST_HEAD(&sp->list);
 		set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
 		set_slob_page_free(sp, slob_list);
-		b = slob_page_alloc(sp, size, align);
+		b = slob_page_alloc(sp, size, hsize, align);
 		BUG_ON(!b);
 		spin_unlock_irqrestore(&slob_lock, flags);
 	}
@@ -479,26 +481,28 @@ out:
 void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	unsigned int *m;
-	int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
+	int hsize = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
+	int align;
 	void *ret;
 
 	gfp &= gfp_allowed_mask;
+	align = hsize;
 
 	lockdep_trace_alloc(gfp);
 
-	if (size < PAGE_SIZE - align) {
+	if (size < PAGE_SIZE - hsize) {
 		if (!size)
 			return ZERO_SIZE_PTR;
 
-		m = slob_alloc(size + align, gfp, align, node);
+		m = slob_alloc(size + hsize, gfp, hsize, align, node);
 
 		if (!m)
 			return NULL;
 		*m = size;
-		ret = (void *)m + align;
+		ret = (void *)m + hsize;
 
 		trace_kmalloc_node(_RET_IP_, ret,
-				   size, size + align, gfp, node);
+				   size, size + hsize, gfp, node);
 	} else {
 		unsigned int order = get_order(size);
 
@@ -532,9 +536,9 @@ void kfree(const void *block)
 
 	sp = slob_page(block);
 	if (is_slob_page(sp)) {
-		int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
-		unsigned int *m = (unsigned int *)(block - align);
-		slob_free(m, *m + align);
+		int hsize = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
+		unsigned int *m = (unsigned int *)(block - hsize);
+		slob_free(m, *m + hsize);
 	} else
 		put_page(&sp->page);
 }
@@ -572,7 +576,7 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
 	struct kmem_cache *c;
 
 	c = slob_alloc(sizeof(struct kmem_cache),
-		GFP_KERNEL, ARCH_KMALLOC_MINALIGN, -1);
+		GFP_KERNEL, 0, ARCH_KMALLOC_MINALIGN, -1);
 
 	if (c) {
 		c->name = name;
@@ -615,7 +619,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 	lockdep_trace_alloc(flags);
 
 	if (c->size < PAGE_SIZE) {
-		b = slob_alloc(c->size, flags, c->align, node);
+		b = slob_alloc(c->size, flags, 0, c->align, node);
 		trace_kmem_cache_alloc_node(_RET_IP_, b, c->size,
 					    SLOB_UNITS(c->size) * SLOB_UNIT,
 					    flags, node);
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 5/6] slob: add kmalloc_align()
  2012-03-20 10:21 [RFC PATCH 0/6] add kmalloc_align() Lai Jiangshan
                   ` (3 preceding siblings ...)
  2012-03-20 10:21 ` [RFC PATCH 4/6] slob: don't couple the header size with the alignment Lai Jiangshan
@ 2012-03-20 10:21 ` Lai Jiangshan
  2012-03-20 10:21 ` [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking Lai Jiangshan
  5 siblings, 0 replies; 28+ messages in thread
From: Lai Jiangshan @ 2012-03-20 10:21 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton
  Cc: linux-kernel, linux-mm, Lai Jiangshan

Add __kmalloc_node_align() as the backend for kmalloc_align() on SLOB.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 include/linux/slob_def.h |   14 +++++++++++++-
 mm/slob.c                |    8 ++++----
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/include/linux/slob_def.h b/include/linux/slob_def.h
index 0ec00b3..f2b0fe3 100644
--- a/include/linux/slob_def.h
+++ b/include/linux/slob_def.h
@@ -9,7 +9,13 @@ static __always_inline void *kmem_cache_alloc(struct kmem_cache *cachep,
 	return kmem_cache_alloc_node(cachep, flags, -1);
 }
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node);
+void *__kmalloc_node_align(size_t size, gfp_t gfp, int align, int node);
+
+static __always_inline
+void *__kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	return __kmalloc_node_align(size, flags, 0, node);
+}
 
 static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
@@ -34,4 +40,10 @@ static __always_inline void *__kmalloc(size_t size, gfp_t flags)
 	return kmalloc(size, flags);
 }
 
+static __always_inline
+void *kmalloc_align(size_t size, gfp_t flags, size_t align)
+{
+	return __kmalloc_node_align(ALIGN(size, align), flags, align, -1);
+}
+
 #endif /* __LINUX_SLOB_DEF_H */
diff --git a/mm/slob.c b/mm/slob.c
index 266e518..d46b986 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -478,15 +478,15 @@ out:
  * End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
  */
 
-void *__kmalloc_node(size_t size, gfp_t gfp, int node)
+void *__kmalloc_node_align(size_t size, gfp_t gfp, int align, int node)
 {
 	unsigned int *m;
 	int hsize = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
-	int align;
 	void *ret;
 
 	gfp &= gfp_allowed_mask;
-	align = hsize;
+	if (align < hsize)
+		align = hsize;
 
 	lockdep_trace_alloc(gfp);
 
@@ -522,7 +522,7 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 	kmemleak_alloc(ret, size, 1, gfp);
 	return ret;
 }
-EXPORT_SYMBOL(__kmalloc_node);
+EXPORT_SYMBOL(__kmalloc_node_align);
 
 void kfree(const void *block)
 {
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking
  2012-03-20 10:21 [RFC PATCH 0/6] add kmalloc_align() Lai Jiangshan
                   ` (4 preceding siblings ...)
  2012-03-20 10:21 ` [RFC PATCH 5/6] slob: add kmalloc_align() Lai Jiangshan
@ 2012-03-20 10:21 ` Lai Jiangshan
  2012-03-20 15:15   ` Christoph Lameter
  2012-03-20 15:46   ` Tejun Heo
  5 siblings, 2 replies; 28+ messages in thread
From: Lai Jiangshan @ 2012-03-20 10:21 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton
  Cc: linux-kernel, linux-mm, Lai Jiangshan

kmalloc_align() makes the code simpler.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |   23 +++++------------------
 1 files changed, 5 insertions(+), 18 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5abf42f..beec5fd 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2897,20 +2897,9 @@ static int alloc_cwqs(struct workqueue_struct *wq)
 
 	if (!(wq->flags & WQ_UNBOUND))
 		wq->cpu_wq.pcpu = __alloc_percpu(size, align);
-	else {
-		void *ptr;
-
-		/*
-		 * Allocate enough room to align cwq and put an extra
-		 * pointer at the end pointing back to the originally
-		 * allocated pointer which will be used for free.
-		 */
-		ptr = kzalloc(size + align + sizeof(void *), GFP_KERNEL);
-		if (ptr) {
-			wq->cpu_wq.single = PTR_ALIGN(ptr, align);
-			*(void **)(wq->cpu_wq.single + 1) = ptr;
-		}
-	}
+	else
+		wq->cpu_wq.single = kmalloc_align(size,
+				GFP_KERNEL | __GFP_ZERO, align);
 
 	/* just in case, make sure it's actually aligned */
 	BUG_ON(!IS_ALIGNED(wq->cpu_wq.v, align));
@@ -2921,10 +2910,8 @@ static void free_cwqs(struct workqueue_struct *wq)
 {
 	if (!(wq->flags & WQ_UNBOUND))
 		free_percpu(wq->cpu_wq.pcpu);
-	else if (wq->cpu_wq.single) {
-		/* the pointer to free is stored right after the cwq */
-		kfree(*(void **)(wq->cpu_wq.single + 1));
-	}
+	else if (wq->cpu_wq.single)
+		kfree(wq->cpu_wq.single);
 }
 
 static int wq_clamp_max_active(int max_active, unsigned int flags,
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT()
  2012-03-20 10:21 ` [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT() Lai Jiangshan
@ 2012-03-20 11:32   ` Michal Nazarewicz
  2012-03-20 14:03     ` Alexey Dobriyan
  2012-03-20 14:20     ` Peter Seebach
  0 siblings, 2 replies; 28+ messages in thread
From: Michal Nazarewicz @ 2012-03-20 11:32 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo,
	Andrew Morton, Lai Jiangshan
  Cc: linux-kernel, linux-mm

On Tue, 20 Mar 2012 11:21:19 +0100, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> Get the biggest 2**y that x % (2**y) == 0 for the align value.
>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> ---
>  include/linux/kernel.h |    2 ++
>  1 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/include/linux/kernel.h b/include/linux/kernel.h
> index 5113462..2c439dc 100644
> --- a/include/linux/kernel.h
> +++ b/include/linux/kernel.h
> @@ -44,6 +44,8 @@
>  #define PTR_ALIGN(p, a)		((typeof(p))ALIGN((unsigned long)(p), (a)))
>  #define IS_ALIGNED(x, a)		(((x) & ((typeof(x))(a) - 1)) == 0)
>+#define ALIGN_OF_LAST_BIT(x)	((((x)^((x) - 1))>>1) + 1)

Wouldn't ALIGNMENT() be less confusing? After all, that's what this macro is
calculating, right? The alignment of a given address.

> +
>  #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
> /*


-- 
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +----<email/xmpp: mpn@google.com>--------------ooO--(_)--Ooo--


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT()
  2012-03-20 11:32   ` Michal Nazarewicz
@ 2012-03-20 14:03     ` Alexey Dobriyan
  2012-03-20 14:08       ` Christoph Lameter
  2012-03-20 14:20     ` Peter Seebach
  1 sibling, 1 reply; 28+ messages in thread
From: Alexey Dobriyan @ 2012-03-20 14:03 UTC (permalink / raw)
  To: Michal Nazarewicz
  Cc: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo,
	Andrew Morton, Lai Jiangshan, linux-kernel, linux-mm

On Tue, Mar 20, 2012 at 2:32 PM, Michal Nazarewicz <mina86@mina86.com> wrote:
> On Tue, 20 Mar 2012 11:21:19 +0100, Lai Jiangshan <laijs@cn.fujitsu.com>
> wrote:
>
>> Get the biggest 2**y that x % (2**y) == 0 for the align value.

>> --- a/include/linux/kernel.h
>> +++ b/include/linux/kernel.h
>> @@ -44,6 +44,8 @@
>>  #define PTR_ALIGN(p, a)                ((typeof(p))ALIGN((unsigned
>> long)(p), (a)))
>>  #define IS_ALIGNED(x, a)               (((x) & ((typeof(x))(a) - 1)) ==
>> 0)
>> +#define ALIGN_OF_LAST_BIT(x)   ((((x)^((x) - 1))>>1) + 1)
>
>
> Wouldn't ALIGNMENT() be less confusing? After all, that's what this macro is
> calculating, right? Alignment of given address.

Bits do not have alignment because they aren't directly addressable.
Can you hardcode this sequence with a comment, because it looks too
special for a macro?


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT()
  2012-03-20 14:03     ` Alexey Dobriyan
@ 2012-03-20 14:08       ` Christoph Lameter
  0 siblings, 0 replies; 28+ messages in thread
From: Christoph Lameter @ 2012-03-20 14:08 UTC (permalink / raw)
  To: Alexey Dobriyan
  Cc: Michal Nazarewicz, Pekka Enberg, Matt Mackall, Tejun Heo,
	Andrew Morton, Lai Jiangshan, linux-kernel, linux-mm

[-- Attachment #1: Type: TEXT/PLAIN, Size: 471 bytes --]

On Tue, 20 Mar 2012, Alexey Dobriyan wrote:

> >> +#define ALIGN_OF_LAST_BIT(x)   ((((x)^((x) - 1))>>1) + 1)
> >
> >
> > Wouldn't ALIGNMENT() be less confusing? After all, that's what this macro is
> > calculating, right? Alignment of given address.
>
> Bits do not have alignment because they aren't directly addressable.
> Can you hardcode this sequence with comment, because it looks too
> special for macro.

Some sane naming please. This is confusing.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 2/6] slub: add kmalloc_align()
  2012-03-20 10:21 ` [RFC PATCH 2/6] slub: add kmalloc_align() Lai Jiangshan
@ 2012-03-20 14:14   ` Christoph Lameter
  2012-03-20 14:21     ` Christoph Lameter
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2012-03-20 14:14 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton,
	linux-kernel, linux-mm

On Tue, 20 Mar 2012, Lai Jiangshan wrote:

> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index a32bcfd..67ac6b4 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -280,6 +280,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
>  	return __kmalloc(size, flags);
>  }
>
> +static __always_inline
> +void *kmalloc_align(size_t size, gfp_t flags, size_t align)
> +{
> +	return kmalloc(ALIGN(size, align), flags);
> +}

This assumes that kmalloc allocates aligned memory, which it does only
in special cases (power-of-two cache and debugging off).

>  #ifdef CONFIG_NUMA
>  void *__kmalloc_node(size_t size, gfp_t flags, int node);
>  void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node);
> diff --git a/mm/slub.c b/mm/slub.c
> index 4907563..01cf99d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3238,7 +3238,7 @@ static struct kmem_cache *__init create_kmalloc_cache(const char *name,
>  	 * This function is called with IRQs disabled during early-boot on
>  	 * single CPU so there's no need to take slub_lock here.
>  	 */
> -	if (!kmem_cache_open(s, name, size, ARCH_KMALLOC_MINALIGN,
> +	if (!kmem_cache_open(s, name, size, ALIGN_OF_LAST_BIT(size),
>  								flags, NULL))
>  		goto panic;

Why does the alignment of struct kmem_cache change? I'd rather have a
__alignof__(struct kmem_cache) here with alignment specified with the
struct definition.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT()
  2012-03-20 11:32   ` Michal Nazarewicz
  2012-03-20 14:03     ` Alexey Dobriyan
@ 2012-03-20 14:20     ` Peter Seebach
  1 sibling, 0 replies; 28+ messages in thread
From: Peter Seebach @ 2012-03-20 14:20 UTC (permalink / raw)
  To: Michal Nazarewicz
  Cc: Christoph Lameter, Pekka Enberg, Matt Mackall, Tejun Heo,
	Andrew Morton, Lai Jiangshan, linux-kernel, linux-mm

On Tue, 20 Mar 2012 12:32:14 +0100
Michal Nazarewicz <mina86@mina86.com> wrote:

> >+#define ALIGN_OF_LAST_BIT(x)	((((x)^((x) - 1))>>1) + 1)  
> 
> Wouldn't ALIGNMENT() be less confusing? After all, that's what this
> macro is calculating, right? Alignment of given address.

Why not just LAST_BIT(x)?  It's not particularly specific to pointer
alignment, even though that's the context in which it apparently came
up.  So far as I can tell, this isn't even meaningfully defined on
pointer types as such; you'd have to convert.  So the implications for
alignment seem a convenient side-effect, really.

It might be instructive to see some example proposed uses; the question
of why I'd care what alignment something had, rather than whether it
was aligned for a given type, is one that will doubtless keep me awake
nights.

I guess this feels like it answers a question that is usually the wrong
question.  Imagine if you will a couple-page block of memory, full of
unsigned shorts.  Iterate through the array, calculating
ALIGN_OF_LAST_BIT(&a[i]).  Do we really *care* that it's PAGE_SIZE for
some i, and 2 (I assume) for other i, and PAGE_SIZE*2 for either i==0 or
i==PAGE_SIZE?  (Apologies if this is a silly question; maybe this is
such a commonly-needed feature that it's obvious.)
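
A tiny userspace sketch of that thought experiment, assuming a page-aligned
buffer of unsigned shorts:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ALIGN_OF_LAST_BIT(x)	((((x)^((x) - 1))>>1) + 1)

int main(void)
{
	unsigned short *a = aligned_alloc(4096, 2 * 4096);

	if (!a)
		return 1;
	for (size_t i = 0; i < 8; i++)
		printf("i=%zu align=%lu\n", i,
		       (unsigned long)ALIGN_OF_LAST_BIT((uintptr_t)&a[i]));
	/* prints 4096 (or more) for i == 0, then 2, 4, 2, 8, 2, 4, 2 */
	free(a);
	return 0;
}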

-s
-- 
Listen, get this.  Nobody with a good compiler needs to be justified.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 2/6] slub: add kmalloc_align()
  2012-03-20 14:14   ` Christoph Lameter
@ 2012-03-20 14:21     ` Christoph Lameter
  0 siblings, 0 replies; 28+ messages in thread
From: Christoph Lameter @ 2012-03-20 14:21 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton,
	linux-kernel, linux-mm

On Tue, 20 Mar 2012, Christoph Lameter wrote:

> > diff --git a/mm/slub.c b/mm/slub.c
> > index 4907563..01cf99d 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3238,7 +3238,7 @@ static struct kmem_cache *__init create_kmalloc_cache(const char *name,
> >  	 * This function is called with IRQs disabled during early-boot on
> >  	 * single CPU so there's no need to take slub_lock here.
> >  	 */
> > -	if (!kmem_cache_open(s, name, size, ARCH_KMALLOC_MINALIGN,
> > +	if (!kmem_cache_open(s, name, size, ALIGN_OF_LAST_BIT(size),
> >  								flags, NULL))
> >  		goto panic;
>
> Why does the alignment of struct kmem_cache change? I'd rather have a
> __alignof__(struct kmem_cache) here with alignment specified with the
> struct definition.

Ok, this aligns the data, not the cache. Ok, I see what is going on here.
So the kmalloc array now has a higher alignment. That means you can align
up to that limit within the structure.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking
  2012-03-20 10:21 ` [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking Lai Jiangshan
@ 2012-03-20 15:15   ` Christoph Lameter
  2012-03-20 15:46   ` Tejun Heo
  1 sibling, 0 replies; 28+ messages in thread
From: Christoph Lameter @ 2012-03-20 15:15 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Pekka Enberg, Matt Mackall, Tejun Heo, Andrew Morton,
	linux-kernel, linux-mm

On Tue, 20 Mar 2012, Lai Jiangshan wrote:

> kmalloc_align() makes the code simpler.

Another approach would be to simply create a new slab cache using
kmem_cache_create() with the desired alignment and allocate from
that.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking
  2012-03-20 10:21 ` [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking Lai Jiangshan
  2012-03-20 15:15   ` Christoph Lameter
@ 2012-03-20 15:46   ` Tejun Heo
  2012-03-21  3:02     ` Lai Jiangshan
  1 sibling, 1 reply; 28+ messages in thread
From: Tejun Heo @ 2012-03-20 15:46 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Christoph Lameter, Pekka Enberg, Matt Mackall, Andrew Morton,
	linux-kernel, linux-mm

On Tue, Mar 20, 2012 at 06:21:24PM +0800, Lai Jiangshan wrote:
> kmalloc_align() makes the code simpler.
> 
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> ---
>  kernel/workqueue.c |   23 +++++------------------
>  1 files changed, 5 insertions(+), 18 deletions(-)
> 
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 5abf42f..beec5fd 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -2897,20 +2897,9 @@ static int alloc_cwqs(struct workqueue_struct *wq)
>  
>  	if (!(wq->flags & WQ_UNBOUND))
>  		wq->cpu_wq.pcpu = __alloc_percpu(size, align);
> -	else {
> -		void *ptr;
> -
> -		/*
> -		 * Allocate enough room to align cwq and put an extra
> -		 * pointer at the end pointing back to the originally
> -		 * allocated pointer which will be used for free.
> -		 */
> -		ptr = kzalloc(size + align + sizeof(void *), GFP_KERNEL);
> -		if (ptr) {
> -			wq->cpu_wq.single = PTR_ALIGN(ptr, align);
> -			*(void **)(wq->cpu_wq.single + 1) = ptr;
> -		}
> -	}
> +	else
> +		wq->cpu_wq.single = kmalloc_align(size,
> +				GFP_KERNEL | __GFP_ZERO, align);
>  
>  	/* just in case, make sure it's actually aligned */
>  	BUG_ON(!IS_ALIGNED(wq->cpu_wq.v, align));
> @@ -2921,10 +2910,8 @@ static void free_cwqs(struct workqueue_struct *wq)
>  {
>  	if (!(wq->flags & WQ_UNBOUND))
>  		free_percpu(wq->cpu_wq.pcpu);
> -	else if (wq->cpu_wq.single) {
> -		/* the pointer to free is stored right after the cwq */
> -		kfree(*(void **)(wq->cpu_wq.single + 1));
> -	}
> +	else if (wq->cpu_wq.single)
> +		kfree(wq->cpu_wq.single);

Yes, this is hacky but I don't think building the whole
kmalloc_align() for only this is a good idea.  If the open coded hack
bothers you just write a simplistic wrapper somewhere.  We can make
that better integrated / more efficient when there are multiple users
of the interface, which I kinda doubt would happen.  The reason why
cwq requires a larger alignment is more historic than anything else,
after all.
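
A minimal sketch of such a wrapper, assuming only kzalloc()/kfree() and
PTR_ALIGN(); the helper names are hypothetical, not an existing kernel API:

#include <linux/slab.h>

/* Over-allocate, align the returned pointer, and stash the original
 * allocation just before the aligned block so the free side can find it. */
static void *kzalloc_aligned(size_t size, gfp_t gfp, size_t align)
{
	void *ptr, *aligned;

	ptr = kzalloc(size + align + sizeof(void *), gfp);
	if (!ptr)
		return NULL;

	aligned = PTR_ALIGN(ptr + sizeof(void *), align);
	((void **)aligned)[-1] = ptr;		/* remember what to kfree() */
	return aligned;
}

static void kfree_aligned(void *aligned)
{
	if (aligned)
		kfree(((void **)aligned)[-1]);
}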

Thanks.

-- 
tejun


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking
  2012-03-20 15:46   ` Tejun Heo
@ 2012-03-21  3:02     ` Lai Jiangshan
  2012-03-21  5:14       ` Tejun Heo
  2012-03-21 13:45       ` [RFC PATCH 6/6] workqueue: use kmalloc_align() " Christoph Lameter
  0 siblings, 2 replies; 28+ messages in thread
From: Lai Jiangshan @ 2012-03-21  3:02 UTC (permalink / raw)
  To: Tejun Heo, Pekka Enberg
  Cc: Christoph Lameter, Matt Mackall, Andrew Morton, linux-kernel, linux-mm

On 03/20/2012 11:46 PM, Tejun Heo wrote:
> On Tue, Mar 20, 2012 at 06:21:24PM +0800, Lai Jiangshan wrote:
>> kmalloc_align() makes the code simpler.
>>
>> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
>> ---
>>  kernel/workqueue.c |   23 +++++------------------
>>  1 files changed, 5 insertions(+), 18 deletions(-)
>>
>> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
>> index 5abf42f..beec5fd 100644
>> --- a/kernel/workqueue.c
>> +++ b/kernel/workqueue.c
>> @@ -2897,20 +2897,9 @@ static int alloc_cwqs(struct workqueue_struct *wq)
>>  
>>  	if (!(wq->flags & WQ_UNBOUND))
>>  		wq->cpu_wq.pcpu = __alloc_percpu(size, align);
>> -	else {
>> -		void *ptr;
>> -
>> -		/*
>> -		 * Allocate enough room to align cwq and put an extra
>> -		 * pointer at the end pointing back to the originally
>> -		 * allocated pointer which will be used for free.
>> -		 */
>> -		ptr = kzalloc(size + align + sizeof(void *), GFP_KERNEL);
>> -		if (ptr) {
>> -			wq->cpu_wq.single = PTR_ALIGN(ptr, align);
>> -			*(void **)(wq->cpu_wq.single + 1) = ptr;
>> -		}
>> -	}
>> +	else
>> +		wq->cpu_wq.single = kmalloc_align(size,
>> +				GFP_KERNEL | __GFP_ZERO, align);
>>  
>>  	/* just in case, make sure it's actually aligned */
>>  	BUG_ON(!IS_ALIGNED(wq->cpu_wq.v, align));
>> @@ -2921,10 +2910,8 @@ static void free_cwqs(struct workqueue_struct *wq)
>>  {
>>  	if (!(wq->flags & WQ_UNBOUND))
>>  		free_percpu(wq->cpu_wq.pcpu);
>> -	else if (wq->cpu_wq.single) {
>> -		/* the pointer to free is stored right after the cwq */
>> -		kfree(*(void **)(wq->cpu_wq.single + 1));
>> -	}
>> +	else if (wq->cpu_wq.single)
>> +		kfree(wq->cpu_wq.single);
> 
> Yes, this is hacky but I don't think building the whole
> kmalloc_align() for only this is a good idea.  If the open coded hack
> bothers you just write a simplistic wrapper somewhere.  We can make
> that better integrated / more efficient when there are multiple users
> of the interface, which I kinda doubt would happen.  The reason why
> cwq requiring larger alignment is more historic than anything else
> after all.
> 

Yes, I don't want to build a complex kmalloc_align(). But I found that
SLAB/SLUB's kmalloc objects are naturally/automatically aligned to a
suitably large power of two, so introducing kmalloc_align() requires
doing nothing except taking care of the debugging case.

o	SLAB/SLUB's kmalloc objects are naturally/automatically aligned.
o	~70 LOC in total, and about 90% of that is just renaming or wrapping.

I think it is a worthwhile trade-off: it gives us convenience, and we pay
zero runtime overhead plus ~70 LOC at coding time (paid in a lump sum).

And kmalloc_align() can be used in the following cases:
o	an object of some type needs to be cache-line aligned because it contains
	a frequently updated part and a frequently read part (see the sketch below).
o	The total number of objects of that type is small, so creating a new
	slab cache just for that type would be overkill.
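
For example, a sketch using the kmalloc_align() signature from this series;
struct foo and its fields are hypothetical, just to show the intent:

#include <linux/cache.h>
#include <linux/slab.h>	/* provides kmalloc_align() with this series applied */

/* hot read part and hot write part kept on separate cache lines */
struct foo {
	unsigned long	config;				/* mostly read    */
	unsigned long	counter ____cacheline_aligned;	/* mostly written */
};

static struct foo *alloc_foo(void)
{
	/* the object base is cache-line aligned, so the two parts really
	 * do land in different cache lines */
	return kmalloc_align(sizeof(struct foo),
			     GFP_KERNEL | __GFP_ZERO, L1_CACHE_BYTES);
}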

This is an RFC patch and it seems the mm gurus don't like it. I'm sorry to have bothered all of you.

Thanks,
Lai




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking
  2012-03-21  3:02     ` Lai Jiangshan
@ 2012-03-21  5:14       ` Tejun Heo
  2012-03-21 14:12         ` Patch workqueue: create new slab cache " Christoph Lameter
  2012-03-21 13:45       ` [RFC PATCH 6/6] workqueue: use kmalloc_align() " Christoph Lameter
  1 sibling, 1 reply; 28+ messages in thread
From: Tejun Heo @ 2012-03-21  5:14 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Pekka Enberg, Christoph Lameter, Matt Mackall, Andrew Morton,
	linux-kernel, linux-mm

Hello,

On Tue, Mar 20, 2012 at 8:02 PM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:
> Yes, I don't want to build a complex kmalloc_align(). But after I found
> that SLAB/SLUB's kmalloc-objects are natural/automatic aligned to
> a proper big power of two. I will do nothing if I introduce kmalloc_align()
> except just care the debugging.
>
> o       SLAB/SLUB's kmalloc-objects are natural/automatic aligned.
> o       70LOC in total, and about 90% are just renaming or wrapping.
>
> I think it is a worth trade-off, it give us convenience and we pay
> zero overhead(when runtime) and 70LOC(when coding, pay in a lump sum).
>
> And kmalloc_align() can be used in the following case:
> o       a type object need to be aligned with cache-line for it contains a frequent
>        update-part and a frequent read-part.
> o       The total number of these objects in a given type is not much, creating
>        a new slab cache for a given type will be overkill.
>
> This is a RFC patch and it seems mm gurus don't like it. I'm sorry I bother all of you.

Ooh, don't be sorry. My only concern is that it doesn't have any user
other than cwq allocation. If you can find other cases which can
benefit from it, it would be great.

Thanks.

-- 
tejun


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking
  2012-03-21  3:02     ` Lai Jiangshan
  2012-03-21  5:14       ` Tejun Heo
@ 2012-03-21 13:45       ` Christoph Lameter
  2012-03-26  2:00         ` Lai Jiangshan
  1 sibling, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2012-03-21 13:45 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Tejun Heo, Pekka Enberg, Matt Mackall, Andrew Morton,
	linux-kernel, linux-mm

On Wed, 21 Mar 2012, Lai Jiangshan wrote:

> Yes, I don't want to build a complex kmalloc_align(). But after I found
> that SLAB/SLUB's kmalloc-objects are natural/automatic aligned to
> a proper big power of two. I will do nothing if I introduce kmalloc_align()
> except just care the debugging.

They are not guaranteed to be aligned to the big power of two! There are
kmalloc caches whose size is not a power of two. Debugging and other
necessary metadata may change the alignment in both SLAB and SLUB. SLAB needs
a metadata structure in each page even without debugging, which may cause
alignment issues.

> And kmalloc_align() can be used in the following case:
> o	a type object need to be aligned with cache-line for it contains a frequent
> 	update-part and a frequent read-part.
> o	The total number of these objects in a given type is not much, creating
> 	a new slab cache for a given type will be overkill.
>
> This is a RFC patch and it seems mm gurus don't like it. I'm sorry I
> bother all of you.

Ideas are always welcome. Please do not get offended by our problems with
your patch.



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Patch workqueue: create new slab cache instead of hacking
  2012-03-21  5:14       ` Tejun Heo
@ 2012-03-21 14:12         ` Christoph Lameter
  2012-03-21 14:49           ` Eric Dumazet
  2012-03-21 16:09           ` Tejun Heo
  0 siblings, 2 replies; 28+ messages in thread
From: Christoph Lameter @ 2012-03-21 14:12 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Lai Jiangshan, Pekka Enberg, Matt Mackall, Andrew Morton,
	linux-kernel, linux-mm

How about this instead?

Subject: workqueues: Use new kmem cache to get aligned memory for workqueues

The workqueue logic currently improvises by doing a kmalloc allocation and
then aligning the object. Create a slab cache for that purpose with the
proper alignment instead.

Cleans up the code and makes things much simpler. No need anymore to carry
an additional pointer to the beginning of the kmalloc object.

Signed-off-by: Christoph Lameter <cl@linux.com>


---
 kernel/workqueue.c |   50 +++++++++++++++++++++-----------------------------
 1 file changed, 21 insertions(+), 29 deletions(-)

Index: linux-2.6/kernel/workqueue.c
===================================================================
--- linux-2.6.orig/kernel/workqueue.c	2012-03-21 09:07:07.000000000 -0500
+++ linux-2.6/kernel/workqueue.c	2012-03-21 09:07:24.000000000 -0500
@@ -2884,36 +2884,27 @@ int keventd_up(void)
 	return system_wq != NULL;
 }

+/*
+ * cwqs are forced aligned according to WORK_STRUCT_FLAG_BITS.
+ * Make sure that the alignment isn't lower than that of
+ * unsigned long long.
+ */
+
+#define WQ_ALIGN (max_t(size_t, 1 << WORK_STRUCT_FLAG_BITS, \
+			   __alignof__(unsigned long long)))
+
+struct kmem_cache *wq_slab;
+
 static int alloc_cwqs(struct workqueue_struct *wq)
 {
-	/*
-	 * cwqs are forced aligned according to WORK_STRUCT_FLAG_BITS.
-	 * Make sure that the alignment isn't lower than that of
-	 * unsigned long long.
-	 */
-	const size_t size = sizeof(struct cpu_workqueue_struct);
-	const size_t align = max_t(size_t, 1 << WORK_STRUCT_FLAG_BITS,
-				   __alignof__(unsigned long long));
-
 	if (!(wq->flags & WQ_UNBOUND))
-		wq->cpu_wq.pcpu = __alloc_percpu(size, align);
-	else {
-		void *ptr;
-
-		/*
-		 * Allocate enough room to align cwq and put an extra
-		 * pointer at the end pointing back to the originally
-		 * allocated pointer which will be used for free.
-		 */
-		ptr = kzalloc(size + align + sizeof(void *), GFP_KERNEL);
-		if (ptr) {
-			wq->cpu_wq.single = PTR_ALIGN(ptr, align);
-			*(void **)(wq->cpu_wq.single + 1) = ptr;
-		}
-	}
+		wq->cpu_wq.pcpu = __alloc_percpu(sizeof(struct cpu_workqueue_struct),
+					WQ_ALIGN);
+	else
+		wq->cpu_wq.single = kmem_cache_zalloc(wq_slab, GFP_KERNEL);

 	/* just in case, make sure it's actually aligned */
-	BUG_ON(!IS_ALIGNED(wq->cpu_wq.v, align));
+	BUG_ON(!IS_ALIGNED(wq->cpu_wq.v, WQ_ALIGN));
 	return wq->cpu_wq.v ? 0 : -ENOMEM;
 }

@@ -2921,10 +2912,8 @@ static void free_cwqs(struct workqueue_s
 {
 	if (!(wq->flags & WQ_UNBOUND))
 		free_percpu(wq->cpu_wq.pcpu);
-	else if (wq->cpu_wq.single) {
-		/* the pointer to free is stored right after the cwq */
-		kfree(*(void **)(wq->cpu_wq.single + 1));
-	}
+	else if (wq->cpu_wq.single)
+		kmem_cache_free(wq_slab, wq->cpu_wq.single);
 }

 static int wq_clamp_max_active(int max_active, unsigned int flags,
@@ -3770,6 +3759,9 @@ static int __init init_workqueues(void)
 	unsigned int cpu;
 	int i;

+	wq_slab = kmem_cache_create("workqueue", sizeof(struct cpu_workqueue_struct),
+			WQ_ALIGN, SLAB_PANIC, NULL);
+
 	cpu_notifier(workqueue_cpu_callback, CPU_PRI_WORKQUEUE);

 	/* initialize gcwqs */


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Patch workqueue: create new slab cache instead of hacking
  2012-03-21 14:12         ` Patch workqueue: create new slab cache " Christoph Lameter
@ 2012-03-21 14:49           ` Eric Dumazet
  2012-03-21 15:03             ` Christoph Lameter
  2012-03-21 16:09           ` Tejun Heo
  1 sibling, 1 reply; 28+ messages in thread
From: Eric Dumazet @ 2012-03-21 14:49 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, Lai Jiangshan, Pekka Enberg, Matt Mackall,
	Andrew Morton, linux-kernel, linux-mm

On Wed, 2012-03-21 at 09:12 -0500, Christoph Lameter wrote:
> How about this instead?
> 
> Subject: workqueues: Use new kmem cache to get aligned memory for workqueues
> 
> The workqueue logic currently improvises by doing a kmalloc allocation and
> then aligning the object. Create a slab cache for that purpose with the
> proper alignment instead.
> 
> Cleans up the code and makes things much simpler. No need anymore to carry
> an additional pointer to the beginning of the kmalloc object.
> 
> Signed-off-by: Christoph Lameter <cl@linux.com>

Creating a dedicated cache for a few objects? That's a lot of overhead, at
least for SLAB (no merging of caches).

By the way, the network stack also wants to align struct net_device (in
alloc_netdev_mqs()), and uses custom code for it.

In this case, as the size of net_device is not constant, we use standard
kzalloc().

No idea why NETDEV_ALIGN is 32 ... Oh well, some old constant instead of
L1_CACHE_BYTES ...






^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Patch workqueue: create new slab cache instead of hacking
  2012-03-21 14:49           ` Eric Dumazet
@ 2012-03-21 15:03             ` Christoph Lameter
  2012-03-21 16:04               ` Eric Dumazet
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2012-03-21 15:03 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Tejun Heo, Lai Jiangshan, Pekka Enberg, Matt Mackall,
	Andrew Morton, linux-kernel, linux-mm

On Wed, 21 Mar 2012, Eric Dumazet wrote:

> Creating a dedicated cache for few objects ? Thats a lot of overhead, at
> least for SLAB (no merges of caches)

It's some overhead for SLAB (how much is "a lot"? If you tune down the per-cpu
caches it should be a couple of pages), but it's none for SLUB. Maybe we
need to add the merge logic to SLAB?

Or maybe we can extract a common higher-level kmem_cache handling layer
from all slab allocators and make merging standard.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Patch workqueue: create new slab cache instead of hacking
  2012-03-21 15:03             ` Christoph Lameter
@ 2012-03-21 16:04               ` Eric Dumazet
  2012-03-21 17:54                 ` Christoph Lameter
  0 siblings, 1 reply; 28+ messages in thread
From: Eric Dumazet @ 2012-03-21 16:04 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, Lai Jiangshan, Pekka Enberg, Matt Mackall,
	Andrew Morton, linux-kernel, linux-mm

On Wed, 2012-03-21 at 10:03 -0500, Christoph Lameter wrote:
> On Wed, 21 Mar 2012, Eric Dumazet wrote:
> 
> > Creating a dedicated cache for few objects ? Thats a lot of overhead, at
> > least for SLAB (no merges of caches)
> 
> Its some overhead for SLAB (a lot is what? If you tune down the per cpu
> caches it should be a couple of pages) but its none for SLUB.

SLAB overhead per cache is O(CPUS * nr_node_ids) (unless alien caches
are disabled).

For a few in-flight objects, it's just better to use the standard kmalloc-xxxx
caches.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Patch workqueue: create new slab cache instead of hacking
  2012-03-21 14:12         ` Patch workqueue: create new slab cache " Christoph Lameter
  2012-03-21 14:49           ` Eric Dumazet
@ 2012-03-21 16:09           ` Tejun Heo
  2012-03-21 17:56             ` Christoph Lameter
  1 sibling, 1 reply; 28+ messages in thread
From: Tejun Heo @ 2012-03-21 16:09 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Lai Jiangshan, Pekka Enberg, Matt Mackall, Andrew Morton,
	linux-kernel, linux-mm

On Wed, Mar 21, 2012 at 09:12:04AM -0500, Christoph Lameter wrote:
> How about this instead?
> 
> Subject: workqueues: Use new kmem cache to get aligned memory for workqueues
> 
> The workqueue logic currently improvises by doing a kmalloc allocation and
> then aligning the object. Create a slab cache for that purpose with the
> proper alignment instead.
> 
> Cleans up the code and makes things much simpler. No need anymore to carry
> an additional pointer to the beginning of the kmalloc object.
> 
> Signed-off-by: Christoph Lameter <cl@linux.com>

I don't know.  At this point, this is only for singlethread and
unbound workqueues and we don't have too many of them left at this
point.  I'd like to avoid creating a slab cache for this.  How about
just leaving it be?  If we develop other use cases for larger
alignments, let's worry about implementing something common then.

Thanks.

-- 
tejun


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Patch workqueue: create new slab cache instead of hacking
  2012-03-21 16:04               ` Eric Dumazet
@ 2012-03-21 17:54                 ` Christoph Lameter
  2012-03-21 18:05                   ` Eric Dumazet
  0 siblings, 1 reply; 28+ messages in thread
From: Christoph Lameter @ 2012-03-21 17:54 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Tejun Heo, Lai Jiangshan, Pekka Enberg, Matt Mackall,
	Andrew Morton, linux-kernel, linux-mm

On Wed, 21 Mar 2012, Eric Dumazet wrote:

> On Wed, 2012-03-21 at 10:03 -0500, Christoph Lameter wrote:
> > On Wed, 21 Mar 2012, Eric Dumazet wrote:
> >
> > > Creating a dedicated cache for few objects ? Thats a lot of overhead, at
> > > least for SLAB (no merges of caches)
> >
> > Its some overhead for SLAB (a lot is what? If you tune down the per cpu
> > caches it should be a couple of pages) but its none for SLUB.
>
> SLAB overhead per cache is O(CPUS * nr_node_ids)  (unless alien caches
> are disabled)

nr_node_ids==2 in the standard case these days. Alien caches are minimal.

> For few in flight objects, its just better to use standard kmalloc-xxxx
> caches.

It's easier to use a custom slab cache. It avoids hackery like what we have in
workqueue.c.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Patch workqueue: create new slab cache instead of hacking
  2012-03-21 16:09           ` Tejun Heo
@ 2012-03-21 17:56             ` Christoph Lameter
  0 siblings, 0 replies; 28+ messages in thread
From: Christoph Lameter @ 2012-03-21 17:56 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Lai Jiangshan, Pekka Enberg, Matt Mackall, Andrew Morton,
	linux-kernel, linux-mm

On Wed, 21 Mar 2012, Tejun Heo wrote:

> I don't know.  At this point, this is only for singlethread and
> unbound workqueues and we don't have too many of them left at this
> point.  I'd like to avoid creating a slab cache for this.  How about
> just leaving it be?  If we develop other use cases for larger
> alignments, let's worry about implementing something common then.

We could write a function that identifies a compatible kmalloc cache
or creates a new one if necessary. That would cut down the overhead, similar to
what slub merging does, but allow more control by the developer.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Patch workqueue: create new slab cache instead of hacking
  2012-03-21 17:54                 ` Christoph Lameter
@ 2012-03-21 18:05                   ` Eric Dumazet
  2012-03-21 18:20                     ` Christoph Lameter
  0 siblings, 1 reply; 28+ messages in thread
From: Eric Dumazet @ 2012-03-21 18:05 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, Lai Jiangshan, Pekka Enberg, Matt Mackall,
	Andrew Morton, linux-kernel, linux-mm

On Wed, 2012-03-21 at 12:54 -0500, Christoph Lameter wrote:
> On Wed, 21 Mar 2012, Eric Dumazet wrote:
> 
> > On Wed, 2012-03-21 at 10:03 -0500, Christoph Lameter wrote:
> > > On Wed, 21 Mar 2012, Eric Dumazet wrote:
> > >
> > > > Creating a dedicated cache for few objects ? Thats a lot of overhead, at
> > > > least for SLAB (no merges of caches)
> > >
> > > Its some overhead for SLAB (a lot is what? If you tune down the per cpu
> > > caches it should be a couple of pages) but its none for SLUB.
> >
> > SLAB overhead per cache is O(CPUS * nr_node_ids)  (unless alien caches
> > are disabled)
> 
> nr_node_ids==2 in the standard case these days. Alien caches are minimal.


That's not true. Some machines use lots of nodes (fake nodes) for various
reasons.

And they can't disable alien caches for performance reasons.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Patch workqueue: create new slab cache instead of hacking
  2012-03-21 18:05                   ` Eric Dumazet
@ 2012-03-21 18:20                     ` Christoph Lameter
  0 siblings, 0 replies; 28+ messages in thread
From: Christoph Lameter @ 2012-03-21 18:20 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Tejun Heo, Lai Jiangshan, Pekka Enberg, Matt Mackall,
	Andrew Morton, linux-kernel, linux-mm

On Wed, 21 Mar 2012, Eric Dumazet wrote:

> On Wed, 2012-03-21 at 12:54 -0500, Christoph Lameter wrote:
> > On Wed, 21 Mar 2012, Eric Dumazet wrote:
> >
> > > On Wed, 2012-03-21 at 10:03 -0500, Christoph Lameter wrote:
> > > > On Wed, 21 Mar 2012, Eric Dumazet wrote:
> > > >
> > > > > Creating a dedicated cache for few objects ? Thats a lot of overhead, at
> > > > > least for SLAB (no merges of caches)
> > > >
> > > > Its some overhead for SLAB (a lot is what? If you tune down the per cpu
> > > > caches it should be a couple of pages) but its none for SLUB.
> > >
> > > SLAB overhead per cache is O(CPUS * nr_node_ids)  (unless alien caches
> > > are disabled)
> >
> > nr_node_ids==2 in the standard case these days. Alien caches are minimal.
>
>
> Thats not true. Some machines use lots of nodes (fake nodes) for various
> reasons.

Which is not a typical use case.

> And they cant disable alien caches for performance reasons.

Ok, then let's genericize the slub merge logic in some form so that it works for
all slab allocators.



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking
  2012-03-21 13:45       ` [RFC PATCH 6/6] workqueue: use kmalloc_align() " Christoph Lameter
@ 2012-03-26  2:00         ` Lai Jiangshan
  0 siblings, 0 replies; 28+ messages in thread
From: Lai Jiangshan @ 2012-03-26  2:00 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, Pekka Enberg, Matt Mackall, Andrew Morton,
	linux-kernel, linux-mm

On 03/21/2012 09:45 PM, Christoph Lameter wrote:
> On Wed, 21 Mar 2012, Lai Jiangshan wrote:
> 
>> Yes, I don't want to build a complex kmalloc_align(). But after I found
>> that SLAB/SLUB's kmalloc-objects are natural/automatic aligned to
>> a proper big power of two. I will do nothing if I introduce kmalloc_align()
>> except just care the debugging.
> 
> They are not guaranteed to be aligned to the big power of two! There are
> kmalloc caches that are not power of two. Debugging and other
> necessary meta data may change alignment in both SLAB and SLUB. SLAB needs
> a metadata structure in each page even without debugging that may cause
> alignment issues.

"Debugging and other necessary meta data" are handled in special way in my patches.
(You have already checked the patches.)

Normally, as I said, SLAB/SLUB's kmalloc objects are naturally/automatically aligned,
and my patches do not touch the general cases. Sorry for my unclear earlier reply.


> 
>> And kmalloc_align() can be used in the following case:
>> o	a type object need to be aligned with cache-line for it contains a frequent
>> 	update-part and a frequent read-part.
>> o	The total number of these objects in a given type is not much, creating
>> 	a new slab cache for a given type will be overkill.
>>
>> This is a RFC patch and it seems mm gurus don't like it. I'm sorry I
>> bother all of you.
> 
> Ideas are always welcome. Please do not get offended by our problems with
> your patch.
> 
> 
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

Thread overview: 28+ messages
2012-03-20 10:21 [RFC PATCH 0/6] add kmalloc_align() Lai Jiangshan
2012-03-20 10:21 ` [RFC PATCH 1/6] kernel.h: add ALIGN_OF_LAST_BIT() Lai Jiangshan
2012-03-20 11:32   ` Michal Nazarewicz
2012-03-20 14:03     ` Alexey Dobriyan
2012-03-20 14:08       ` Christoph Lameter
2012-03-20 14:20     ` Peter Seebach
2012-03-20 10:21 ` [RFC PATCH 2/6] slub: add kmalloc_align() Lai Jiangshan
2012-03-20 14:14   ` Christoph Lameter
2012-03-20 14:21     ` Christoph Lameter
2012-03-20 10:21 ` [RFC PATCH 3/6] slab: " Lai Jiangshan
2012-03-20 10:21 ` [RFC PATCH 4/6] slob: don't couple the header size with the alignment Lai Jiangshan
2012-03-20 10:21 ` [RFC PATCH 5/6] slob: add kmalloc_align() Lai Jiangshan
2012-03-20 10:21 ` [RFC PATCH 6/6] workqueue: use kmalloc_align() instead of hacking Lai Jiangshan
2012-03-20 15:15   ` Christoph Lameter
2012-03-20 15:46   ` Tejun Heo
2012-03-21  3:02     ` Lai Jiangshan
2012-03-21  5:14       ` Tejun Heo
2012-03-21 14:12         ` Patch workqueue: create new slab cache " Christoph Lameter
2012-03-21 14:49           ` Eric Dumazet
2012-03-21 15:03             ` Christoph Lameter
2012-03-21 16:04               ` Eric Dumazet
2012-03-21 17:54                 ` Christoph Lameter
2012-03-21 18:05                   ` Eric Dumazet
2012-03-21 18:20                     ` Christoph Lameter
2012-03-21 16:09           ` Tejun Heo
2012-03-21 17:56             ` Christoph Lameter
2012-03-21 13:45       ` [RFC PATCH 6/6] workqueue: use kmalloc_align() " Christoph Lameter
2012-03-26  2:00         ` Lai Jiangshan
