linux-kernel.vger.kernel.org archive mirror
* Re: [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
@ 2004-12-05 21:30 Manfred Spraul
  2004-12-05 22:20 ` Paul Mundt
  0 siblings, 1 reply; 8+ messages in thread
From: Manfred Spraul @ 2004-12-05 21:30 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Linux Kernel Mailing List

Hi Paul,

>--- orig/include/asm-sh64/uaccess.h
>+++ mod/include/asm-sh64/uaccess.h
>@@ -313,6 +313,12 @@
>    sh64 at the moment). */
> #define ARCH_KMALLOC_MINALIGN 8
> 
>+/*
>+ * We want 8-byte alignment for the slab caches as well, otherwise we have
>+ * the same BYTES_PER_WORD (sizeof(void *)) min align in kmem_cache_create().
>+ */
>+#define ARCH_SLAB_MINALIGN 8
>+
>  
>
Could you make that dependent on !CONFIG_DEBUG_SLAB? Setting align to a
non-zero value disables some debug code.
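
For reference, this is roughly what kmem_cache_create() does with align in
2.6.10-rc3 (a paraphrased sketch of mm/slab.c, comments mine, not a
verbatim quote):

	if (align) {
		/* A caller-forced alignment disables redzoning and user
		 * tracking; that combination is not implemented yet. */
		flags &= ~(SLAB_RED_ZONE|SLAB_STORE_USER);
	} else {
		if (flags & SLAB_HWCACHE_ALIGN) {
			/* default: cache line alignment, shrunk for
			 * really small objects */
			align = cache_line_size();
			while (size <= align/2)
				align /= 2;
		} else {
			align = BYTES_PER_WORD;
		}
	}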

The rest is fine with me.

--
    Manfred


* Re: [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
  2004-12-05 21:30 [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3 Manfred Spraul
@ 2004-12-05 22:20 ` Paul Mundt
  2004-12-06 22:15   ` Manfred Spraul
  0 siblings, 1 reply; 8+ messages in thread
From: Paul Mundt @ 2004-12-05 22:20 UTC (permalink / raw)
  To: Manfred Spraul; +Cc: Linux Kernel Mailing List

Hi Manfred,

On Sun, Dec 05, 2004 at 10:30:46PM +0100, Manfred Spraul wrote:
> >--- orig/include/asm-sh64/uaccess.h
> >+++ mod/include/asm-sh64/uaccess.h
> >@@ -313,6 +313,12 @@
> >   sh64 at the moment). */
> >#define ARCH_KMALLOC_MINALIGN 8
> >
> >+/*
> >+ * We want 8-byte alignment for the slab caches as well, otherwise we have
> >+ * the same BYTES_PER_WORD (sizeof(void *)) min align in 
> >kmem_cache_create().
> >+ */
> >+#define ARCH_SLAB_MINALIGN 8
> >+
> > 
> >
> Could you make that dependent on !CONFIG_DEBUG_SLAB? Setting align to a
> non-zero value disables some debug code.
> 
align is only set to ARCH_SLAB_MINALIGN in kmem_cache_create(), where it
otherwise defaults to BYTES_PER_WORD. Unless I am missing something,
align will always end up non-zero regardless of whether
ARCH_SLAB_MINALIGN is defined.

Are you suggesting that ARCH_SLAB_MINALIGN be set to 0 in the
CONFIG_DEBUG_SLAB case? If so, the check should be in mm/slab.c rather
than in the arch-specific code (as any other platform wanting a fixed
minimum slab alignment would otherwise have to duplicate the same check).
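
Something along these lines in mm/slab.c is what I have in mind (untested,
purely illustrative):

	#if defined(ARCH_SLAB_MINALIGN) && defined(CONFIG_DEBUG_SLAB)
	/* A forced alignment disables some of the slab debug code, so
	 * fall back to the word-sized default when debugging is on. */
	#undef ARCH_SLAB_MINALIGN
	#endif

	#ifndef ARCH_SLAB_MINALIGN
	#define ARCH_SLAB_MINALIGN	BYTES_PER_WORD
	#endif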


* Re: [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
  2004-12-05 22:20 ` Paul Mundt
@ 2004-12-06 22:15   ` Manfred Spraul
  2004-12-06 22:59     ` Paul Mundt
  0 siblings, 1 reply; 8+ messages in thread
From: Manfred Spraul @ 2004-12-06 22:15 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Linux Kernel Mailing List

Paul Mundt wrote:

>Hi Manfred,
>
>On Sun, Dec 05, 2004 at 10:30:46PM +0100, Manfred Spraul wrote:
>  
>
>>>--- orig/include/asm-sh64/uaccess.h
>>>+++ mod/include/asm-sh64/uaccess.h
>>>@@ -313,6 +313,12 @@
>>>  sh64 at the moment). */
>>>#define ARCH_KMALLOC_MINALIGN 8
>>>
>>>+/*
>>>+ * We want 8-byte alignment for the slab caches as well, otherwise we have
>>>+ * the same BYTES_PER_WORD (sizeof(void *)) min align in 
>>>kmem_cache_create().
>>>+ */
>>>+#define ARCH_SLAB_MINALIGN 8
>>>+
>>>
>>>
>>>      
>>>
>>Could you make that dependent on !CONFIG_DEBUG_SLAB? Setting align to a 
>>non-zero value disables some debug code.
>>
>>    
>>
>align is only set to ARCH_SLAB_MINALIGN in kmem_cache_create(), where it
>otherwise defaults to BYTES_PER_WORD. Unless I am missing something,
>align will always end up non-zero regardless of whether
>ARCH_SLAB_MINALIGN is defined.
>
>  
>
No, you are right; I didn't read the source carefully enough.
Now that I have reread it, I see one problem:
ARCH_KMALLOC_MINALIGN is a hard limit: it is always honored, the only
exception being that values larger than the kmalloc block size are
ignored. I.e. a MINALIGN of 32 guarantees that the objects are 32-byte
aligned, since the smallest block size is 32 bytes. The define was added
because some archs really need a certain alignment, otherwise they won't
boot. The normal alignment for the kmalloc caches is cache line
alignment, except with CONFIG_DEBUG_SLAB, where it is word alignment.

With your patch, ARCH_SLAB_MINALIGN is not a hard limit: A few lines 
further down align is reset to word size if SLAB_RED_ZONE is set. I 
don't like the asymmetry - it just asks for trouble.
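
To spell out the asymmetry (fragments paraphrased from 2.6.10-rc3
mm/slab.c, comments mine, not verbatim):

	/* ARCH_KMALLOC_MINALIGN is passed as the explicit align argument
	 * when the kmalloc caches are created, so it always wins (at the
	 * cost of the redzone/user-store debug features): */
	kmem_cache_create(name, size, ARCH_KMALLOC_MINALIGN,
			  ARCH_KMALLOC_FLAGS, NULL, NULL);

	/* ARCH_SLAB_MINALIGN in your patch only replaces the default used
	 * when the caller passes align == 0; with debugging enabled the
	 * redzone setup later overrides it again: */
	if (flags & SLAB_RED_ZONE) {
		/* redzoning only works with word aligned caches */
		align = BYTES_PER_WORD;
	}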

I must think about it. Perhaps just rename ARCH_SLAB_MINALIGN to 
ARCH_SLAB_DEFAULTALIGN.

--
    Manfred


* Re: [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
  2004-12-06 22:15   ` Manfred Spraul
@ 2004-12-06 22:59     ` Paul Mundt
  2004-12-12 10:48       ` Manfred Spraul
  0 siblings, 1 reply; 8+ messages in thread
From: Paul Mundt @ 2004-12-06 22:59 UTC (permalink / raw)
  To: Manfred Spraul; +Cc: Linux Kernel Mailing List

Hi Manfred,

On Mon, Dec 06, 2004 at 11:15:20PM +0100, Manfred Spraul wrote:
> With your patch, ARCH_SLAB_MINALIGN is not a hard limit: A few lines 
> further down align is reset to word size if SLAB_RED_ZONE is set. I 
> don't like the asymmetry - it just asks for trouble.
> 
Yes, that's true. I don't see much of a point in leaving it as
BYTES_PER_WORD in the SLAB_RED_ZONE case; at least, that wasn't
intentional. Would you accept ARCH_SLAB_MINALIGN if it were applied
regardless of whether SLAB_RED_ZONE is set or not?

I suppose we can live with ARCH_SLAB_MINALIGN being 0 in the
CONFIG_DEBUG_SLAB case, as the unaligned accesses are not fatal...
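
Concretely, I mean something like this in the debug path (a hypothetical
sketch, with ARCH_SLAB_MINALIGN defaulting to BYTES_PER_WORD as in my
patch):

	if (flags & SLAB_RED_ZONE) {
		/* keep the arch-forced minimum instead of dropping back
		 * to word alignment unconditionally */
		align = ARCH_SLAB_MINALIGN;

		/* add space for red zone words */
		cachep->dbghead += BYTES_PER_WORD;
		size += 2*BYTES_PER_WORD;
	}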


* Re: [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
  2004-12-06 22:59     ` Paul Mundt
@ 2004-12-12 10:48       ` Manfred Spraul
  2004-12-12 15:09         ` Paul Mundt
  0 siblings, 1 reply; 8+ messages in thread
From: Manfred Spraul @ 2004-12-12 10:48 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Linux Kernel Mailing List

Hi Paul,

Sorry for the late reply; attached is my proposal:
I've added the ARCH_SLAB_MINALIGN flag, together with some documentation
and a small restructuring.
What do you think? It's just the mm/slab.c change; you would have to add

#ifndef CONFIG_DEBUG_SLAB
#define ARCH_SLAB_MINALIGN   8
#endif

into your sh64 header files. ARCH_SLAB_MINALIGN subsumes
ARCH_KMALLOC_MINALIGN, so you do not have to set that flag as well. It
doesn't hurt, though.
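
The effective alignment then comes out as follows (condensed from the
attached patch; the small-object cacheline shrinking and the debug-flag
handling are omitted here):

	ralign = (flags & SLAB_HWCACHE_ALIGN) ? cache_line_size()
					      : BYTES_PER_WORD;
	if (ralign < ARCH_SLAB_MINALIGN)	/* arch mandated */
		ralign = ARCH_SLAB_MINALIGN;
	if (ralign < align)			/* caller mandated */
		ralign = align;
	align = ralign;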

Not really tested - it boots on x86, but that probably doesn't count.

--
    Manfred

[-- Attachment #2: patch-slab-forcealign --]
[-- Type: text/plain, Size: 3661 bytes --]

--- 2.6/mm/slab.c	2004-12-05 16:22:55.000000000 +0100
+++ build-2.6/mm/slab.c	2004-12-12 11:42:31.000000000 +0100
@@ -128,9 +128,28 @@
 #endif
 
 #ifndef ARCH_KMALLOC_MINALIGN
+/*
+ * Enforce a minimum alignment for the kmalloc caches.
+ * Usually, the kmalloc caches are cache_line_size() aligned, except when
+ * DEBUG and FORCED_DEBUG are enabled, then they are BYTES_PER_WORD aligned.
+ * Some archs want to perform DMA into kmalloc caches and need a guaranteed
+ * alignment larger than BYTES_PER_WORD. ARCH_KMALLOC_MINALIGN allows that.
+ * Note that this flag disables some debug features.
+ */
 #define ARCH_KMALLOC_MINALIGN 0
 #endif
 
+#ifndef ARCH_SLAB_MINALIGN
+/*
+ * Enforce a minimum alignment for all caches.
+ * Intended for archs that get misalignment faults even for BYTES_PER_WORD
+ * aligned buffers.
+ * If possible: Do not enable this flag for CONFIG_DEBUG_SLAB, it disables
+ * some debug features.
+ */
+#define ARCH_SLAB_MINALIGN 0
+#endif
+
 #ifndef ARCH_KMALLOC_FLAGS
 #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
 #endif
@@ -1172,7 +1191,7 @@
 	unsigned long flags, void (*ctor)(void*, kmem_cache_t *, unsigned long),
 	void (*dtor)(void*, kmem_cache_t *, unsigned long))
 {
-	size_t left_over, slab_size;
+	size_t left_over, slab_size, ralign;
 	kmem_cache_t *cachep = NULL;
 
 	/*
@@ -1222,24 +1241,44 @@
 	if (flags & ~CREATE_MASK)
 		BUG();
 
-	if (align) {
-		/* combinations of forced alignment and advanced debugging is
-		 * not yet implemented.
+	/* Check that size is in terms of words.  This is needed to avoid
+	 * unaligned accesses for some archs when redzoning is used, and makes
+	 * sure any on-slab bufctl's are also correctly aligned.
+	 */
+	if (size & (BYTES_PER_WORD-1)) {
+		size += (BYTES_PER_WORD-1);
+		size &= ~(BYTES_PER_WORD-1);
+	}
+	
+	/* calculate out the final buffer alignment: */
+	/* 1) arch recommendation: can be overridden for debug */
+	if (flags & SLAB_HWCACHE_ALIGN) {
+		/* Default alignment: as specified by the arch code.
+		 * Except if an object is really small, then squeeze multiple
+		 * objects into one cacheline.
 		 */
-		flags &= ~(SLAB_RED_ZONE|SLAB_STORE_USER);
+		ralign = cache_line_size();
+		while (size <= ralign/2)
+			ralign /= 2;
 	} else {
-		if (flags & SLAB_HWCACHE_ALIGN) {
-			/* Default alignment: as specified by the arch code.
-			 * Except if an object is really small, then squeeze multiple
-			 * into one cacheline.
-			 */
-			align = cache_line_size();
-			while (size <= align/2)
-				align /= 2;
-		} else {
-			align = BYTES_PER_WORD;
-		}
+		ralign = BYTES_PER_WORD;
+	}
+	/* 2) arch mandated alignment: disables debug if necessary */
+	if (ralign < ARCH_SLAB_MINALIGN) {
+		ralign = ARCH_SLAB_MINALIGN;
+		if (ralign > BYTES_PER_WORD)
+			flags &= ~(SLAB_RED_ZONE|SLAB_STORE_USER);
+	}
+	/* 3) caller mandated alignment: disables debug if necessary */
+	if (ralign < align) {
+		ralign = align;
+		if (ralign > BYTES_PER_WORD)
+			flags &= ~(SLAB_RED_ZONE|SLAB_STORE_USER);
 	}
+	/* 4) Store it. Note that the debug code below can reduce
+	 *    the alignment to BYTES_PER_WORD.
+	 */
+	align = ralign;
 
 	/* Get cache's description obj. */
 	cachep = (kmem_cache_t *) kmem_cache_alloc(&cache_cache, SLAB_KERNEL);
@@ -1247,15 +1286,6 @@
 		goto opps;
 	memset(cachep, 0, sizeof(kmem_cache_t));
 
-	/* Check that size is in terms of words.  This is needed to avoid
-	 * unaligned accesses for some archs when redzoning is used, and makes
-	 * sure any on-slab bufctl's are also correctly aligned.
-	 */
-	if (size & (BYTES_PER_WORD-1)) {
-		size += (BYTES_PER_WORD-1);
-		size &= ~(BYTES_PER_WORD-1);
-	}
-	
 #if DEBUG
 	cachep->reallen = size;
 


* Re: [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
  2004-12-12 10:48       ` Manfred Spraul
@ 2004-12-12 15:09         ` Paul Mundt
  2004-12-13 21:18           ` Manfred Spraul
  0 siblings, 1 reply; 8+ messages in thread
From: Paul Mundt @ 2004-12-12 15:09 UTC (permalink / raw)
  To: Manfred Spraul; +Cc: Linux Kernel Mailing List

Hi Manfred,

On Sun, Dec 12, 2004 at 11:48:02AM +0100, Manfred Spraul wrote:
> Sorry for the late reply; attached is my proposal:
> I've added the ARCH_SLAB_MINALIGN flag, together with some documentation
> and a small restructuring.
> What do you think?
> 
Looks fine to me; I just tested it on sh64 and it works OK.

> #ifndef CONFIG_DEBUG_SLAB
> #define ARCH_SLAB_MINALIGN   8
> #endif
> 
Right now ARCH_KMALLOC_MINALIGN is set unconditionally on sh64, so it
seems that we lose some debug features there. However, even with
ARCH_SLAB_MINALIGN being set to non-zero, redzoning and slab poisoning
still seem to be functional to some extent.

For instance, I've been using the following patch and this did help pin
down a rather irritating bug in the sh64 switch_to().

Is there any reason not to tie redzoning to ARCH_SLAB_MINALIGN (and set
it to BYTES_PER_WORD by default)? It seems that the only thing
BYTES_PER_WORD is needed for in the redzoning case is to determine where
to place the second marker. I can see why this would be problematic with
dynamic slab alignment, but when the alignment is fixed at compile time
there shouldn't be anything prohibiting the use of a non-BYTES_PER_WORD
value.

We can live with the unaligned accesses in the CONFIG_DEBUG_SLAB case,
but it would still be nice to at least have some partial redzoning and
poisoning with a forced alignment.

--- linux-2.6.10-rc3/mm/slab.c	2004-12-05 19:05:39.000000000 +0200
+++ linux-sh64-2.6.10-rc3/mm/slab.c	2004-12-08 16:56:25.000000000 +0200
@@ -135,6 +135,10 @@
 #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
 #endif
 
+#ifndef ARCH_SLAB_MINALIGN
+#define ARCH_SLAB_MINALIGN	BYTES_PER_WORD
+#endif
+
 /* Legal flag mask for kmem_cache_create(). */
 #if DEBUG
 # define CREATE_MASK	(SLAB_DEBUG_INITIAL | SLAB_RED_ZONE | \
@@ -404,14 +408,16 @@
 
 /* memory layout of objects:
  * 0		: objp
- * 0 .. cachep->dbghead - BYTES_PER_WORD - 1: padding. This ensures that
- * 		the end of an object is aligned with the end of the real
- * 		allocation. Catches writes behind the end of the allocation.
- * cachep->dbghead - BYTES_PER_WORD .. cachep->dbghead - 1:
- * 		redzone word.
+ * 0 .. cachep->dbghead - ARCH_SLAB_MINALIGN - 1: padding. This ensures that
+ *		the end of an object is aligned with the end of the real
+ *		allocation. Catches writes behind the end of the allocation.
+ * cachep->dbghead - ARCH_SLAB_MINALIGN .. cachep->dbghead - 1:
+ *		redzone word.
  * cachep->dbghead: The real object.
- * cachep->objsize - 2* BYTES_PER_WORD: redzone word [BYTES_PER_WORD long]
- * cachep->objsize - 1* BYTES_PER_WORD: last caller address [BYTES_PER_WORD long]
+ * cachep->objsize - 2* ARCH_SLAB_MINALIGN:
+ *		redzone word [ARCH_SLAB_MINALIGN long]
+ * cachep->objsize - 1* ARCH_SLAB_MINALIGN:
+ *		last caller address [ARCH_SLAB_MINALIGN long]
  */
 static int obj_dbghead(kmem_cache_t *cachep)
 {
@@ -426,21 +432,21 @@
 static unsigned long *dbg_redzone1(kmem_cache_t *cachep, void *objp)
 {
 	BUG_ON(!(cachep->flags & SLAB_RED_ZONE));
-	return (unsigned long*) (objp+obj_dbghead(cachep)-BYTES_PER_WORD);
+	return (unsigned long*) (objp+obj_dbghead(cachep)-ARCH_SLAB_MINALIGN);
 }
 
 static unsigned long *dbg_redzone2(kmem_cache_t *cachep, void *objp)
 {
 	BUG_ON(!(cachep->flags & SLAB_RED_ZONE));
 	if (cachep->flags & SLAB_STORE_USER)
-		return (unsigned long*) (objp+cachep->objsize-2*BYTES_PER_WORD);
-	return (unsigned long*) (objp+cachep->objsize-BYTES_PER_WORD);
+		return (unsigned long*) (objp+cachep->objsize-2*ARCH_SLAB_MINALIGN);
+	return (unsigned long*) (objp+cachep->objsize-ARCH_SLAB_MINALIGN);
 }
 
 static void **dbg_userword(kmem_cache_t *cachep, void *objp)
 {
 	BUG_ON(!(cachep->flags & SLAB_STORE_USER));
-	return (void**)(objp+cachep->objsize-BYTES_PER_WORD);
+	return (void**)(objp+cachep->objsize-ARCH_SLAB_MINALIGN);
 }
 
 #else
@@ -1204,7 +1210,7 @@
 	 * above the next power of two: caches with object sizes just above a
 	 * power of two have a significant amount of internal fragmentation.
 	 */
-	if ((size < 4096 || fls(size-1) == fls(size-1+3*BYTES_PER_WORD)))
+	if ((size < 4096 || fls(size-1) == fls(size-1+3*ARCH_SLAB_MINALIGN)))
 		flags |= SLAB_RED_ZONE|SLAB_STORE_USER;
 	if (!(flags & SLAB_DESTROY_BY_RCU))
 		flags |= SLAB_POISON;
@@ -1237,7 +1243,7 @@
 			while (size <= align/2)
 				align /= 2;
 		} else {
-			align = BYTES_PER_WORD;
+			align = ARCH_SLAB_MINALIGN;
 		}
 	}
 
@@ -1255,25 +1261,25 @@
 		size += (BYTES_PER_WORD-1);
 		size &= ~(BYTES_PER_WORD-1);
 	}
-	
+
 #if DEBUG
 	cachep->reallen = size;
 
 	if (flags & SLAB_RED_ZONE) {
 		/* redzoning only works with word aligned caches */
-		align = BYTES_PER_WORD;
+		align = ARCH_SLAB_MINALIGN;
 
 		/* add space for red zone words */
-		cachep->dbghead += BYTES_PER_WORD;
-		size += 2*BYTES_PER_WORD;
+		cachep->dbghead += ARCH_SLAB_MINALIGN;
+		size += 2*ARCH_SLAB_MINALIGN;
 	}
 	if (flags & SLAB_STORE_USER) {
 		/* user store requires word alignment and
 		 * one word storage behind the end of the real
 		 * object.
 		 */
-		align = BYTES_PER_WORD;
-		size += BYTES_PER_WORD;
+		align = ARCH_SLAB_MINALIGN;
+		size += ARCH_SLAB_MINALIGN;
 	}
 #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
 	if (size > 128 && cachep->reallen > cache_line_size() && size < PAGE_SIZE) {
@@ -2292,7 +2298,7 @@
 {
 	unsigned long addr = (unsigned long) ptr;
 	unsigned long min_addr = PAGE_OFFSET;
-	unsigned long align_mask = BYTES_PER_WORD-1;
+	unsigned long align_mask = ARCH_SLAB_MINALIGN-1;
 	unsigned long size = cachep->objsize;
 	struct page *page;
 


* Re: [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
  2004-12-12 15:09         ` Paul Mundt
@ 2004-12-13 21:18           ` Manfred Spraul
  0 siblings, 0 replies; 8+ messages in thread
From: Manfred Spraul @ 2004-12-13 21:18 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Linux Kernel Mailing List

Paul Mundt wrote:

>For instance, I've been using the following patch and this did help pin
>down a rather irritating bug in the sh64 switch_to().
>
>  
>
[snip - useful patch]

I agree with your patch, but I think the change is independent, so it
should remain a separate patch. I've just sent my patch to Andrew for
merging; could you send yours to Andrew after my change has been merged?

--
    Manfred


* [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
@ 2004-12-05 18:25 Paul Mundt
  0 siblings, 0 replies; 8+ messages in thread
From: Paul Mundt @ 2004-12-05 18:25 UTC (permalink / raw)
  To: akpm, anton, richard.curnow; +Cc: linux-kernel

Some time ago Anton introduced a patch that removed cacheline alignment
for the slab caches, falling back on BYTES_PER_WORD instead. While this
is fine in the general sense, on sh64 it is the source of a considerable
number of unaligned accesses.

For sh64, sizeof(void *) gives 4 bytes, whereas we actually want 8-byte
alignment (pretty much the same behaviour as we had prior to Anton's
patch, and the same as we already do for ARCH_KMALLOC_MINALIGN).
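
To illustrate the problem (a hypothetical example; the struct, cache and
helper below are made up and not the actual inode layout):

	struct example {
		void	*ptr;		/* 4 bytes on sh64 */
		u64	stamp;		/* wants 8-byte alignment */
	};

	/* With only sizeof(void *) == 4 guaranteed by the cache, an object
	 * may start on an address that is 4- but not 8-byte aligned, so
	 * the load of ->stamp then traps and has to be fixed up by the
	 * unaligned access handler. */
	struct example *e = kmem_cache_alloc(example_cachep, GFP_KERNEL);
	use_stamp(e->stamp);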

Richard was the first to note this:

	One new issue is that there are a lot of new unaligned fixups
	occurring.  I know where too - it's loads and stores to 8-byte fields
	in inodes.  The root cause is the patch by Anton Blanchard : "remove
	cacheline alignment from inode slabs".  I think before that forcing
	the inodes to cacheline-alignment guaranteed 8-byte alignment, but now
	that's been removed, we only get sizeof(void *) alignment.  The
	problem is that pretty much every call to kmem_cache_create, except
	for the ones that create the kmalloc pool, specifies zero as the 3rd
	arg (=align).  (The ones that create the kmalloc pools specify
	ARCH_KMALLOC_MINALIGN which I fixed a while back to 8 for sh64.)
	Ideally we're going to have to come up with a fix for this one, since
	the performance overhead of fixing up loads of inode accesses will be
	pretty high.  It's not obvious to me how to do this unobtrusively - we
	need to either modify kmem_cache_create or modify every file that
	calls it (and import the ARCH_KMALLOC_MINALIGN stuff into each one.)
	I suspect that the KMALLOC alignment wants to be kept conceptually
	separate from the alignment used to create slabs.  So perhaps we could
	propose a new ARCH_SLAB_MINALIGN or some such; if this is defined, the
	maximum of this and the 'align' argument to kmem_cache_create is used
	as the alignment for the slab created.

We have been using the attached ARCH_SLAB_MINALIGN patch for sh64 and this
seems like the least intrusive solution. Thoughts?

Signed-off-by: Paul Mundt <paul.mundt@nokia.com>

 include/asm-sh64/uaccess.h |    6 ++++++
 mm/slab.c                  |    6 +++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

--- orig/include/asm-sh64/uaccess.h
+++ mod/include/asm-sh64/uaccess.h
@@ -313,6 +313,12 @@
    sh64 at the moment). */
 #define ARCH_KMALLOC_MINALIGN 8
 
+/*
+ * We want 8-byte alignment for the slab caches as well, otherwise we have
+ * the same BYTES_PER_WORD (sizeof(void *)) min align in kmem_cache_create().
+ */
+#define ARCH_SLAB_MINALIGN 8
+
 /* Returns 0 if exception not found and fixup.unit otherwise.  */
 extern unsigned long search_exception_table(unsigned long addr);
 extern const struct exception_table_entry *search_exception_tables (unsigned long addr);


--- orig/mm/slab.c
+++ mod/mm/slab.c
@@ -135,6 +135,10 @@
 #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
 #endif
 
+#ifndef ARCH_SLAB_MINALIGN
+#define ARCH_SLAB_MINALIGN	BYTES_PER_WORD
+#endif
+
 /* Legal flag mask for kmem_cache_create(). */
 #if DEBUG
 # define CREATE_MASK	(SLAB_DEBUG_INITIAL | SLAB_RED_ZONE | \
@@ -1237,7 +1241,7 @@
 			while (size <= align/2)
 				align /= 2;
 		} else {
-			align = BYTES_PER_WORD;
+			align = ARCH_SLAB_MINALIGN;
 		}
 	}
 



Thread overview: 8+ messages
2004-12-05 21:30 [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3 Manfred Spraul
2004-12-05 22:20 ` Paul Mundt
2004-12-06 22:15   ` Manfred Spraul
2004-12-06 22:59     ` Paul Mundt
2004-12-12 10:48       ` Manfred Spraul
2004-12-12 15:09         ` Paul Mundt
2004-12-13 21:18           ` Manfred Spraul
2004-12-05 18:25 Paul Mundt
