* Re: [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
@ 2004-12-05 21:30 Manfred Spraul
  2004-12-05 22:20 ` Paul Mundt
  0 siblings, 1 reply; 8+ messages in thread
From: Manfred Spraul @ 2004-12-05 21:30 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Linux Kernel Mailing List

Hi Paul,

>--- orig/include/asm-sh64/uaccess.h
>+++ mod/include/asm-sh64/uaccess.h
>@@ -313,6 +313,12 @@
>    sh64 at the moment). */
> #define ARCH_KMALLOC_MINALIGN 8
> 
>+/*
>+ * We want 8-byte alignment for the slab caches as well, otherwise we have
>+ * the same BYTES_PER_WORD (sizeof(void *)) min align in kmem_cache_create().
>+ */
>+#define ARCH_SLAB_MINALIGN 8
>+
>  
>
Could you make that dependent on !CONFIG_DEBUG_SLAB? Setting align to a
non-zero value disables some debug code.
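
Something along those lines, as a rough sketch (untested, and whether to
leave ARCH_SLAB_MINALIGN undefined or pick a different value when slab
debugging is enabled is your call):

	/* Only force 8-byte slab alignment when slab debugging is off,
	 * so that the align-sensitive debug checks stay usable; with the
	 * define absent, mm/slab.c falls back to BYTES_PER_WORD.
	 */
	#ifndef CONFIG_DEBUG_SLAB
	#define ARCH_SLAB_MINALIGN 8
	#endif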

The rest is fine with me.

--
    Manfred

* [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3
@ 2004-12-05 18:25 Paul Mundt
  0 siblings, 0 replies; 8+ messages in thread
From: Paul Mundt @ 2004-12-05 18:25 UTC (permalink / raw)
  To: akpm, anton, richard.curnow; +Cc: linux-kernel


Some time ago Anton introduced a patch that removed cacheline alignment
for the slab caches, falling back on BYTES_PER_WORD instead. While this
is fine in general, on sh64 it is the source of a considerable number of
unaligned accesses.

For sh64, sizeof(void *) gives 4 bytes, whereas we actually want 8-byte
alignment (pretty much the same behaviour we had prior to Anton's patch,
and what we already do for ARCH_KMALLOC_MINALIGN).
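
To make the failure mode concrete, here is a purely illustrative struct
(hypothetical, not actual kernel code): the slab only guarantees
sizeof(void *) == 4 byte alignment for the object itself, so a 64-bit
field inside it can land on an address that is not a multiple of 8.

	/* Illustrative only: with just 4-byte alignment from the
	 * allocator, the 64-bit field below can end up misaligned in
	 * memory, and every load/store of it then takes an
	 * unaligned-access fixup on sh64.
	 */
	struct example_obj {
		unsigned int count;		/* 4 bytes */
		unsigned long long timestamp;	/* 8 bytes, wants 8-byte alignment */
	};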

Richard was the first to note this:

	One new issue is that there are a lot of new unaligned fixups
	occurring.  I know where too - it's loads and stores to 8-byte fields
	in inodes.  The root cause is the patch by Anton Blanchard : "remove
	cacheline alignment from inode slabs".  I think before that forcing
	the inodes to cacheline-alignment guaranteed 8-byte alignment, but now
	that's been removed, we only get sizeof(void *) alignment.  The
	problem is that pretty much every call to kmem_cache_create, except
	for the ones that create the kmalloc pool, specifies zero as the 3rd
	arg (=align).  (The ones that create the kmalloc pools specify
	ARCH_KMALLOC_MINALIGN which I fixed a while back to 8 for sh64.)
	Ideally we're going to have to come up with a fix for this one, since
	the performance overhead of fixing up loads of inode accesses will be
	pretty high.  It's not obvious to me how to do this unobtrusively - we
	need to either modify kmem_cache_create or modify every file that
	calls it (and import the ARCH_KMALLOC_MINALIGN stuff into each one.)
	I suspect that the KMALLOC alignment wants to be kept conceptually
	separate from the alignment used to create slabs.  So perhaps we could
	propose a new ARCH_SLAB_MINALIGN or some such; if this is defined, the
	maximum of this and the 'align' argument to kmem_cache_create is used
	as the alignment for the slab created.

We have been using the attached ARCH_SLAB_MINALIGN patch for sh64 and this
seems like the least intrusive solution. Thoughts?
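
The effect on callers is that nothing has to change: a typical cache
creation that passes 0 for align (illustrative call below, using the
hypothetical struct from the sketch above, not taken from the kernel
source) now gets ARCH_SLAB_MINALIGN rather than BYTES_PER_WORD as its
minimum alignment on sh64.

	/* Illustrative only: align == 0, as almost every caller passes.
	 * With the patch, the fallback in kmem_cache_create() becomes
	 * ARCH_SLAB_MINALIGN (8 on sh64) instead of BYTES_PER_WORD (4),
	 * so objects in this cache are naturally aligned for 8-byte fields.
	 */
	cachep = kmem_cache_create("example_cache",
				   sizeof(struct example_obj),
				   0,		/* align */
				   0,		/* flags */
				   NULL, NULL);	/* ctor, dtor */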

Signed-off-by: Paul Mundt <paul.mundt@nokia.com>

 include/asm-sh64/uaccess.h |    6 ++++++
 mm/slab.c                  |    6 +++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

--- orig/include/asm-sh64/uaccess.h
+++ mod/include/asm-sh64/uaccess.h
@@ -313,6 +313,12 @@
    sh64 at the moment). */
 #define ARCH_KMALLOC_MINALIGN 8
 
+/*
+ * We want 8-byte alignment for the slab caches as well, otherwise we have
+ * the same BYTES_PER_WORD (sizeof(void *)) min align in kmem_cache_create().
+ */
+#define ARCH_SLAB_MINALIGN 8
+
 /* Returns 0 if exception not found and fixup.unit otherwise.  */
 extern unsigned long search_exception_table(unsigned long addr);
 extern const struct exception_table_entry *search_exception_tables (unsigned long addr);


--- orig/mm/slab.c
+++ mod/mm/slab.c
@@ -135,6 +135,10 @@
 #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
 #endif
 
+#ifndef ARCH_SLAB_MINALIGN
+#define ARCH_SLAB_MINALIGN	BYTES_PER_WORD
+#endif
+
 /* Legal flag mask for kmem_cache_create(). */
 #if DEBUG
 # define CREATE_MASK	(SLAB_DEBUG_INITIAL | SLAB_RED_ZONE | \
@@ -1237,7 +1241,7 @@
 			while (size <= align/2)
 				align /= 2;
 		} else {
-			align = BYTES_PER_WORD;
+			align = ARCH_SLAB_MINALIGN;
 		}
 	}
 




Thread overview: 8+ messages
2004-12-05 21:30 [PATCH] ARCH_SLAB_MINALIGN for 2.6.10-rc3 Manfred Spraul
2004-12-05 22:20 ` Paul Mundt
2004-12-06 22:15   ` Manfred Spraul
2004-12-06 22:59     ` Paul Mundt
2004-12-12 10:48       ` Manfred Spraul
2004-12-12 15:09         ` Paul Mundt
2004-12-13 21:18           ` Manfred Spraul
2004-12-05 18:25 Paul Mundt
