* [PATCH] x86: reorder mm_context_t to remove x86_64 alignment padding & so shrink mm_struct
@ 2011-05-24 13:49 Richard Kennedy
  2011-05-25 21:34 ` [tip:x86/urgent] x86: Reorder mm_context_t to remove x86_64 alignment padding and thus " tip-bot for Richard Kennedy
  0 siblings, 1 reply; 2+ messages in thread
From: Richard Kennedy @ 2011-05-24 13:49 UTC (permalink / raw)
  To: Ingo Molnar, Thomas Gleixner; +Cc: lkml, the arch/x86 maintainers, wilsons

Reorder mm_context_t to remove alignment padding on 64-bit builds,
shrinking its size from 64 to 56 bytes.

This allows mm_struct to shrink from 840 to 832 bytes, so it uses one
fewer cache line and packs more objects per slab when using slub.

slabinfo mm_struct reports
before :-
    
    Sizes (bytes)     Slabs
    -----------------------------------
    Object :     840  Total  :       7
    SlabObj:     896  Full   :       1
    SlabSiz:   16384  Partial:       4
    Loss   :      56  CpuSlab:       2
    Align  :      64  Objects:      18
    
after :-

    Sizes (bytes)     Slabs
    ----------------------------------
    Object :     832  Total  :       7
    SlabObj:     832  Full   :       1
    SlabSiz:   16384  Partial:       4
    Loss   :       0  CpuSlab:       2
    Align  :      64  Objects:      19
    
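The object counts follow from the 64-byte slab alignment: an 840-byte
object is padded up to 896 bytes, hence Loss = 896 - 840 = 56 and
16384 / 896 = 18 objects per slab; 832 is already a multiple of 64, so
there is no padding and 16384 / 832 = 19 objects fit.
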
Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>

---
patch against v2.6.39
compiled & tested on x86_64.
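As an aside, the saving can be reproduced with a small userspace sketch
(field names borrowed from mm_context_t; toy_mutex is only a stand-in
for struct mutex, whose real size is config dependent, but only its
8-byte alignment matters for the layout shown):

/*
 * Minimal userspace sketch (not kernel code) of why the reorder saves
 * a full 8-byte slot on x86_64.
 */
#include <stdio.h>

struct toy_mutex { long a; long b; };	/* 8-byte aligned stand-in */

struct ctx_before {			/* old field order */
	void *ldt;			/* offset  0, size 8 */
	int size;			/* offset  8, size 4 -> 4 bytes of padding */
	struct toy_mutex lock;		/* offset 16 */
	void *vdso;			/* offset 32 */
	unsigned short ia32_compat;	/* offset 40, size 2 -> 6 bytes of tail padding */
};

struct ctx_after {			/* new field order */
	void *ldt;			/* offset  0, size 8 */
	int size;			/* offset  8, size 4 */
	unsigned short ia32_compat;	/* offset 12, size 2 -> only 2 bytes of padding */
	struct toy_mutex lock;		/* offset 16 */
	void *vdso;			/* offset 32 */
};

int main(void)
{
	printf("before: %zu bytes\n", sizeof(struct ctx_before));	/* 48 on x86_64 */
	printf("after:  %zu bytes\n", sizeof(struct ctx_after));	/* 40 on x86_64 */
	return 0;
}

On a real build the hole can also be inspected with pahole, e.g.
pahole -C mm_context_t vmlinux, provided debug info is enabled.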

regards
Richard



diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index aeff3e8..5f55e69 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -11,14 +11,14 @@
 typedef struct {
 	void *ldt;
 	int size;
-	struct mutex lock;
-	void *vdso;
 
 #ifdef CONFIG_X86_64
 	/* True if mm supports a task running in 32 bit compatibility mode. */
 	unsigned short ia32_compat;
 #endif
 
+	struct mutex lock;
+	void *vdso;
 } mm_context_t;
 
 #ifdef CONFIG_SMP




* [tip:x86/urgent] x86: Reorder mm_context_t to remove x86_64 alignment padding and thus shrink mm_struct
  2011-05-24 13:49 [PATCH] x86: reorder mm_context_t to remove x86_64 alignment padding & so shrink mm_struct Richard Kennedy
@ 2011-05-25 21:34 ` tip-bot for Richard Kennedy
  0 siblings, 0 replies; 2+ messages in thread
From: tip-bot for Richard Kennedy @ 2011-05-25 21:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, torvalds, penberg, richard, akpm, tglx, mingo

Commit-ID:  af6a25f0e1ec0265c267e6ee4513925eaba6d0ed
Gitweb:     http://git.kernel.org/tip/af6a25f0e1ec0265c267e6ee4513925eaba6d0ed
Author:     Richard Kennedy <richard@rsk.demon.co.uk>
AuthorDate: Tue, 24 May 2011 14:49:59 +0100
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Wed, 25 May 2011 16:16:41 +0200

x86: Reorder mm_context_t to remove x86_64 alignment padding and thus shrink mm_struct

Reorder mm_context_t to remove alignment padding on 64-bit
builds, shrinking its size from 64 to 56 bytes.

This allows mm_struct to shrink from 840 to 832 bytes, so it
uses one fewer cache line and packs more objects per slab when
using slub.

slabinfo mm_struct reports
before :-

    Sizes (bytes)     Slabs
    -----------------------------------
    Object :     840  Total  :       7
    SlabObj:     896  Full   :       1
    SlabSiz:   16384  Partial:       4
    Loss   :      56  CpuSlab:       2
    Align  :      64  Objects:      18

after :-

    Sizes (bytes)     Slabs
    ----------------------------------
    Object :     832  Total  :       7
    SlabObj:     832  Full   :       1
    SlabSiz:   16384  Partial:       4
    Loss   :       0  CpuSlab:       2
    Align  :      64  Objects:      19

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Cc: wilsons@start.ca
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Link: http://lkml.kernel.org/r/1306244999.1999.5.camel@castor.rsk
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 arch/x86/include/asm/mmu.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index aeff3e8..5f55e69 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -11,14 +11,14 @@
 typedef struct {
 	void *ldt;
 	int size;
-	struct mutex lock;
-	void *vdso;
 
 #ifdef CONFIG_X86_64
 	/* True if mm supports a task running in 32 bit compatibility mode. */
 	unsigned short ia32_compat;
 #endif
 
+	struct mutex lock;
+	void *vdso;
 } mm_context_t;
 
 #ifdef CONFIG_SMP

