* [PATCH] x86: correct internode cache alignment
@ 2012-03-03 11:27 Alex Shi
2012-03-05 8:39 ` [tip:x86/asm] x86/numa: Improve " tip-bot for Alex Shi
From: Alex Shi @ 2012-03-03 11:27 UTC (permalink / raw)
To: mingo; +Cc: tglx, hpa, linux-kernel, x86, asit.k.mallick
Currently, cache alignment among nodes in the kernel is still 128 bytes
on NUMA machines - a default inherited from old P4 processors. But most
modern CPUs use the same line size, 64 bytes, from L1 to the last level
L3 cache, so let's remove the incorrect setting and directly use the L1
cache size for SMP cache line alignment.
This patch saves some memory space in kernel data. The System.map is
quite different with/without this change:
        before patch                 |       after patch
...
000000000000b000 d tlb_vector_| 000000000000b000 d tlb_vector
000000000000b080 d cpu_loops_p| 000000000000b040 d cpu_loops_
...
Signed-off-by: Alex Shi <alex.shi@intel.com>
---
arch/x86/Kconfig.cpu | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index 3c57033..6443c6f 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -303,7 +303,6 @@ config X86_GENERIC
config X86_INTERNODE_CACHE_SHIFT
int
default "12" if X86_VSMP
- default "7" if NUMA
default X86_L1_CACHE_SHIFT
config X86_CMPXCHG
--
1.6.3.3
* [tip:x86/asm] x86/numa: Improve internode cache alignment
2012-03-03 11:27 [PATCH] x86: correct internode cache alignment Alex Shi
@ 2012-03-05 8:39 ` tip-bot for Alex Shi
From: tip-bot for Alex Shi @ 2012-03-05 8:39 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, alex.shi, tglx, mingo
Commit-ID: 901b04450a0ff44d579158b8b0492ce7e66cd442
Gitweb: http://git.kernel.org/tip/901b04450a0ff44d579158b8b0492ce7e66cd442
Author: Alex Shi <alex.shi@intel.com>
AuthorDate: Sat, 3 Mar 2012 19:27:27 +0800
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Mon, 5 Mar 2012 09:19:20 +0100
x86/numa: Improve internode cache alignment
Currently cache alignment among nodes in the kernel is still 128
bytes on x86 NUMA machines - we got that X86_INTERNODE_CACHE_SHIFT
default from old P4 processors.
But now most modern x86 CPUs use the same size: 64 bytes from L1 to
the last level L3 cache. So let's remove the incorrect setting and
directly use the L1 cache size for SMP cache line alignment.
This patch saves some memory space on kernel data, and it also
improves the cache locality of kernel data.
The System.map is quite different with/without this change:
        before patch                 |       after patch
...
000000000000b000 d tlb_vector_| 000000000000b000 d tlb_vector
000000000000b080 d cpu_loops_p| 000000000000b040 d cpu_loops_
...
Signed-off-by: Alex Shi <alex.shi@intel.com>
Cc: asit.k.mallick@intel.com
Link: http://lkml.kernel.org/r/1330774047-18597-1-git-send-email-alex.shi@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
arch/x86/Kconfig.cpu | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index 3c57033..6443c6f 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -303,7 +303,6 @@ config X86_GENERIC
config X86_INTERNODE_CACHE_SHIFT
int
default "12" if X86_VSMP
- default "7" if NUMA
default X86_L1_CACHE_SHIFT
config X86_CMPXCHG