From: Nadav Amit <namit@vmware.com>
Date: 2018-01-31 21:00 UTC
Subject: [PATCH v2] x86: Align TLB invalidation info
To: x86
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, linux-kernel,
	Peter Zijlstra, Nadav Amit, Andy Lutomirski, Dave Hansen

The TLB invalidation info is allocated on the stack, which might cause
it to be unaligned. Since this information is read by other cores
during TLB shootdown, misalignment might result in additional
cache-line bouncing between the cores.
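
As a rough illustration, consider this minimal userspace sketch (the
struct below is a made-up stand-in, not the real struct flush_tlb_info
layout, and 64 is an assumed line size):

#include <stdio.h>
#include <stdint.h>

#define CACHE_LINE 64	/* assumed x86 line size (SMP_CACHE_BYTES in-kernel) */

struct flush_info {	/* illustrative stand-in for struct flush_tlb_info */
	void *mm;
	unsigned long start;
	unsigned long end;
	unsigned long long new_tlb_gen;
};

int main(void)
{
	/*
	 * A stack slot only gets the ABI alignment (16 bytes on
	 * x86-64), so it may start anywhere within a cache line and
	 * spill into the next one.
	 */
	struct flush_info info;
	uintptr_t addr = (uintptr_t)&info;
	unsigned long lines = (addr + sizeof(info) - 1) / CACHE_LINE
			      - addr / CACHE_LINE + 1;

	printf("info spans %lu cache line(s)\n", lines);
	return 0;
}

Depending on where the call frame happens to land, the same structure
occupies either one cache line or two.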

We do not use __cacheline_aligned since, in addition to the alignment,
it also specifies a data section, which is inappropriate for stack
variables.
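
For reference, the two annotations differ roughly as follows
(paraphrased from include/linux/cache.h and the compiler-attribute
headers; exact definitions vary by tree and configuration):

/* Alignment only -- safe for automatic (stack) variables: */
#define __aligned(x)	__attribute__((__aligned__(x)))

/* Alignment plus a section; the compiler rejects a section
 * attribute on automatic variables, so it cannot be used here: */
#define __cacheline_aligned					\
	__attribute__((__aligned__(SMP_CACHE_BYTES),		\
		       __section__(".data..cacheline_aligned")))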

Signed-off-by: Nadav Amit <namit@vmware.com>

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>

--
v1 -> v2: use __aligned instead of all the mess (Andy)
---
 arch/x86/mm/tlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5bfe61a5e8e3..9690112e3a82 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -576,7 +576,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 {
 	int cpu;
 
-	struct flush_tlb_info info = {
+	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
 		.mm = mm,
 	};
 
-- 
2.14.1


From: Andy Lutomirski <luto@kernel.org>
Date: 2018-01-31 21:03 UTC
Subject: Re: [PATCH v2] x86: Align TLB invalidation info
To: Nadav Amit
Cc: X86 ML, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, LKML,
	Peter Zijlstra, Nadav Amit, Andy Lutomirski, Dave Hansen

On Wed, Jan 31, 2018 at 1:00 PM, Nadav Amit <namit@vmware.com> wrote:
> The TLB invalidation info is allocated on the stack, which might cause
> it to be unaligned. Since this information is read by other cores
> during TLB shootdown, misalignment might result in additional
> cache-line bouncing between the cores.
>
> We do not use __cacheline_aligned since, in addition to the alignment,
> it also specifies a data section, which is inappropriate for stack
> variables.
>
> Signed-off-by: Nadav Amit <namit@vmware.com>
>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>

Acked-by: Andy Lutomirski <luto@kernel.org>

This is basically free and adds no mess, so I think it's probably okay
even in the absence of evidence that it's a huge win.

But Dave is right: the commit message needs updating.  The patch will
reduce, from 2 to 1, the number of cache lines that become shared and
then get exclusively owned by the originator.  This isn't really
"bouncing".
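
For concreteness, a quick back-of-the-envelope check of that
two-to-one count (assuming 64-byte lines, a 40-byte structure, and
8-byte stack alignment; all three numbers are illustrative):

#include <stdio.h>

#define LINE 64

int main(void)
{
	unsigned int size = 40, off, straddle = 0;

	/*
	 * Count the 8-byte-aligned start offsets within a line for
	 * which the structure crosses into a second cache line.
	 */
	for (off = 0; off < LINE; off += 8)
		if ((off + size - 1) / LINE != off / LINE)
			straddle++;

	printf("%u of %u offsets span two lines\n", straddle, LINE / 8);
	return 0;
}

With __aligned(SMP_CACHE_BYTES) the start offset is always zero, so
remote CPUs pull in exactly one shared line.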

