* [PATCH] x86/tlb: Revert: Align TLB invalidation info
@ 2019-04-16  8:03 Peter Zijlstra
  2019-04-16  8:13 ` [tip:x86/urgent] x86/mm/tlb: Revert "x86/mm: Align TLB invalidation info" tip-bot for Peter Zijlstra
  2019-04-16 17:45 ` [PATCH] x86/tlb: Revert: Align TLB invalidation info Linus Torvalds
  0 siblings, 2 replies; 5+ messages in thread
From: Peter Zijlstra @ 2019-04-16  8:03 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov
  Cc: Linus Torvalds, linux-kernel, Andy Lutomirski, Nadav Amit, Dave Hansen


It was found that under some .config options (notably L1_CACHE_SHIFT=7)
and compiler combinations this on-stack alignment leads to 320 bytes of
stack usage, which then triggers a KASAN stack warning elsewhere.

Using 320 bytes of stack space for a 40-byte structure is ludicrous and
clearly not right.
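
(Illustration, not part of the original submission: with L1_CACHE_SHIFT=7,
SMP_CACHE_BYTES is 128. Forcing that alignment on a local variable does not
change its sizeof, but it does force the compiler to realign the stack
frame, reserving up to size + alignment - 1 bytes before KASAN redzones are
added on top. A minimal userspace sketch of the arithmetic, with the field
layout approximated:)

    #include <stdio.h>
    #include <stdbool.h>

    #define SMP_CACHE_BYTES 128	/* L1_CACHE_SHIFT == 7 */

    /* Approximate stand-in for the kernel's struct flush_tlb_info. */
    struct flush_tlb_info {
            void *mm;
            unsigned long start, end, new_tlb_gen;
            unsigned int stride_shift;
            bool freed_tables;
    };

    int main(void)
    {
            /* 37 bytes of members, padded to 40 on x86-64. */
            printf("size: %zu\n", sizeof(struct flush_tlb_info));
            /*
             * Declared __aligned(SMP_CACHE_BYTES) on the stack, the
             * worst-case reservation is already 40 + 127 bytes just
             * for the realignment; sanitizer redzones plausibly
             * account for the rest of the observed 320.
             */
            return 0;
    }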

Fixes: 515ab7c41306 ("x86/mm: Align TLB invalidation info")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
Index: linux-2.6/arch/x86/mm/tlb.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/tlb.c
+++ linux-2.6/arch/x86/mm/tlb.c
@@ -728,7 +728,7 @@ void flush_tlb_mm_range(struct mm_struct
 {
 	int cpu;
 
-	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
+	struct flush_tlb_info info = {
 		.mm = mm,
 		.stride_shift = stride_shift,
 		.freed_tables = freed_tables,


* [tip:x86/urgent] x86/mm/tlb: Revert "x86/mm: Align TLB invalidation info"
  2019-04-16  8:03 [PATCH] x86/tlb: Revert: Align TLB invalidation info Peter Zijlstra
@ 2019-04-16  8:13 ` tip-bot for Peter Zijlstra
  2019-04-16 17:45 ` [PATCH] x86/tlb: Revert: Align TLB invalidation info Linus Torvalds
  1 sibling, 0 replies; 5+ messages in thread
From: tip-bot for Peter Zijlstra @ 2019-04-16  8:13 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, peterz, luto, mingo, dave.hansen, linux-kernel, hpa,
	torvalds, namit, bp

Commit-ID:  780e0106d468a2962b16b52fdf42898f2639e0a0
Gitweb:     https://git.kernel.org/tip/780e0106d468a2962b16b52fdf42898f2639e0a0
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Tue, 16 Apr 2019 10:03:35 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 16 Apr 2019 10:10:13 +0200

x86/mm/tlb: Revert "x86/mm: Align TLB invalidation info"

Revert the following commit:

  515ab7c41306: ("x86/mm: Align TLB invalidation info")

I found out (the hard way) that under some .config options (notably
L1_CACHE_SHIFT=7) and compiler combinations this on-stack alignment
leads to 320 bytes of stack usage, which then triggers a KASAN stack
warning elsewhere.

Using 320 bytes of stack space for a 40-byte structure is ludicrous and
clearly not right.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Nadav Amit <namit@vmware.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 515ab7c41306 ("x86/mm: Align TLB invalidation info")
Link: http://lkml.kernel.org/r/20190416080335.GM7905@worktop.programming.kicks-ass.net
[ Minor changelog edits. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/mm/tlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index bc4bc7b2f075..487b8474c01c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -728,7 +728,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 {
 	int cpu;
 
-	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
+	struct flush_tlb_info info = {
 		.mm = mm,
 		.stride_shift = stride_shift,
 		.freed_tables = freed_tables,


* Re: [PATCH] x86/tlb: Revert: Align TLB invalidation info
  2019-04-16  8:03 [PATCH] x86/tlb: Revert: Align TLB invalidation info Peter Zijlstra
  2019-04-16  8:13 ` [tip:x86/urgent] x86/mm/tlb: Revert "x86/mm: Align TLB invalidation info" tip-bot for Peter Zijlstra
@ 2019-04-16 17:45 ` Linus Torvalds
  2019-04-16 18:28   ` Peter Zijlstra
  1 sibling, 1 reply; 5+ messages in thread
From: Linus Torvalds @ 2019-04-16 17:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Linux List Kernel Mailing, Andy Lutomirski, Nadav Amit,
	Dave Hansen

On Tue, Apr 16, 2019 at 1:03 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> Using 320 bytes of stack space for a 40 byte structure is ludicrous and
> clearly not right.

Ack.

That said, I wish we didn't have these stack structures at all. Or at
least were more careful about them. For example, another case of this
struct on the stack looks really iffy too:

                struct flush_tlb_info info;
                info.start = start;
                info.end = end;
                on_each_cpu(do_kernel_range_flush, &info, 1);

note how it initializes only two of the fields and leaves the rest as
uninitialized stack garbage?

Yeah, yeah, "do_kernel_range_flush()" only uses those two fields, but
it still makes my skin crawl how we basically pass a largely
uninitialized structure and have other CPUs look at it.
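
(For illustration: in C, a designated initializer zero-fills every member
that is not named, so a minimal sketch of a safer form of that call site
would be:)

                struct flush_tlb_info info = {
                        .start = start,
                        .end   = end,
                        /* all remaining members implicitly zeroed */
                };
                on_each_cpu(do_kernel_range_flush, &info, 1);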

And in another case we do have a nicely initialized structure:

    void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
    {
        struct flush_tlb_info info = {
                .mm = NULL,
                .start = 0UL,
                .end = TLB_FLUSH_ALL,
        };

but it looks like it shouldn't have been on the stack in the first
place, because as far as I can tell it's entirely constant, and it
should just be a "static const" structure initialized at compile time.

So as far as I can tell, we could do something like

-static void flush_tlb_func_local(void *info, enum tlb_flush_reason reason)
+static void flush_tlb_func_local(const void *info, enum tlb_flush_reason reason)

-       struct flush_tlb_info info = {
+       static const struct flush_tlb_info info = {

for that case.
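
(Putting those two fragments together -- an untested sketch with the
function bodies elided, not a finished patch:)

    static void flush_tlb_func_local(const void *info,
                                     enum tlb_flush_reason reason)
    {
            ...
    }

    void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
    {
            /*
             * Entirely constant, so it can live in .rodata instead of
             * being rebuilt on the stack on every call:
             */
            static const struct flush_tlb_info info = {
                    .mm    = NULL,
                    .start = 0UL,
                    .end   = TLB_FLUSH_ALL,
            };
            ...
    }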

End result: it looks like we have three of these stack things, and all
three have something odd in them.

So very much Ack on that patch, but maybe we could do a bit more cleanup here?

                   Linus


* Re: [PATCH] x86/tlb: Revert: Align TLB invalidation info
  2019-04-16 17:45 ` [PATCH] x86/tlb: Revert: Align TLB invalidation info Linus Torvalds
@ 2019-04-16 18:28   ` Peter Zijlstra
  2019-04-17  4:52     ` Nadav Amit
  0 siblings, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2019-04-16 18:28 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Linux List Kernel Mailing, Andy Lutomirski, Nadav Amit,
	Dave Hansen

On Tue, Apr 16, 2019 at 10:45:05AM -0700, Linus Torvalds wrote:
> So very much Ack on that patch, but maybe we could do a bit more cleanup here?

Yeah, Nadav was going to try and clean that up. But I figured we should
get this revert in and backported while it's hot :-)


* Re: [PATCH] x86/tlb: Revert: Align TLB invalidation info
  2019-04-16 18:28   ` Peter Zijlstra
@ 2019-04-17  4:52     ` Nadav Amit
  0 siblings, 0 replies; 5+ messages in thread
From: Nadav Amit @ 2019-04-17  4:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linus Torvalds, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Linux List Kernel Mailing, Andy Lutomirski, Dave Hansen

> On Apr 16, 2019, at 11:28 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> 
> On Tue, Apr 16, 2019 at 10:45:05AM -0700, Linus Torvalds wrote:
>> So very much Ack on that patch, but maybe we could do a bit more cleanup here?
> 
> Yeah, Nadav was going to try and clean that up. But I figured we should
> get this revert in and backported while it's hot :-)

I will get to it hopefully next week. I need to do some benchmarking to see
the impact of getting it off the stack, although usually the IPI itself
dominates the TLB shootdown performance overhead.
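
(One illustrative direction for getting it off the stack -- an untested
sketch with hypothetical helper names, assuming preemption is disabled
around the flush:)

    /* One static slot per CPU instead of a stack copy. */
    static DEFINE_PER_CPU_SHARED_ALIGNED(struct flush_tlb_info, flush_tlb_info);

    static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
                                                     unsigned long start,
                                                     unsigned long end)
    {
            /* Preemption must stay off so the slot is not reused. */
            struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);

            info->mm    = mm;
            info->start = start;
            info->end   = end;
            return info;
    }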

