* [0/3] A few 32bit x86 fixes @ 2018-10-16 20:25 Sebastian Andrzej Siewior 2018-10-16 20:25 ` [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() Sebastian Andrzej Siewior ` (2 more replies) 0 siblings, 3 replies; 26+ messages in thread From: Sebastian Andrzej Siewior @ 2018-10-16 20:25 UTC (permalink / raw) To: linux-kernel Cc: x86, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Borislav Petkov I wanted to get rid of fpu->initialized but I did not understand the FPU EMU code. So I decided to enable it and see how it works. Turns out it doesn't work. While #3 fixes this, I propose removing it since probably nobody is using it. While at it, we could also remove the 32bit x86 code since it makes things way more complicated. Anyway, I tried fma() in userland and it worked. The only thing that popped up was not related to the FPU and is addressed in #1. Sebastian ^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-16 20:25 [0/3] A few 32bit x86 fixes Sebastian Andrzej Siewior @ 2018-10-16 20:25 ` Sebastian Andrzej Siewior 2018-10-16 21:25 ` Andy Lutomirski 2018-10-17 9:54 ` [PATCH 1/3] " David Laight 2018-10-16 20:25 ` [PATCH 2/3] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() Sebastian Andrzej Siewior 2018-10-16 20:25 ` [PATCH 3/3] x86/fpu: Save FPU registers on context switch if there is a FPU Sebastian Andrzej Siewior 2 siblings, 2 replies; 26+ messages in thread From: Sebastian Andrzej Siewior @ 2018-10-16 20:25 UTC (permalink / raw) To: linux-kernel Cc: x86, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Borislav Petkov, Sebastian Andrzej Siewior I've seen the WARN_ON_ONCE(__read_cr3() != build_cr3()) in switch_mm_irqs_off() every once in a while during a snapshotted system upgrade. I also saw, early during boot, the warning which was introduced in commit decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()"). The callchain is get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages() with CONFIG_DEBUG_PAGEALLOC enabled. Turns out, once I disable preemption around __flush_tlb_all(), neither warning appears. Disable preemption during CR3 reset / __flush_tlb_all(). 
Fixes: decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> --- arch/x86/mm/pageattr.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c index 51a5a69ecac9f..fe6b21f0a6631 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c @@ -2088,7 +2088,9 @@ void __kernel_map_pages(struct page *page, int numpages, int enable) * We should perform an IPI and flush all tlbs, * but that can deadlock->flush only current cpu: */ + preempt_disable(); __flush_tlb_all(); + preempt_enable(); arch_flush_lazy_mmu_mode(); } -- 2.19.1 ^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-16 20:25 ` [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() Sebastian Andrzej Siewior @ 2018-10-16 21:25 ` Andy Lutomirski 2018-10-16 21:38 ` Sebastian Andrzej Siewior 2018-10-17 9:54 ` [PATCH 1/3] " David Laight 1 sibling, 1 reply; 26+ messages in thread From: Andy Lutomirski @ 2018-10-16 21:25 UTC (permalink / raw) To: Sebastian Andrzej Siewior Cc: LKML, X86 ML, Dave Hansen, Andrew Lutomirski, Peter Zijlstra, Borislav Petkov On Tue, Oct 16, 2018 at 1:25 PM Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote: > > I've seen the WARN_ON_ONCE(__read_cr3() != build_cr3()) in > switch_mm_irqs_off() every once in a while during a snapshotted system > upgrade. > I also saw the warning early during which was introduced in commit > decab0888e6e ("x86/mm: Remove preempt_disable/enable() from > __native_flush_tlb()"). The callchain is > > get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages() > > with CONFIG_DEBUG_PAGEALLOC enabled. > > Turns out, once I disable preemption around __flush_tlb_all() both > warnings do not appear. > > Disable preemption during CR3 reset / __flush_tlb_all(). > > Fixes: decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()") > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> > --- > arch/x86/mm/pageattr.c | 2 ++ > 1 file changed, 2 insertions(+) > > diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c > index 51a5a69ecac9f..fe6b21f0a6631 100644 > --- a/arch/x86/mm/pageattr.c > +++ b/arch/x86/mm/pageattr.c > @@ -2088,7 +2088,9 @@ void __kernel_map_pages(struct page *page, int numpages, int enable) > * We should perform an IPI and flush all tlbs, > * but that can deadlock->flush only current cpu: > */ > + preempt_disable(); > __flush_tlb_all(); > + preempt_enable(); > Depending on your CPU, __flush_tlb_all() is either __native_flush_tlb_global() or __native_flush_tlb(). 
Only __native_flush_tlb() could have any problem with preemption, but it has a WARN_ON_ONCE(preemptible()); in it. Can you try to figure out why that's not firing for you? I suspect that a better fix would be to put preempt_disable() into __native_flush_tlb(), but I'd still like to understand why the warning isn't working. ^ permalink raw reply [flat|nested] 26+ messages in thread
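To make the dispatch Andy describes concrete, here is a hypothetical userspace sketch (not kernel code; the counters and function bodies are invented for illustration): with X86_FEATURE_PGE the flush takes the global path, otherwise it falls back to the local path, and only the local path performs the CR3 read-modify-write that preemption can disturb.

```c
#include <assert.h>
#include <stdbool.h>

/* Counters stand in for the two flush primitives; in the real kernel the
 * global path toggles CR4.PGE and the local path reloads CR3. */
static int global_flushes;
static int local_flushes;

static void flush_tlb_global(void) { global_flushes++; }
static void flush_tlb_local(void)  { local_flushes++;  }

/* Dispatch modeled on __flush_tlb_all(): PGE-capable CPUs take the
 * global path, all others the local one. */
static void flush_tlb_all(bool cpu_has_pge)
{
	if (cpu_has_pge)
		flush_tlb_global();
	else
		flush_tlb_local();
}
```

This is why the existing WARN_ON_ONCE(preemptible()) in __native_flush_tlb() is only ever reached on non-PGE configurations, which motivates duplicating the check in __flush_tlb_all() later in the thread.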
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-16 21:25 ` Andy Lutomirski @ 2018-10-16 21:38 ` Sebastian Andrzej Siewior 2018-10-16 23:28 ` Andy Lutomirski 0 siblings, 1 reply; 26+ messages in thread From: Sebastian Andrzej Siewior @ 2018-10-16 21:38 UTC (permalink / raw) To: Andy Lutomirski Cc: LKML, X86 ML, Dave Hansen, Peter Zijlstra, Borislav Petkov On 2018-10-16 14:25:07 [-0700], Andy Lutomirski wrote: > > diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c > > index 51a5a69ecac9f..fe6b21f0a6631 100644 > > --- a/arch/x86/mm/pageattr.c > > +++ b/arch/x86/mm/pageattr.c > > @@ -2088,7 +2088,9 @@ void __kernel_map_pages(struct page *page, int numpages, int enable) > > * We should perform an IPI and flush all tlbs, > > * but that can deadlock->flush only current cpu: > > */ > > + preempt_disable(); > > __flush_tlb_all(); > > + preempt_enable(); > > > > Depending on your CPU, __flush_tlb_all() is either > __native_flush_tlb_global() or __native_flush_tlb(). Only > __native_flush_tlb() could have any problem with preemption, but it > has a WARN_ON_ONCE(preemptible()); in it. Can you try to figure out > why that's not firing for you? It is firing, it is the warning that was introduced in commit decab0888e6e (as mentioned in the commit message; I just noticed it way later because it popped up early in the boot log). > I suspect that a better fix would be to put preempt_disable() into > __native_flulsh_tlb(), but I'd still like to understand why the > warning isn't working. __native_flush_tlb() just had its preempt_disable() removed in decab0888e6e and __kernel_map_pages() is only called from the debug code. The other callers of __native_flush_tlb() seem to hold a lock or run with disabled interrupts. Sebastian ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-16 21:38 ` Sebastian Andrzej Siewior @ 2018-10-16 23:28 ` Andy Lutomirski 2018-10-17 10:34 ` [PATCH 1/3 v2] " Sebastian Andrzej Siewior 0 siblings, 1 reply; 26+ messages in thread From: Andy Lutomirski @ 2018-10-16 23:28 UTC (permalink / raw) To: Sebastian Andrzej Siewior Cc: Andrew Lutomirski, LKML, X86 ML, Dave Hansen, Peter Zijlstra, Borislav Petkov On Tue, Oct 16, 2018 at 2:39 PM Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote: > > On 2018-10-16 14:25:07 [-0700], Andy Lutomirski wrote: > > > diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c > > > index 51a5a69ecac9f..fe6b21f0a6631 100644 > > > --- a/arch/x86/mm/pageattr.c > > > +++ b/arch/x86/mm/pageattr.c > > > @@ -2088,7 +2088,9 @@ void __kernel_map_pages(struct page *page, int numpages, int enable) > > > * We should perform an IPI and flush all tlbs, > > > * but that can deadlock->flush only current cpu: > > > */ > > > + preempt_disable(); > > > __flush_tlb_all(); > > > + preempt_enable(); > > > > > > > Depending on your CPU, __flush_tlb_all() is either > > __native_flush_tlb_global() or __native_flush_tlb(). Only > > __native_flush_tlb() could have any problem with preemption, but it > > has a WARN_ON_ONCE(preemptible()); in it. Can you try to figure out > > why that's not firing for you? > > It is firing, it is the warning that was introduced in commit > decab0888e6e (as mention in the commit message; I just noticed it way > later because it popped early in the boot log). > > > I suspect that a better fix would be to put preempt_disable() into > > __native_flulsh_tlb(), but I'd still like to understand why the > > warning isn't working. > > __native_flulsh_tlb() just had its preempt_disable() removed in > decab0888e6e and __kernel_map_pages() is only called from the debug > code. The other caller of __native_flulsh_tlb() seem to hold a lock or > run with disabled interrupts. > > Sebastian Fair enough. 
But can you copy the warning to __flush_tlb_all() so it's checked on all systems (but make it VM_WARN_ON_ONCE)? --Andy ^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH 1/3 v2] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-16 23:28 ` Andy Lutomirski @ 2018-10-17 10:34 ` Sebastian Andrzej Siewior 2018-10-29 18:10 ` [tip:x86/urgent] " tip-bot for Sebastian Andrzej Siewior 0 siblings, 1 reply; 26+ messages in thread From: Sebastian Andrzej Siewior @ 2018-10-17 10:34 UTC (permalink / raw) To: Andy Lutomirski Cc: LKML, X86 ML, Dave Hansen, Peter Zijlstra, Borislav Petkov I've seen the WARN_ON_ONCE(__read_cr3() != build_cr3()) in switch_mm_irqs_off() every once in a while during a snapshotted system upgrade. I also saw, early during boot, the warning which was introduced in commit decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()"). The callchain is get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages() with CONFIG_DEBUG_PAGEALLOC enabled. Turns out, once I disable preemption around __flush_tlb_all(), neither warning appears. Disable preemption during CR3 reset / __flush_tlb_all() and add a comment why preemption is disabled. Add another preemptible() check in __flush_tlb_all() so we catch callers with preemption enabled on PGE systems, which would not trigger the warning in __native_flush_tlb() (suggested by Andy Lutomirski). Fixes: decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> --- v1…v2: - Add a comment before disabling preemption explaining why this is done. - Add a preemption check to __flush_tlb_all(). 
arch/x86/include/asm/tlbflush.h | 6 ++++++ arch/x86/mm/pageattr.c | 6 +++++- 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h index 58ce5288878e8..0e2130d8d6b12 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -469,6 +469,12 @@ static inline void __native_flush_tlb_one_user(unsigned long addr) */ static inline void __flush_tlb_all(void) { + /* + * This is to catch users with enabled preemption and the PGE feature + * and don't trigger the warning in __native_flush_tlb(). + */ + VM_WARN_ON_ONCE(preemptible()); + if (boot_cpu_has(X86_FEATURE_PGE)) { __flush_tlb_global(); } else { diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c index 51a5a69ecac9f..e2d4b25c7aa44 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c @@ -2086,9 +2086,13 @@ void __kernel_map_pages(struct page *page, int numpages, int enable) /* * We should perform an IPI and flush all tlbs, - * but that can deadlock->flush only current cpu: + * but that can deadlock->flush only current cpu. + * Preemption needs to be disabled around __flush_tlb_all() due to + * CR3 reload in __native_flush_tlb(). */ + preempt_disable(); __flush_tlb_all(); + preempt_enable(); arch_flush_lazy_mmu_mode(); } -- 2.19.1 ^ permalink raw reply related [flat|nested] 26+ messages in thread
* [tip:x86/urgent] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 10:34 ` [PATCH 1/3 v2] " Sebastian Andrzej Siewior @ 2018-10-29 18:10 ` tip-bot for Sebastian Andrzej Siewior 2018-11-05 21:56 ` Dan Williams 0 siblings, 1 reply; 26+ messages in thread From: tip-bot for Sebastian Andrzej Siewior @ 2018-10-29 18:10 UTC (permalink / raw) To: linux-tip-commits Cc: linux-kernel, dave.hansen, peterz, tglx, luto, bp, hpa, bigeasy, mingo Commit-ID: f77084d96355f5fba8e2c1fb3a51a393b1570de7 Gitweb: https://git.kernel.org/tip/f77084d96355f5fba8e2c1fb3a51a393b1570de7 Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de> AuthorDate: Wed, 17 Oct 2018 12:34:32 +0200 Committer: Thomas Gleixner <tglx@linutronix.de> CommitDate: Mon, 29 Oct 2018 19:04:31 +0100 x86/mm/pat: Disable preemption around __flush_tlb_all() The WARN_ON_ONCE(__read_cr3() != build_cr3()) in switch_mm_irqs_off() triggers every once in a while during a snapshotted system upgrade. The warning triggers since commit decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()"). The callchain is: get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages() with CONFIG_DEBUG_PAGEALLOC enabled. Disable preemption during CR3 reset / __flush_tlb_all() and add a comment why preemption has to be disabled so it won't be removed accidentally. Add another preemptible() check in __flush_tlb_all() to catch callers with enabled preemption when PGE is enabled, because PGE enabled does not trigger the warning in __native_flush_tlb(). Suggested by Andy Lutomirski. 
Fixes: decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Andy Lutomirski <luto@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20181017103432.zgv46nlu3hc7k4rq@linutronix.de --- arch/x86/include/asm/tlbflush.h | 6 ++++++ arch/x86/mm/pageattr.c | 6 +++++- 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h index 323a313947e0..d760611cfc35 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -453,6 +453,12 @@ static inline void __native_flush_tlb_one_user(unsigned long addr) */ static inline void __flush_tlb_all(void) { + /* + * This is to catch users with enabled preemption and the PGE feature + * and don't trigger the warning in __native_flush_tlb(). + */ + VM_WARN_ON_ONCE(preemptible()); + if (boot_cpu_has(X86_FEATURE_PGE)) { __flush_tlb_global(); } else { diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c index 62bb30b4bd2a..a1004dec98ea 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c @@ -2309,9 +2309,13 @@ void __kernel_map_pages(struct page *page, int numpages, int enable) /* * We should perform an IPI and flush all tlbs, - * but that can deadlock->flush only current cpu: + * but that can deadlock->flush only current cpu. + * Preemption needs to be disabled around __flush_tlb_all() due to + * CR3 reload in __native_flush_tlb(). */ + preempt_disable(); __flush_tlb_all(); + preempt_enable(); arch_flush_lazy_mmu_mode(); } ^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [tip:x86/urgent] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-29 18:10 ` [tip:x86/urgent] " tip-bot for Sebastian Andrzej Siewior @ 2018-11-05 21:56 ` Dan Williams 0 siblings, 0 replies; 26+ messages in thread From: Dan Williams @ 2018-11-05 21:56 UTC (permalink / raw) To: Borislav Petkov, Andy Lutomirski, H. Peter Anvin, bigeasy, Ingo Molnar, Linux Kernel Mailing List, Peter Zijlstra, Dave Hansen, Thomas Gleixner Cc: linux-tip-commits On Mon, Oct 29, 2018 at 11:12 AM tip-bot for Sebastian Andrzej Siewior <tipbot@zytor.com> wrote: > > Commit-ID: f77084d96355f5fba8e2c1fb3a51a393b1570de7 > Gitweb: https://git.kernel.org/tip/f77084d96355f5fba8e2c1fb3a51a393b1570de7 > Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de> > AuthorDate: Wed, 17 Oct 2018 12:34:32 +0200 > Committer: Thomas Gleixner <tglx@linutronix.de> > CommitDate: Mon, 29 Oct 2018 19:04:31 +0100 > > x86/mm/pat: Disable preemption around __flush_tlb_all() > > The WARN_ON_ONCE(__read_cr3() != build_cr3()) in switch_mm_irqs_off() > triggers every once in a while during a snapshotted system upgrade. > > The warning triggers since commit decab0888e6e ("x86/mm: Remove > preempt_disable/enable() from __native_flush_tlb()"). The callchain is: > > get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages() > > with CONFIG_DEBUG_PAGEALLOC enabled. > > Disable preemption during CR3 reset / __flush_tlb_all() and add a comment > why preemption has to be disabled so it won't be removed accidentaly. > > Add another preemptible() check in __flush_tlb_all() to catch callers with > enabled preemption when PGE is enabled, because PGE enabled does not > trigger the warning in __native_flush_tlb(). Suggested by Andy Lutomirski. 
> > Fixes: decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()") > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> > Cc: Andy Lutomirski <luto@kernel.org> > Cc: Dave Hansen <dave.hansen@linux.intel.com> > Cc: Peter Zijlstra <peterz@infradead.org> > Cc: Borislav Petkov <bp@alien8.de> > Cc: stable@vger.kernel.org > Link: https://lkml.kernel.org/r/20181017103432.zgv46nlu3hc7k4rq@linutronix.de > --- > arch/x86/include/asm/tlbflush.h | 6 ++++++ > arch/x86/mm/pageattr.c | 6 +++++- > 2 files changed, 11 insertions(+), 1 deletion(-) > > diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h > index 323a313947e0..d760611cfc35 100644 > --- a/arch/x86/include/asm/tlbflush.h > +++ b/arch/x86/include/asm/tlbflush.h > @@ -453,6 +453,12 @@ static inline void __native_flush_tlb_one_user(unsigned long addr) > */ > static inline void __flush_tlb_all(void) > { > + /* > + * This is to catch users with enabled preemption and the PGE feature > + * and don't trigger the warning in __native_flush_tlb(). > + */ > + VM_WARN_ON_ONCE(preemptible()); This warning triggers 100% of the time for the pmem use case and it seems it would also trigger for any memory hotplug use case that uses arch_add_memory(). WARNING: CPU: 35 PID: 911 at ./arch/x86/include/asm/tlbflush.h:460 __flush_tlb_all+0x1b/0x3a CPU: 35 PID: 911 Comm: systemd-udevd Tainted: G OE 4.20.0-rc1+ #2583 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.1-0-g0551a4be2c-prebuilt.qemu-project.org 04/01/2014 RIP: 0010:__flush_tlb_all+0x1b/0x3a [..] Call Trace: phys_pud_init+0x29c/0x2bb kernel_physical_mapping_init+0xfc/0x219 init_memory_mapping+0x1a5/0x3b0 arch_add_memory+0x2c/0x50 devm_memremap_pages+0x3aa/0x610 pmem_attach_disk+0x585/0x700 [nd_pmem] ...could we just move the preempt_disable() inside __flush_tlb_all()? ^ permalink raw reply [flat|nested] 26+ messages in thread
* RE: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-16 20:25 ` [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() Sebastian Andrzej Siewior 2018-10-16 21:25 ` Andy Lutomirski @ 2018-10-17 9:54 ` David Laight 2018-10-17 10:39 ` 'Sebastian Andrzej Siewior' 2018-10-17 11:11 ` Peter Zijlstra 1 sibling, 2 replies; 26+ messages in thread From: David Laight @ 2018-10-17 9:54 UTC (permalink / raw) To: 'Sebastian Andrzej Siewior', linux-kernel Cc: x86, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Borislav Petkov From: Sebastian Andrzej Siewior > Sent: 16 October 2018 21:25 > I've seen the WARN_ON_ONCE(__read_cr3() != build_cr3()) in > switch_mm_irqs_off() every once in a while during a snapshotted system > upgrade. > I also saw the warning early during which was introduced in commit > decab0888e6e ("x86/mm: Remove preempt_disable/enable() from > __native_flush_tlb()"). The callchain is > > get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages() > > with CONFIG_DEBUG_PAGEALLOC enabled. > > Turns out, once I disable preemption around __flush_tlb_all() both > warnings do not appear. > > Disable preemption during CR3 reset / __flush_tlb_all(). > > Fixes: decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()") > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> > --- > arch/x86/mm/pageattr.c | 2 ++ > 1 file changed, 2 insertions(+) > > diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c > index 51a5a69ecac9f..fe6b21f0a6631 100644 > --- a/arch/x86/mm/pageattr.c > +++ b/arch/x86/mm/pageattr.c > @@ -2088,7 +2088,9 @@ void __kernel_map_pages(struct page *page, int numpages, int enable) > * We should perform an IPI and flush all tlbs, > * but that can deadlock->flush only current cpu: > */ > + preempt_disable(); > __flush_tlb_all(); > + preempt_enable(); Can it make any sense to flush the tlb with preemption enabled? 
Surely preemption must be disabled over something else as well? David - Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK Registration No: 1397386 (Wales) ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 9:54 ` [PATCH 1/3] " David Laight @ 2018-10-17 10:39 ` 'Sebastian Andrzej Siewior' 2018-10-17 11:45 ` David Laight 2018-10-17 11:11 ` Peter Zijlstra 1 sibling, 1 reply; 26+ messages in thread From: 'Sebastian Andrzej Siewior' @ 2018-10-17 10:39 UTC (permalink / raw) To: David Laight Cc: linux-kernel, x86, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Borislav Petkov On 2018-10-17 09:54:38 [+0000], David Laight wrote: > Can it make any sense to flush the tlb with preemption enabled? it might. Usually it is disabled for other reasons. > Surely preemption must be disabled over something else as well? In this case it is due to the CR3 reload. I don't see anything else. > David Sebastian ^ permalink raw reply [flat|nested] 26+ messages in thread
* RE: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 10:39 ` 'Sebastian Andrzej Siewior' @ 2018-10-17 11:45 ` David Laight 2018-10-17 12:00 ` 'Sebastian Andrzej Siewior' 0 siblings, 1 reply; 26+ messages in thread From: David Laight @ 2018-10-17 11:45 UTC (permalink / raw) To: 'Sebastian Andrzej Siewior' Cc: linux-kernel, x86, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Borislav Petkov From: 'Sebastian Andrzej Siewior' > Sent: 17 October 2018 11:39 > On 2018-10-17 09:54:38 [+0000], David Laight wrote: > > Can it make any sense to flush the tlb with preemption enabled? > it might. Usually it is disabled for other reasons. That's what I mean, it should be disabled by the caller. > > Surely preemption must be disabled over something else as well? > In this case it is due to the CR3 reload. I don't see anything else. Right, so it should be disabled before the CR3 reload and enabled after? David - Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK Registration No: 1397386 (Wales) ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 11:45 ` David Laight @ 2018-10-17 12:00 ` 'Sebastian Andrzej Siewior' 0 siblings, 0 replies; 26+ messages in thread From: 'Sebastian Andrzej Siewior' @ 2018-10-17 12:00 UTC (permalink / raw) To: David Laight Cc: linux-kernel, x86, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Borislav Petkov On 2018-10-17 11:45:59 [+0000], David Laight wrote: > Right, so it should be disabled before the CR3 reload and enabled after? It was before commit decab0888e6e1 ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()") then it was lifted to the caller. > David Sebastian ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 9:54 ` [PATCH 1/3] " David Laight 2018-10-17 10:39 ` 'Sebastian Andrzej Siewior' @ 2018-10-17 11:11 ` Peter Zijlstra 2018-10-17 11:17 ` Thomas Gleixner 1 sibling, 1 reply; 26+ messages in thread From: Peter Zijlstra @ 2018-10-17 11:11 UTC (permalink / raw) To: David Laight Cc: 'Sebastian Andrzej Siewior', linux-kernel, x86, Dave Hansen, Andy Lutomirski, Borislav Petkov On Wed, Oct 17, 2018 at 09:54:38AM +0000, David Laight wrote: > From: Sebastian Andrzej Siewior > > Sent: 16 October 2018 21:25 > > I've seen the WARN_ON_ONCE(__read_cr3() != build_cr3()) in > > switch_mm_irqs_off() every once in a while during a snapshotted system > > upgrade. > > I also saw the warning early during which was introduced in commit > > decab0888e6e ("x86/mm: Remove preempt_disable/enable() from > > __native_flush_tlb()"). The callchain is > > > > get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages() > > > > with CONFIG_DEBUG_PAGEALLOC enabled. > > > > Turns out, once I disable preemption around __flush_tlb_all() both > > warnings do not appear. > > > > Disable preemption during CR3 reset / __flush_tlb_all(). > > > > Fixes: decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()") > > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> > > --- > > arch/x86/mm/pageattr.c | 2 ++ > > 1 file changed, 2 insertions(+) > > > > diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c > > index 51a5a69ecac9f..fe6b21f0a6631 100644 > > --- a/arch/x86/mm/pageattr.c > > +++ b/arch/x86/mm/pageattr.c > > @@ -2088,7 +2088,9 @@ void __kernel_map_pages(struct page *page, int numpages, int enable) > > * We should perform an IPI and flush all tlbs, > > * but that can deadlock->flush only current cpu: > > */ > > + preempt_disable(); > > __flush_tlb_all(); > > + preempt_enable(); > > Can it make any sense to flush the tlb with preemption enabled? 
> Surely preemption must be disabled over something else as well? This code is fishy anyway, for only doing that local invalidate. Ideally we'd never ever merge anything that only does local invalidates, on a global address space, that's just broken. ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 11:11 ` Peter Zijlstra @ 2018-10-17 11:17 ` Thomas Gleixner 2018-10-17 15:47 ` Peter Zijlstra 0 siblings, 1 reply; 26+ messages in thread From: Thomas Gleixner @ 2018-10-17 11:17 UTC (permalink / raw) To: Peter Zijlstra Cc: David Laight, 'Sebastian Andrzej Siewior', linux-kernel, x86, Dave Hansen, Andy Lutomirski, Borislav Petkov On Wed, 17 Oct 2018, Peter Zijlstra wrote: > On Wed, Oct 17, 2018 at 09:54:38AM +0000, David Laight wrote: > > > * We should perform an IPI and flush all tlbs, > > > * but that can deadlock->flush only current cpu: > > > */ > > > + preempt_disable(); > > > __flush_tlb_all(); > > > + preempt_enable(); > > > > Can it make any sense to flush the tlb with preemption enabled? > > Surely preemption must be disabled over something else as well? > > This code is fishy anyway, for only doing that local invalidate. > > Ideally we'd never ever merge anything that only does local invalidates, > on a global address space, that's just broken. A little bit late to lament about that. So should we just replace it with cpa_flush_all() ? Thanks, tglx ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 11:17 ` Thomas Gleixner @ 2018-10-17 15:47 ` Peter Zijlstra 2018-10-17 15:55 ` Thomas Gleixner 2018-10-17 16:00 ` 'Sebastian Andrzej Siewior' 0 siblings, 2 replies; 26+ messages in thread From: Peter Zijlstra @ 2018-10-17 15:47 UTC (permalink / raw) To: Thomas Gleixner Cc: David Laight, 'Sebastian Andrzej Siewior', linux-kernel, x86, Dave Hansen, Andy Lutomirski, Borislav Petkov On Wed, Oct 17, 2018 at 01:17:18PM +0200, Thomas Gleixner wrote: > On Wed, 17 Oct 2018, Peter Zijlstra wrote: > > On Wed, Oct 17, 2018 at 09:54:38AM +0000, David Laight wrote: > > > > * We should perform an IPI and flush all tlbs, > > > > * but that can deadlock->flush only current cpu: > > > > */ > > > > + preempt_disable(); > > > > __flush_tlb_all(); > > > > + preempt_enable(); > > > > > > Can it make any sense to flush the tlb with preemption enabled? > > > Surely preemption must be disabled over something else as well? > > > > This code is fishy anyway, for only doing that local invalidate. > > > > Ideally we'd never ever merge anything that only does local invalidates, > > on a global address space, that's just broken. > > A little bit late to lament about that. For this, yes :/ But for future stuff we should really not allow such things anymore. > So should we just replace it with cpa_flush_all() ? The comment there suggests that will deadlock, supposedly because the kernel_map_page() call can happen with IRQs disabled or such. I've not deeply looked at this. ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 15:47 ` Peter Zijlstra @ 2018-10-17 15:55 ` Thomas Gleixner 2018-10-17 16:00 ` 'Sebastian Andrzej Siewior' 1 sibling, 0 replies; 26+ messages in thread From: Thomas Gleixner @ 2018-10-17 15:55 UTC (permalink / raw) To: Peter Zijlstra Cc: David Laight, 'Sebastian Andrzej Siewior', linux-kernel, x86, Dave Hansen, Andy Lutomirski, Borislav Petkov On Wed, 17 Oct 2018, Peter Zijlstra wrote: > On Wed, Oct 17, 2018 at 01:17:18PM +0200, Thomas Gleixner wrote: > > On Wed, 17 Oct 2018, Peter Zijlstra wrote: > > > On Wed, Oct 17, 2018 at 09:54:38AM +0000, David Laight wrote: > > > > > * We should perform an IPI and flush all tlbs, > > > > > * but that can deadlock->flush only current cpu: > > > > > */ > > > > > + preempt_disable(); > > > > > __flush_tlb_all(); > > > > > + preempt_enable(); > > > > > > > > Can it make any sense to flush the tlb with preemption enabled? > > > > Surely preemption must be disabled over something else as well? > > > > > > This code is fishy anyway, for only doing that local invalidate. > > > > > > Ideally we'd never ever merge anything that only does local invalidates, > > > on a global address space, that's just broken. > > > > A little bit late to lament about that. > > For this, yes :/ But for future stuff we should really not allow such > things anymore. > > > So should we just replace it with cpa_flush_all() ? > > The comment there suggests that will deadlock, supposedly because the > kernel_map_page() call can happen with IRQs disabled or such. > > I've not deeply looked at this. Bah, right. Forgot about that. Thanks, tglx ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 15:47 ` Peter Zijlstra 2018-10-17 15:55 ` Thomas Gleixner @ 2018-10-17 16:00 ` 'Sebastian Andrzej Siewior' 2018-10-17 16:22 ` Peter Zijlstra 1 sibling, 1 reply; 26+ messages in thread From: 'Sebastian Andrzej Siewior' @ 2018-10-17 16:00 UTC (permalink / raw) To: Peter Zijlstra Cc: Thomas Gleixner, David Laight, linux-kernel, x86, Dave Hansen, Andy Lutomirski, Borislav Petkov On 2018-10-17 17:47:07 [+0200], Peter Zijlstra wrote: > > > Ideally we'd never ever merge anything that only does local invalidates, > > > on a global address space, that's just broken. > > > > A little bit late to lament about that. > > For this, yes :/ But for future stuff we should really not allow such > things anymore. so we stay as is? > > So should we just replace it with cpa_flush_all() ? > > The comment there suggests that will deadlock, supposedly because the > kernel_map_page() call can happen with IRQs disabled or such. > > I've not deeply looked at this. free_pages() / __free_pages(), for instance, may end up in kernel_map_pages() (via free_pages_prepare()). And if this is invoked with disabled interrupts… Sebastian ^ permalink raw reply [flat|nested] 26+ messages in thread
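The deadlock that the "but that can deadlock->flush only current cpu" comment alludes to can be modeled abstractly: a cross-CPU flush sends an IPI and spins until the remote CPU acknowledges it, but a CPU running with interrupts disabled never services the IPI. A minimal sketch, with the predicate and its arguments invented purely for illustration (two CPUs issuing flushes concurrently):

```c
#include <assert.h>
#include <stdbool.h>

/* A cross-CPU flush completes only if the remote side can service the
 * IPI, i.e. has interrupts enabled.  If both CPUs issue a flush while
 * running with IRQs off, each spins waiting for an ack the other can
 * never deliver -- the deadlock that forces the local-only flush in
 * __kernel_map_pages(). */
static bool concurrent_flush_deadlocks(bool cpu0_irqs_off, bool cpu1_irqs_off)
{
	return cpu0_irqs_off && cpu1_irqs_off;
}
```

Since free_pages() callers may legitimately run with interrupts disabled, the IPI-based cpa_flush_all() cannot be used here, which is why the code is stuck with the local flush plus the preemption kludge.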
* Re: [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() 2018-10-17 16:00 ` 'Sebastian Andrzej Siewior' @ 2018-10-17 16:22 ` Peter Zijlstra 0 siblings, 0 replies; 26+ messages in thread From: Peter Zijlstra @ 2018-10-17 16:22 UTC (permalink / raw) To: 'Sebastian Andrzej Siewior' Cc: Thomas Gleixner, David Laight, linux-kernel, x86, Dave Hansen, Andy Lutomirski, Borislav Petkov On Wed, Oct 17, 2018 at 06:00:51PM +0200, 'Sebastian Andrzej Siewior' wrote: > On 2018-10-17 17:47:07 [+0200], Peter Zijlstra wrote: > > > > Ideally we'd never ever merge anything that only does local invalidates, > > > > on a global address space, that's just broken. > > > > > > A little bit late to lament about that. > > > > For this, yes :/ But for future stuff we should really not allow such > > things anymore. > > so we stay as is? Well, we can do that preempt kludge you propose to shut up the warning I suppose. But a kludge it is. That code really wants a global invalidate, but we cannot (not without massive surgery in any case). ^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH 2/3] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() 2018-10-16 20:25 [0/3] A few 32bit x86 fixes Sebastian Andrzej Siewior 2018-10-16 20:25 ` [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() Sebastian Andrzej Siewior @ 2018-10-16 20:25 ` Sebastian Andrzej Siewior 2018-10-16 21:26 ` Andy Lutomirski ` (2 more replies) 2018-10-16 20:25 ` [PATCH 3/3] x86/fpu: Save FPU registers on context switch if there is a FPU Sebastian Andrzej Siewior 2 siblings, 3 replies; 26+ messages in thread From: Sebastian Andrzej Siewior @ 2018-10-16 20:25 UTC (permalink / raw) To: linux-kernel Cc: x86, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Borislav Petkov, Sebastian Andrzej Siewior Commit c5bedc6847c3b ("x86/fpu: Get rid of PF_USED_MATH usage, convert it to fpu->fpstate_active") introduced the `fpu' variable at top of __restore_xstate_sig() which now shadows the other definition: |arch/x86/kernel/fpu/signal.c:318:28: warning: symbol 'fpu' shadows an earlier one |arch/x86/kernel/fpu/signal.c:271:20: originally declared here Remove the shadowed definition of `fpu'. Fixes: c5bedc6847c3b ("x86/fpu: Get rid of PF_USED_MATH usage, convert it to fpu->fpstate_active") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> --- arch/x86/kernel/fpu/signal.c | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index 23f1691670b66..61a949d84dfa5 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -314,7 +314,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size) * thread's fpu state, reconstruct fxstate from the fsave * header. Validate and sanitize the copied state. */ - struct fpu *fpu = &tsk->thread.fpu; struct user_i387_ia32_struct env; int err = 0; -- 2.19.1 ^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [PATCH 2/3] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() 2018-10-16 20:25 ` [PATCH 2/3] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() Sebastian Andrzej Siewior @ 2018-10-16 21:26 ` Andy Lutomirski 2018-10-17 9:09 ` [tip:x86/urgent] " tip-bot for Sebastian Andrzej Siewior 2018-10-18 6:22 ` tip-bot for Sebastian Andrzej Siewior 2 siblings, 0 replies; 26+ messages in thread From: Andy Lutomirski @ 2018-10-16 21:26 UTC (permalink / raw) To: Sebastian Andrzej Siewior Cc: LKML, X86 ML, Dave Hansen, Andrew Lutomirski, Peter Zijlstra, Borislav Petkov On Tue, Oct 16, 2018 at 1:25 PM Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote: > > Commit c5bedc6847c3b ("x86/fpu: Get rid of PF_USED_MATH usage, convert > it to fpu->fpstate_active") introduced the `fpu' variable at top of > __restore_xstate_sig() which now shadows the other definition: Reviewed-by: Andy Lutomirski <luto@kernel.org> ^ permalink raw reply [flat|nested] 26+ messages in thread
* [tip:x86/urgent] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() 2018-10-16 20:25 ` [PATCH 2/3] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() Sebastian Andrzej Siewior 2018-10-16 21:26 ` Andy Lutomirski @ 2018-10-17 9:09 ` tip-bot for Sebastian Andrzej Siewior 2018-10-18 6:22 ` tip-bot for Sebastian Andrzej Siewior 2 siblings, 0 replies; 26+ messages in thread From: tip-bot for Sebastian Andrzej Siewior @ 2018-10-17 9:09 UTC (permalink / raw) To: linux-tip-commits Cc: mingo, bigeasy, torvalds, bp, linux-kernel, tglx, hpa, luto, dave.hansen, peterz Commit-ID: 5ac33bcb25ad30200c4e6d11802e9e7664463529 Gitweb: https://git.kernel.org/tip/5ac33bcb25ad30200c4e6d11802e9e7664463529 Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de> AuthorDate: Tue, 16 Oct 2018 22:25:24 +0200 Committer: Ingo Molnar <mingo@kernel.org> CommitDate: Wed, 17 Oct 2018 08:07:41 +0200 x86/fpu: Remove second definition of fpu in __fpu__restore_sig() Commit: c5bedc6847c3b ("x86/fpu: Get rid of PF_USED_MATH usage, convert it to fpu->fpstate_active") introduced the 'fpu' variable at top of __restore_xstate_sig(), which now shadows the other definition: arch/x86/kernel/fpu/signal.c:318:28: warning: symbol 'fpu' shadows an earlier one arch/x86/kernel/fpu/signal.c:271:20: originally declared here Remove the shadowed definition of 'fpu', as the two definitions are the same. 
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: c5bedc6847c3b ("x86/fpu: Get rid of PF_USED_MATH usage, convert it to fpu->fpstate_active") Link: http://lkml.kernel.org/r/20181016202525.29437-3-bigeasy@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org> --- arch/x86/kernel/fpu/signal.c | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index 23f1691670b6..61a949d84dfa 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -314,7 +314,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size) * thread's fpu state, reconstruct fxstate from the fsave * header. Validate and sanitize the copied state. */ - struct fpu *fpu = &tsk->thread.fpu; struct user_i387_ia32_struct env; int err = 0; ^ permalink raw reply related [flat|nested] 26+ messages in thread
* [tip:x86/urgent] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() 2018-10-16 20:25 ` [PATCH 2/3] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() Sebastian Andrzej Siewior 2018-10-16 21:26 ` Andy Lutomirski 2018-10-17 9:09 ` [tip:x86/urgent] " tip-bot for Sebastian Andrzej Siewior @ 2018-10-18 6:22 ` tip-bot for Sebastian Andrzej Siewior 2 siblings, 0 replies; 26+ messages in thread From: tip-bot for Sebastian Andrzej Siewior @ 2018-10-18 6:22 UTC (permalink / raw) To: linux-tip-commits Cc: bp, torvalds, hpa, luto, linux-kernel, mingo, dave.hansen, tglx, bigeasy, peterz Commit-ID: 6aa676761d4c1acfa31320e55fa1f83f3fcbbc7a Gitweb: https://git.kernel.org/tip/6aa676761d4c1acfa31320e55fa1f83f3fcbbc7a Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de> AuthorDate: Tue, 16 Oct 2018 22:25:24 +0200 Committer: Ingo Molnar <mingo@kernel.org> CommitDate: Wed, 17 Oct 2018 12:30:31 +0200 x86/fpu: Remove second definition of fpu in __fpu__restore_sig() Commit: c5bedc6847c3b ("x86/fpu: Get rid of PF_USED_MATH usage, convert it to fpu->fpstate_active") introduced the 'fpu' variable at top of __restore_xstate_sig(), which now shadows the other definition: arch/x86/kernel/fpu/signal.c:318:28: warning: symbol 'fpu' shadows an earlier one arch/x86/kernel/fpu/signal.c:271:20: originally declared here Remove the shadowed definition of 'fpu', as the two definitions are the same. 
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: c5bedc6847c3b ("x86/fpu: Get rid of PF_USED_MATH usage, convert it to fpu->fpstate_active") Link: http://lkml.kernel.org/r/20181016202525.29437-3-bigeasy@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org> --- arch/x86/kernel/fpu/signal.c | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index 23f1691670b6..61a949d84dfa 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -314,7 +314,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size) * thread's fpu state, reconstruct fxstate from the fsave * header. Validate and sanitize the copied state. */ - struct fpu *fpu = &tsk->thread.fpu; struct user_i387_ia32_struct env; int err = 0; ^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 3/3] x86/fpu: Save FPU registers on context switch if there is a FPU 2018-10-16 20:25 [0/3] A few 32bit x86 fixes Sebastian Andrzej Siewior 2018-10-16 20:25 ` [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() Sebastian Andrzej Siewior 2018-10-16 20:25 ` [PATCH 2/3] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() Sebastian Andrzej Siewior @ 2018-10-16 20:25 ` Sebastian Andrzej Siewior 2018-10-16 23:00 ` Andy Lutomirski ` (2 more replies) 2 siblings, 3 replies; 26+ messages in thread From: Sebastian Andrzej Siewior @ 2018-10-16 20:25 UTC (permalink / raw) To: linux-kernel Cc: x86, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Borislav Petkov, Sebastian Andrzej Siewior, stable Booting a 486 with "no387 nofxsr" ends with | math_emulate: 0060:c101987d | Kernel panic - not syncing: Math emulation needed in kernel on the first context switch in user land. The reason is that copy_fpregs_to_fpstate() tries `fnsave' which does not work. This happens since commit f1c8cd0176078 ("x86/fpu: Change fpu->fpregs_active users to fpu->fpstate_active"). Add a check for X86_FEATURE_FPU before trying to save FPU registers (we have such a check in switch_fpu_finish() already). 
Fixes: f1c8cd0176078 ("x86/fpu: Change fpu->fpregs_active users to fpu->fpstate_active") Cc: stable@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> --- arch/x86/include/asm/fpu/internal.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index a38bf5a1e37ad..69dcdf195b611 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -528,7 +528,7 @@ static inline void fpregs_activate(struct fpu *fpu) static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu) { - if (old_fpu->initialized) { + if (static_cpu_has(X86_FEATURE_FPU) && old_fpu->initialized) { if (!copy_fpregs_to_fpstate(old_fpu)) old_fpu->last_cpu = -1; else -- 2.19.1 ^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [PATCH 3/3] x86/fpu: Save FPU registers on context switch if there is a FPU 2018-10-16 20:25 ` [PATCH 3/3] x86/fpu: Save FPU registers on context switch if there is a FPU Sebastian Andrzej Siewior @ 2018-10-16 23:00 ` Andy Lutomirski 2018-10-17 9:10 ` [tip:x86/urgent] x86/fpu: Fix i486 + no387 boot crash by only saving FPU registers on context switch if there is an FPU tip-bot for Sebastian Andrzej Siewior 2018-10-18 6:22 ` tip-bot for Sebastian Andrzej Siewior 2 siblings, 0 replies; 26+ messages in thread From: Andy Lutomirski @ 2018-10-16 23:00 UTC (permalink / raw) To: Sebastian Andrzej Siewior Cc: LKML, X86 ML, Dave Hansen, Andrew Lutomirski, Peter Zijlstra, Borislav Petkov, stable On Tue, Oct 16, 2018 at 1:25 PM Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote: > > Booting a 486 with "no387 nofxsr" ends with > | math_emulate: 0060:c101987d > | Kernel panic - not syncing: Math emulation needed in kernel > > on the first context switch in user land. The reason is that > copy_fpregs_to_fpstate() tries `fnsave' which does not work. This > happens since commit f1c8cd0176078 ("x86/fpu: Change fpu->fpregs_active > users to fpu->fpstate_active"). > > Add a check for X86_FEATURE_FPU before trying to save FPU registers (we > have such a check switch_fpu_finish() already). > > Fixes: f1c8cd0176078 ("x86/fpu: Change fpu->fpregs_active users to fpu->fpstate_active") > Cc: stable@vger.kernel.org > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Andy Lutomirski <luto@kernel.org> ^ permalink raw reply [flat|nested] 26+ messages in thread
* [tip:x86/urgent] x86/fpu: Fix i486 + no387 boot crash by only saving FPU registers on context switch if there is an FPU 2018-10-16 20:25 ` [PATCH 3/3] x86/fpu: Save FPU registers on context switch if there is a FPU Sebastian Andrzej Siewior 2018-10-16 23:00 ` Andy Lutomirski @ 2018-10-17 9:10 ` tip-bot for Sebastian Andrzej Siewior 2018-10-18 6:22 ` tip-bot for Sebastian Andrzej Siewior 2 siblings, 0 replies; 26+ messages in thread From: tip-bot for Sebastian Andrzej Siewior @ 2018-10-17 9:10 UTC (permalink / raw) To: linux-tip-commits Cc: peterz, dave.hansen, torvalds, bp, hpa, mingo, bigeasy, luto, tglx, linux-kernel Commit-ID: eb4c6255bd6b0a4d3a7daf4f05e3c7a91eed74bf Gitweb: https://git.kernel.org/tip/eb4c6255bd6b0a4d3a7daf4f05e3c7a91eed74bf Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de> AuthorDate: Tue, 16 Oct 2018 22:25:25 +0200 Committer: Ingo Molnar <mingo@kernel.org> CommitDate: Wed, 17 Oct 2018 08:07:51 +0200 x86/fpu: Fix i486 + no387 boot crash by only saving FPU registers on context switch if there is an FPU Booting an i486 with "no387 nofxsr" ends with the following crash: math_emulate: 0060:c101987d Kernel panic - not syncing: Math emulation needed in kernel on the first context switch in user land. The reason is that copy_fpregs_to_fpstate() tries FNSAVE which does not work as the FPU is turned off. This bug was introduced in: f1c8cd0176078 ("x86/fpu: Change fpu->fpregs_active users to fpu->fpstate_active") Add a check for X86_FEATURE_FPU before trying to save FPU registers (we have such a check in switch_fpu_finish() already). 
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: f1c8cd0176078 ("x86/fpu: Change fpu->fpregs_active users to fpu->fpstate_active") Link: http://lkml.kernel.org/r/20181016202525.29437-4-bigeasy@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org> --- arch/x86/include/asm/fpu/internal.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index a38bf5a1e37a..69dcdf195b61 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -528,7 +528,7 @@ static inline void fpregs_activate(struct fpu *fpu) static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu) { - if (old_fpu->initialized) { + if (static_cpu_has(X86_FEATURE_FPU) && old_fpu->initialized) { if (!copy_fpregs_to_fpstate(old_fpu)) old_fpu->last_cpu = -1; else ^ permalink raw reply related [flat|nested] 26+ messages in thread
* [tip:x86/urgent] x86/fpu: Fix i486 + no387 boot crash by only saving FPU registers on context switch if there is an FPU 2018-10-16 20:25 ` [PATCH 3/3] x86/fpu: Save FPU registers on context switch if there is a FPU Sebastian Andrzej Siewior 2018-10-16 23:00 ` Andy Lutomirski 2018-10-17 9:10 ` [tip:x86/urgent] x86/fpu: Fix i486 + no387 boot crash by only saving FPU registers on context switch if there is an FPU tip-bot for Sebastian Andrzej Siewior @ 2018-10-18 6:22 ` tip-bot for Sebastian Andrzej Siewior 2 siblings, 0 replies; 26+ messages in thread From: tip-bot for Sebastian Andrzej Siewior @ 2018-10-18 6:22 UTC (permalink / raw) To: linux-tip-commits Cc: bp, tglx, hpa, dave.hansen, torvalds, peterz, linux-kernel, luto, mingo, bigeasy Commit-ID: 2224d616528194b02424c91c2ee254b3d29942c3 Gitweb: https://git.kernel.org/tip/2224d616528194b02424c91c2ee254b3d29942c3 Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de> AuthorDate: Tue, 16 Oct 2018 22:25:25 +0200 Committer: Ingo Molnar <mingo@kernel.org> CommitDate: Wed, 17 Oct 2018 12:30:38 +0200 x86/fpu: Fix i486 + no387 boot crash by only saving FPU registers on context switch if there is an FPU Booting an i486 with "no387 nofxsr" ends with the following crash: math_emulate: 0060:c101987d Kernel panic - not syncing: Math emulation needed in kernel on the first context switch in user land. The reason is that copy_fpregs_to_fpstate() tries FNSAVE which does not work as the FPU is turned off. This bug was introduced in: f1c8cd0176078 ("x86/fpu: Change fpu->fpregs_active users to fpu->fpstate_active") Add a check for X86_FEATURE_FPU before trying to save FPU registers (we have such a check in switch_fpu_finish() already). 
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: f1c8cd0176078 ("x86/fpu: Change fpu->fpregs_active users to fpu->fpstate_active") Link: http://lkml.kernel.org/r/20181016202525.29437-4-bigeasy@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org> --- arch/x86/include/asm/fpu/internal.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index a38bf5a1e37a..69dcdf195b61 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -528,7 +528,7 @@ static inline void fpregs_activate(struct fpu *fpu) static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu) { - if (old_fpu->initialized) { + if (static_cpu_has(X86_FEATURE_FPU) && old_fpu->initialized) { if (!copy_fpregs_to_fpstate(old_fpu)) old_fpu->last_cpu = -1; else ^ permalink raw reply related [flat|nested] 26+ messages in thread
end of thread, other threads:[~2018-11-05 21:56 UTC | newest] Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2018-10-16 20:25 [0/3] A few 32bit x86 fixes Sebastian Andrzej Siewior 2018-10-16 20:25 ` [PATCH 1/3] x86/mm/pat: Disable preemption around __flush_tlb_all() Sebastian Andrzej Siewior 2018-10-16 21:25 ` Andy Lutomirski 2018-10-16 21:38 ` Sebastian Andrzej Siewior 2018-10-16 23:28 ` Andy Lutomirski 2018-10-17 10:34 ` [PATCH 1/3 v2] " Sebastian Andrzej Siewior 2018-10-29 18:10 ` [tip:x86/urgent] " tip-bot for Sebastian Andrzej Siewior 2018-11-05 21:56 ` Dan Williams 2018-10-17 9:54 ` [PATCH 1/3] " David Laight 2018-10-17 10:39 ` 'Sebastian Andrzej Siewior' 2018-10-17 11:45 ` David Laight 2018-10-17 12:00 ` 'Sebastian Andrzej Siewior' 2018-10-17 11:11 ` Peter Zijlstra 2018-10-17 11:17 ` Thomas Gleixner 2018-10-17 15:47 ` Peter Zijlstra 2018-10-17 15:55 ` Thomas Gleixner 2018-10-17 16:00 ` 'Sebastian Andrzej Siewior' 2018-10-17 16:22 ` Peter Zijlstra 2018-10-16 20:25 ` [PATCH 2/3] x86/fpu: Remove second definition of fpu in __fpu__restore_sig() Sebastian Andrzej Siewior 2018-10-16 21:26 ` Andy Lutomirski 2018-10-17 9:09 ` [tip:x86/urgent] " tip-bot for Sebastian Andrzej Siewior 2018-10-18 6:22 ` tip-bot for Sebastian Andrzej Siewior 2018-10-16 20:25 ` [PATCH 3/3] x86/fpu: Save FPU registers on context switch if there is a FPU Sebastian Andrzej Siewior 2018-10-16 23:00 ` Andy Lutomirski 2018-10-17 9:10 ` [tip:x86/urgent] x86/fpu: Fix i486 + no387 boot crash by only saving FPU registers on context switch if there is an FPU tip-bot for Sebastian Andrzej Siewior 2018-10-18 6:22 ` tip-bot for Sebastian Andrzej Siewior