From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753534AbcD1KtC (ORCPT );
	Thu, 28 Apr 2016 06:49:02 -0400
Received: from terminus.zytor.com ([198.137.202.10]:48732 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753035AbcD1KtA (ORCPT );
	Thu, 28 Apr 2016 06:49:00 -0400
Date: Thu, 28 Apr 2016 03:48:07 -0700
From: tip-bot for Andy Lutomirski
Message-ID:
Cc: peterz@infradead.org, mingo@kernel.org, bp@suse.de, luto@kernel.org,
	tglx@linutronix.de, hpa@zytor.com, torvalds@linux-foundation.org,
	linux-kernel@vger.kernel.org, bp@alien8.de
Reply-To: mingo@kernel.org, peterz@infradead.org, bp@alien8.de,
	linux-kernel@vger.kernel.org, torvalds@linux-foundation.org,
	hpa@zytor.com, bp@suse.de, luto@kernel.org, tglx@linutronix.de
In-Reply-To:
References:
To: linux-tip-commits@vger.kernel.org
Subject: [tip:sched/core] x86/mm, sched/core: Turn off IRQs in switch_mm()
Git-Commit-ID: 078194f8e9fe3cf54c8fd8bded48a1db5bd8eb8a
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  078194f8e9fe3cf54c8fd8bded48a1db5bd8eb8a
Gitweb:     http://git.kernel.org/tip/078194f8e9fe3cf54c8fd8bded48a1db5bd8eb8a
Author:     Andy Lutomirski
AuthorDate: Tue, 26 Apr 2016 09:39:09 -0700
Committer:  Ingo Molnar
CommitDate: Thu, 28 Apr 2016 11:44:20 +0200

x86/mm, sched/core: Turn off IRQs in switch_mm()

Potential races between switch_mm() and TLB-flush or LDT-flush IPIs
could be very messy.  AFAICT the code is currently okay, whether by
accident or by careful design, but enabling PCID will make it
considerably more complicated and will no longer be obviously safe.

Fix it with a big hammer: run switch_mm() with IRQs off.

To avoid a performance hit in the scheduler, we take advantage of
our knowledge that the scheduler already has IRQs disabled when it
calls switch_mm().
Signed-off-by: Andy Lutomirski
Reviewed-by: Borislav Petkov
Cc: Borislav Petkov
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/f19baf759693c9dcae64bbff76189db77cb13398.1461688545.git.luto@kernel.org
Signed-off-by: Ingo Molnar
---
 arch/x86/include/asm/mmu_context.h |  3 +++
 arch/x86/mm/tlb.c                  | 10 ++++++++++
 2 files changed, 13 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index bb911dd..39634819 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -118,6 +118,9 @@ static inline void destroy_context(struct mm_struct *mm)
 
 extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 		      struct task_struct *tsk);
+extern void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+			       struct task_struct *tsk);
+#define switch_mm_irqs_off switch_mm_irqs_off
 
 #define activate_mm(prev, next)			\
 do {						\
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ce7a0c9..5643fd0 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -64,6 +64,16 @@ EXPORT_SYMBOL_GPL(leave_mm);
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	       struct task_struct *tsk)
 {
+	unsigned long flags;
+
+	local_irq_save(flags);
+	switch_mm_irqs_off(prev, next, tsk);
+	local_irq_restore(flags);
+}
+
+void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+			struct task_struct *tsk)
+{
 	unsigned cpu = smp_processor_id();
 
 	if (likely(prev != next)) {
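
For context on how the new entry point is meant to be consumed: the
"#define switch_mm_irqs_off switch_mm_irqs_off" line lets generic code
test, at preprocessing time, whether the architecture provides the
IRQs-off variant. The sketch below is illustrative only and is not part
of this patch: the fallback macro is an assumption about what a generic
header could do, and example_context_switch() is a hypothetical
scheduler-style caller, not a real kernel function.

/*
 * Sketch of an assumed generic fallback: architectures that do not
 * provide switch_mm_irqs_off() keep using plain switch_mm().
 */
#ifndef switch_mm_irqs_off
# define switch_mm_irqs_off switch_mm
#endif

/*
 * Hypothetical scheduler-side caller: the scheduler already runs its
 * context-switch path with IRQs disabled, so it can call the IRQs-off
 * variant directly and skip the local_irq_save()/local_irq_restore()
 * pair that switch_mm() now performs.
 */
static void example_context_switch(struct mm_struct *prev_mm,
				   struct mm_struct *next_mm,
				   struct task_struct *next)
{
	/* IRQs are assumed to be off here, as in the scheduler. */
	switch_mm_irqs_off(prev_mm, next_mm, next);
}

With this split, a TLB-flush or LDT-flush IPI cannot interrupt
switch_mm() part-way through updating per-CPU mm state, while the hot
scheduler path pays no extra irqsave/irqrestore cost.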