From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752658AbdFVHcd (ORCPT );
	Thu, 22 Jun 2017 03:32:33 -0400
Received: from mail-wr0-f195.google.com ([209.85.128.195]:36512 "EHLO
	mail-wr0-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751143AbdFVHcb (ORCPT );
	Thu, 22 Jun 2017 03:32:31 -0400
Date: Thu, 22 Jun 2017 09:32:27 +0200
From: Ingo Molnar
To: Andy Lutomirski
Cc: Nadav Amit, X86 ML, LKML, Borislav Petkov, Linus Torvalds,
	Andrew Morton, Mel Gorman, "linux-mm@kvack.org", Rik van Riel,
	Dave Hansen, Arjan van de Ven, Peter Zijlstra
Subject: Re: [PATCH v3 01/11] x86/mm: Don't reenter flush_tlb_func_common()
Message-ID: <20170622073227.lep4fmqypq6habnn@gmail.com>
References: <207CCA52-C1A0-4AEF-BABF-FA6552CFB71F@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: NeoMutt/20170113 (1.7.2)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

* Andy Lutomirski wrote:

> On Wed, Jun 21, 2017 at 4:26 PM, Nadav Amit wrote:
> > Andy Lutomirski wrote:
> >
> >> index 2a5e851f2035..f06239c6919f 100644
> >> --- a/arch/x86/mm/tlb.c
> >> +++ b/arch/x86/mm/tlb.c
> >> @@ -208,6 +208,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> >>  static void flush_tlb_func_common(const struct flush_tlb_info *f,
> >>  				  bool local, enum tlb_flush_reason reason)
> >>  {
> >> +	/* This code cannot presently handle being reentered. */
> >> +	VM_WARN_ON(!irqs_disabled());
> >> +
> >>  	if (this_cpu_read(cpu_tlbstate.state) != TLBSTATE_OK) {
> >>  		leave_mm(smp_processor_id());
> >>  		return;
> >> @@ -313,8 +316,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
> >>  		info.end = TLB_FLUSH_ALL;
> >>  	}
> >>
> >> -	if (mm == this_cpu_read(cpu_tlbstate.loaded_mm))
> >> +	if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
> >
> > Perhaps you want to add:
> >
> > VM_WARN_ON(irqs_disabled());
> >
> > here
> >
> >> +		local_irq_disable();
> >>  		flush_tlb_func_local(&info, TLB_LOCAL_MM_SHOOTDOWN);
> >> +		local_irq_enable();
> >> +	}
> >> +
> >>  	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
> >>  		flush_tlb_others(mm_cpumask(mm), &info);
> >>  	put_cpu();
> >> @@ -370,8 +377,12 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> >>
> >>  	int cpu = get_cpu();
> >>
> >> -	if (cpumask_test_cpu(cpu, &batch->cpumask))
> >> +	if (cpumask_test_cpu(cpu, &batch->cpumask)) {
> >
> > and here?
>
> Will do.
>
> What I really want is lockdep_assert_irqs_disabled() or, even better,
> for this to be implicit when calling local_irq_disable(). Ingo?

I tried that once many years ago and IIRC there were problems - but maybe
we could try it again and enforce it, as I agree that the following
pattern:

	local_irq_disable();
	...
	local_irq_disable();
	...
	local_irq_enable();
	...
	local_irq_enable();
	...

is actively dangerous.

Thanks,

	Ingo