From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932515AbcFOCVV (ORCPT );
	Tue, 14 Jun 2016 22:21:21 -0400
Received: from mail-oi0-f53.google.com ([209.85.218.53]:35711 "EHLO
	mail-oi0-f53.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S932424AbcFOCVT (ORCPT );
	Tue, 14 Jun 2016 22:21:19 -0400
MIME-Version: 1.0
In-Reply-To: <5760792D.90000@linux.intel.com>
References: <1465919919-2093-1-git-send-email-lukasz.anaczkowski@intel.com>
	<7FB15233-B347-4A87-9506-A9E10D331292@gmail.com>
	<57603C61.5000408@linux.intel.com>
	<2471A3E8-FF69-4720-A3BF-BDC6094A6A70@gmail.com>
	<5760792D.90000@linux.intel.com>
From: Andy Lutomirski 
Date: Tue, 14 Jun 2016 19:20:59 -0700
Message-ID: 
Subject: Re: [PATCH] Linux VM workaround for Knights Landing A/D leak
To: Dave Hansen 
Cc: Nadav Amit, Lukasz Anaczkowski, LKML, "linux-mm@kvack.org",
	Thomas Gleixner, Ingo Molnar, Andi Kleen, "Kirill A. Shutemov",
	Michal Hocko, Andrew Morton, "H. Peter Anvin",
	harish.srinivasappa@intel.com, lukasz.odzioba@intel.com
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 14, 2016 at 2:37 PM, Dave Hansen wrote:
> On 06/14/2016 01:16 PM, Nadav Amit wrote:
>> Dave Hansen wrote:
>>
>>> On 06/14/2016 09:47 AM, Nadav Amit wrote:
>>>> Lukasz Anaczkowski wrote:
>>>>
>>>>>> From: Andi Kleen
>>>>>> +void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
>>>>>> +{
>>>> Here there should be a call to smp_mb__after_atomic() to synchronize with
>>>> switch_mm. I submitted a similar patch, which is still pending (hint).
>>>>
>>>>>> +	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
>>>>>> +		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
>>>>>> +		flush_tlb_others(mm_cpumask(mm), mm, addr,
>>>>>> +				 addr + PAGE_SIZE);
>>>>>> +		mb();
>>>>>> +		set_pte(ptep, __pte(0));
>>>>>> +	}
>>>>>> +}
>>>
>>> Shouldn't that barrier be incorporated in the TLB flush code itself and
>>> not every single caller (like this code is)?
>>>
>>> It is insane to require individual TLB flushers to be concerned with the
>>> barriers.
>>
>> IMHO it is best to use existing flushing interfaces instead of creating
>> new ones.
>
> Yeah, or make these things a _little_ harder to get wrong.  That little
> snippet above isn't so crazy that we should be depending on open-coded
> barriers to get it right.
>
> Should we just add a barrier to mm_cpumask() itself?  That should stop
> the race.  Or maybe we need a new primitive like:
>
> /*
>  * Call this if a full barrier has been executed since the last
>  * pagetable modification operation.
>  */
> static int __other_cpus_need_tlb_flush(struct mm_struct *mm)
> {
> 	/* cpumask_any_but() returns >= nr_cpu_ids if no cpus set. */
> 	return cpumask_any_but(mm_cpumask(mm), smp_processor_id()) <
> 		nr_cpu_ids;
> }
>
> static int other_cpus_need_tlb_flush(struct mm_struct *mm)
> {
> 	/*
> 	 * Synchronizes with switch_mm.  Makes sure that we do not
> 	 * observe a bit having been cleared in mm_cpumask() before
> 	 * the other processor has seen our pagetable update.  See
> 	 * switch_mm().
> 	 */
> 	smp_mb__after_atomic();
>
> 	return __other_cpus_need_tlb_flush(mm);
> }
>
> We should be able to deploy other_cpus_need_tlb_flush() in most of the
> cases where we are doing "cpumask_any_but(mm_cpumask(mm),
> smp_processor_id()) < nr_cpu_ids".

IMO this is a bit nuts.  smp_mb__after_atomic() doesn't do anything on x86.
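
(For reference, on x86 that helper is only a compiler barrier, because
atomic RMW operations are already serializing.  Roughly, from
arch/x86/include/asm/barrier.h -- quoted from memory, so check the tree:

	/* Atomic operations are already serializing on x86 */
	#define smp_mb__before_atomic()	barrier()
	#define smp_mb__after_atomic()	barrier()

No fence instruction is emitted at all.)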
And, even if it did, why should the flush code assume that the previous
store was atomic?  What's the issue being fixed / worked around here?
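
If the issue really is the mm_cpumask vs. switch_mm() ordering that
Nadav and Dave describe, then whatever primitive we add shouldn't
depend on the previous store having been an atomic RMW at all.
Something like this -- purely an untested sketch reusing the names
from the quoted proposal, with an unconditional smp_mb() -- would at
least be honest about that:

static int other_cpus_need_tlb_flush(struct mm_struct *mm)
{
	/*
	 * Order the preceding pagetable update, whatever kind of
	 * store it was, against the mm_cpumask() read below; pairs
	 * with the ordering on the switch_mm() side.
	 */
	smp_mb();

	/* cpumask_any_but() returns >= nr_cpu_ids if no cpus set. */
	return cpumask_any_but(mm_cpumask(mm), smp_processor_id()) <
		nr_cpu_ids;
}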
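
And if we did grow such a primitive, the workaround from the patch
would collapse to something like this (again an illustrative sketch
only, not a tested implementation), with the barrier living in one
place instead of being open-coded in every caller:

void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
	if (other_cpus_need_tlb_flush(mm)) {
		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
		flush_tlb_others(mm_cpumask(mm), mm, addr,
				 addr + PAGE_SIZE);
		mb();
		set_pte(ptep, __pte(0));
	}
}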