From: Andy Lutomirski <luto@kernel.org>
Date: Mon, 4 Dec 2017 14:54:46 -0800
Subject: Re: [patch 51/60] x86/mm: Allow flushing for future ASID switches
To: Peter Zijlstra
Cc: Andy Lutomirski, Thomas Gleixner, LKML, X86 ML, Linus Torvalds,
	Dave Hansen, Borislav Petkov, Greg KH, Kees Cook, Hugh Dickins,
	Brian Gerst, Josh Poimboeuf, Denys Vlasenko, Rik van Riel,
	Boris Ostrovsky, Juergen Gross, David Laight, Eduardo Valentin,
	aliguori@amazon.com, Will Deacon, Daniel Gruss, Dave Hansen,
	Ingo Molnar, michael.schwarz@iaik.tugraz.at, Borislav Petkov,
	moritz.lipp@iaik.tugraz.at, richard.fellner@student.tugraz.at,
	Andrew Banman, mike.travis@hpe.com
In-Reply-To: <20171204224757.GC20227@worktop.programming.kicks-ass.net>
References: <20171204140706.296109558@linutronix.de>
	<20171204150609.002009374@linutronix.de>
	<20171204224757.GC20227@worktop.programming.kicks-ass.net>

On Mon, Dec 4, 2017 at 2:47 PM, Peter Zijlstra wrote:
> On Mon, Dec 04, 2017 at 02:22:54PM -0800, Andy Lutomirski wrote:
>
>> > +static inline void invalidate_pcid_other(void)
>> > +{
>> > +	/*
>> > +	 * With global pages, all of the shared kernel page tables
>> > +	 * are set as _PAGE_GLOBAL.  We have no shared nonglobals
>> > +	 * and nothing to do here.
>> > +	 */
>> > +	if (!static_cpu_has_bug(X86_BUG_CPU_SECURE_MODE_KPTI))
>> > +		return;
>>
>> I think I'd be more comfortable if this check were in the caller, not
>> here.  Shouldn't a function called invalidate_pcid_other() do what the
>> name says?
>
> Yeah, you're probably right. The thing is, of course, that we only ever
> need that operation for kpti (as of now). But me renaming this stuff
> created this problem :/
>
>> > +	this_cpu_write(cpu_tlbstate.invalidate_other, true);
>>
>> Why do we need this extra variable instead of just looping over all
>> other ASIDs and invalidating them?  It would be something like:
>>
>> 	for (i = 1; i < TLB_NR_DYN_ASIDS; i++) {
>> 		if (i != this_cpu_read(cpu_tlbstate.loaded_mm_asid))
>> 			this_cpu_write(cpu_tlbstate.ctxs[i].ctx_id, 0);
>> 	}
>>
>> modulo epic whitespace damage and possible typos.
>
> I think the point is that we can do many invalidate_other's before we
> ever do a switch_mm(). The above would be more expensive.
>
> Not sure it would matter in practice though.
>
>> >  static inline void __flush_tlb_one(unsigned long addr)
>> >  {
>> >  	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
>> >  	__flush_tlb_single(addr);
>> > +	/*
>> > +	 * Invalidate other address spaces inaccessible to single-page
>> > +	 * invalidation:
>> > +	 */
>>
>> Ugh.
>> If I'm reading this right, __flush_tlb_single() means "flush one
>> user address" and __flush_tlb_one() means "flush one kernel address".
>
> That would make sense, wouldn't it? :-) But AFAICT the __flush_tlb_one()
> user in tlb_uv.c is in fact for userspace and should be
> __flush_tlb_single().
>
> Andrew, Mike, can either of you shed light on what exactly you need
> invalidated there?
>
>> That's, um, not exactly obvious.  Could this be at least commented
>> better?
>
> As is, __flush_tlb_single() does user and __flush_tlb_one() does
> user+kernel.

Yep.  A one-liner above each function to that effect would make it *way*
clearer what's going on.
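
For instance (just a sketch of the wording -- nobody in this thread has
actually written these comments):

	/* Flush one user-space address from this CPU's TLB. */
	static inline void __flush_tlb_single(unsigned long addr);

	/* Flush one address from this CPU's TLB, for both the user and
	   the kernel mappings. */
	static inline void __flush_tlb_one(unsigned long addr);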
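
And FWIW, the eager vs. lazy trade-off discussed further up, in sketch
form.  This uses a simplified, non-percpu stand-in for cpu_tlbstate and
a made-up TLB_NR_DYN_ASIDS value; it's untested and only meant to
illustrate the idea, not to match the patch line for line:

	#include <stdbool.h>
	#include <stdint.h>

	#define TLB_NR_DYN_ASIDS 6	/* placeholder value for the sketch */

	struct tlb_ctx_sketch {
		uint64_t ctx_id;
	};

	struct tlb_state_sketch {
		unsigned int loaded_mm_asid;
		bool invalidate_other;
		struct tlb_ctx_sketch ctxs[TLB_NR_DYN_ASIDS];
	};

	/*
	 * Eager: wipe every other ASID's ctx_id immediately, so the next
	 * switch to that ASID sees a mismatch and flushes.  N calls before
	 * the next context switch cost N passes over the array.
	 */
	static void invalidate_other_eager(struct tlb_state_sketch *ts)
	{
		unsigned int i;

		for (i = 0; i < TLB_NR_DYN_ASIDS; i++)
			if (i != ts->loaded_mm_asid)
				ts->ctxs[i].ctx_id = 0;
	}

	/*
	 * Lazy (what the patch does): only set a flag; however many calls
	 * happen before the next context switch, the invalidation work is
	 * done once, when the flag is consumed.
	 */
	static void invalidate_other_lazy(struct tlb_state_sketch *ts)
	{
		ts->invalidate_other = true;
	}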