Date: Fri, 11 Aug 2017 14:07:14 +0200
From: Peter Zijlstra
To: Andrew Cooper
Cc: Vitaly Kuznetsov, Juergen Gross, Stephen Hemminger, boris.ostrovsky@oracle.com,
	linux-tip-commits@vger.kernel.org, Jork Loeser, Haiyang Zhang,
	linux-kernel@vger.kernel.org, rostedt@goodmis.org, Simon Xiao,
	andy.shevchenko@gmail.com, luto@kernel.org, hpa@zytor.com,
	xen-devel@lists.xenproject.org, tglx@linutronix.de, KY Srinivasan,
	torvalds@linux-foundation.org, mingo@kernel.org
Subject: Re: [Xen-devel] [tip:x86/platform] x86/hyper-v: Use hypercall for remote TLB flush
Message-ID: <20170811120714.rwr24ewr4mjzwznn@hirez.programming.kicks-ass.net>
References: <20170802160921.21791-8-vkuznets@redhat.com>
	<20170810185646.GI6524@worktop.programming.kicks-ass.net>
	<20170810192742.GJ6524@worktop.programming.kicks-ass.net>
	<87lgmqqwzl.fsf@vitty.brq.redhat.com>
	<20170811105625.hmdfnp3yh72zut33@hirez.programming.kicks-ass.net>
	<43ddd29a-1670-ef0b-c327-10a2dca67cb4@citrix.com>
In-Reply-To: <43ddd29a-1670-ef0b-c327-10a2dca67cb4@citrix.com>

On Fri, Aug 11, 2017 at 12:05:45PM +0100, Andrew Cooper wrote:
> >> Oh, I see your concern. Hyper-V, however, is not the first x86
> >> hypervisor trying to avoid IPIs on remote TLB flush, Xen does this
> >> too. Briefly looking at xen_flush_tlb_others() I don't see anything
> >> special, do we know how serialization is achieved there?
>
> > No idea on how Xen works, I always just hope it goes away :-) But lets
> > ask some Xen folks.
>
> How is the software pagewalker relying on IF being clear safe at all (on
> native, let alone under virtualisation)? Hardware has no architectural
> requirement to keep entries in the TLB.

No, but it _can_, therefore when we unhook pages we _must_ invalidate.

It goes like:

	CPU0				CPU1

	unhook page			cli
					traverse page tables
	TLB invalidate --->
					sti
	TLB invalidate <------ complete
	free page

So the CPU1 page-table walker gets an existence guarantee of the
page-tables by clearing IF.

> In the virtualisation case, at any point the vcpu can be scheduled on a
> different pcpu even during a critical region like that, so the TLB
> really can empty itself under your feet.

Not the point.
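
To make the cli/sti handshake in the diagram above concrete, here is a
minimal userspace model (plain C11 + pthreads, not kernel code) of the
existence guarantee: an atomic flag stands in for CPU1's IF bit, and
CPU0's wait for the TLB invalidate to "complete" is modelled as spinning
until that flag is clear, since the flush IPI can only be serviced and
acknowledged once interrupts are enabled again. All names here
(pte_page, cpu0_unmapper, cpu1_walker) are illustrative, not taken from
any kernel source.

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdlib.h>

	/* The page-table page: CPU1 walks it, CPU0 unhooks and frees it. */
	static _Atomic(unsigned long *) pte_page;

	/* Models CPU1's IF flag: true means interrupts are disabled (cli). */
	static atomic_bool cpu1_irqs_off;

	/* CPU1: lockless software page-table walker. */
	static void *cpu1_walker(void *arg)
	{
		(void)arg;

		atomic_store(&cpu1_irqs_off, true);	/* cli */

		/*
		 * Traverse the page tables.  Either we see the page already
		 * unhooked (NULL), or we see it and CPU0 cannot free it
		 * until we "re-enable interrupts" below.
		 */
		unsigned long *p = atomic_load(&pte_page);
		if (p)
			(void)*p;

		atomic_store(&cpu1_irqs_off, false);	/* sti */
		return NULL;
	}

	/* CPU0: unhook, invalidate, then free. */
	static void *cpu0_unmapper(void *arg)
	{
		(void)arg;

		/* unhook page */
		unsigned long *old = atomic_exchange(&pte_page, NULL);

		/*
		 * "TLB invalidate ---> ... <------ complete": the flush
		 * cannot complete while CPU1 has IF clear, so once it has
		 * completed, any walker that saw the old entry is done
		 * with the page.
		 */
		while (atomic_load(&cpu1_irqs_off))
			;

		free(old);				/* free page */
		return NULL;
	}

	int main(void)
	{
		atomic_store(&pte_page, calloc(512, sizeof(unsigned long)));

		pthread_t t0, t1;
		pthread_create(&t1, NULL, cpu1_walker, NULL);
		pthread_create(&t0, NULL, cpu0_unmapper, NULL);
		pthread_join(t1, NULL);
		pthread_join(t0, NULL);
		return 0;
	}

The point the sketch encodes is that the free on CPU0 is ordered after
every walker that could still see the old entry: in the model that
ordering comes from the seq_cst atomics around the flag, in the kernel
it comes from the TLB-shootdown IPI only being serviced once the walker
executes sti.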