From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 28 Jun 2018 12:40:48 -0400
From: Konrad Rzeszutek Wilk
Subject: [MODERATED] Re: [PATCH v4 8/8] Linux Patch #8
Message-ID: <20180628164047.GA3445@char.US.ORACLE.com>
References: <20180623135446.010873334@localhost.localdomain>
 <20180627144326.GD21873@char.US.ORACLE.com>
To: speck@linutronix.de

> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -194,6 +194,14 @@ module_param(ple_window_max, uint, 0444)
> 
>  extern const ulong vmx_return;
> 
> +static DEFINE_STATIC_KEY_FALSE(vmx_l1d_should_flush);
> +
> +enum vmx_l1d_flush_state {
> +	VMENTER_L1D_FLUSH_NEVER,
> +	VMENTER_L1D_FLUSH_COND,
> +	VMENTER_L1D_FLUSH_ALWAYS,
> +};
> +
>  struct kvm_vmx {
>  	struct kvm kvm;
> 
> @@ -2653,38 +2661,10 @@ static void vmx_prepare_guest_switch(str
>  {
>  	vmx_save_host_state(vcpu);
> 
> -	if (!enable_ept || static_cpu_has(X86_FEATURE_HYPERVISOR) ||
> -	    !static_cpu_has(X86_BUG_L1TF)) {
> -		vcpu->arch.flush_cache_req = false;
> -		return;
> -	}
> +	if (static_branch_unlikely(&vmx_l1d_should_flush)) {
> +		bool force = vmentry_l1d_flush == VMENTER_L1D_FLUSH_ALWAYS;
> 
> -	switch (vmentry_l1d_flush) {
> -	case 0:
> -		vcpu->arch.flush_cache_req = false;
> -		break;
> -	case 1:
> -		/*
> -		 * If vmentry_l1d_flush is 1, each vmexit handler is responsible for
> -		 * setting vcpu->arch.vcpu_unconfined.  Currently this happens in the
> -		 * following cases:
> -		 * - vmlaunch/vmresume: we do not want the cache to be cleared by a
> -		 *   nested hypervisor *and* by KVM on bare metal, so we just do it
> -		 *   on every nested entry.  Nested hypervisors do not bother clearing
> -		 *   the cache.
> -		 * - anything that runs the emulator (the slow paths for EPT misconfig
> -		 *   or I/O instruction)
> -		 * - anything that can cause get_user_pages (EPT violation, and again
> -		 *   the slow paths for EPT misconfig or I/O instruction)
> -		 * - anything that can run code outside KVM (external interrupt,
> -		 *   which can run interrupt handlers or irqs; or the sched_in
> -		 *   preempt notifier)
> -		 */
> -		break;
> -	case 2:
> -	default:
> -		vcpu->arch.flush_cache_req = true;
> -		break;
> +		vcpu->arch.flush_cache_req = force;

This ought to be:

	if (force)
		vcpu->arch.flush_cache_req = force;

or perhaps:

	if (vmentry_l1d_flush == VMENTER_L1D_FLUSH_ALWAYS)
		vcpu->arch.flush_cache_req = true;

The problem is that with 'vmentry_l1d_flush=1' we must not overwrite
vcpu->arch.flush_cache_req, because vcpu_enter_guest has already set it:

7464
7465 	vcpu->arch.flush_cache_req = vcpu->arch.vcpu_unconfined;  <===
7466 	kvm_x86_ops->prepare_guest_switch(vcpu);
7467 	vcpu->arch.vcpu_unconfined = false;
7468 	if (vcpu->arch.flush_cache_req)
7469 		vcpu->stat.l1d_flush++;

Rethinking this: maybe rip the assignment above out of vcpu_enter_guest
and compute it in this function instead:

	vcpu->arch.flush_cache_req =
		(vmentry_l1d_flush == VMENTER_L1D_FLUSH_ALWAYS) ?
		true : vcpu->arch.vcpu_unconfined;

or so? Let me try that out.