Date: Mon, 2 Jul 2018 19:01:22 +0200 (CEST)
From: Thomas Gleixner
Subject: Re: [patch V5 05/10] KVM magic # 5
To: speck@linutronix.de
In-Reply-To: <20180702163559.GD17137@char.US.ORACLE.com>
References: <20180702154426.910579106@linutronix.de>
 <20180702160528.906801335@linutronix.de>
 <20180702163559.GD17137@char.US.ORACLE.com>

On Mon, 2 Jul 2018, speck for Konrad Rzeszutek Wilk wrote:
> > -static void __maybe_unused vmx_l1d_flush(void)
> > +static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
> >  {
> >  	int size = PAGE_SIZE << L1D_CACHE_ORDER;
> > +	bool always;
> > +
> > +	/*
> > +	 * If the mitigation mode is 'flush always', keep the flush bit
> > +	 * set, otherwise clear it. It gets set again either from
> > +	 * vcpu_run() or from one of the unsafe VMEXIT handlers.
> > +	 */
> > +	always = vmentry_l1d_flush == VMENTER_L1D_FLUSH_ALWAYS;
> > +	vcpu->arch.l1tf_flush_l1d = always;
>
> You did the reset of arch.l1tf_flush_l1d _after_ we have done
> this vmx_l1d_flush call, nice!! So obvious in retrospect. :)

> > @@ -4799,6 +4800,8 @@ int kvm_read_guest_virt(struct kvm_vcpu
> >  {
> >  	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
> >
> > +	/* The gva_to_pa walker can pull in tons of pages. */
> > +	vcpu->arch.l1tf_flush_l1d = true;
>
> I think also kvm_write_guest_virt_system ? That covers vmptrs and vmread.

That fell through the cracks when picking the bits from the original patch.
Fixed.

Thanks,

	tglx