From: Nicolai Stange
Date: Sat, 21 Jul 2018 22:35:28 +0200
Subject: [MODERATED] [PATCH v3 3/8] kvm: handle host mode irqs 3
To: speck@linutronix.de

Currently, vmx_vcpu_run() checks whether ->l1tf_flush_l1d is set and
invokes vmx_l1d_flush() if so. This test is unnecessary for the
"always flush L1d" mode.

Move the check into vmx_l1d_flush()'s conditional mode code path.

Notes:
- vmx_l1d_flush() is likely to get inlined anyway and thus, there's
  no extra function call.

- This change inverts the (static) branch prediction, but there
  hadn't been any explicit likely()/unlikely() annotations before
  and so I decided to leave it as is.

Signed-off-by: Nicolai Stange
---
 arch/x86/kvm/vmx.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 5139738cc5a9..ec955b870756 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9693,12 +9693,16 @@ static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 	 * 'always'
 	 */
 	if (static_branch_likely(&vmx_l1d_flush_cond)) {
+		bool flush_l1d = vcpu->arch.l1tf_flush_l1d;
+
 		/*
 		 * Clear the flush bit, it gets set again either from
 		 * vcpu_run() or from one of the unsafe VMEXIT
 		 * handlers.
 		 */
 		vcpu->arch.l1tf_flush_l1d = false;
+		if (!flush_l1d)
+			return;
 	}
 
 	vcpu->stat.l1d_flush++;
@@ -10228,10 +10232,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	evmcs_rsp = static_branch_unlikely(&enable_evmcs) ?
 		(unsigned long)&current_evmcs->host_rsp : 0;
 
-	if (static_branch_unlikely(&vmx_l1d_should_flush)) {
-		if (vcpu->arch.l1tf_flush_l1d)
-			vmx_l1d_flush(vcpu);
-	}
+	if (static_branch_unlikely(&vmx_l1d_should_flush))
+		vmx_l1d_flush(vcpu);
 
 	asm(
 		/* Store host registers */
-- 
2.13.7
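
P.S.: For readers following along, the net effect of the patch can be
modelled with a small standalone userspace sketch. The static keys and
struct kvm_vcpu are replaced by hypothetical plain-bool stand-ins here;
this is illustrative only, not the real kernel code or APIs:

	#include <stdbool.h>
	#include <stdio.h>

	/* Stand-ins for the kernel's static keys:
	 * static_branch_unlikely(&vmx_l1d_should_flush) and
	 * static_branch_likely(&vmx_l1d_flush_cond). */
	static bool vmx_l1d_should_flush = true; /* flushing enabled at all */
	static bool vmx_l1d_flush_cond = true;   /* 'cond' vs. 'always' mode */

	/* Simplified stand-in for struct kvm_vcpu. */
	struct vcpu {
		bool l1tf_flush_l1d;
	};

	static void vmx_l1d_flush(struct vcpu *vcpu)
	{
		if (vmx_l1d_flush_cond) {
			bool flush_l1d = vcpu->l1tf_flush_l1d;

			/* Clear the flush bit; it gets set again from
			 * vcpu_run() or an unsafe VMEXIT handler. */
			vcpu->l1tf_flush_l1d = false;
			if (!flush_l1d)
				return;
		}
		/* In 'always' mode the flag check above is skipped. */
		printf("L1D flushed\n");
	}

	static void vmx_vcpu_run(struct vcpu *vcpu)
	{
		/* After the patch: the flag check lives inside
		 * vmx_l1d_flush(), not here. */
		if (vmx_l1d_should_flush)
			vmx_l1d_flush(vcpu);
	}

	int main(void)
	{
		struct vcpu v = { .l1tf_flush_l1d = true };

		vmx_vcpu_run(&v); /* flushes: flag was set */
		vmx_vcpu_run(&v); /* no flush: flag already cleared */
		return 0;
	}

In 'always' mode (vmx_l1d_flush_cond false) every call would flush,
which is why reading and clearing the flag there would be wasted work.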