Date: Thu, 1 Apr 2021 22:56:06 +0000
From: Sean Christopherson
To: Paolo Bonzini
Cc: Maxim Levitsky, kvm@vger.kernel.org, Thomas Gleixner, Wanpeng Li,
    Borislav Petkov, Jim Mattson,
    "open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)", Vitaly Kuznetsov,
    "H. Peter Anvin", Joerg Roedel, Ingo Molnar,
    "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
Subject: Re: [PATCH 3/4] KVM: x86: correctly merge pending and injected exception
References: <20210401143817.1030695-1-mlevitsk@redhat.com>
 <20210401143817.1030695-4-mlevitsk@redhat.com>

On Thu, Apr 01, 2021, Paolo Bonzini wrote:
> On 01/04/21 16:38, Maxim Levitsky wrote:
> > +static int kvm_do_deliver_pending_exception(struct kvm_vcpu *vcpu)
> > +{
> > +	int class1, class2, ret;
> > +
> > +	/* try to deliver current pending exception as VM exit */
> > +	if (is_guest_mode(vcpu)) {
> > +		ret = kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
> > +		if (ret || !vcpu->arch.pending_exception.valid)
> > +			return ret;
> > +	}
> > +
> > +	/* No injected exception, so just deliver the payload and inject it */
> > +	if (!vcpu->arch.injected_exception.valid) {
> > +		trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
> > +					vcpu->arch.pending_exception.has_error_code,
> > +					vcpu->arch.pending_exception.error_code);
> > +queue:
>
> If you move the queue label to the top of the function, you can "goto
> queue" for #DF as well, and you don't need to call
> kvm_do_deliver_pending_exception again.  In fact, you can merge this
> function and kvm_deliver_pending_exception completely:
>
> static int kvm_deliver_pending_exception_as_vmexit(struct kvm_vcpu *vcpu)
> {
> 	WARN_ON(!vcpu->arch.pending_exception.valid);
>
> 	if (is_guest_mode(vcpu))
> 		return kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
> 	else
> 		return 0;
> }
>
> static int kvm_merge_injected_exception(struct kvm_vcpu *vcpu)
> {
> 	/*
> 	 * First check if the pending exception takes precedence
> 	 * over the injected one, which will be reported in the
> 	 * vmexit info.
> 	 */
> 	ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
> 	if (ret || !vcpu->arch.pending_exception.valid)
> 		return ret;
>
> 	if (vcpu->arch.injected_exception.nr == DF_VECTOR) {
> 		...
> 		return 0;
> 	}
> 	...
> 	if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY) ||
> 	    (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) {
> 		...
> 	}
> 	vcpu->arch.injected_exception.valid = false;
> }
>
> static int kvm_deliver_pending_exception(struct kvm_vcpu *vcpu)
> {
> 	if (!vcpu->arch.pending_exception.valid)
> 		return 0;
>
> 	if (vcpu->arch.injected_exception.valid)
> 		kvm_merge_injected_exception(vcpu);
>
> 	ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
> 	if (ret || !vcpu->arch.pending_exception.valid)

I really don't like querying arch.pending_exception.valid to see if the
exception was morphed to a VM-Exit.  I also find
kvm_deliver_pending_exception_as_vmexit() to be misleading; to me, that
reads as being a command, i.e. "deliver this pending exception as a
VM-Exit".

It'd also be nice to make the helpers closer to pure functions, i.e. pass
the exception as a param instead of pulling it from vcpu->arch.

Now that we have static_call, the number of calls into vendor code isn't a
huge issue.  Moving nested_run_pending to arch code would help, too.  What
about doing something like:

static bool kvm_l1_wants_exception_vmexit(struct kvm_vcpu *vcpu, u8 vector)
{
	return is_guest_mode(vcpu) && kvm_x86_l1_wants_exception(vcpu, vector);
}

...

	if (!kvm_x86_exception_allowed(vcpu))
		return -EBUSY;

	if (kvm_l1_wants_exception_vmexit(vcpu, vcpu->arch...))
		return kvm_x86_deliver_exception_as_vmexit(...);
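Fleshing that out, a rough, untested sketch of how the whole delivery path
could look with those helpers.  Note, kvm_x86_exception_allowed(),
kvm_x86_deliver_exception_as_vmexit() and kvm_x86_inject_exception() are
hypothetical static_call wrappers that don't exist today, and the field
names follow this series:

static int kvm_deliver_pending_exception(struct kvm_vcpu *vcpu)
{
	if (!vcpu->arch.pending_exception.valid)
		return 0;

	/* Merge into any injected exception first, e.g. promote to #DF. */
	if (vcpu->arch.injected_exception.valid)
		kvm_merge_injected_exception(vcpu);

	/* Vendor code may disallow delivery, e.g. nested_run_pending. */
	if (!kvm_x86_exception_allowed(vcpu))
		return -EBUSY;

	/* L1 intercepts the vector, morph the exception to a VM-Exit. */
	if (kvm_l1_wants_exception_vmexit(vcpu, vcpu->arch.pending_exception.nr))
		return kvm_x86_deliver_exception_as_vmexit(vcpu);

	trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
				vcpu->arch.pending_exception.has_error_code,
				vcpu->arch.pending_exception.error_code);

	kvm_x86_inject_exception(vcpu);
	return 0;
}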
> 		return ret;
>
> 	trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
> 				vcpu->arch.pending_exception.has_error_code,
> 				vcpu->arch.pending_exception.error_code);
> 	...
> }
>
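For reference, the "..." placeholders in kvm_merge_injected_exception()
above are essentially the existing #DF promotion rules from
kvm_multiple_exception() in x86.c.  With the VM-Exit check hoisted out as
suggested, the merge itself reduces to roughly the following (untested,
and again using this series' pending/injected field names):

static void kvm_merge_injected_exception(struct kvm_vcpu *vcpu)
{
	int class1 = exception_class(vcpu->arch.injected_exception.nr);
	int class2 = exception_class(vcpu->arch.pending_exception.nr);

	if (vcpu->arch.injected_exception.nr == DF_VECTOR) {
		/* #DF while vectoring a #DF escalates to a triple fault. */
		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
	} else if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY) ||
		   (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) {
		/* Two contributory faults, or a fault while vectoring #PF => #DF. */
		vcpu->arch.pending_exception.nr = DF_VECTOR;
		vcpu->arch.pending_exception.has_error_code = true;
		vcpu->arch.pending_exception.error_code = 0;
	}

	/* The injected exception is subsumed by the (possibly new) pending one. */
	vcpu->arch.injected_exception.valid = false;
}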