From: Michal Luczaj
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, Michal Luczaj
Subject: [PATCH 6/8] KVM: x86: Clean up hva_to_pfn_retry()
Date: Wed, 21 Sep 2022 04:01:38 +0200
Message-Id: <20220921020140.3240092-7-mhal@rbox.co>
In-Reply-To: <20220921020140.3240092-1-mhal@rbox.co>
References: <20220921020140.3240092-1-mhal@rbox.co>

Make hva_to_pfn_retry() use the kvm instance cached in gfn_to_pfn_cache.

Suggested-by: Sean Christopherson
Signed-off-by: Michal Luczaj
---
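For context: hva_to_pfn_retry() can drop its kvm parameter because the
cache already carries a pointer to its owning VM (gpc->kvm), which the
hunks below dereference directly. A minimal stand-alone C sketch of the
pattern, using stand-in types and field names rather than the kernel's
real definitions:

  #include <stdio.h>

  /* Stand-ins for illustration only; not the kernel's structures. */
  struct kvm {
          unsigned long mmu_invalidate_seq;
  };

  struct gfn_to_pfn_cache {
          struct kvm *kvm;        /* owning VM, cached when the gpc is set up */
          void *khva;
  };

  /* Before: callers had to pass the owning kvm alongside the cache. */
  static int retry_cache_old(struct kvm *kvm, unsigned long mmu_seq)
  {
          return kvm->mmu_invalidate_seq != mmu_seq;
  }

  /* After: the helper takes only the cache and derives kvm from it. */
  static int retry_cache_new(struct gfn_to_pfn_cache *gpc, unsigned long mmu_seq)
  {
          return gpc->kvm->mmu_invalidate_seq != mmu_seq;
  }

  int main(void)
  {
          struct kvm vm = { .mmu_invalidate_seq = 1 };
          struct gfn_to_pfn_cache gpc = { .kvm = &vm };

          printf("old: %d new: %d\n",
                 retry_cache_old(&vm, 0),
                 retry_cache_new(&gpc, 0));
          return 0;
  }

The same shape shows up in the diff: every use of kvm inside
hva_to_pfn_retry() becomes gpc->kvm, and the extra parameter disappears
from the call site in kvm_gpc_refresh().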
 virt/kvm/pfncache.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index eb91025d7242..a2c95e393e34 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -135,7 +135,7 @@ static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_s
         return kvm->mmu_invalidate_seq != mmu_seq;
 }
 
-static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
+static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 {
         /* Note, the new page offset may be different than the old! */
         void *old_khva = gpc->khva - offset_in_page(gpc->khva);
@@ -155,7 +155,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
         gpc->valid = false;
 
         do {
-                mmu_seq = kvm->mmu_invalidate_seq;
+                mmu_seq = gpc->kvm->mmu_invalidate_seq;
                 smp_rmb();
 
                 write_unlock_irq(&gpc->lock);
@@ -213,7 +213,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
                  * attempting to refresh.
                  */
                 WARN_ON_ONCE(gpc->valid);
-        } while (mmu_notifier_retry_cache(kvm, mmu_seq));
+        } while (mmu_notifier_retry_cache(gpc->kvm, mmu_seq));
 
         gpc->valid = true;
         gpc->pfn = new_pfn;
@@ -285,7 +285,7 @@ int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
          * drop the lock and do the HVA to PFN lookup again.
          */
         if (!gpc->valid || old_uhva != gpc->uhva) {
-                ret = hva_to_pfn_retry(kvm, gpc);
+                ret = hva_to_pfn_retry(gpc);
         } else {
                 /* If the HVA→PFN mapping was already valid, don't unmap it. */
                 old_pfn = KVM_PFN_ERR_FAULT;
-- 
2.37.3