From mboxrd@z Thu Jan 1 00:00:00 1970
Reply-To: Mingwei Zhang
Date: Mon, 2 Aug 2021 21:46:05 -0700
In-Reply-To: <20210803044607.599629-1-mizhang@google.com>
Message-Id: <20210803044607.599629-2-mizhang@google.com>
Mime-Version: 1.0
References: <20210803044607.599629-1-mizhang@google.com>
X-Mailer: git-send-email 2.32.0.554.ge1b32706d8-goog
Subject: [PATCH v4 1/3] KVM: x86/mmu: Remove redundant spte present check in mmu_set_spte
From: Mingwei Zhang
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Ben Gardon, David Matlack, Mingwei Zhang, Jing Zhang
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Drop an unnecessary is_shadow_present_pte() check when updating the
rmaps after installing a non-MMIO SPTE.  set_spte() is used only to
create shadow-present SPTEs, e.g. MMIO SPTEs are handled early on, and
mmu_set_spte() runs with mmu_lock held for write, i.e. the SPTE can't
be zapped between writing the SPTE and updating the rmaps.

Opportunistically combine the "new SPTE" logic for large pages and
rmaps.

No functional change intended.

Suggested-by: Ben Gardon
Reviewed-by: David Matlack
Reviewed-by: Ben Gardon
Reviewed-by: Sean Christopherson
Signed-off-by: Mingwei Zhang
---
 arch/x86/kvm/mmu/mmu.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a8cdfd8d45c4..f614e9df3c3b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2734,17 +2734,13 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	pgprintk("%s: setting spte %llx\n", __func__, *sptep);
 	trace_kvm_mmu_set_spte(level, gfn, sptep);
 
-	if (!was_rmapped && is_large_pte(*sptep))
-		++vcpu->kvm->stat.lpages;
-	if (is_shadow_present_pte(*sptep)) {
-		if (!was_rmapped) {
-			rmap_count = rmap_add(vcpu, sptep, gfn);
-			if (rmap_count > vcpu->kvm->stat.max_mmu_rmap_size)
-				vcpu->kvm->stat.max_mmu_rmap_size = rmap_count;
-			if (rmap_count > RMAP_RECYCLE_THRESHOLD)
-				rmap_recycle(vcpu, sptep, gfn);
-		}
+	if (!was_rmapped) {
+		if (is_large_pte(*sptep))
+			++vcpu->kvm->stat.lpages;
+		rmap_count = rmap_add(vcpu, sptep, gfn);
+		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
+			rmap_recycle(vcpu, sptep, gfn);
 	}
 
 	return ret;
-- 
2.32.0.554.ge1b32706d8-goog
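
P.S. To make the redundancy argument concrete, here is a minimal sketch
(not part of the patch; the assertion placement and the specific use of
lockdep_assert_held_write()/WARN_ON_ONCE() are illustrative assumptions)
of the invariant that holds at the rmap-update point in mmu_set_spte()
once MMIO SPTEs have been filtered out earlier in the fault path:

	/*
	 * Illustrative only, not from the patch: at this point in
	 * mmu_set_spte() both conditions below are guaranteed, which
	 * is why the is_shadow_present_pte() check can be dropped.
	 */
	lockdep_assert_held_write(&vcpu->kvm->mmu_lock);  /* no concurrent zap */
	WARN_ON_ONCE(!is_shadow_present_pte(*sptep));     /* set_spte() made it present */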