From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ben Gardon <bgardon@google.com>
Date: Mon, 15 Nov 2021 15:45:55 -0800
Message-Id: <20211115234603.2908381-8-bgardon@google.com>
In-Reply-To: <20211115234603.2908381-1-bgardon@google.com>
References: <20211115234603.2908381-1-bgardon@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog
Subject: [PATCH 07/15] KVM: x86/mmu: Factor shadow_zero_check out of make_spte
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
	David Matlack, Mingwei Zhang, Yulei Zhang, Wanpeng Li,
	Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand,
	Ben Gardon <bgardon@google.com>
Content-Type: text/plain; charset="UTF-8"

In the interest of developing a version of make_spte that can function
without a vCPU pointer, factor out the shadow_zero_check to be an
additional argument to the function.

No functional change intended.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/spte.c | 11 +++++++----
 arch/x86/kvm/mmu/spte.h |  3 ++-
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index b7271daa06c5..d3b059e96c6e 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -93,7 +93,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
 	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
-	       u64 mt_mask, u64 *new_spte)
+	       u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check,
+	       u64 *new_spte)
 {
 	int level = sp->role.level;
 	u64 spte = SPTE_MMU_PRESENT_MASK;
@@ -176,9 +177,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (prefetch)
 		spte = mark_spte_for_access_track(spte);
 
-	WARN_ONCE(is_rsvd_spte(&vcpu->arch.mmu->shadow_zero_check, spte, level),
+	WARN_ONCE(is_rsvd_spte(shadow_zero_check, spte, level),
 		  "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
-		  get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));
+		  get_rsvd_bits(shadow_zero_check, spte, level));
 
 	if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
 		/* Enforced by kvm_mmu_hugepage_adjust. */
@@ -198,10 +199,12 @@ bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	bool ad_need_write_protect = kvm_vcpu_ad_need_write_protect(vcpu);
 	u64 mt_mask = static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
						       kvm_is_mmio_pfn(pfn));
+	struct rsvd_bits_validate *shadow_zero_check = &vcpu->arch.mmu->shadow_zero_check;
 
 	return make_spte(vcpu, sp, slot, pte_access, gfn, pfn, old_spte,
 			 prefetch, can_unsync, host_writable,
-			 ad_need_write_protect, mt_mask, new_spte);
+			 ad_need_write_protect, mt_mask, shadow_zero_check,
+			 new_spte);
 }
 
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index e739f2ebf844..6134a10487c4 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -333,7 +333,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       struct kvm_memory_slot *slot, unsigned int pte_access,
 	       gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch,
 	       bool can_unsync, bool host_writable, bool ad_need_write_protect,
-	       u64 mt_mask, u64 *new_spte);
+	       u64 mt_mask, struct rsvd_bits_validate *shadow_zero_check,
+	       u64 *new_spte);
 bool vcpu_make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		    struct kvm_memory_slot *slot,
 		    unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
-- 
2.34.0.rc1.387.gb447b232ab-goog
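
A sketch of where this refactor is headed, given the stated goal of a
make_spte that can work without a vCPU pointer: a caller with no vCPU
in hand could pass a VM-scoped reserved-bits check instead of the one
hanging off vcpu->arch.mmu. Everything below is hypothetical and not
part of this patch; the kvm_vm_make_spte name and the
kvm->arch.shadow_zero_check field do not exist in this series, and the
sketch assumes later patches remove make_spte's remaining vCPU uses.

	/*
	 * Hypothetical illustration only: neither kvm_vm_make_spte nor
	 * kvm->arch.shadow_zero_check exists in this series.  Assumes a
	 * follow-up change drops make_spte's remaining vCPU uses so that
	 * NULL can be passed for @vcpu when @can_unsync is false.
	 */
	bool kvm_vm_make_spte(struct kvm *kvm, struct kvm_mmu_page *sp,
			      struct kvm_memory_slot *slot,
			      unsigned int pte_access, gfn_t gfn,
			      kvm_pfn_t pfn, u64 old_spte,
			      bool host_writable, u64 *new_spte)
	{
		/* One VM-wide reserved-bits check, not vcpu->arch.mmu's. */
		struct rsvd_bits_validate *shadow_zero_check =
			&kvm->arch.shadow_zero_check;

		return make_spte(NULL, sp, slot, pte_access, gfn, pfn,
				 old_spte, false /* prefetch */,
				 false /* can_unsync */, host_writable,
				 false /* ad_need_write_protect */,
				 0 /* mt_mask */, shadow_zero_check,
				 new_spte);
	}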