From: Ben Gardon
Date: Mon, 9 Jan 2023 10:47:10 -0800
Subject: Re: [RFC 04/14] KVM: x86/MMU: Expose functions for paging_tmpl.h
To: David Matlack
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini,
 Peter Xu, Sean Christopherson, Vipin Sharma, Nagareddy Reddy
References: <20221221222418.3307832-1-bgardon@google.com>
 <20221221222418.3307832-5-bgardon@google.com>

On Fri, Jan 6, 2023 at 11:49 AM David Matlack wrote:
>
> On Wed, Dec 21, 2022 at 10:24:08PM +0000, Ben Gardon wrote:
> > In preparation for moving paging_tmpl.h to shadow_mmu.c, expose various
> > functions it needs through mmu_internal.h.
> > This includes modifying the BUILD_MMU_ROLE_ACCESSOR macro so that it
> > does not automatically include the static label, since some but not all
> > of the accessors are needed by paging_tmpl.h.
> >
> > No functional change intended.
> >
> > Signed-off-by: Ben Gardon
> > ---
> >  arch/x86/kvm/mmu/mmu.c          | 32 ++++++++++++++++----------------
> >  arch/x86/kvm/mmu/mmu_internal.h | 16 ++++++++++++++++
> >  2 files changed, 32 insertions(+), 16 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index bf14e181eb12..a17e8a79e4df 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -153,18 +153,18 @@ BUILD_MMU_ROLE_REGS_ACCESSOR(efer, lma, EFER_LMA);
> >   * and the vCPU may be incorrect/irrelevant.
> >   */
> >  #define BUILD_MMU_ROLE_ACCESSOR(base_or_ext, reg, name)			\
> > -static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu) \
> > +inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu)	\
> >  {									\
> >  	return !!(mmu->cpu_role. base_or_ext . reg##_##name);		\
> >  }
> >  BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);
> > -BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse);
> > +static BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse);
> >  BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep);
> > -BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap);
> > -BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke);
> > -BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
> > +static BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap);
> > +static BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke);
> > +static BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
> >  BUILD_MMU_ROLE_ACCESSOR(base, efer, nx);
> > -BUILD_MMU_ROLE_ACCESSOR(ext, efer, lma);
> > +static BUILD_MMU_ROLE_ACCESSOR(ext, efer, lma);
>
> Suggest moving all the BUILD_MMU_ROLE*() macros to mmu_internal.h, since
> they are already static inline. That would be a cleaner patch and reduce
> future churn if shadow_mmu.c ever needs to use a different role accessor
> at some point.

That sounds reasonable. Will do in V1.

> >
> >  static inline bool is_cr0_pg(struct kvm_mmu *mmu)
> >  {
> > @@ -210,7 +210,7 @@ void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
> >  	kvm_flush_remote_tlbs_with_range(kvm, &range);
> >  }
> >
> > -static gfn_t get_mmio_spte_gfn(u64 spte)
> > +gfn_t get_mmio_spte_gfn(u64 spte)
> >  {
> >  	u64 gpa = spte & shadow_nonpresent_or_rsvd_lower_gfn_mask;
> >
> > @@ -240,7 +240,7 @@ static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
> >  	return likely(kvm_gen == spte_gen);
> >  }
> >
> > -static int is_cpuid_PSE36(void)
> > +int is_cpuid_PSE36(void)
> >  {
> >  	return 1;
> >  }
>
> Can we just drop is_cpuid_PSE36(), e.g. as a precursor patch? It just
> returns 1...

Yeah, good idea. Looks like we can eliminate a little dead code doing that too.
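
On the accessor point above, here's roughly the shape I'm picturing for
mmu_internal.h, assembled from the accessors this patch touches. Just a
sketch: the exact set of accessors that ends up shared (and whether the
REGS variants move as well) is still open.

/* mmu_internal.h (sketch): keep the accessors static inline so both
 * mmu.c and shadow_mmu.c pick up the definitions from the header and
 * the static/non-static split disappears from the macro entirely. */
#define BUILD_MMU_ROLE_ACCESSOR(base_or_ext, reg, name)			\
static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu) \
{									\
	return !!(mmu->cpu_role. base_or_ext . reg##_##name);		\
}

BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
BUILD_MMU_ROLE_ACCESSOR(base, efer, nx);
BUILD_MMU_ROLE_ACCESSOR(ext, efer, lma);

That also means no accessor needs to change linkage, which is what makes
it the cleaner option.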