Date: Sat, 09 May 2020 11:59:07 +0100
Message-ID: <875zd51iis.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: linux-arm-kernel@lists.infradead.org, Catalin Marinas <catalin.marinas@arm.com>,
    Will Deacon <will@kernel.org>, Mark Rutland <mark.rutland@arm.com>,
    James Morse <james.morse@arm.com>, Suzuki K Poulose <suzuki.poulose@arm.com>,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2] arm64/cpufeature: Validate hypervisor capabilities during CPU hotplug
In-Reply-To: <1588906358-7845-1-git-send-email-anshuman.khandual@arm.com>
References: <1588906358-7845-1-git-send-email-anshuman.khandual@arm.com>
On Fri, 08 May 2020 03:52:38 +0100,
Anshuman Khandual wrote:
>
> This validates hypervisor capabilities like VMID width, IPA range for any
> hot plug CPU against system finalized values. While here, it factors out
> get_vmid_bits() for general use and also defines ID_AA64MMFR0_PARANGE_MASK.

Maybe add a quick word on the fact that we use KVM's view of the IPA
space to allow a CPU to come up.

>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Marc Zyngier
> Cc: Mark Rutland
> Cc: James Morse
> Cc: Suzuki K Poulose
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: kvmarm@lists.cs.columbia.edu
> Cc: linux-kernel@vger.kernel.org
>
> Suggested-by: Suzuki Poulose
> Signed-off-by: Anshuman Khandual
> ---
> Changes in V2:
>
> - Added is_hyp_mode_available() check per Marc
> - Moved verify_kvm_capabilities() into cpufeature.c per Marc
> - Added helper get_kvm_ipa_limit() to fetch kvm_ipa_limit per Marc
> - Renamed kvm as hyp including the commit message per Marc
>
> Changes in V1: (https://patchwork.kernel.org/patch/11532565/)
>
>  arch/arm64/include/asm/cpufeature.h | 20 +++++++++++++++++
>  arch/arm64/include/asm/kvm_mmu.h    |  2 +-
>  arch/arm64/include/asm/sysreg.h     |  1 +
>  arch/arm64/kernel/cpufeature.c      | 33 +++++++++++++++++++++++++++++
>  arch/arm64/kvm/reset.c              | 11 ++++++++--
>  5 files changed, 64 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index afe08251ff95..fbbb4d2216f0 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -745,6 +745,26 @@ static inline bool cpu_has_hw_af(void)
>  extern bool cpu_has_amu_feat(int cpu);
>  #endif
>
> +static inline unsigned int get_vmid_bits(u64 mmfr1)
> +{
> +	int vmid_bits;
> +
> +	vmid_bits = cpuid_feature_extract_unsigned_field(mmfr1,
> +						ID_AA64MMFR1_VMIDBITS_SHIFT);
> +	if (vmid_bits == ID_AA64MMFR1_VMIDBITS_16)
> +		return 16;
> +
> +	/*
> +	 * Return the default here even if any reserved
> +	 * value is fetched from the system register.
> +	 */
> +	return 8;
> +}
> +
> +#ifdef CONFIG_KVM_ARM_HOST

nit: useless #ifdefery.

> +u32 get_kvm_ipa_limit(void);
> +#endif
> +
>  #endif /* __ASSEMBLY__ */
>
>  #endif
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 30b0e8d6b895..a7137e144b97 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -416,7 +416,7 @@ static inline unsigned int kvm_get_vmid_bits(void)
>  {
>  	int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
>
> -	return (cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR1_VMIDBITS_SHIFT) == 2) ? 16 : 8;
> +	return get_vmid_bits(reg);
>  }
>
>  /*
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index c4ac0ac25a00..3510a4668970 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -705,6 +705,7 @@
>  #define ID_AA64MMFR0_TGRAN16_SUPPORTED	0x1
>  #define ID_AA64MMFR0_PARANGE_48		0x5
>  #define ID_AA64MMFR0_PARANGE_52		0x6
> +#define ID_AA64MMFR0_PARANGE_MASK	0x7

I realise this is already like this in the current code, but using 7
as a mask value for the feature feels wrong. If we ever get a value
with bit 3 of the capability being set, we will confuse it with some
other configuration. We should be more careful and pass the full value
of the feature to id_aa64mmfr0_parange_to_phys_shift(), which already
does the right thing.
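Concretely, the IPA check in the new verify_hyp_capabilities() could
extract the whole field instead of open-coding the mask. A rough,
untested sketch, assuming the existing
cpuid_feature_extract_unsigned_field() helper and the
ID_AA64MMFR0_PARANGE_SHIFT definition:

	/*
	 * Sketch only: take the full 4-bit PARange field and let
	 * id_aa64mmfr0_parange_to_phys_shift() deal with any value
	 * the kernel doesn't know about.
	 */
	parange = cpuid_feature_extract_unsigned_field(mmfr0,
					ID_AA64MMFR0_PARANGE_SHIFT);
	ipa_max = id_aa64mmfr0_parange_to_phys_shift(parange);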
>
>  #ifdef CONFIG_ARM64_PA_BITS_52
>  #define ID_AA64MMFR0_PARANGE_MAX	ID_AA64MMFR0_PARANGE_52
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9fac745aa7bb..7e5ff452574c 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2181,6 +2181,36 @@ static void verify_sve_features(void)
>  	/* Add checks on other ZCR bits here if necessary */
>  }
>
> +#ifdef CONFIG_KVM_ARM_HOST
> +void verify_hyp_capabilities(void)
> +{
> +	u64 safe_mmfr1, mmfr0, mmfr1;
> +	int parange, ipa_max;
> +	unsigned int safe_vmid_bits, vmid_bits;
> +
> +	safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> +	mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
> +	mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
> +
> +	/* Verify VMID bits */
> +	safe_vmid_bits = get_vmid_bits(safe_mmfr1);
> +	vmid_bits = get_vmid_bits(mmfr1);
> +	if (vmid_bits < safe_vmid_bits) {
> +		pr_crit("CPU%d: VMID width mismatch\n", smp_processor_id());
> +		cpu_die_early();
> +	}
> +
> +	/* Verify IPA range */
> +	parange = mmfr0 & ID_AA64MMFR0_PARANGE_MASK;
> +	ipa_max = id_aa64mmfr0_parange_to_phys_shift(parange);
> +	if (ipa_max < get_kvm_ipa_limit()) {
> +		pr_crit("CPU%d: IPA range mismatch\n", smp_processor_id());
> +		cpu_die_early();
> +	}
> +}
> +#else	/* !CONFIG_KVM_ARM_HOST */
> +static inline void verify_hyp_capabilities(void) { }
> +#endif	/* CONFIG_KVM_ARM_HOST */
>
>  /*
>   * Run through the enabled system capabilities and enable() it on this CPU.
> @@ -2206,6 +2236,9 @@ static void verify_local_cpu_capabilities(void)
>
>  	if (system_supports_sve())
>  		verify_sve_features();
> +
> +	if (is_hyp_mode_available())
> +		verify_hyp_capabilities();
>  }
>
>  void check_local_cpu_capabilities(void)
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 30b7ea680f66..1131b112dda2 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -340,11 +340,17 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  	return ret;
>  }
>
> +u32 get_kvm_ipa_limit(void)
> +{
> +	return kvm_ipa_limit;
> +}
> +
>  void kvm_set_ipa_limit(void)
>  {
>  	unsigned int ipa_max, pa_max, va_max, parange;
>
> -	parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) & 0x7;
> +	parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) &
> +			ID_AA64MMFR0_PARANGE_MASK;
>  	pa_max = id_aa64mmfr0_parange_to_phys_shift(parange);
>
>  	/* Clamp the IPA limit to the PA size supported by the kernel */
> @@ -406,7 +412,8 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
>  		phys_shift = KVM_PHYS_SHIFT;
>  	}
>
> -	parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) & 7;
> +	parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) &
> +			ID_AA64MMFR0_PARANGE_MASK;
>  	if (parange > ID_AA64MMFR0_PARANGE_MAX)
>  		parange = ID_AA64MMFR0_PARANGE_MAX;
>  	vtcr |= parange << VTCR_EL2_PS_SHIFT;
> --
> 2.20.1
>

With the couple of nits above addressed:

Reviewed-by: Marc Zyngier <maz@kernel.org>

	M.

--
Without deviation from the norm, progress is not possible.