Date: Thu, 27 Jan 2022 11:48:05 +0000
Message-ID: <8735l9762y.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, Andre Przywara <andre.przywara@arm.com>,
	Christoffer Dall <christoffer.dall@arm.com>,
	Jintack Lim <jintack@cs.columbia.edu>, Haibo Xu <haibo.xu@linaro.org>,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>, kernel-team@android.com
Subject: Re: [PATCH v5 67/69] KVM: arm64: nv: Enable ARMv8.4-NV support
In-Reply-To: <7fe1ce9e-1b86-ed57-a0e5-117d1b9011b4@os.amperecomputing.com>
References: <20211129200150.351436-1-maz@kernel.org>
	<20211129200150.351436-68-maz@kernel.org>
	<7fe1ce9e-1b86-ed57-a0e5-117d1b9011b4@os.amperecomputing.com>

On Tue, 18 Jan 2022 11:50:18 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
> 
> 
> 
> On 30-11-2021 01:31 am, Marc Zyngier wrote:
> > As all the VNCR-capable system registers are nicely separated
> > from the rest of the crowd, let's set HCR_EL2.NV2 and get the
> > ball rolling.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_arm.h     |  1 +
> >  arch/arm64/include/asm/kvm_emulate.h | 23 +++++++++++++----------
> >  arch/arm64/include/asm/sysreg.h      |  1 +
> >  arch/arm64/kvm/hyp/vhe/switch.c      | 14 +++++++++++++-
> >  4 files changed, 28 insertions(+), 11 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> > index b603466803d2..18c35446249f 100644
> > --- a/arch/arm64/include/asm/kvm_arm.h
> > +++ b/arch/arm64/include/asm/kvm_arm.h
> > @@ -20,6 +20,7 @@
> >  #define HCR_AMVOFFEN	(UL(1) << 51)
> >  #define HCR_FIEN	(UL(1) << 47)
> >  #define HCR_FWB		(UL(1) << 46)
> > +#define HCR_NV2		(UL(1) << 45)
> >  #define HCR_AT		(UL(1) << 44)
> >  #define HCR_NV1		(UL(1) << 43)
> >  #define HCR_NV		(UL(1) << 42)
> > diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> > index 1664430be698..f282997e4a4c 100644
> > --- a/arch/arm64/include/asm/kvm_emulate.h
> > +++ b/arch/arm64/include/asm/kvm_emulate.h
> > @@ -245,21 +245,24 @@ static inline bool is_hyp_ctxt(const struct kvm_vcpu *vcpu)
> >  static inline u64 __fixup_spsr_el2_write(struct kvm_cpu_context *ctxt, u64 val)
> >  {
> > -	if (!__vcpu_el2_e2h_is_set(ctxt)) {
> > -		/*
> > -		 * Clear the .M field when writing SPSR to the CPU, so that we
> > -		 * can detect when the CPU clobbered our SPSR copy during a
> > -		 * local exception.
> > -		 */
> > -		val &= ~0xc;
> > -	}
> > +	struct kvm_vcpu *vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
> > +
> > +	if (enhanced_nested_virt_in_use(vcpu) || __vcpu_el2_e2h_is_set(ctxt))
> > +		return val;
> >
> > -	return val;
> > +	/*
> > +	 * Clear the .M field when writing SPSR to the CPU, so that we
> > +	 * can detect when the CPU clobbered our SPSR copy during a
> > +	 * local exception.
> > +	 */
> > +	return val &= ~0xc;
> >  }
> >
> >  static inline u64 __fixup_spsr_el2_read(const struct kvm_cpu_context *ctxt, u64 val)
> >  {
> > -	if (__vcpu_el2_e2h_is_set(ctxt))
> > +	struct kvm_vcpu *vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
> > +
> > +	if (enhanced_nested_virt_in_use(vcpu) || __vcpu_el2_e2h_is_set(ctxt))
> >  		return val;
> >
> >  	/*
> > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > index 71e6a0410e7c..5de90138d0a4 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -550,6 +550,7 @@
> >  #define SYS_TCR_EL2		sys_reg(3, 4, 2, 0, 2)
> >  #define SYS_VTTBR_EL2		sys_reg(3, 4, 2, 1, 0)
> >  #define SYS_VTCR_EL2		sys_reg(3, 4, 2, 1, 2)
> > +#define SYS_VNCR_EL2		sys_reg(3, 4, 2, 2, 0)
> >  #define SYS_ZCR_EL2		sys_reg(3, 4, 1, 2, 0)
> >  #define SYS_TRFCR_EL2		sys_reg(3, 4, 1, 2, 1)
> > diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> > index ef4488db6dc1..5cadda79089a 100644
> > --- a/arch/arm64/kvm/hyp/vhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> > @@ -45,7 +45,13 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
> >  		 * the EL1 virtual memory control register accesses
> >  		 * as well as the AT S1 operations.
> >  		 */
> > -		hcr |= HCR_TVM | HCR_TRVM | HCR_AT | HCR_TTLB | HCR_NV1;
> > +		if (enhanced_nested_virt_in_use(vcpu)) {
> > +			hcr &= ~HCR_TVM;
> 
> I think we should clear TRVM also?
> 	hcr &= ~(HCR_TVM | HCR_TRVM);

Hmmm. But TRVM is never set in the first place, is it? It is only here
that we augment the host HCR_EL2 with various trap configurations,
depending on whether the host is NV2-capable or not, whether the guest
is VHE or not, and whether the guest has set additional flags of its
own.

Given that, I don't think there is a need to clear this bit.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.