Date: Mon, 04 Oct 2021 18:27:11 +0100
Message-ID: <87tuhwr98w.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Fuad Tabba <tabba@google.com>
Cc: kvmarm@lists.cs.columbia.edu, will@kernel.org, james.morse@arm.com,
	alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com,
	christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com,
	oupton@google.com, qperret@google.com, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kernel-team@android.com
Subject: Re: [PATCH v6 11/12] KVM: arm64: Trap access to pVM restricted features
In-Reply-To: <20210922124704.600087-12-tabba@google.com>
References: <20210922124704.600087-1-tabba@google.com>
	<20210922124704.600087-12-tabba@google.com>

Hi Fuad,

On Wed, 22 Sep 2021 13:47:03 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Trap accesses to restricted features for VMs running in protected
> mode.
> 
> Access to feature registers are emulated, and only supported
> features are exposed to protected VMs.
> 
> Accesses to restricted registers as well as restricted
> instructions are trapped, and an undefined exception is injected
> into the protected guests, i.e., with EC = 0x0 (unknown reason).
> This EC is the one used, according to the Arm Architecture
> Reference Manual, for unallocated or undefined system registers
> or instructions.
> 
> Only affects the functionality of protected VMs. Otherwise,
> should not affect non-protected VMs when KVM is running in
> protected mode.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/hyp/nvhe/switch.c | 60 ++++++++++++++++++++++++++++++++
>  1 file changed, 60 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index 49080c607838..2bf5952f651b 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -20,6 +20,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -28,6 +29,7 @@
>  #include
> 
>  #include
> +#include
> 
>  /* Non-VHE specific context */
>  DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
> 
> @@ -158,6 +160,49 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
>  	write_sysreg(pmu->events_host, pmcntenset_el0);
>  }
> 
> +/**
> + * Handler for protected VM restricted exceptions.
> + *
> + * Inject an undefined exception into the guest and return true to indicate that
> + * the hypervisor has handled the exit, and control should go back to the guest.
> + */
> +static bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code)
> +{
> +	__inject_undef64(vcpu);
> +	return true;
> +}
> +
> +/**
> + * Handler for protected VM MSR, MRS or System instruction execution in AArch64.
> + *
> + * Returns true if the hypervisor has handled the exit, and control should go
> + * back to the guest, or false if it hasn't.
> + */
> +static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
> +{
> +	if (kvm_handle_pvm_sysreg(vcpu, exit_code))
> +		return true;
> +	else
> +		return kvm_hyp_handle_sysreg(vcpu, exit_code);

nit: drop the else.

I wonder though: what if there is an overlap between the pVM handling
and the normal KVM stuff? Are we guaranteed that there is none? For
example, ESR_ELx_EC_SYS64 is used when working around some bugs (see
the TX2 TVM handling). What happens if you return early and don't let
it happen?

This has a huge potential for some bad breakage.
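The nit above is purely about shape. As a compilable illustration (with hypothetical stub types and predicates standing in for the kernel's, which are not reproduced here), the early return makes the else redundant while keeping the "pVM-specific handling first, then fall back to the common handler" semantics:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's types (illustration only). */
typedef unsigned long long u64;
struct kvm_vcpu { int unused; };

/* Pretend the pVM-specific path claims exit code 1, the common path code 2. */
static bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	(void)vcpu;
	return *exit_code == 1;
}

static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	(void)vcpu;
	return *exit_code == 2;
}

/* The suggested shape: early return, no else. */
static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	if (kvm_handle_pvm_sysreg(vcpu, exit_code))
		return true;

	return kvm_hyp_handle_sysreg(vcpu, exit_code);
}
```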
> +}
> +
> +/**
> + * Handler for protected floating-point and Advanced SIMD accesses.
> + *
> + * Returns true if the hypervisor has handled the exit, and control should go
> + * back to the guest, or false if it hasn't.
> + */
> +static bool kvm_handle_pvm_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
> +{
> +	/* Linux guests assume support for floating-point and Advanced SIMD. */
> +	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_FP),
> +				PVM_ID_AA64PFR0_ALLOW));
> +	BUILD_BUG_ON(!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_ASIMD),
> +				PVM_ID_AA64PFR0_ALLOW));
> +
> +	return kvm_hyp_handle_fpsimd(vcpu, exit_code);
> +}
> +
>  static const exit_handler_fn hyp_exit_handlers[] = {
>  	[0 ... ESR_ELx_EC_MAX]		= NULL,
>  	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15,
> @@ -170,8 +215,23 @@ static const exit_handler_fn hyp_exit_handlers[] = {
>  	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
>  };
> 
> +static const exit_handler_fn pvm_exit_handlers[] = {
> +	[0 ... ESR_ELx_EC_MAX]		= NULL,
> +	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15,
> +	[ESR_ELx_EC_CP15_64]		= kvm_hyp_handle_cp15,

Heads up, this one was bogus, and I removed it in my patches[1]. But
it really begs the question: given that you really don't want to
handle any AArch32 for protected VMs, why handle anything at all in
the first place? You really should let the exit happen and let the
outer run loop deal with it.

> +	[ESR_ELx_EC_SYS64]		= kvm_handle_pvm_sys64,
> +	[ESR_ELx_EC_SVE]		= kvm_handle_pvm_restricted,
> +	[ESR_ELx_EC_FP_ASIMD]		= kvm_handle_pvm_fpsimd,
> +	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,
> +	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
> +	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
> +};
> +
>  static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm *kvm)
>  {
> +	if (unlikely(kvm_vm_is_protected(kvm)))
> +		return pvm_exit_handlers;
> +
>  	return hyp_exit_handlers;
>  }
> 
> --
> 2.33.0.464.g1972c5931b-goog

Thanks,

	M.
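The dispatch under review boils down to: index a per-EC array of handler pointers, pick the pVM table when the VM is protected, and treat a NULL slot as "not handled here" so the exit falls through to the outer run loop — which is what the review suggests doing for the AArch32 entries. A minimal standalone model of that scheme (hypothetical types and EC values, not the kernel's actual ones):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the EC-indexed dispatch (illustration only). */
#define EC_MAX 4
enum { EC_SYS64 = 1, EC_FP_ASIMD = 2 };

struct vcpu { bool protected_vm; };
typedef bool (*exit_handler_fn)(struct vcpu *v);

static bool handle_sys64(struct vcpu *v)  { (void)v; return true; }
static bool handle_fpsimd(struct vcpu *v) { (void)v; return true; }

static const exit_handler_fn hyp_handlers[EC_MAX + 1] = {
	[EC_SYS64]    = handle_sys64,
	[EC_FP_ASIMD] = handle_fpsimd,
};

/* pVM table: entries left NULL are deliberately not handled at EL2. */
static const exit_handler_fn pvm_handlers[EC_MAX + 1] = {
	[EC_SYS64] = handle_sys64,
};

/* Pick the handler table based on whether the VM is protected. */
static const exit_handler_fn *get_handler_array(struct vcpu *v)
{
	return v->protected_vm ? pvm_handlers : hyp_handlers;
}

/*
 * A NULL entry returns false ("not handled"), letting the exit
 * propagate to the outer run loop instead of being swallowed.
 */
static bool handle_exit(struct vcpu *v, unsigned int ec)
{
	const exit_handler_fn *tbl = get_handler_array(v);

	return (ec <= EC_MAX && tbl[ec]) ? tbl[ec](v) : false;
}
```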
[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/commit/?h=kvm-arm64/early-ec-handlers&id=f84ff369795ed47f2cd5e556170166ee8b3a988f

-- 
Without deviation from the norm, progress is not possible.