Date: Thu, 4 Jun 2020 16:23:33 +0100
From: Mark Rutland
To: Marc Zyngier
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org, James Morse, Julien Thierry,
    Suzuki K Poulose, Will Deacon, Catalin Marinas, kernel-team@android.com
Subject: Re: [PATCH 2/3] KVM: arm64: Handle PtrAuth traps early
Message-ID: <20200604152333.GD75320@C02TD0UTHF1T.local>
In-Reply-To: <20200604133354.1279412-3-maz@kernel.org>
References: <20200604133354.1279412-1-maz@kernel.org>
 <20200604133354.1279412-3-maz@kernel.org>

On Thu, Jun 04, 2020 at 02:33:53PM +0100, Marc Zyngier wrote:
> The current way we deal with PtrAuth is a bit heavy handed:
>
> - We forcefully save the host's keys on each vcpu_load()
> - Handling the PtrAuth trap forces us to go all the way back
>   to the exit handling code just to set the HCR bits
>
> A better approach would be to handle it the same way we deal
> with the FPSIMD registers:
>
> - On vcpu_load(), disable PtrAuth for the guest
> - On first use, save the host's keys and enable PtrAuth in the
>   guest
>
> Crucially, this can happen as a fixup, which is done very early
> on exit. We can then reenter the guest immediately, without
> leaving the hypervisor context.
>
> This also simplifies the rest of the host handling: exiting all
> the way to the host now means that the only possible outcome for
> this trap is to inject an UNDEF.
>
> Signed-off-by: Marc Zyngier
> ---
>  arch/arm64/kvm/arm.c         | 17 +----------
>  arch/arm64/kvm/handle_exit.c | 17 ++---------
>  arch/arm64/kvm/hyp/switch.c  | 59 ++++++++++++++++++++++++++++++++++++
>  arch/arm64/kvm/sys_regs.c    | 13 +++-----
>  4 files changed, 68 insertions(+), 38 deletions(-)

[...]
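(As background for the hunk below: the vcpu_load() side of this boils
down to flipping the trap bits in the vcpu's shadow HCR_EL2. If I'm
reading the kvm_emulate.h helpers right, they amount to the following --
a sketch from memory, so double-check the names:)

	/* Stop trapping: guest may use the keys and PAC instructions */
	static inline void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
	{
		vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
	}

	/* Trap again: the guest's next PtrAuth use faults back into hyp */
	static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
	{
		vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
	}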
> +static bool __hyp_text __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
> +{
> +	u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu));
> +	u32 ec = kvm_vcpu_trap_get_class(vcpu);
> +	struct kvm_cpu_context *ctxt;
> +	u64 val;
> +
> +	if (!vcpu_has_ptrauth(vcpu))
> +		return false;
> +
> +	switch (ec) {
> +	case ESR_ELx_EC_PAC:
> +		break;
> +	case ESR_ELx_EC_SYS64:
> +		switch (sysreg) {
> +		case SYS_APIAKEYLO_EL1:
> +		case SYS_APIAKEYHI_EL1:
> +		case SYS_APIBKEYLO_EL1:
> +		case SYS_APIBKEYHI_EL1:
> +		case SYS_APDAKEYLO_EL1:
> +		case SYS_APDAKEYHI_EL1:
> +		case SYS_APDBKEYLO_EL1:
> +		case SYS_APDBKEYHI_EL1:
> +		case SYS_APGAKEYLO_EL1:
> +		case SYS_APGAKEYHI_EL1:
> +			break;
> +		default:
> +			return false;
> +		}
> +		break;
> +	default:
> +		return false;
> +	}

The ESR triage looks correct, but I think it might be clearer split out
into a helper, since you can avoid the default cases with direct
returns, and you can avoid the nested switch, e.g.

static inline bool __hyp_text esr_is_ptrauth_trap(u32 esr)
{
	u32 ec = ESR_ELx_EC(esr);

	if (ec == ESR_ELx_EC_PAC)
		return true;
	if (ec != ESR_ELx_EC_SYS64)
		return false;

	switch (esr_sys64_to_sysreg(esr)) {
	case SYS_APIAKEYLO_EL1:
	case SYS_APIAKEYHI_EL1:
	case SYS_APIBKEYLO_EL1:
	case SYS_APIBKEYHI_EL1:
	case SYS_APDAKEYLO_EL1:
	case SYS_APDAKEYHI_EL1:
	case SYS_APDBKEYLO_EL1:
	case SYS_APDBKEYHI_EL1:
	case SYS_APGAKEYLO_EL1:
	case SYS_APGAKEYHI_EL1:
		return true;
	}

	return false;
}

> +
> +	ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);
> +
> +	vcpu_ptrauth_enable(vcpu);
> +
> +	val = read_sysreg(hcr_el2);
> +	val |= (HCR_API | HCR_APK);
> +	write_sysreg(val, hcr_el2);
> +
> +	return true;
> +}
> +
>  /*
>   * Return true when we were able to fixup the guest exit and should return to
>   * the guest, false when we should restore the host state and return to the
> @@ -524,6 +580,9 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	if (__hyp_handle_fpsimd(vcpu))
>  		return true;
>
> +	if (__hyp_handle_ptrauth(vcpu))
> +		return true;
> +
>  	if (!__populate_fault_info(vcpu))
>  		return true;
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index ad1d57501d6d..564995084cf8 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1034,16 +1034,13 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>  			 struct sys_reg_params *p,
>  			 const struct sys_reg_desc *rd)
>  {
> -	kvm_arm_vcpu_ptrauth_trap(vcpu);
> -
>  	/*
> -	 * Return false for both cases as we never skip the trapped
> -	 * instruction:
> -	 *
> -	 * - Either we re-execute the same key register access instruction
> -	 *   after enabling ptrauth.
> -	 * - Or an UNDEF is injected as ptrauth is not supported/enabled.
> +	 * If we land here, that is because we didn't fixup the access on
> +	 * exit by allowing the PtrAuth sysregs. The only way this happens
> +	 * is when the guest does not have PtrAuth support enabled.
>  	 */
> +	kvm_inject_undefined(vcpu);
> +
>  	return false;
>  }
>
> --
> 2.26.2

Regardless of the suggestion above, this looks sound to me. I agree that
it's much nicer to handle this in hyp, and AFAICT the context switch
should do the right thing, so:

Reviewed-by: Mark Rutland

Thanks,
Mark.
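P.S. For completeness, since its definition isn't in the quoted context:
__ptrauth_save_key is, if I remember the tree correctly, just the LO/HI
pair of read_sysreg_s() accesses stashed into the context -- roughly
(names unverified, so treat this as a sketch):

	/*
	 * Save both halves of one PtrAuth key from hardware into the
	 * host context; "key" (e.g. APIA) is token-pasted into both the
	 * sys_regs[] indices and the SYS_* sysreg encodings.
	 */
	#define __ptrauth_save_key(regs, key)					  \
	({									  \
		regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \
		regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \
	})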