From: Fuad Tabba <tabba@google.com>
Date: Tue, 5 Oct 2021 10:05:08 +0100
Subject: Re: [PATCH v6 12/12] KVM: arm64: Handle protected guests at 32 bits
To: Marc Zyngier <maz@kernel.org>
Cc: kvmarm@lists.cs.columbia.edu, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, mark.rutland@arm.com, christoffer.dall@arm.com, pbonzini@redhat.com, drjones@redhat.com, oupton@google.com, qperret@google.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kernel-team@android.com

Hi Marc,

On Tue, Oct 5, 2021 at 9:48 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Wed, 22 Sep 2021 13:47:04 +0100,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > Protected KVM does not support protected AArch32 guests. However,
> > it is possible for the guest to force itself to run in AArch32,
> > potentially causing problems. Add an extra check so that if the
> > hypervisor catches the guest doing that, it can prevent the guest
> > from running again by resetting vcpu->arch.target and returning
> > ARM_EXCEPTION_IL.
> >
> > If this were to happen, the VMM can try and fix it by re-initializing
> > the vcpu with KVM_ARM_VCPU_INIT; however, this is likely not possible
> > for protected VMs.
> >
> > Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> > AArch32 systems")
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/kvm/hyp/nvhe/switch.c | 40 ++++++++++++++++++++++++++++++++
> >  1 file changed, 40 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> > index 2bf5952f651b..d66226e49013 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> > @@ -235,6 +235,43 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm *kvm)
> >  	return hyp_exit_handlers;
> >  }
> >
> > +/*
> > + * Some guests (e.g., protected VMs) might not be allowed to run in AArch32.
> > + * The ARMv8 architecture does not give the hypervisor a mechanism to prevent
> > + * a guest from dropping to AArch32 EL0 if implemented by the CPU. If the
> > + * hypervisor spots a guest in such a state, ensure it is handled, and don't
> > + * trust the host to spot or fix it. The check below is based on the one in
> > + * kvm_arch_vcpu_ioctl_run().
> > + *
> > + * Returns false if the guest ran in AArch32 when it shouldn't have, and
> > + * thus should exit to the host, or true if the guest run loop can continue.
> > + */
> > +static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
> > +{
> > +	struct kvm *kvm = (struct kvm *) kern_hyp_va(vcpu->kvm);
>
> There is no need for an extra cast. kern_hyp_va() already provides a
> cast to the type of the parameter.

Will drop it.
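If I'm reading kern_hyp_va() right, the line then simply becomes:

	/* kern_hyp_va() already returns the parameter's own type. */
	struct kvm *kvm = kern_hyp_va(vcpu->kvm);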
> > +	bool is_aarch32_allowed =
> > +		FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0),
> > +			  get_pvm_id_aa64pfr0(vcpu)) >=
> > +				ID_AA64PFR0_ELx_32BIT_64BIT;
> > +
> > +
> > +	if (kvm_vm_is_protected(kvm) &&
> > +	    vcpu_mode_is_32bit(vcpu) &&
> > +	    !is_aarch32_allowed) {
>
> Do we really need to go through this is_aarch32_allowed check?
> Protected VMs don't have AArch32, and we don't have the infrastructure
> to handle 32bit at all. For non-protected VMs, we already have what we
> need at EL1. So the extra check only adds complexity.

No. I could change it to a build-time assertion just to make sure that
AArch32 is not allowed instead.
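Something along these lines (untested sketch; it assumes
PVM_ID_AA64PFR0_ALLOW from the fixed-features part of this series stays
the compile-time mask of ID_AA64PFR0_EL1 features exposed to protected
VMs):

	/*
	 * AArch32 must never be advertised to protected VMs, so assert
	 * at build time that the allowed EL0 field is 64bit-only.
	 */
	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0),
			       PVM_ID_AA64PFR0_ALLOW) >
		     ID_AA64PFR0_ELx_64BIT_ONLY);

With that in place, the runtime condition could drop is_aarch32_allowed
and become just kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu).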
Thanks,
/fuad

> > +		/*
> > +		 * As we have caught the guest red-handed, decide that it isn't
> > +		 * fit for purpose anymore by making the vcpu invalid. The VMM
> > +		 * can try and fix it by re-initializing the vcpu with
> > +		 * KVM_ARM_VCPU_INIT; however, this is likely not possible for
> > +		 * protected VMs.
> > +		 */
> > +		vcpu->arch.target = -1;
> > +		*exit_code = ARM_EXCEPTION_IL;
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> >  /* Switch to the guest for legacy non-VHE systems */
> >  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >  {
> > @@ -297,6 +334,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >  		/* Jump in the fire! */
> >  		exit_code = __guest_enter(vcpu);
> >
> > +		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
> > +			break;
> > +
> >  		/* And we're baaack! */
> >  	} while (fixup_guest_exit(vcpu, &exit_code));
> >
>
> Thanks,
>
> 	M.
>
> --
> Without deviation from the norm, progress is not possible.