From: Fuad Tabba
Date: Mon, 23 Aug 2021 11:25:20 +0100
Subject: Re: [PATCH v4 15/15] KVM: arm64: Handle protected guests at 32 bits
To: Oliver Upton
Cc: kvmarm@lists.cs.columbia.edu, maz@kernel.org, will@kernel.org,
	james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
	mark.rutland@arm.com, christoffer.dall@arm.com, pbonzini@redhat.com,
	drjones@redhat.com, qperret@google.com, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kernel-team@android.com
References: <20210817081134.2918285-1-tabba@google.com> <20210817081134.2918285-16-tabba@google.com>

Hi Oliver,

On Thu, Aug 19, 2021 at 9:10 AM Oliver Upton wrote:
>
> Hi Fuad,
>
> On Tue, Aug 17, 2021 at 1:12 AM Fuad Tabba wrote:
> >
> > Protected KVM does not support protected AArch32 guests. However,
> > it is possible for the guest to force run AArch32, potentially
> > causing problems. Add an extra check so that if the hypervisor
> > catches the guest doing that, it can prevent the guest from
> > running again by resetting vcpu->arch.target and returning
> > ARM_EXCEPTION_IL.
> >
> > If this were to happen, the VMM can try to fix it by re-initializing
> > the vcpu with KVM_ARM_VCPU_INIT; however, this is likely not
> > possible for protected VMs.
> >
> > Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> > AArch32 systems")
> >
> > Signed-off-by: Fuad Tabba
> > ---
> >  arch/arm64/kvm/hyp/nvhe/switch.c | 37 ++++++++++++++++++++++++++++++++
> >  1 file changed, 37 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> > index 398e62098898..0c24b7f473bf 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> > @@ -20,6 +20,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > @@ -195,6 +196,39 @@ exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
> >  	return NULL;
> >  }
> >
> > +/*
> > + * Some guests (e.g., protected VMs) might not be allowed to run in AArch32.
> > + * The check below is based on the one in kvm_arch_vcpu_ioctl_run().
> > + * The ARMv8 architecture does not give the hypervisor a mechanism to
> > + * prevent a guest from dropping to AArch32 EL0 if implemented by the CPU.
> > + * If the hypervisor spots a guest in such a state, ensure it is handled,
> > + * and don't trust the host to spot or fix it.
> > + *
> > + * Returns true if the check passed and the guest run loop can continue, or
> > + * false if the guest should exit to the host.
> > + */
> > +static bool check_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
>
> This does a bit more than just check & return, so maybe call it
> handle_aarch32_guest()?
>
> > +{
> > +	if (kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
>
> maybe initialize a local with a hyp pointer to the kvm structure.

Will do.

> > +	    vcpu_mode_is_32bit(vcpu) &&
> > +	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0),
> > +		      PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) <
> > +			ID_AA64PFR0_ELx_32BIT_64BIT) {
>
> It may be more readable to initialize a local variable with this
> feature check, i.e:
>
> bool aarch32_allowed = FIELD_GET(...)
>     == ID_AA64PFR0_ELx_32BIT_64BIT;
>
> and then:
>
> if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu) &&
>     !aarch32_allowed) {

I agree.

Thanks,
/fuad

> > +		/*
> > +		 * As we have caught the guest red-handed, decide that it isn't
> > +		 * fit for purpose anymore by making the vcpu invalid. The VMM
> > +		 * can try and fix it by re-initializing the vcpu with
> > +		 * KVM_ARM_VCPU_INIT, however, this is likely not possible for
> > +		 * protected VMs.
> > +		 */
> > +		vcpu->arch.target = -1;
> > +		*exit_code = ARM_EXCEPTION_IL;
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> >  /* Switch to the guest for legacy non-VHE systems */
> >  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >  {
> > @@ -255,6 +289,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >  		/* Jump in the fire! */
> >  		exit_code = __guest_enter(vcpu);
> >
> > +		if (unlikely(!check_aarch32_guest(vcpu, &exit_code)))
> > +			break;
> > +
> >  		/* And we're baaack! */
> >  	} while (fixup_guest_exit(vcpu, &exit_code));
> >
> > --
> > 2.33.0.rc1.237.g0d66db33f3-goog

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel