From: Marc Zyngier <maz@kernel.org>
To: Fuad Tabba <tabba@google.com>
Cc: kvmarm@lists.cs.columbia.edu, will@kernel.org,
	james.morse@arm.com, alexandru.elisei@arm.com,
	suzuki.poulose@arm.com, mark.rutland@arm.com,
	christoffer.dall@arm.com, pbonzini@redhat.com,
	drjones@redhat.com, oupton@google.com, qperret@google.com,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kernel-team@android.com
Subject: Re: [PATCH v6 12/12] KVM: arm64: Handle protected guests at 32 bits
Date: Tue, 05 Oct 2021 09:48:03 +0100
Message-ID: <87sfxfrh6k.wl-maz@kernel.org>
In-Reply-To: <20210922124704.600087-13-tabba@google.com>

On Wed, 22 Sep 2021 13:47:04 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Protected KVM does not support protected AArch32 guests. However,
> it is possible for the guest to force itself to run in AArch32,
> potentially causing problems. Add an extra check so that if the
> hypervisor catches the guest doing that, it can prevent the guest
> from running again by resetting vcpu->arch.target and returning
> ARM_EXCEPTION_IL.
> 
> If this were to happen, the VMM can try to fix it by re-
> initializing the vcpu with KVM_ARM_VCPU_INIT; however, this is
> likely not possible for protected VMs.
> 
> Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> AArch32 systems")
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/hyp/nvhe/switch.c | 40 ++++++++++++++++++++++++++++++++
>  1 file changed, 40 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index 2bf5952f651b..d66226e49013 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -235,6 +235,43 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm *kvm)
>  	return hyp_exit_handlers;
>  }
>  
> +/*
> + * Some guests (e.g., protected VMs) might not be allowed to run in AArch32.
> + * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
> + * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
> + * hypervisor spots a guest in such a state, ensure it is handled, and don't
> + * trust the host to spot or fix it.  The check below is based on the one in
> + * kvm_arch_vcpu_ioctl_run().
> + *
> + * Returns false if the guest ran in AArch32 when it shouldn't have, and
> + * thus should exit to the host, or true if the guest run loop can continue.
> + */
> +static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
> +{
> +	struct kvm *kvm = (struct kvm *) kern_hyp_va(vcpu->kvm);

There is no need for an extra cast. kern_hyp_va() already provides a
cast to the type of the parameter.
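
Something like this should be all that's needed (untested, but
kern_hyp_va() already casts its result back to typeof() its argument
internally, so the assignment compiles as-is):

	struct kvm *kvm = kern_hyp_va(vcpu->kvm);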

> +	bool is_aarch32_allowed =
> +		FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0),
> +			  get_pvm_id_aa64pfr0(vcpu)) >=
> +				ID_AA64PFR0_ELx_32BIT_64BIT;
> +
> +	if (kvm_vm_is_protected(kvm) &&
> +	    vcpu_mode_is_32bit(vcpu) &&
> +	    !is_aarch32_allowed) {

Do we really need to go through this is_aarch32_allowed check?
Protected VMs don't have AArch32, and we don't have the infrastructure
to handle 32bit at all. For non-protected VMs, we already have what we
need at EL1. So the extra check only adds complexity.
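
If so, the whole thing could collapse to something like this (an
untested sketch, assuming pVMs never advertise AArch32 support in
their view of ID_AA64PFR0_EL1):

	if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) {
		/* Same invalidation as below, minus the feature check */
		vcpu->arch.target = -1;
		*exit_code = ARM_EXCEPTION_IL;
		return false;
	}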

> +		/*
> +		 * As we have caught the guest red-handed, decide that it isn't
> +		 * fit for purpose anymore by making the vcpu invalid. The VMM
> +		 * can try and fix it by re-initializing the vcpu with
> +		 * KVM_ARM_VCPU_INIT, however, this is likely not possible for
> +		 * protected VMs.
> +		 */
> +		vcpu->arch.target = -1;
> +		*exit_code = ARM_EXCEPTION_IL;
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
>  /* Switch to the guest for legacy non-VHE systems */
>  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  {
> @@ -297,6 +334,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  		/* Jump in the fire! */
>  		exit_code = __guest_enter(vcpu);
>  
> +		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
> +			break;
> +
>  		/* And we're baaack! */
>  	} while (fixup_guest_exit(vcpu, &exit_code));
>  

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

Thread overview: 90+ messages

2021-09-22 12:46 [PATCH v6 00/12] KVM: arm64: Fixed features for protected VMs Fuad Tabba
2021-09-22 12:46 ` [PATCH v6 01/12] KVM: arm64: Move __get_fault_info() and co into their own include file Fuad Tabba
2021-09-30 13:04   ` Will Deacon
2021-09-22 12:46 ` [PATCH v6 02/12] KVM: arm64: Don't include switch.h into nvhe/kvm-main.c Fuad Tabba
2021-09-30 13:07   ` Will Deacon
2021-09-22 12:46 ` [PATCH v6 03/12] KVM: arm64: Move early handlers to per-EC handlers Fuad Tabba
2021-09-30 13:35   ` Will Deacon
2021-09-30 16:02     ` Marc Zyngier
2021-09-30 16:27     ` Marc Zyngier
2021-09-22 12:46 ` [PATCH v6 04/12] KVM: arm64: Add missing FORCE prerequisite in Makefile Fuad Tabba
2021-09-22 14:17   ` Marc Zyngier
2021-09-22 12:46 ` [PATCH v6 05/12] KVM: arm64: Pass struct kvm to per-EC handlers Fuad Tabba
2021-09-22 12:46 ` [PATCH v6 06/12] KVM: arm64: Add missing field descriptor for MDCR_EL2 Fuad Tabba
2021-09-22 12:46 ` [PATCH v6 07/12] KVM: arm64: Simplify masking out MTE in feature id reg Fuad Tabba
2021-09-22 12:47 ` [PATCH v6 08/12] KVM: arm64: Add handlers for protected VM System Registers Fuad Tabba
2021-10-05  8:52   ` Andrew Jones
2021-10-05 16:43     ` Fuad Tabba
2021-10-05  9:53   ` Marc Zyngier
2021-10-05 16:49     ` Fuad Tabba
2021-09-22 12:47 ` [PATCH v6 09/12] KVM: arm64: Initialize trap registers for protected VMs Fuad Tabba
2021-10-05  9:23   ` Marc Zyngier
2021-10-05  9:33     ` Fuad Tabba
2021-10-06  6:56   ` Andrew Jones
2021-09-22 12:47 ` [PATCH v6 10/12] KVM: arm64: Move sanitized copies of CPU features Fuad Tabba
2021-09-22 12:47 ` [PATCH v6 11/12] KVM: arm64: Trap access to pVM restricted features Fuad Tabba
2021-10-04 17:27   ` Marc Zyngier
2021-10-05  7:20     ` Fuad Tabba
2021-09-22 12:47 ` [PATCH v6 12/12] KVM: arm64: Handle protected guests at 32 bits Fuad Tabba
2021-10-05  8:48   ` Marc Zyngier [this message]
2021-10-05  9:05     ` Fuad Tabba
