* [PATCH] KVM: arm64: Prevent unconditional donation of unmapped regions from the host
From: Will Deacon @ 2023-05-18  9:58 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, oliver.upton, Will Deacon, Quentin Perret

Since host stage-2 mappings are created lazily, we cannot rely solely on
the pte in order to recover the target physical address when checking a
host-initiated memory transition as this permits donation of unmapped
regions corresponding to MMIO or "no-map" memory.

Instead of inspecting the pte, move the addr_is_allowed_memory() check
into the host callback function where it is passed the physical address
directly from the walker.

Cc: Quentin Perret <qperret@google.com>
Fixes: e82edcc75c4e ("KVM: arm64: Implement do_share() helper for sharing memory")
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 2e9ec4a2a4a3..a8813b212996 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -575,7 +575,7 @@ struct pkvm_mem_donation {
 
 struct check_walk_data {
 	enum pkvm_page_state	desired;
-	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte);
+	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte, u64 addr);
 };
 
 static int __check_page_state_visitor(const struct kvm_pgtable_visit_ctx *ctx,
@@ -583,10 +583,7 @@ static int __check_page_state_visitor(const struct kvm_pgtable_visit_ctx *ctx,
 {
 	struct check_walk_data *d = ctx->arg;
 
-	if (kvm_pte_valid(ctx->old) && !addr_is_allowed_memory(kvm_pte_to_phys(ctx->old)))
-		return -EINVAL;
-
-	return d->get_page_state(ctx->old) == d->desired ? 0 : -EPERM;
+	return d->get_page_state(ctx->old, ctx->addr) == d->desired ? 0 : -EPERM;
 }
 
 static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -601,8 +598,11 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
 {
+	if (!addr_is_allowed_memory(addr))
+		return PKVM_NOPAGE;
+
 	if (!kvm_pte_valid(pte) && pte)
 		return PKVM_NOPAGE;
 
@@ -709,7 +709,7 @@ static int host_complete_donation(u64 addr, const struct pkvm_mem_transition *tx
 	return host_stage2_set_owner_locked(addr, size, host_id);
 }
 
-static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte, u64 addr)
 {
 	if (!kvm_pte_valid(pte))
 		return PKVM_NOPAGE;
-- 
2.40.1.698.g37aff9b760-goog



* Re: [PATCH] KVM: arm64: Prevent unconditional donation of unmapped regions from the host
From: Oliver Upton @ 2023-05-21 20:08 UTC (permalink / raw)
  To: Will Deacon; +Cc: kvmarm, maz, Quentin Perret

Hi Will,

On Thu, May 18, 2023 at 10:58:44AM +0100, Will Deacon wrote:
> Since host stage-2 mappings are created lazily, we cannot rely solely on
> the pte in order to recover the target physical address when checking a
> host-initiated memory transition as this permits donation of unmapped
> regions corresponding to MMIO or "no-map" memory.
> 
> Instead of inspecting the pte, move the addr_is_allowed_memory() check
> into the host callback function where it is passed the physical address
> directly from the walker.

nit: the subtext here is that the host's stage-2 is an identity map.
That part might not be entirely obvious, as you're passing through the
input address of the walk.

Not worth respinning, but I wonder if some additional documentation on
host stage-2 management would help folks outside of pKVM with reviewing
pKVM changes. Cache behavior could also use a bit more clarification so
folks understand why pKVM elides CMOs and doesn't use FWB.

Of course, I'm likely fooling myself in believing that additional
documentation will get non-pKVM folks to review pKVM changes :)

> Cc: Quentin Perret <qperret@google.com>
> Fixes: e82edcc75c4e ("KVM: arm64: Implement do_share() helper for sharing memory")
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>


-- 
Thanks,
Oliver


* Re: [PATCH] KVM: arm64: Prevent unconditional donation of unmapped regions from the host
From: Marc Zyngier @ 2023-05-24 12:49 UTC (permalink / raw)
  To: Will Deacon, kvmarm; +Cc: oliver.upton, Quentin Perret

On Thu, 18 May 2023 10:58:44 +0100, Will Deacon wrote:
> Since host stage-2 mappings are created lazily, we cannot rely solely on
> the pte in order to recover the target physical address when checking a
> host-initiated memory transition as this permits donation of unmapped
> regions corresponding to MMIO or "no-map" memory.
> 
> Instead of inspecting the pte, move the addr_is_allowed_memory() check
> into the host callback function where it is passed the physical address
> directly from the walker.

Applied to fixes, thanks!

[1/1] KVM: arm64: Prevent unconditional donation of unmapped regions from the host
      commit: 09cce60bddd6461a93a5bf434265a47827d1bc6f

Cheers,

	M.
-- 
Without deviation from the norm, progress is not possible.


