From: Sean Christopherson <seanjc@google.com>
To: Michal Luczaj <mhal@rbox.co>
Cc: kvm@vger.kernel.org, pbonzini@redhat.com
Subject: Re: [PATCH 8/8] KVM: x86: Fix NULL pointer dereference in kvm_xen_set_evtchn_fast()
Date: Mon, 10 Oct 2022 23:28:56 +0000
Message-ID: <Y0SquPNxS5AOGcDP@google.com>
In-Reply-To: <20220921020140.3240092-9-mhal@rbox.co>

On Wed, Sep 21, 2022, Michal Luczaj wrote:
> There's a race between kvm_xen_set_evtchn_fast() and kvm_gpc_activate()
> resulting in a near-NULL pointer write.
> 
> 1. Deactivate shinfo cache:
> 
> kvm_xen_hvm_set_attr
> case KVM_XEN_ATTR_TYPE_SHARED_INFO
>  kvm_gpc_deactivate
>   kvm_gpc_unmap
>    gpc->valid = false
>    gpc->khva = NULL
>   gpc->active = false
> 
> Result: active = false, valid = false
> 
> 2. Cause cache refresh:
> 
> kvm_arch_vm_ioctl
> case KVM_XEN_HVM_EVTCHN_SEND
>  kvm_xen_hvm_evtchn_send
>   kvm_xen_set_evtchn
>    kvm_xen_set_evtchn_fast
>     kvm_gpc_check
>     return -EWOULDBLOCK because !gpc->valid
>    kvm_xen_set_evtchn_fast
>     return -EWOULDBLOCK
>    kvm_gpc_refresh
>     hva_to_pfn_retry
>      gpc->valid = true
>      gpc->khva = not NULL
> 
> Result: active = false, valid = true

This is the real bug.  KVM should not successfully refresh an inactive cache.
It's not just the potential NULL pointer deref: the cache also isn't on the
list of active caches, i.e. won't get mmu_notifier events, and so KVM could end
up with a use-after-free of userspace memory.
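
For reference, invalidation only sees caches that are on the list; roughly
(a simplified sketch of gfn_to_pfn_cache_invalidate_start(), with the pfn
and memslot checks trimmed, not the verbatim code):

	spin_lock(&kvm->gpc_lock);
	list_for_each_entry(gpc, &kvm->gpc_list, list) {
		write_lock_irq(&gpc->lock);

		/*
		 * A cache that isn't on gpc_list is never visited here,
		 * i.e. its mapping is never zapped and so can be consumed
		 * after the underlying memory is freed or moved.
		 */
		if (gpc->valid && gpc->uhva >= start && gpc->uhva < end)
			gpc->valid = false;

		write_unlock_irq(&gpc->lock);
	}
	spin_unlock(&kvm->gpc_lock);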

KVM_XEN_HVM_EVTCHN_SEND does check that the per-vCPU cache is active, but does so
outside of the gpc->lock.
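
The problematic shape is something like this (heavily simplified, not the
exact call chain):

	if (!gpc->active)	/* checked without gpc->lock held */
		return -EINVAL;

	/* The cache can be deactivated anywhere in this window... */

	read_lock_irqsave(&gpc->lock, flags);

	/*
	 * ...and a failed check here sends the caller off to refresh,
	 * which (without the below fix) happily repopulates the
	 * now-inactive cache.
	 */
	if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) {
		read_unlock_irqrestore(&gpc->lock, flags);
		return -EWOULDBLOCK;
	}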

Minus your race condition analysis, which I'll insert into the changelog (assuming
this works), I believe the proper fix is to check "active" during check and refresh.
Oof, and there are ordering bugs too.  Compile-tested patch below.

If this fixes things on your end (I'll properly test tomorrow too), I'll post a
v2 of the entire series.  There are some cleanups that can be done on top, e.g.
I think we should drop kvm_gpc_unmap() entirely until there's actually a user,
because it's not at all obvious that it (a) is necessary and (b) has desirable
behavior.

Note, the below patch applies after patch 1 of this series.  I don't know if anyone
will actually want to backport the fix, but it's not too hard to keep the backport
dependency to just patch 1.
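
For context, the consumer pattern the fix protects looks roughly like so (a
simplified sketch, not verbatim from kvm_xen_set_evtchn() and friends):

	read_lock_irqsave(&gpc->lock, flags);
	while (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) {
		read_unlock_irqrestore(&gpc->lock, flags);

		/* Now fails (-EINVAL) if the cache was deactivated. */
		if (kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE))
			return -EWOULDBLOCK;

		read_lock_irqsave(&gpc->lock, flags);
	}

	/* gpc->khva is stable while gpc->lock is held. */
	... consume gpc->khva ...

	read_unlock_irqrestore(&gpc->lock, flags);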

--
From: Sean Christopherson <seanjc@google.com>
Date: Mon, 10 Oct 2022 13:06:13 -0700
Subject: [PATCH] KVM: Reject attempts to consume or refresh inactive
 gfn_to_pfn_cache

Reject kvm_gpc_check() and kvm_gpc_refresh() if the cache is inactive.
Not checking the active flag during refresh is particularly egregious, as
KVM can end up with a valid, inactive cache, which can lead to a variety
of use-after-free bugs, e.g. consuming a NULL kernel pointer or missing
an mmu_notifier invalidation due to the cache not being on the list of
caches to invalidate.

Note, "active" needs to be set if and only if the cache is on the list
of caches, i.e. is reachable via mmu_notifier events.  If a relevant
mmu_notifier event occurs while the cache is "active" but not on the
list, KVM will not acquire the cache's lock and so will not serialize
the mmu_notifier event with active users and/or kvm_gpc_refresh().

A race between KVM_XEN_ATTR_TYPE_SHARED_INFO and KVM_XEN_HVM_EVTCHN_SEND
can be exploited to trigger the bug.

<will insert your awesome race analysis>

Reported-by: Michal Luczaj <mhal@rbox.co>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 virt/kvm/pfncache.c | 38 ++++++++++++++++++++++++++++++++------
 1 file changed, 32 insertions(+), 6 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index b32ed4a7c900..dfc72aa88d71 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -81,6 +81,9 @@ bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 
+	if (!gpc->active)
+		return false;
+
 	if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
 		return false;
 
@@ -240,8 +243,9 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	unsigned long page_offset = gpa & ~PAGE_MASK;
-	kvm_pfn_t old_pfn, new_pfn;
+	bool unmap_old = false;
 	unsigned long old_uhva;
+	kvm_pfn_t old_pfn;
 	void *old_khva;
 	int ret = 0;
 
@@ -261,6 +265,11 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 
 	write_lock_irq(&gpc->lock);
 
+	if (!gpc->active) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
 	old_pfn = gpc->pfn;
 	old_khva = gpc->khva - offset_in_page(gpc->khva);
 	old_uhva = gpc->uhva;
@@ -305,14 +314,15 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		gpc->khva = NULL;
 	}
 
-	/* Snapshot the new pfn before dropping the lock! */
-	new_pfn = gpc->pfn;
+	/* Detect a pfn change before dropping the lock! */
+	unmap_old = (old_pfn != gpc->pfn);
 
+out_unlock:
 	write_unlock_irq(&gpc->lock);
 
 	mutex_unlock(&gpc->refresh_lock);
 
-	if (old_pfn != new_pfn)
+	if (unmap_old)
 		gpc_unmap_khva(kvm, old_pfn, old_khva);
 
 	return ret;
@@ -368,11 +378,19 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		gpc->vcpu = vcpu;
 		gpc->usage = usage;
 		gpc->valid = false;
-		gpc->active = true;
 
 		spin_lock(&kvm->gpc_lock);
 		list_add(&gpc->list, &kvm->gpc_list);
 		spin_unlock(&kvm->gpc_lock);
+
+		/*
+		 * Activate the cache after adding it to the list, a concurrent
+		 * refresh must not establish a mapping until the cache is
+		 * reachable by mmu_notifier events.
+		 */
+		write_lock_irq(&gpc->lock);
+		gpc->active = true;
+		write_unlock_irq(&gpc->lock);
 	}
 	return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len);
 }
@@ -381,12 +399,20 @@ EXPORT_SYMBOL_GPL(kvm_gpc_activate);
 void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 {
 	if (gpc->active) {
+		/*
+		 * Deactivate the cache before removing it from the list, KVM
+		 * must stall mmu_notifier events until all users go away, i.e.
+		 * until gpc->lock is dropped and refresh is guaranteed to fail.
+		 */
+		write_lock_irq(&gpc->lock);
+		gpc->active = false;
+		write_unlock_irq(&gpc->lock);
+
 		spin_lock(&kvm->gpc_lock);
 		list_del(&gpc->list);
 		spin_unlock(&kvm->gpc_lock);
 
 		kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
-		gpc->active = false;
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);

base-commit: 09e5b3d617d28e3011253370f827151cc6cba6ad
-- 

