From: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
To: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Avi Kivity <avi@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>, KVM <kvm@vger.kernel.org>
Subject: [PATCH v3 9/9] KVM: MMU: document mmu-lock and fast page fault
Date: Fri, 20 Apr 2012 16:21:15 +0800	[thread overview]
Message-ID: <4F911C7B.2030602@linux.vnet.ibm.com> (raw)
In-Reply-To: <4F911B74.4040305@linux.vnet.ibm.com>

Document fast page fault and mmu-lock in locking.txt

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 Documentation/virtual/kvm/locking.txt |  148 ++++++++++++++++++++++++++++++++-
 1 files changed, 147 insertions(+), 1 deletions(-)

diff --git a/Documentation/virtual/kvm/locking.txt b/Documentation/virtual/kvm/locking.txt
index 3b4cd3b..089d462 100644
--- a/Documentation/virtual/kvm/locking.txt
+++ b/Documentation/virtual/kvm/locking.txt
@@ -6,7 +6,147 @@ KVM Lock Overview

 (to be written)

-2. Reference
+2: Exception
+------------
+
+Fast page fault:
+
+Fast page fault is the fast path which fixes the guest page fault without
+acquiring the mmu-lock on x86. Currently, a page fault can take the fast
+path only if the shadow page table entry is present and the fault is caused
+by write-protection; in that case we only need to change the W bit of the
+spte.
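+
+A minimal sketch of the entry check (illustrative only; PFERR_* are the
+existing x86 error-code masks):
+
+	static bool page_fault_can_be_fast(u32 error_code)
+	{
+		/*
+		 * Only a present fault that is caused by write-protection
+		 * can be fixed out of mmu-lock.
+		 */
+		return (error_code & PFERR_PRESENT_MASK) &&
+		       (error_code & PFERR_WRITE_MASK);
+	}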
+
+What we use to avoid the races is the SPTE_ALLOW_WRITE bit and the
+SPTE_WRITE_PROTECT bit on the spte:
+- SPTE_ALLOW_WRITE means the gfn is writable on both guest and host.
+- SPTE_WRITE_PROTECT means the gfn is write-protected for shadow page
+  write protection.
+
+On the fast page fault path, we use cmpxchg to atomically set the spte W
+bit if spte.SPTE_ALLOW_WRITE = 1 and spte.SPTE_WRITE_PROTECT = 0. This is
+safe because any change to these bits is detected by the cmpxchg.
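+
+A minimal sketch of that lockless check-and-set (illustrative only, not
+the exact mmu.c code; the function name is made up):
+
+	static bool fast_pf_fix_spte(u64 *sptep, u64 old_spte)
+	{
+		/* Only dirty-log write protection may be fixed locklessly. */
+		if (!(old_spte & SPTE_ALLOW_WRITE) ||
+		    (old_spte & SPTE_WRITE_PROTECT))
+			return false;
+
+		/*
+		 * If anything, including these two bits, changed after
+		 * old_spte was read, the cmpxchg fails and the slow path
+		 * under mmu-lock must be taken instead.
+		 */
+		return cmpxchg64(sptep, old_spte,
+				 old_spte | PT_WRITABLE_MASK) == old_spte;
+	}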
+
+But we need to check these cases carefully:
+1): The mapping from gfn to pfn
+
+The mapping from gfn to pfn may change, since the cmpxchg can only ensure
+that the pfn recorded in the spte is not changed. This is an ABA problem;
+for example, the following can happen:
+
+At the beginning:
+gpte = gfn1
+gfn1 is mapped to pfn1 on host
+spte is the shadow page table entry corresponding with gpte and
+spte = pfn1
+
+   VCPU 0                           VCPU 1
+on fast page fault path:
+
+   old_spte = *spte;
+                                 pfn1 is swapped out:
+                                    spte = 0;
+
+                                 pfn1 is re-allocated for gfn2.
+
+                                 gpte is changed to point to
+                                 gfn2 by the guest:
+                                    spte = pfn1;
+
+   if (cmpxchg(spte, old_spte, old_spte+W))
+	mark_page_dirty(vcpu->kvm, gfn1)
+             OOPS!!!
+
+We log the dirty page for gfn1, which means the write to gfn2 is lost
+from the dirty bitmap.
+
+For a direct sp, we can easily avoid this since the gfn mapped by the spte
+of a direct sp is fixed. For an indirect sp, before we do the cmpxchg, we
+call gfn_to_pfn_atomic() to pin the gfn to the pfn, because after
+gfn_to_pfn_atomic():
+- We hold a refcount on the pfn; that means the pfn can not be freed and
+  reused for another gfn.
+- The pfn is writable; that means it can not be shared between different
+  gfns by KSM.
+
+Then, we can ensure the dirty bitmap is correctly set for the gfn.
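+
+A sketch of the resulting order of operations for an indirect sp
+(illustrative; the spte_to_pfn() check and the error handling are
+simplified):
+
+	pfn = gfn_to_pfn_atomic(vcpu->kvm, gfn);
+	if (is_error_pfn(pfn))
+		return false;
+
+	if (spte_to_pfn(old_spte) != pfn)
+		/* The gfn->pfn mapping changed: take the slow path. */
+		goto exit;
+
+	if (cmpxchg64(sptep, old_spte, old_spte | PT_WRITABLE_MASK) ==
+	    old_spte)
+		/* pfn is pinned, so gfn still maps to it: safe to log. */
+		mark_page_dirty(vcpu->kvm, gfn);
+exit:
+	kvm_release_pfn_clean(pfn);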
+
+2): Flushing tlbs due to shadow page table write protection
+
+In rmap_write_protect(), we always need to flush tlbs if
+spte.SPTE_ALLOW_WRITE = 1, even if the current spte is read-only. The
+reason is that the fast page fault path may mark the spte writable, and
+that writable spte can then be cached in the tlb. Consider this case:
+
+At the beginning:
+spte.W = 0
+spte.SPTE_WRITE_PROTECT = 0
+spte.SPTE_ALLOW_WRITE = 1
+
+   VCPU 0                          VCPU 1
+In rmap_write_protect():
+
+   flush = false;
+
+   if (spte.W == 1)
+      flush = true;
+
+                               On fast page fault path:
+                                  old_spte = *spte
+                                  cmpxchg(spte, old_spte, old_spte + W)
+
+                               the spte is fetched/prefetched into
+                               tlb by CPU
+
+   spte = (spte | SPTE_WRITE_PROTECT) &
+                      ~PT_WRITABLE_MASK;
+
+   if (flush)
+         kvm_flush_remote_tlbs(vcpu->kvm)
+               OOPS!!!
+
+The tlbs are not flushed since the spte is read-only, but a stale writable
+spte created by the fast page fault path has already been cached in the
+tlbs.
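+
+A sketch of the resulting flush condition in rmap_write_protect()
+(illustrative):
+
+	/*
+	 * Flush whenever the spte may have been writable, i.e. whenever
+	 * the fast page fault path could have set the W bit concurrently,
+	 * not only when the W bit is currently set.
+	 */
+	if (spte & SPTE_ALLOW_WRITE)
+		flush = true;
+
+	spte = (spte | SPTE_WRITE_PROTECT) & ~PT_WRITABLE_MASK;
+	mmu_spte_update(sptep, spte);
+
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);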
+
+3): Dirty bit tracking
+In the original code, the spte can be updated fast (non-atomically) if the
+spte is read-only and the Accessed bit has already been set, since then
+neither the Accessed bit nor the Dirty bit can be lost.
+
+But this is not true after fast page fault, since the spte can be marked
+writable between reading the spte and updating it. Consider this case:
+
+At the beginning:
+spte.W = 0
+spte.Accessed = 1
+
+   VCPU 0                                       VCPU 1
+In mmu_spte_clear_track_bits():
+
+   old_spte = *spte;
+
+   /* 'if' condition is satisfied. */
+   if (old_spte.Accessed == 1 &&
+        old_spte.W == 0)
+      spte = 0ull;
+                                         on fast page fault path:
+                                             spte.W = 1
+                                         memory write on the spte:
+                                             spte.Dirty = 1
+
+
+   else
+      old_spte = xchg(spte, 0ull)
+
+
+   if (old_spte.Accessed == 1)
+      kvm_set_pfn_accessed(spte.pfn);
+   if (old_spte.Dirty == 1)
+      kvm_set_pfn_dirty(spte.pfn);
+      OOPS!!!
+
+The Dirty bit is lost in this case. To avoid it, we call the slow path
+(__update_clear_spte_slow()) to update the spte if the spte can be
+changed by fast page fault.
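+
+A sketch of that rule in mmu_spte_clear_track_bits() (illustrative;
+spte_can_be_writable() is an assumed helper and __update_clear_spte_fast()
+mirrors the slow variant named above):
+
+	if (!spte_can_be_writable(old_spte))
+		/* No fast page fault can race with us: fast update is safe. */
+		__update_clear_spte_fast(sptep, 0ull);
+	else
+		/* The W and Dirty bits may be set concurrently: use xchg. */
+		old_spte = __update_clear_spte_slow(sptep, 0ull);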
+
+3. Reference
 ------------

 Name:		kvm_lock
@@ -23,3 +163,9 @@ Arch:		x86
 Protects:	- kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
 		- tsc offset in vmcb
 Comment:	'raw' because updating the tsc offsets must not be preempted.
+
+Name:		kvm->mmu_lock
+Type:		spinlock_t
+Arch:		any
+Protects:	- shadow page/shadow tlb entry
+Comment:	it is a spinlock since it is used in the mmu notifier.
-- 
1.7.7.6


Thread overview: 35+ messages
2012-04-20  8:16 [PATCH v3 0/9] KVM: MMU: fast page fault Xiao Guangrong
2012-04-20  8:17 ` [PATCH v3 1/9] KVM: MMU: return bool in __rmap_write_protect Xiao Guangrong
2012-04-20  8:17 ` [PATCH v3 2/9] KVM: MMU: abstract spte write-protect Xiao Guangrong
2012-04-20 21:33   ` Marcelo Tosatti
2012-04-21  1:10     ` Takuya Yoshikawa
2012-04-21  4:34       ` Xiao Guangrong
2012-04-21  3:24     ` Xiao Guangrong
2012-04-21  4:18       ` Marcelo Tosatti
2012-04-21  6:52         ` Xiao Guangrong
2012-04-20  8:18 ` [PATCH v3 3/9] KVM: VMX: export PFEC.P bit on ept Xiao Guangrong
2012-04-20  8:18 ` [PATCH v3 4/9] KVM: MMU: introduce SPTE_ALLOW_WRITE bit Xiao Guangrong
2012-04-20 21:39   ` Marcelo Tosatti
2012-04-21  3:30     ` Xiao Guangrong
2012-04-21  4:22       ` Marcelo Tosatti
2012-04-21  6:55         ` Xiao Guangrong
2012-04-22 15:12         ` Avi Kivity
2012-04-23  7:24           ` Xiao Guangrong
2012-04-20  8:19 ` [PATCH v3 5/9] KVM: MMU: introduce SPTE_WRITE_PROTECT bit Xiao Guangrong
2012-04-20 21:52   ` Marcelo Tosatti
2012-04-21  0:40     ` Marcelo Tosatti
2012-04-21  0:55       ` Marcelo Tosatti
2012-04-21  1:38         ` Takuya Yoshikawa
2012-04-21  4:29         ` Xiao Guangrong
2012-04-21  4:00       ` Xiao Guangrong
2012-04-24  0:45         ` Marcelo Tosatti
2012-04-24  3:34           ` Xiao Guangrong
2012-04-21  3:47     ` Xiao Guangrong
2012-04-21  4:38       ` Marcelo Tosatti
2012-04-21  7:25         ` Xiao Guangrong
2012-04-24  0:24           ` Marcelo Tosatti
2012-04-20  8:19 ` [PATCH v3 6/9] KVM: MMU: fast path of handling guest page fault Xiao Guangrong
2012-04-20  8:20 ` [PATCH v3 7/9] KVM: MMU: trace fast " Xiao Guangrong
2012-04-20  8:20 ` [PATCH v3 8/9] KVM: MMU: fix kvm_mmu_pagetable_walk tracepoint Xiao Guangrong
2012-04-20  8:21 ` Xiao Guangrong [this message]
2012-04-21  0:59 ` [PATCH v3 0/9] KVM: MMU: fast page fault Marcelo Tosatti
