From: James Hogan <james.hogan@imgtec.com>
To: <linux-mips@linux-mips.org>
Cc: "James Hogan" <james.hogan@imgtec.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>,
	"Ralf Baechle" <ralf@linux-mips.org>,
	kvm@vger.kernel.org
Subject: [PATCH 9/13] KVM: MIPS/MMU: Handle dirty logging on GPA faults
Date: Mon, 16 Jan 2017 12:49:30 +0000	[thread overview]
Message-ID: <0e5f023d936dcb93600708ebb655673943f38f70.1484570878.git-series.james.hogan@imgtec.com> (raw)
In-Reply-To: <cover.99eec1b2ac935212acbcf2effacaab95cf6cdbf1.1484570878.git-series.james.hogan@imgtec.com>

Update kvm_mips_map_page() to handle logging of dirty guest physical
pages. Upcoming patches will propagate the dirty bit to the GVA page
tables.

A fast path is added for handling protection bits that can be resolved
without calling into KVM, currently just dirtying of clean pages being
written to.

The slow path marks the GPA page table entry writable only on writes,
and at the same time marks the page dirty in the dirty page logging
bitmask.
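
As background (this snippet is not part of the patch): the bitmap that
mark_page_dirty() sets bits in is the per-memslot dirty log that userspace
retrieves with the KVM_GET_DIRTY_LOG ioctl. A minimal sketch of that
userspace side, assuming vm_fd, slot_id and a suitably sized bitmap are
set up elsewhere by the VMM:

	#include <linux/kvm.h>	/* struct kvm_dirty_log, KVM_GET_DIRTY_LOG */
	#include <sys/ioctl.h>
	#include <err.h>

	/*
	 * Illustration only: fetch (and thereby clear) the dirty bitmap for
	 * one memory slot. @bitmap must hold one bit per page in the slot,
	 * rounded up to a multiple of 64 bits; each set bit on return marks
	 * a guest page written since the previous call.
	 */
	static void get_slot_dirty_log(int vm_fd, __u32 slot_id, void *bitmap)
	{
		struct kvm_dirty_log log = {
			.slot = slot_id,	/* id used with KVM_SET_USER_MEMORY_REGION */
			.dirty_bitmap = bitmap,
		};

		if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0)
			err(1, "KVM_GET_DIRTY_LOG");
	}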

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
 arch/mips/kvm/mmu.c | 74 +++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 70 insertions(+), 4 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 63a6d542ecb3..7962eea4ebc3 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -451,6 +451,58 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 }
 
 /**
+ * _kvm_mips_map_page_fast() - Fast path GPA fault handler.
+ * @vcpu:		VCPU pointer.
+ * @gpa:		Guest physical address of fault.
+ * @write_fault:	Whether the fault was due to a write.
+ * @out_entry:		New PTE for @gpa (written on success unless NULL).
+ * @out_buddy:		New PTE for @gpa's buddy (written on success unless
+ *			NULL).
+ *
+ * Perform fast path GPA fault handling, doing all that can be done without
+ * calling into KVM. This handles dirtying of clean pages (for dirty page
+ * logging).
+ *
+ * Returns:	0 on success, in which case we can update derived mappings and
+ *		resume guest execution.
+ *		-EFAULT on failure due to absent GPA mapping or write to
+ *		read-only page, in which case KVM must be consulted.
+ */
+static int _kvm_mips_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
+				   bool write_fault,
+				   pte_t *out_entry, pte_t *out_buddy)
+{
+	struct kvm *kvm = vcpu->kvm;
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	pte_t *ptep;
+	int ret = 0;
+
+	spin_lock(&kvm->mmu_lock);
+
+	/* Fast path - just check GPA page table for an existing entry */
+	ptep = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
+	if (!ptep || !pte_present(*ptep)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	if (write_fault && !pte_dirty(*ptep)) {
+		/* Track dirtying of pages */
+		set_pte(ptep, pte_mkdirty(*ptep));
+		mark_page_dirty(kvm, gfn);
+	}
+
+	if (out_entry)
+		*out_entry = *ptep;
+	if (out_buddy)
+		*out_buddy = *ptep_buddy(ptep);
+
+out:
+	spin_unlock(&kvm->mmu_lock);
+	return ret;
+}
+
+/**
  * kvm_mips_map_page() - Map a guest physical page.
  * @vcpu:		VCPU pointer.
  * @gpa:		Guest physical address of fault.
@@ -462,9 +514,9 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
  * Handle GPA faults by creating a new GPA mapping (or updating an existing
  * one).
  *
- * This takes care of asking KVM for the corresponding PFN, and creating a
- * mapping in the GPA page tables. Derived mappings (GVA page tables and TLBs)
- * must be handled by the caller.
+ * This takes care of marking pages dirty (dirty page tracking), asking KVM for
+ * the corresponding PFN, and creating a mapping in the GPA page tables. Derived
+ * mappings (GVA page tables and TLBs) must be handled by the caller.
  *
  * Returns:	0 on success, in which case the caller may use the @out_entry
  *		and @out_buddy PTEs to update derived mappings and resume guest
@@ -485,7 +537,12 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 	pte_t *ptep, entry, old_pte;
 	unsigned long prot_bits;
 
+	/* Try the fast path to handle clean pages */
 	srcu_idx = srcu_read_lock(&kvm->srcu);
+	err = _kvm_mips_map_page_fast(vcpu, gpa, write_fault, out_entry,
+				      out_buddy);
+	if (!err)
+		goto out;
 
 	/* We need a minimum of cached pages ready for page table creation */
 	err = mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES,
@@ -493,6 +550,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 	if (err)
 		goto out;
 
+	/* Slow path - ask KVM core whether we can access this GPA */
 	pfn = gfn_to_pfn(kvm, gfn);
 
 	if (is_error_noslot_pfn(pfn)) {
@@ -502,11 +560,19 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 
 	spin_lock(&kvm->mmu_lock);
 
+	/* Ensure page tables are allocated */
 	ptep = kvm_mips_pte_for_gpa(kvm, memcache, gpa);
 
-	prot_bits = __READABLE | _PAGE_PRESENT | __WRITEABLE;
+	/* Set up the PTE */
+	prot_bits = __READABLE | _PAGE_PRESENT | _PAGE_WRITE |
+		_page_cachable_default;
+	if (write_fault) {
+		prot_bits |= __WRITEABLE;
+		mark_page_dirty(kvm, gfn);
+	}
 	entry = pfn_pte(pfn, __pgprot(prot_bits));
 
+	/* Write the PTE */
 	old_pte = *ptep;
 	set_pte(ptep, entry);
 	if (pte_present(old_pte))
-- 
git-series 0.8.10

Thread overview: 30+ messages

2017-01-16 12:49 [PATCH 0/13] KVM: MIPS: Dirty logging, SYNC_MMU & READONLY_MEM James Hogan
2017-01-16 12:49 ` [PATCH 1/13] KVM: MIPS/T&E: Ignore user writes to CP0_Config7 James Hogan
2017-01-16 12:49 ` [PATCH 2/13] KVM: MIPS: Pass type of fault down to kvm_mips_map_page() James Hogan
2017-01-16 12:49 ` [PATCH 3/13] KVM: MIPS/T&E: Abstract bad access handling James Hogan
2017-01-16 12:49 ` [PATCH 4/13] KVM: MIPS/T&E: Treat unhandled guest KSeg0 as MMIO James Hogan
2017-01-16 12:49 ` [PATCH 5/13] KVM: MIPS/T&E: Handle read only GPA in TLB mod James Hogan
2017-01-16 12:49 ` [PATCH 6/13] KVM: MIPS/MMU: Add GPA PT mkclean helper James Hogan
2017-01-16 12:49 ` [PATCH 7/13] KVM: MIPS/MMU: Use generic dirty log & protect helper James Hogan
2017-01-16 12:49 ` [PATCH 8/13] KVM: MIPS: Clean & flush on dirty page logging enable James Hogan
2017-01-16 12:49 ` [PATCH 9/13] KVM: MIPS/MMU: Handle dirty logging on GPA faults James Hogan [this message]
2017-01-16 12:49 ` [PATCH 10/13] KVM: MIPS/MMU: Pass GPA PTE bits to KSeg0 GVA PTEs James Hogan
2017-01-16 12:49 ` [PATCH 11/13] KVM: MIPS/MMU: Pass GPA PTE bits to mapped GVA PTEs James Hogan
2017-01-16 12:49 ` [PATCH 12/13] KVM: MIPS/MMU: Implement KVM_CAP_SYNC_MMU James Hogan
2017-02-02 12:45   ` [PATCH v2 12/13] KVM: MIPS/MMU: Implement KVM_CAP_SYNC_MMU James Hogan
2017-01-16 12:49 ` [PATCH 13/13] KVM: MIPS: Claim KVM_CAP_READONLY_MEM support James Hogan
