From: Peter Xu <peterx@redhat.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: dinechin@redhat.com, sean.j.christopherson@intel.com,
	pbonzini@redhat.com, jasowang@redhat.com, yan.y.zhao@intel.com,
	mst@redhat.com, peterx@redhat.com, kevin.tian@intel.com,
	alex.williamson@redhat.com, dgilbert@redhat.com,
	vkuznets@redhat.com
Subject: [PATCH 04/14] KVM: Pass in kvm pointer into mark_page_dirty_in_slot()
Date: Tue,  4 Feb 2020 21:58:32 -0500
Message-ID: <20200205025842.367575-1-peterx@redhat.com>
In-Reply-To: <20200205025105.367213-1-peterx@redhat.com>

The kvm context will be needed to implement the KVM dirty ring.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
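Note: the change below is purely mechanical; every caller of
mark_page_dirty_in_slot() simply passes along the struct kvm it already
has at hand.  As a rough, hypothetical sketch of why the pointer is
wanted, once the dirty ring is introduced in patch 05/14 the helper
could end up looking something like the code below.  The
dirty_ring_size field and the kvm_dirty_ring_push() helper are
assumptions made for this sketch only, not code from this series:

static void mark_page_dirty_in_slot(struct kvm *kvm,
				    struct kvm_memory_slot *memslot,
				    gfn_t gfn)
{
	if (memslot && memslot->dirty_bitmap) {
		unsigned long rel_gfn = gfn - memslot->base_gfn;

		/* Existing behavior: record the page in the per-slot bitmap. */
		set_bit_le(rel_gfn, memslot->dirty_bitmap);

		/*
		 * Hypothetical dirty-ring hook (field and helper names are
		 * assumed for illustration): only reachable because this
		 * patch threads the kvm pointer through.
		 */
		if (kvm->dirty_ring_size)
			kvm_dirty_ring_push(kvm, memslot, rel_gfn);
	}
}
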
 virt/kvm/kvm_main.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 69190f9f7bd8..5307f6e33587 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -144,7 +144,9 @@ static void hardware_disable_all(void);
 
 static void kvm_io_bus_destroy(struct kvm_io_bus *bus);
 
-static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn);
+static void mark_page_dirty_in_slot(struct kvm *kvm,
+				    struct kvm_memory_slot *memslot,
+				    gfn_t gfn);
 
 __visible bool kvm_rebooting;
 EXPORT_SYMBOL_GPL(kvm_rebooting);
@@ -2057,7 +2059,8 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
 
-static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
+static int __kvm_write_guest_page(struct kvm *kvm,
+				  struct kvm_memory_slot *memslot, gfn_t gfn,
 			          const void *data, int offset, int len)
 {
 	int r;
@@ -2069,7 +2072,7 @@ static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
 	r = __copy_to_user((void __user *)addr + offset, data, len);
 	if (r)
 		return -EFAULT;
-	mark_page_dirty_in_slot(memslot, gfn);
+	mark_page_dirty_in_slot(kvm, memslot, gfn);
 	return 0;
 }
 
@@ -2078,7 +2081,7 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn,
 {
 	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
 
-	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+	return __kvm_write_guest_page(kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_write_guest_page);
 
@@ -2087,7 +2090,7 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 {
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 
-	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+	return __kvm_write_guest_page(vcpu->kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page);
 
@@ -2206,7 +2209,7 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 	r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
 	if (r)
 		return -EFAULT;
-	mark_page_dirty_in_slot(ghc->memslot, gpa >> PAGE_SHIFT);
+	mark_page_dirty_in_slot(kvm, ghc->memslot, gpa >> PAGE_SHIFT);
 
 	return 0;
 }
@@ -2273,7 +2276,8 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 }
 EXPORT_SYMBOL_GPL(kvm_clear_guest);
 
-static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot,
+static void mark_page_dirty_in_slot(struct kvm *kvm,
+				    struct kvm_memory_slot *memslot,
 				    gfn_t gfn)
 {
 	if (memslot && memslot->dirty_bitmap) {
@@ -2288,7 +2292,7 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 	struct kvm_memory_slot *memslot;
 
 	memslot = gfn_to_memslot(kvm, gfn);
-	mark_page_dirty_in_slot(memslot, gfn);
+	mark_page_dirty_in_slot(kvm, memslot, gfn);
 }
 EXPORT_SYMBOL_GPL(mark_page_dirty);
 
@@ -2297,7 +2301,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 	struct kvm_memory_slot *memslot;
 
 	memslot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-	mark_page_dirty_in_slot(memslot, gfn);
+	mark_page_dirty_in_slot(vcpu->kvm, memslot, gfn);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
 
-- 
2.24.1



Thread overview: 25+ messages
2020-02-05  2:50 [PATCH v4 00/14] KVM: Dirty ring interface Peter Xu
2020-02-05  2:50 ` [PATCH v4 01/14] KVM: X86: Change parameter for fast_page_fault tracepoint Peter Xu
2020-02-05  2:50 ` [PATCH v4 02/14] KVM: Cache as_id in kvm_memory_slot Peter Xu
2020-02-05  2:50 ` [PATCH v4 03/14] KVM: X86: Don't track dirty for KVM_SET_[TSS_ADDR|IDENTITY_MAP_ADDR] Peter Xu
2020-02-05  2:58 ` Peter Xu [this message]
2020-02-05  2:58   ` [PATCH 05/14] KVM: X86: Implement ring-based dirty memory tracking Peter Xu
2020-02-05  2:58   ` [PATCH 06/14] KVM: Make dirty ring exclusive to dirty bitmap log Peter Xu
2020-02-05  2:58   ` [PATCH 07/14] KVM: Don't allocate dirty bitmap if dirty ring is enabled Peter Xu
2020-02-05  2:58   ` [PATCH 08/14] KVM: selftests: Always clear dirty bitmap after iteration Peter Xu
2020-02-05  2:58   ` [PATCH 09/14] KVM: selftests: Sync uapi/linux/kvm.h to tools/ Peter Xu
2020-02-05  2:58   ` [PATCH 10/14] KVM: selftests: Use a single binary for dirty/clear log test Peter Xu
2020-02-05  9:28     ` Andrew Jones
2020-02-05 15:46       ` Peter Xu
2020-02-05 17:11         ` Andrew Jones
2020-02-05 17:39           ` Peter Xu
2020-02-06 22:40             ` Peter Xu
2020-02-07  8:31               ` Andrew Jones
2020-02-05  2:58   ` [PATCH 11/14] KVM: selftests: Introduce after_vcpu_run hook for dirty " Peter Xu
2020-02-05  2:58   ` [PATCH 12/14] KVM: selftests: Add dirty ring buffer test Peter Xu
2020-02-05  2:58   ` [PATCH 13/14] KVM: selftests: Let dirty_log_test async for dirty ring test Peter Xu
2020-02-05  9:48     ` Andrew Jones
2020-02-05 15:55       ` Peter Xu
2020-02-05 17:15         ` Andrew Jones
2020-02-05  3:00 ` [PATCH 14/14] KVM: selftests: Add "-c" parameter to dirty log test Peter Xu
2020-03-03 17:33 ` [PATCH v4 00/14] KVM: Dirty ring interface Peter Xu
