From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Peter Xu <peterx@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Paolo Bonzini <pbonzini@redhat.com>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
Date: Mon, 2 Dec 2019 12:10:36 -0800
Message-ID: <20191202201036.GJ4063@linux.intel.com>
In-Reply-To: <20191129213505.18472-5-peterx@redhat.com>
On Fri, Nov 29, 2019 at 04:34:54PM -0500, Peter Xu wrote:
> This patch is heavily based on previous work from Lei Cao
> <lei.cao@stratus.com> and Paolo Bonzini <pbonzini@redhat.com>. [1]
>
> KVM currently uses large bitmaps to track dirty memory. These bitmaps
> are copied to userspace when userspace queries KVM for its dirty page
> information. The use of bitmaps is mostly sufficient for live
> migration, as large parts of memory are dirtied from one log-dirty
> pass to another. However, in a checkpointing system, the number of
> dirty pages is small and in fact it is often bounded---the VM is
> paused when it has dirtied a pre-defined number of pages. Traversing a
> large, sparsely populated bitmap to find set bits is time-consuming,
> as is copying the bitmap to user-space.
>
> A similar issue exists for live migration when the guest memory is
> huge but the dirtying rate is low. In that case, each dirty sync
> still needs to pull the whole dirty bitmap to userspace and analyse
> every bit, even if it's mostly zeros.
>
> The preferred data structure for the above scenarios is a dense list of
> guest frame numbers (GFN). This patch series stores the dirty list in
> kernel memory that can be memory mapped into userspace to allow speedy
> harvesting.
>
> We defined two new data structures:
>
> struct kvm_dirty_ring;
> struct kvm_dirty_ring_indexes;
>
> Firstly, kvm_dirty_ring is defined to represent a ring of dirty
> pages. When dirty tracking is enabled, dirty gfns can be pushed onto
> the ring.
>
> Secondly, kvm_dirty_ring_indexes is defined to represent the
> user/kernel interface of each ring. Currently it contains two
> indexes: (1) avail_index represents where we should push our next
> GFN (written by kernel), while (2) fetch_index represents where the
> userspace should fetch the next dirty GFN (written by userspace).
>
> One complete ring is composed of one kvm_dirty_ring plus its
> corresponding kvm_dirty_ring_indexes.
>
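As an aside, here is a userspace mock of the interface described above (the
field names follow this description, but the layout and the harvest helper
are my own illustration, not the patch's actual uAPI):

```c
#include <assert.h>
#include <stdint.h>

/* Mock of the two indexes shared between kernel and userspace. */
struct kvm_dirty_ring_indexes {
	uint32_t avail_index;  /* written by the kernel: next free slot */
	uint32_t fetch_index;  /* written by userspace: next slot to harvest */
};

/*
 * Number of dirty GFNs waiting to be harvested.  The indexes are
 * assumed to be free-running, so unsigned subtraction handles the
 * wraparound case naturally.
 */
static uint32_t dirty_ring_pending(const struct kvm_dirty_ring_indexes *idx)
{
	return idx->avail_index - idx->fetch_index;
}
```

The kernel only ever advances avail_index and userspace only ever advances
fetch_index, which is what makes a single-producer/single-consumer ring work
without a lock on the interface itself.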
> Currently, we have N+1 rings for each VM of N vcpus:
>
> - for each vcpu, we have 1 per-vcpu dirty ring,
> - for each vm, we have 1 per-vm dirty ring
Why? I assume the purpose of per-vcpu rings is to avoid contention between
threads, but the motivation needs to be explicitly stated. And why is a
per-vm fallback ring needed?
If my assumption is correct, have other approaches been tried/profiled?
E.g. using cmpxchg to reserve N entries in a shared ring. IMO,
adding kvm_get_running_vcpu() is a hack that is just asking for future
abuse and the vcpu/vm/as_id interactions in mark_page_dirty_in_ring()
look extremely fragile. I also dislike having two different mechanisms
for accessing the ring (lock for per-vm, something else for per-vcpu).
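To make the cmpxchg idea concrete, something along these lines (a userspace
C11 sketch of my own, not code from the patch; all names are invented):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define RING_ENTRIES 1024u  /* must be a power of two */

struct shared_dirty_ring {
	_Atomic uint32_t avail_index;  /* free-running producer cursor */
	uint32_t fetch_index;          /* advanced by the harvester */
	uint64_t gfns[RING_ENTRIES];
};

/*
 * Reserve @n contiguous slots with a cmpxchg loop so that multiple
 * vcpu threads can share one ring without taking a lock.  Returns the
 * first reserved slot, or -1 if the ring doesn't have @n free entries.
 */
static int ring_reserve(struct shared_dirty_ring *ring, uint32_t n)
{
	uint32_t old = atomic_load_explicit(&ring->avail_index,
					    memory_order_relaxed);
	for (;;) {
		if (old + n - ring->fetch_index > RING_ENTRIES)
			return -1;
		/* On failure, @old is reloaded and the loop retries. */
		if (atomic_compare_exchange_weak(&ring->avail_index,
						 &old, old + n))
			return (int)(old & (RING_ENTRIES - 1));
	}
}
```

A lock-free reservation like this is only the starting point; the gfn payload
writes would still need release/acquire pairing against the harvester in real
code, but it shows that a single shared ring doesn't inherently require a lock.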
> Please refer to the documentation update in this patch for more
> details.
>
> Note that this patch implements the core logic of dirty ring buffer.
> It's still disabled for all archs for now. Also, we'll address some
> of the other issues in follow up patches before it's firstly enabled
> on x86.
>
> [1] https://patchwork.kernel.org/patch/10471409/
>
> Signed-off-by: Lei Cao <lei.cao@stratus.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
...
> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> new file mode 100644
> index 000000000000..9264891f3c32
> --- /dev/null
> +++ b/virt/kvm/dirty_ring.c
> @@ -0,0 +1,156 @@
> +#include <linux/kvm_host.h>
> +#include <linux/kvm.h>
> +#include <linux/vmalloc.h>
> +#include <linux/kvm_dirty_ring.h>
> +
> +u32 kvm_dirty_ring_get_rsvd_entries(void)
> +{
> + return KVM_DIRTY_RING_RSVD_ENTRIES + kvm_cpu_dirty_log_size();
> +}
> +
> +int kvm_dirty_ring_alloc(struct kvm *kvm, struct kvm_dirty_ring *ring)
> +{
> + u32 size = kvm->dirty_ring_size;
Just pass in @size; that way you don't need @kvm. And the callers will be
less ugly, e.g. the initial allocation won't need to speculatively set
kvm->dirty_ring_size.
> +
> + ring->dirty_gfns = vmalloc(size);
> + if (!ring->dirty_gfns)
> + return -ENOMEM;
> + memset(ring->dirty_gfns, 0, size);
> +
> + ring->size = size / sizeof(struct kvm_dirty_gfn);
> + ring->soft_limit =
> + (kvm->dirty_ring_size / sizeof(struct kvm_dirty_gfn)) -
And passing @size avoids issues like this where a local var is ignored.
> + kvm_dirty_ring_get_rsvd_entries();
> + ring->dirty_index = 0;
> + ring->reset_index = 0;
> + spin_lock_init(&ring->lock);
> +
> + return 0;
> +}
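For reference, the shape I have in mind (a userspace sketch only: calloc()
stands in for the zeroing vmalloc, and struct kvm_dirty_gfn plus the
reserved-entry count are stand-ins for the real definitions):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t u32;
typedef uint64_t u64;

/* Stand-in for the real entry layout (assumed 16 bytes here). */
struct kvm_dirty_gfn {
	u32 pad;
	u32 slot;
	u64 offset;
};

struct kvm_dirty_ring {
	u32 size;        /* in entries, not bytes */
	u32 soft_limit;
	u32 dirty_index;
	u32 reset_index;
	struct kvm_dirty_gfn *dirty_gfns;
};

static u32 kvm_dirty_ring_get_rsvd_entries(void)
{
	return 64;  /* placeholder for RSVD + arch dirty-log size */
}

/*
 * Take @size directly so that no caller has to set kvm->dirty_ring_size
 * speculatively, and a local copy can't silently diverge from it.
 */
static int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, u32 size)
{
	ring->dirty_gfns = calloc(1, size);  /* zeroed, like vzalloc() */
	if (!ring->dirty_gfns)
		return -ENOMEM;

	ring->size = size / sizeof(struct kvm_dirty_gfn);
	ring->soft_limit = ring->size - kvm_dirty_ring_get_rsvd_entries();
	ring->dirty_index = 0;
	ring->reset_index = 0;
	return 0;
}
```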
> +
...
> +void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
> +{
> + if (ring->dirty_gfns) {
Why condition freeing the dirty ring on kvm->dirty_ring_size? This
function obviously protects itself. Not to mention vfree() also plays
nice with a NULL input.
> + vfree(ring->dirty_gfns);
> + ring->dirty_gfns = NULL;
> + }
> +}
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 681452d288cd..8642c977629b 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -64,6 +64,8 @@
> #define CREATE_TRACE_POINTS
> #include <trace/events/kvm.h>
>
> +#include <linux/kvm_dirty_ring.h>
> +
> /* Worst case buffer size needed for holding an integer. */
> #define ITOA_MAX_LEN 12
>
> @@ -149,6 +151,10 @@ static void mark_page_dirty_in_slot(struct kvm *kvm,
> struct kvm_vcpu *vcpu,
> struct kvm_memory_slot *memslot,
> gfn_t gfn);
> +static void mark_page_dirty_in_ring(struct kvm *kvm,
> + struct kvm_vcpu *vcpu,
> + struct kvm_memory_slot *slot,
> + gfn_t gfn);
>
> __visible bool kvm_rebooting;
> EXPORT_SYMBOL_GPL(kvm_rebooting);
> @@ -359,11 +365,22 @@ int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
> vcpu->preempted = false;
> vcpu->ready = false;
>
> + if (kvm->dirty_ring_size) {
> + r = kvm_dirty_ring_alloc(vcpu->kvm, &vcpu->dirty_ring);
> + if (r) {
> + kvm->dirty_ring_size = 0;
> + goto fail_free_run;
This looks wrong: kvm->dirty_ring_size is used to free allocations, i.e.
the rings already allocated for earlier vcpus will leak if a later vcpu's
allocation fails.
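To illustrate the leak with a toy model (nothing here is real KVM code):
teardown frees per-vcpu rings only when dirty_ring_size is non-zero, so
zeroing it in the failure path strands whatever was already allocated.

```c
#include <assert.h>
#include <stdbool.h>

#define NR_VCPUS 4

struct vm {
	unsigned int dirty_ring_size;
	bool ring_allocated[NR_VCPUS];
};

/* Pretend the ring allocation for vcpu 2 fails. */
static int ring_alloc(struct vm *vm, int i)
{
	if (i == 2)
		return -1;
	vm->ring_allocated[i] = true;
	return 0;
}

static int init_vcpus(struct vm *vm, bool zero_size_on_failure)
{
	int i;

	for (i = 0; i < NR_VCPUS; i++) {
		if (ring_alloc(vm, i)) {
			if (zero_size_on_failure)
				vm->dirty_ring_size = 0;  /* the bug */
			return -1;
		}
	}
	return 0;
}

/*
 * Teardown as written in the patch: freeing is conditioned on
 * dirty_ring_size, so rings allocated before the failure leak if
 * the size was zeroed.  Returns the number of leaked rings.
 */
static int teardown_and_count_leaks(struct vm *vm)
{
	int i, leaked = 0;

	for (i = 0; i < NR_VCPUS; i++)
		if (vm->dirty_ring_size)
			vm->ring_allocated[i] = false;
	for (i = 0; i < NR_VCPUS; i++)
		leaked += vm->ring_allocated[i];
	return leaked;
}
```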
> + }
> + }
> +
> r = kvm_arch_vcpu_init(vcpu);
> if (r < 0)
> - goto fail_free_run;
> + goto fail_free_ring;
> return 0;
>
> +fail_free_ring:
> + if (kvm->dirty_ring_size)
> + kvm_dirty_ring_free(&vcpu->dirty_ring);
> fail_free_run:
> free_page((unsigned long)vcpu->run);
> fail:
> @@ -381,6 +398,8 @@ void kvm_vcpu_uninit(struct kvm_vcpu *vcpu)
> put_pid(rcu_dereference_protected(vcpu->pid, 1));
> kvm_arch_vcpu_uninit(vcpu);
> free_page((unsigned long)vcpu->run);
> + if (vcpu->kvm->dirty_ring_size)
> + kvm_dirty_ring_free(&vcpu->dirty_ring);
> }
> EXPORT_SYMBOL_GPL(kvm_vcpu_uninit);
>
> @@ -690,6 +709,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
> struct kvm *kvm = kvm_arch_alloc_vm();
> int r = -ENOMEM;
> int i;
> + struct page *page;
>
> if (!kvm)
> return ERR_PTR(-ENOMEM);
> @@ -705,6 +725,14 @@ static struct kvm *kvm_create_vm(unsigned long type)
>
> BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
>
> + page = alloc_page(GFP_KERNEL | __GFP_ZERO);
> + if (!page) {
> + r = -ENOMEM;
> + goto out_err_alloc_page;
> + }
> + kvm->vm_run = page_address(page);
> + BUILD_BUG_ON(sizeof(struct kvm_vm_run) > PAGE_SIZE);
> +
> if (init_srcu_struct(&kvm->srcu))
> goto out_err_no_srcu;
> if (init_srcu_struct(&kvm->irq_srcu))
> @@ -775,6 +803,9 @@ static struct kvm *kvm_create_vm(unsigned long type)
> out_err_no_irq_srcu:
> cleanup_srcu_struct(&kvm->srcu);
> out_err_no_srcu:
> + free_page((unsigned long)page);
> + kvm->vm_run = NULL;
No need to nullify vm_run.
> +out_err_alloc_page:
> kvm_arch_free_vm(kvm);
> mmdrop(current->mm);
> return ERR_PTR(r);
> @@ -800,6 +831,15 @@ static void kvm_destroy_vm(struct kvm *kvm)
> int i;
> struct mm_struct *mm = kvm->mm;
>
> + if (kvm->dirty_ring_size) {
> + kvm_dirty_ring_free(&kvm->vm_dirty_ring);
> + }
Unnecessary parentheses.
> +
> + if (kvm->vm_run) {
> + free_page((unsigned long)kvm->vm_run);
> + kvm->vm_run = NULL;
> + }
> +
> kvm_uevent_notify_change(KVM_EVENT_DESTROY_VM, kvm);
> kvm_destroy_vm_debugfs(kvm);
> kvm_arch_sync_events(kvm);
> @@ -2301,7 +2341,7 @@ static void mark_page_dirty_in_slot(struct kvm *kvm,
> {
> if (memslot && memslot->dirty_bitmap) {
> unsigned long rel_gfn = gfn - memslot->base_gfn;
> -
> + mark_page_dirty_in_ring(kvm, vcpu, memslot, gfn);
> set_bit_le(rel_gfn, memslot->dirty_bitmap);
> }
> }
> @@ -2649,6 +2689,13 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
> }
> EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);