* [PATCH RFC v3 00/11] KVM: Dirty ring support (QEMU part)
From: Peter Xu @ 2020-05-23 23:20 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Dr . David Alan Gilbert, peterx

I kept the dirty sync in kvm_set_phys_mem() for kvmslot removals, and left a
comment on the known issue with strict dirty sync, so we can fix it someday
for both dirty log and dirty ring.

v3:
- added "KVM: Use a big lock to replace per-kml slots_lock"
  this is preparing for the last patch where we'll reap kvm dirty ring when
  removing kvmslots.
- added "KVM: Simplify dirty log sync in kvm_set_phys_mem"
  it's kind of a fix, but also a preparation of the last patch so it'll be very
  easy to add the dirty ring sync there
- the last patch is changed to handle correctly the dirty sync in kvmslot
  removal, also comment there about the known issues.
- reordered the patches a bit
- NOTE: since we kept the sync in memslot removal, this version does not depend
  on any other QEMU series - it is based on QEMU master

v2:
- add R-bs from Dave
- change dirty-ring-size parameter from int64 to uint64_t [Dave]
- remove an assertion for KVM_GET_DIRTY_LOG [Dave]
- document update: "per vcpu" dirty ring [Dave]
- rename KVMReaperState to KVMDirtyRingReaperState [Dave]
- dump errno when kvm_init_vcpu fails with dirty ring [Dave]
- switch to use dirty-ring-gfns as parameter [Dave]
- comment MAP_SHARED [Dave]
- dump more info when enabling dirty ring fails [Dave]
- add kvm_dirty_ring_enabled flag to show whether dirty ring is enabled
- rewrote much of the last patch to reduce LOC; now we reap the dirty ring
  only with the BQL held to simplify things, allowing the main or vcpu thread
  to call kvm_dirty_ring_reap() directly to collect dirty pages, so we can
  drop a lot of synchronization variables like sems or eventfds.

For anyone who wants to try it (the kernel needs to be upgraded too):

KVM branch:
  https://github.com/xzpeter/linux/tree/kvm-dirty-ring

QEMU branch for testing:
  https://github.com/xzpeter/qemu/tree/kvm-dirty-ring

Overview
========

KVM dirty ring is a new interface for passing dirty bits from the
kernel to userspace.  Instead of using a bitmap for each memory
region, the dirty ring contains an array of dirtied GPAs to fetch,
one ring per vcpu.
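
For reference, each ring entry is a small per-page record.  A sketch
of the layout as defined in the kernel branch above (the final merged
version may differ):

  /* One dirty page record; each vcpu ring is an mmap()ed array of these. */
  struct kvm_dirty_gfn {
      __u32 flags;   /* KVM_DIRTY_GFN_F_DIRTY / KVM_DIRTY_GFN_F_RESET */
      __u32 slot;    /* (address space id << 16) | slot id */
      __u64 offset;  /* page offset (gfn) within the memslot */
  };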

There are a few major changes compared to how the old dirty logging
interface works:

- Granularity of dirty bits

  The KVM dirty ring interface does not offer memory-region-level
  granularity for collecting dirty bits (i.e., per KVM memory slot).
  Instead, dirty bits are collected globally for all the vcpus at
  once.  The major effect is on the VGA part, because VGA dirty
  tracking is enabled as long as the device is created, and it used
  to be at memory region granularity.  Now that operation is
  amplified to a whole-VM sync.  Maybe there's a smarter way to do
  the same thing in VGA with the new interface, but so far I don't
  see it affecting much, at least on regular VMs.

- Collection of dirty bits

  The old dirty logging interface collects KVM dirty bits at
  synchronization time.  The KVM dirty ring interface instead uses a
  standalone thread to do that.  So when another thread (e.g., the
  migration thread) wants to synchronize the dirty bits, it simply
  kicks that thread and waits until it flushes all the dirty bits to
  the ramblock dirty bitmap.
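
As a minimal sketch of the harvest step the reaper thread performs on
one vcpu's ring (mark_page_dirty() below is a placeholder stub, not an
actual QEMU helper, and the real code also needs memory barriers when
reading flags published by the kernel):

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>     /* needs the updated headers from this series */

  /* Stub standing in for updating the ramblock dirty bitmap. */
  static void mark_page_dirty(uint16_t as_id, uint16_t slot_id, uint64_t offset)
  {
      (void)as_id; (void)slot_id; (void)offset;
  }

  /* Walk one vcpu's ring from our private fetch index, collect every
   * published entry, then ask KVM to recycle them. */
  static uint32_t reap_one_ring(int vm_fd, struct kvm_dirty_gfn *ring,
                                uint32_t *fetch, uint32_t ring_size)
  {
      uint32_t count = 0;

      for (;;) {
          struct kvm_dirty_gfn *e = &ring[*fetch % ring_size];

          /* Stop at the first entry the kernel hasn't published yet. */
          if (!(e->flags & KVM_DIRTY_GFN_F_DIRTY)) {
              break;
          }
          mark_page_dirty(e->slot >> 16, e->slot & 0xffff, e->offset);
          e->flags |= KVM_DIRTY_GFN_F_RESET;   /* hand the entry back */
          (*fetch)++;
          count++;
      }
      if (count) {
          /* The kernel resets all entries flagged for reset. */
          ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
      }
      return count;
  }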

A new parameter "dirty-ring-size" is added to "-accel kvm".  By
default, the dirty ring is still disabled (size==0).  To enable it,
specify:

  -accel kvm,dirty-ring-size=65536

This establishes a 64K dirty ring buffer per vcpu.  Then when we
migrate, it'll switch to the dirty ring.
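
Under the hood this boils down to the new KVM capability plus a
per-vcpu mmap of the ring pages.  A simplified sketch, with error
handling omitted and vm_fd/vcpu_fd assumed to be the usual KVM fds:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <linux/kvm.h>     /* needs the updated headers from this series */

  /* Step 1: at VM init, before any vcpu is created. */
  static void enable_dirty_ring(int vm_fd, uint32_t ring_bytes)
  {
      struct kvm_enable_cap cap = {
          .cap  = KVM_CAP_DIRTY_LOG_RING,
          .args = { ring_bytes },          /* e.g. 65536 */
      };
      ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }

  /* Step 2: after creating each vcpu, map its ring.  MAP_SHARED is
   * required so our KVM_DIRTY_GFN_F_RESET updates reach the kernel. */
  static struct kvm_dirty_gfn *map_vcpu_ring(int vcpu_fd, uint32_t ring_bytes)
  {
      return mmap(NULL, ring_bytes, PROT_READ | PROT_WRITE, MAP_SHARED,
                  vcpu_fd, (off_t)KVM_DIRTY_LOG_PAGE_OFFSET * getpagesize());
  }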

I gave it a shot with a 24G guest, 8 vcpus, using a 10G NIC as the
migration channel.  When the guest is idle or the dirty workload is
small, I don't observe a major difference in total migration time.
With a higher random dirty workload (800MB/s dirty rate over 20G of
memory), the kvm dirty ring does worse.  Total migration time
(ping-pong migration 6 times, in seconds):

|-------------------------+---------------|
| dirty ring (4k entries) | dirty logging |
|-------------------------+---------------|
|                      70 |            58 |
|                      78 |            70 |
|                      72 |            48 |
|                      74 |            52 |
|                      83 |            49 |
|                      65 |            54 |
|-------------------------+---------------|

Summary:

dirty ring average:    73s
dirty logging average: 55s

The KVM dirty ring is slower in the above case.  The numbers suggest
that dirty logging is still preferred as the default, because
small/medium VMs are still the major use case and high dirty
workloads happen frequently too.  And that's what this series does.

Please refer to the code and comments for more information.

Thanks,

Peter Xu (11):
  linux-headers: Update
  memory: Introduce log_sync_global() to memory listener
  KVM: Fixup kvm_log_clear_one_slot() ioctl return check
  KVM: Use a big lock to replace per-kml slots_lock
  KVM: Create the KVMSlot dirty bitmap on flag changes
  KVM: Provide helper to get kvm dirty log
  KVM: Provide helper to sync dirty bitmap from slot to ramblock
  KVM: Simplify dirty log sync in kvm_set_phys_mem
  KVM: Cache kvm slot dirty bitmap size
  KVM: Add dirty-gfn-count property
  KVM: Dirty ring support

 accel/kvm/kvm-all.c         | 540 +++++++++++++++++++++++++++++++-----
 accel/kvm/trace-events      |   7 +
 include/exec/memory.h       |  12 +
 include/hw/core/cpu.h       |   8 +
 include/sysemu/kvm_int.h    |   7 +-
 linux-headers/asm-x86/kvm.h |   1 +
 linux-headers/linux/kvm.h   |  53 ++++
 memory.c                    |  33 ++-
 qemu-options.hx             |   5 +
 9 files changed, 581 insertions(+), 85 deletions(-)

-- 
2.26.2


