* [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
@ 2021-06-30 20:08 Peter Xu
From: Peter Xu @ 2021-06-30 20:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: Hailiang Zhang, Juan Quintela, David Hildenbrand,
	Dr . David Alan Gilbert, peterx, Wei Wang,
	Leonardo Bras Soares Passos

Taking the mutex for every single dirty bit we clear is too slow, especially since
we take/release it even when the dirty bit is already cleared.  So far the mutex is
only used to sync the special case of qemu_guest_free_page_hint() against the
migration thread, nothing really that serious yet.  Let's move the lock up to the callers.
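
To make the change concrete, here is a minimal standalone before/after sketch of
the locking pattern (plain pthread types instead of QemuMutex, and made-up names;
this is not the actual QEMU code):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;
    static uint64_t bmap[1024];                 /* stand-in for rb->bmap */
    static uint64_t dirty_pages;                /* stand-in for migration_dirty_pages */

    /* Before: lock/unlock once per page, even if the bit is already clear. */
    static bool clear_dirty_per_page(uint64_t page)
    {
        uint64_t mask = 1ULL << (page % 64);
        pthread_mutex_lock(&bitmap_mutex);
        bool was_set = bmap[page / 64] & mask;
        bmap[page / 64] &= ~mask;
        dirty_pages -= was_set;
        pthread_mutex_unlock(&bitmap_mutex);
        return was_set;
    }

    /* After: the caller holds bitmap_mutex around the whole scan instead. */
    static bool clear_dirty_caller_locked(uint64_t page)
    {
        uint64_t mask = 1ULL << (page % 64);
        bool was_set = bmap[page / 64] & mask;
        bmap[page / 64] &= ~mask;
        dirty_pages -= was_set;
        return was_set;
    }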

There are two callers of migration_bitmap_clear_dirty().

For migration, move it into ram_save_iterate().  With the help of the MAX_WAIT
logic, we'll only run ram_save_iterate() for roughly 50ms at a time, so we take
the lock once at its entry.  It also means any caller of
qemu_guest_free_page_hint() can be delayed; but that should be very rare, only
during migration, and I don't see a problem with it.
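
As an illustration of the rate limiting we rely on, a standalone sketch of the
pattern (MAX_WAIT_MS, save_one_page() and the pthread mutex are stand-ins made up
for this mail, not QEMU API):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    #define MAX_WAIT_MS 50                      /* the ~50ms budget mentioned above */

    static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int pages_left = 1 << 20;            /* fake amount of work */

    static uint64_t now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    static bool save_one_page(void)             /* placeholder for the real work */
    {
        return pages_left-- > 0;
    }

    void save_iteration(void)
    {
        uint64_t t0 = now_ms();
        int i = 0;

        pthread_mutex_lock(&bitmap_mutex);      /* one lock for the whole burst */
        while (save_one_page()) {
            /* Check the clock only every 64 pages to keep the loop cheap. */
            if ((i & 63) == 0 && now_ms() - t0 > MAX_WAIT_MS) {
                break;                          /* bounded hold time */
            }
            i++;
        }
        pthread_mutex_unlock(&bitmap_mutex);    /* free-page hints can run again */
    }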

For COLO, move it up into colo_flush_ram_cache().  I think COLO forgot to take
that lock even when calling ramblock_sync_dirty_bitmap(), whereas
migration_bitmap_sync() is an example that takes it correctly.  So let the mutex
cover both the ramblock_sync_dirty_bitmap() and migration_bitmap_clear_dirty() calls.

It would even be possible to drop the lock entirely and use atomic operations on
rb->bmap and the variable migration_dirty_pages.  I didn't do that, just to stay
on the safe side; it's also hard to predict whether the frequent atomic ops would
bring overhead of their own, e.g. on huge VMs where this happens very often.  If
that ever becomes a real problem, we can keep a local counter and flush it with
atomic ops periodically.  Keep it simple for now.
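
For reference only, a rough standalone sketch of that atomic alternative using
C11 atomics and made-up names; this is explicitly not what the patch does:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define FLUSH_EVERY 256                     /* arbitrary batching threshold */

    static _Atomic uint64_t bmap[1024];         /* stand-in for rb->bmap */
    static _Atomic uint64_t migration_dirty_pages;

    /* Atomically clear one dirty bit; return whether it was set.
     * Caller guarantees page < 64 * 1024 in this sketch. */
    static bool clear_dirty_atomic(uint64_t page)
    {
        uint64_t mask = 1ULL << (page % 64);
        uint64_t old = atomic_fetch_and(&bmap[page / 64], ~mask);
        return old & mask;
    }

    /* Scan [start, start + count) and batch the shared counter updates. */
    void scan_pages(uint64_t start, uint64_t count)
    {
        uint64_t cleared = 0;                   /* thread-local counter */

        for (uint64_t p = start; p < start + count; p++) {
            if (clear_dirty_atomic(p)) {
                if (++cleared == FLUSH_EVERY) {
                    atomic_fetch_sub(&migration_dirty_pages, cleared);
                    cleared = 0;
                }
            }
        }
        if (cleared) {
            atomic_fetch_sub(&migration_dirty_pages, cleared);
        }
    }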

Cc: Wei Wang <wei.w.wang@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Juan Quintela <quintela@redhat.com>
Cc: Leonardo Bras Soares Passos <lsoaresp@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 723af67c2e..9f2965675d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -795,8 +795,6 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
 {
     bool ret;
 
-    QEMU_LOCK_GUARD(&rs->bitmap_mutex);
-
     /*
      * Clear dirty bitmap if needed.  This _must_ be called before we
      * send any of the page in the chunk because we need to make sure
@@ -2834,6 +2832,14 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         goto out;
     }
 
+    /*
+     * We'll hold this lock for a while, but it's okay for two reasons.
+     * Firstly, the only other thread that can take it is the one calling
+     * qemu_guest_free_page_hint(), which should be rare; secondly, see
+     * MAX_WAIT below (if curious, further see commit 4508bd9ed8053ce),
+     * which guarantees that we'll release it on a regular basis.
+     */
+    qemu_mutex_lock(&rs->bitmap_mutex);
     WITH_RCU_READ_LOCK_GUARD() {
         if (ram_list.version != rs->last_version) {
             ram_state_reset(rs);
@@ -2893,6 +2899,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
             i++;
         }
     }
+    qemu_mutex_unlock(&rs->bitmap_mutex);
 
     /*
      * Must occur before EOS (or any QEMUFile operation)
@@ -3682,6 +3689,7 @@ void colo_flush_ram_cache(void)
     unsigned long offset = 0;
 
     memory_global_dirty_log_sync();
+    qemu_mutex_lock(&ram_state->bitmap_mutex);
     WITH_RCU_READ_LOCK_GUARD() {
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             ramblock_sync_dirty_bitmap(ram_state, block);
@@ -3710,6 +3718,7 @@ void colo_flush_ram_cache(void)
         }
     }
     trace_colo_flush_ram_cache_end();
+    qemu_mutex_unlock(&ram_state->bitmap_mutex);
 }
 
 /**
-- 
2.31.1


