* [PATCH v1] migration: clear the memory region dirty bitmap when skipping free pages
@ 2021-07-14  7:51 Wei Wang
  2021-07-14 10:27 ` Michael S. Tsirkin
  0 siblings, 1 reply; 5+ messages in thread
From: Wei Wang @ 2021-07-14  7:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: mst, david, dgilbert, peterx, quintela

When skipping free pages, their corresponding dirty bits in the memory
region dirty bitmap need to be cleared. Otherwise the skipped pages will
be sent in the next round after the migration thread syncs dirty bits
from the memory region dirty bitmap.

migration_clear_memory_region_dirty_bitmap_range is put outside the
bitmap_mutex, becasue memory_region_clear_dirty_bitmap is possible to block
on the kvm slot mutex (don't want holding bitmap_mutex while blocked on
another mutex), and clear_bmap_test_and_clear uses atomic operation.
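
For illustration only (not part of the patch): with 4KiB target pages
(TARGET_PAGE_BITS == 12) and the usual 1GiB clear_bmap chunk size
(clear_bmap_shift == 18), the chunk math used by the new range helper works
out as in the standalone sketch below; the start/npages values are made-up
example numbers:

    #include <stdio.h>

    int main(void)
    {
        unsigned long chunk_pages = 1UL << 18;  /* pages per 1GiB chunk */
        unsigned long start = 260000;           /* hypothetical hinted start page */
        unsigned long npages = 4096;            /* hypothetical hint length, 16MiB */
        unsigned long nchunks = (start + npages) / chunk_pages
                                - start / chunk_pages + 1;

        /* Page 260000 is in chunk 0; 260000 + 4096 = 264096 crosses the
         * 262144-page (1GiB) boundary into chunk 1, so both chunks get their
         * memory region dirty bitmap cleared. */
        printf("chunks to clear: %lu\n", nchunks);
        return 0;
    }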

Cc: David Hildenbrand <david@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 capstone        |  2 +-
 migration/ram.c | 73 +++++++++++++++++++++++++++++++++++++------------
 slirp           |  2 +-
 ui/keycodemapdb |  2 +-
 4 files changed, 58 insertions(+), 21 deletions(-)

diff --git a/capstone b/capstone
index f8b1b83301..22ead3e0bf 160000
--- a/capstone
+++ b/capstone
@@ -1 +1 @@
-Subproject commit f8b1b833015a4ae47110ed068e0deb7106ced66d
+Subproject commit 22ead3e0bfdb87516656453336160e0a37b066bf
diff --git a/migration/ram.c b/migration/ram.c
index 88ff34f574..c44c6e2fed 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -789,6 +789,51 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
     return find_next_bit(bitmap, size, start);
 }
 
+static void migration_clear_memory_region_dirty_bitmap(RAMState *rs,
+                                                       RAMBlock *rb,
+                                                       unsigned long page)
+{
+    uint8_t shift;
+    hwaddr size, start;
+
+    if (!rb->clear_bmap || !clear_bmap_test_and_clear(rb, page)) {
+        return;
+    }
+
+    shift = rb->clear_bmap_shift;
+    /*
+     * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
+     * can make things easier sometimes since then start address
+     * of the small chunk will always be 64 pages aligned so the
+     * bitmap will always be aligned to unsigned long. We should
+     * even be able to remove this restriction but I'm simply
+     * keeping it.
+     */
+    assert(shift >= 6);
+
+    size = 1ULL << (TARGET_PAGE_BITS + shift);
+    start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
+    trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
+    memory_region_clear_dirty_bitmap(rb->mr, start, size);
+}
+
+static void
+migration_clear_memory_region_dirty_bitmap_range(RAMState *rs,
+                                                 RAMBlock *rb,
+                                                 unsigned long start,
+                                                 unsigned long npages)
+{
+    unsigned long page_to_clear, i, nchunks;
+    unsigned long chunk_pages = 1UL << rb->clear_bmap_shift;
+
+    nchunks = (start + npages) / chunk_pages - start / chunk_pages + 1;
+
+    for (i = 0; i < nchunks; i++) {
+        page_to_clear = start + i * chunk_pages;
+        migration_clear_memory_region_dirty_bitmap(rs, rb, page_to_clear);
+    }
+}
+
 static inline bool migration_bitmap_clear_dirty(RAMState *rs,
                                                 RAMBlock *rb,
                                                 unsigned long page)
@@ -805,26 +850,9 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
      * the page in the chunk we clear the remote dirty bitmap for all.
      * Clearing it earlier won't be a problem, but too late will.
      */
-    if (rb->clear_bmap && clear_bmap_test_and_clear(rb, page)) {
-        uint8_t shift = rb->clear_bmap_shift;
-        hwaddr size = 1ULL << (TARGET_PAGE_BITS + shift);
-        hwaddr start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
-
-        /*
-         * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
-         * can make things easier sometimes since then start address
-         * of the small chunk will always be 64 pages aligned so the
-         * bitmap will always be aligned to unsigned long.  We should
-         * even be able to remove this restriction but I'm simply
-         * keeping it.
-         */
-        assert(shift >= 6);
-        trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
-        memory_region_clear_dirty_bitmap(rb->mr, start, size);
-    }
+    migration_clear_memory_region_dirty_bitmap(rs, rb, page);
 
     ret = test_and_clear_bit(page, rb->bmap);
-
     if (ret) {
         rs->migration_dirty_pages--;
     }
@@ -2742,6 +2770,15 @@ void qemu_guest_free_page_hint(void *addr, size_t len)
         start = offset >> TARGET_PAGE_BITS;
         npages = used_len >> TARGET_PAGE_BITS;
 
+        /*
+         * The skipped free pages are equivalent to having been sent from
+         * clear_bmap's perspective, so clear their initially-set bits in the
+         * memory region bitmap. Otherwise those skipped pages will be sent
+         * in the next round after syncing from the memory region bitmap.
+         */
+        migration_clear_memory_region_dirty_bitmap_range(ram_state, block,
+                                                         start, npages);
+
         qemu_mutex_lock(&ram_state->bitmap_mutex);
         ram_state->migration_dirty_pages -=
                       bitmap_count_one_with_offset(block->bmap, start, npages);
diff --git a/slirp b/slirp
index 8f43a99191..2faae0f778 160000
--- a/slirp
+++ b/slirp
@@ -1 +1 @@
-Subproject commit 8f43a99191afb47ca3f3c6972f6306209f367ece
+Subproject commit 2faae0f778f818fadc873308f983289df697eb93
diff --git a/ui/keycodemapdb b/ui/keycodemapdb
index 6119e6e19a..320f92c36a 160000
--- a/ui/keycodemapdb
+++ b/ui/keycodemapdb
@@ -1 +1 @@
-Subproject commit 6119e6e19a050df847418de7babe5166779955e4
+Subproject commit 320f92c36a80bfafc5d57834592a7be5fd79f104
-- 
2.25.1




* Re: [PATCH v1] migration: clear the memory region dirty bitmap when skipping free pages
  2021-07-14  7:51 [PATCH v1] migration: clear the memory region dirty bitmap when skipping free pages Wei Wang
@ 2021-07-14 10:27 ` Michael S. Tsirkin
  2021-07-14 10:30   ` David Hildenbrand
  0 siblings, 1 reply; 5+ messages in thread
From: Michael S. Tsirkin @ 2021-07-14 10:27 UTC (permalink / raw)
  To: Wei Wang; +Cc: peterx, david, qemu-devel, dgilbert, quintela

On Wed, Jul 14, 2021 at 03:51:04AM -0400, Wei Wang wrote:
> When skipping free pages, their corresponding dirty bits in the memory
> region dirty bitmap need to be cleared. Otherwise the skipped pages will
> be sent in the next round after the migration thread syncs dirty bits
> from the memory region dirty bitmap.
> 
> migration_clear_memory_region_dirty_bitmap_range is put outside the
> bitmap_mutex, becasue

because?

> memory_region_clear_dirty_bitmap is possible to block
> on the kvm slot mutex (don't want holding bitmap_mutex while blocked on
> another mutex), and clear_bmap_test_and_clear uses atomic operation.
> 
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> ---
>  capstone        |  2 +-

Seems unnecessary.

>  migration/ram.c | 73 +++++++++++++++++++++++++++++++++++++------------
>  slirp           |  2 +-
>  ui/keycodemapdb |  2 +-

These too.

>  4 files changed, 58 insertions(+), 21 deletions(-)
> 
> diff --git a/capstone b/capstone
> index f8b1b83301..22ead3e0bf 160000
> --- a/capstone
> +++ b/capstone
> @@ -1 +1 @@
> -Subproject commit f8b1b833015a4ae47110ed068e0deb7106ced66d
> +Subproject commit 22ead3e0bfdb87516656453336160e0a37b066bf
> diff --git a/migration/ram.c b/migration/ram.c
> index 88ff34f574..c44c6e2fed 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -789,6 +789,51 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>      return find_next_bit(bitmap, size, start);
>  }
>  
> +static void migration_clear_memory_region_dirty_bitmap(RAMState *rs,
> +                                                       RAMBlock *rb,
> +                                                       unsigned long page)
> +{
> +    uint8_t shift;
> +    hwaddr size, start;
> +
> +    if (!rb->clear_bmap || !clear_bmap_test_and_clear(rb, page)) {
> +        return;
> +    }
> +
> +    shift = rb->clear_bmap_shift;
> +    /*
> +     * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
> +     * can make things easier sometimes since then start address
> +     * of the small chunk will always be 64 pages aligned so the
> +     * bitmap will always be aligned to unsigned long. We should
> +     * even be able to remove this restriction but I'm simply
> +     * keeping it.
> +     */
> +    assert(shift >= 6);
> +
> +    size = 1ULL << (TARGET_PAGE_BITS + shift);
> +    start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
> +    trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
> +    memory_region_clear_dirty_bitmap(rb->mr, start, size);
> +}
> +
> +static void
> +migration_clear_memory_region_dirty_bitmap_range(RAMState *rs,
> +                                                 RAMBlock *rb,
> +                                                 unsigned long start,
> +                                                 unsigned long npages)
> +{
> +    unsigned long page_to_clear, i, nchunks;
> +    unsigned long chunk_pages = 1UL << rb->clear_bmap_shift;
> +
> +    nchunks = (start + npages) / chunk_pages - start / chunk_pages + 1;
> +
> +    for (i = 0; i < nchunks; i++) {
> +        page_to_clear = start + i * chunk_pages;
> +        migration_clear_memory_region_dirty_bitmap(rs, rb, page_to_clear);
> +    }
> +}
> +
>  static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>                                                  RAMBlock *rb,
>                                                  unsigned long page)
> @@ -805,26 +850,9 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>       * the page in the chunk we clear the remote dirty bitmap for all.
>       * Clearing it earlier won't be a problem, but too late will.
>       */
> -    if (rb->clear_bmap && clear_bmap_test_and_clear(rb, page)) {
> -        uint8_t shift = rb->clear_bmap_shift;
> -        hwaddr size = 1ULL << (TARGET_PAGE_BITS + shift);
> -        hwaddr start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
> -
> -        /*
> -         * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
> -         * can make things easier sometimes since then start address
> -         * of the small chunk will always be 64 pages aligned so the
> -         * bitmap will always be aligned to unsigned long.  We should
> -         * even be able to remove this restriction but I'm simply
> -         * keeping it.
> -         */
> -        assert(shift >= 6);
> -        trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
> -        memory_region_clear_dirty_bitmap(rb->mr, start, size);
> -    }
> +    migration_clear_memory_region_dirty_bitmap(rs, rb, page);
>  
>      ret = test_and_clear_bit(page, rb->bmap);
> -
>      if (ret) {
>          rs->migration_dirty_pages--;
>      }
> @@ -2742,6 +2770,15 @@ void qemu_guest_free_page_hint(void *addr, size_t len)
>          start = offset >> TARGET_PAGE_BITS;
>          npages = used_len >> TARGET_PAGE_BITS;
>  
> +        /*
> +         * The skipped free pages are equivalent to having been sent from
> +         * clear_bmap's perspective, so clear their initially-set bits in the
> +         * memory region bitmap. Otherwise those skipped pages will be sent
> +         * in the next round after syncing from the memory region bitmap.
> +         */
> +        migration_clear_memory_region_dirty_bitmap_range(ram_state, block,
> +                                                         start, npages);
> +
>          qemu_mutex_lock(&ram_state->bitmap_mutex);
>          ram_state->migration_dirty_pages -=
>                        bitmap_count_one_with_offset(block->bmap, start, npages);
> diff --git a/slirp b/slirp
> index 8f43a99191..2faae0f778 160000
> --- a/slirp
> +++ b/slirp
> @@ -1 +1 @@
> -Subproject commit 8f43a99191afb47ca3f3c6972f6306209f367ece
> +Subproject commit 2faae0f778f818fadc873308f983289df697eb93
> diff --git a/ui/keycodemapdb b/ui/keycodemapdb
> index 6119e6e19a..320f92c36a 160000
> --- a/ui/keycodemapdb
> +++ b/ui/keycodemapdb
> @@ -1 +1 @@
> -Subproject commit 6119e6e19a050df847418de7babe5166779955e4
> +Subproject commit 320f92c36a80bfafc5d57834592a7be5fd79f104
> -- 
> 2.25.1




* Re: [PATCH v1] migration: clear the memory region dirty bitmap when skipping free pages
  2021-07-14 10:27 ` Michael S. Tsirkin
@ 2021-07-14 10:30   ` David Hildenbrand
  2021-07-14 14:58     ` Wang, Wei W
  0 siblings, 1 reply; 5+ messages in thread
From: David Hildenbrand @ 2021-07-14 10:30 UTC (permalink / raw)
  To: Michael S. Tsirkin, Wei Wang; +Cc: peterx, qemu-devel, dgilbert, quintela

On 14.07.21 12:27, Michael S. Tsirkin wrote:
> On Wed, Jul 14, 2021 at 03:51:04AM -0400, Wei Wang wrote:
>> When skipping free pages, their corresponding dirty bits in the memory
>> region dirty bitmap need to be cleared. Otherwise the skipped pages will
>> be sent in the next round after the migration thread syncs dirty bits
>> from the memory region dirty bitmap.
>>
>> migration_clear_memory_region_dirty_bitmap_range is put outside the
>> bitmap_mutex, becasue
> 
> because?
> 
>> memory_region_clear_dirty_bitmap is possible to block
>> on the kvm slot mutex (don't want holding bitmap_mutex while blocked on
>> another mutex), and clear_bmap_test_and_clear uses atomic operation.

How is that different from our existing caller?

Please either clean everything up, completely avoiding the lock 
(separate patch), or move it under the lock.

Or am I missing something important?

-- 
Thanks,

David / dhildenb




* RE: [PATCH v1] migration: clear the memory region dirty bitmap when skipping free pages
  2021-07-14 10:30   ` David Hildenbrand
@ 2021-07-14 14:58     ` Wang, Wei W
  2021-07-14 15:24       ` Peter Xu
  0 siblings, 1 reply; 5+ messages in thread
From: Wang, Wei W @ 2021-07-14 14:58 UTC (permalink / raw)
  To: David Hildenbrand, Michael S. Tsirkin
  Cc: peterx, qemu-devel, dgilbert, quintela

On Wednesday, July 14, 2021 6:30 PM, David Hildenbrand wrote:
> 
> On 14.07.21 12:27, Michael S. Tsirkin wrote:
> > On Wed, Jul 14, 2021 at 03:51:04AM -0400, Wei Wang wrote:
> >> When skipping free pages, their corresponding dirty bits in the
> >> memory region dirty bitmap need to be cleared. Otherwise the skipped
> >> pages will be sent in the next round after the migration thread syncs
> >> dirty bits from the memory region dirty bitmap.
> >>
> >> migration_clear_memory_region_dirty_bitmap_range is put outside the
> >> bitmap_mutex, becasue
> >
> > because?
> >
> >> memory_region_clear_dirty_bitmap is possible to block on the kvm slot
> >> mutex (don't want holding bitmap_mutex while blocked on another
> >> mutex), and clear_bmap_test_and_clear uses atomic operation.
> 
> How is that different from our existing caller?
> 
> Please either clean everything up, completely avoiding the lock (separate
> patch), or move it under the lock.
> 
> Or am I missing something important?

Having it outside the lock seems OK to me and Peter. I'm not sure whether Dave
or Juan knows the reason why clear_bmap needs to be under the mutex, given that
it uses an atomic operation.

Best,
Wei


* Re: [PATCH v1] migration: clear the memory region dirty bitmap when skipping free pages
  2021-07-14 14:58     ` Wang, Wei W
@ 2021-07-14 15:24       ` Peter Xu
  0 siblings, 0 replies; 5+ messages in thread
From: Peter Xu @ 2021-07-14 15:24 UTC (permalink / raw)
  To: Wang, Wei W
  Cc: quintela, Michael S. Tsirkin, qemu-devel, dgilbert, David Hildenbrand

On Wed, Jul 14, 2021 at 02:58:31PM +0000, Wang, Wei W wrote:
> On Wednesday, July 14, 2021 6:30 PM, David Hildenbrand wrote:
> > 
> > On 14.07.21 12:27, Michael S. Tsirkin wrote:
> > > On Wed, Jul 14, 2021 at 03:51:04AM -0400, Wei Wang wrote:
> > >> When skipping free pages, their corresponding dirty bits in the
> > >> memory region dirty bitmap need to be cleared. Otherwise the skipped
> > >> pages will be sent in the next round after the migration thread syncs
> > >> dirty bits from the memory region dirty bitmap.
> > >>
> > >> migration_clear_memory_region_dirty_bitmap_range is put outside the
> > >> bitmap_mutex, becasue
> > >
> > > because?
> > >
> > >> memory_region_clear_dirty_bitmap is possible to block on the kvm slot
> > >> mutex (don't want holding bitmap_mutex while blocked on another
> > >> mutex), and clear_bmap_test_and_clear uses atomic operation.
> > 
> > How is that different from our existing caller?
> > 
> > Please either clean everything up, completely avoiding the lock (separate
> > patch), or move it under the lock.
> > 
> > Or am I missing something important?
> 
> That seems ok to me and Peter to have it outside the lock. Not sure if Dave or Juan knows the reason why clear_bmap needs to be under the mutex given that it is atomic operation.

Yes, not having the lock looks OK to me, but I still think it's easier to put
all bitmap ops under the bitmap_mutex, so we handle clear_bmap/bmap the same
way.  It's also what we did in the existing code (although by accident).

Then we can replace the clear_bmap atomic ops with normal memory accesses in a
follow-up patch.  But it won't affect a whole lot - unlike the normal bmap,
clear_bmap is normally per-1g-chunk, so modifying clear_bmap happens much less
frequently.

Atomic ops will of course be needed if we want a spinlock version of
bitmap_mutex; however, I still don't know whether that'll really help anything.
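
For concreteness, the hint path with everything under the lock would look
roughly like the sketch below (untested, just to show the ordering; it assumes
the existing accounting and bitmap_clear() call in qemu_guest_free_page_hint()
stay where they are):

    qemu_mutex_lock(&ram_state->bitmap_mutex);

    /* Clear the clear_bmap chunks covering [start, start + npages). */
    migration_clear_memory_region_dirty_bitmap_range(ram_state, block,
                                                     start, npages);

    /* Existing accounting and bmap clear, now in the same critical section. */
    ram_state->migration_dirty_pages -=
                  bitmap_count_one_with_offset(block->bmap, start, npages);
    bitmap_clear(block->bmap, start, npages);

    qemu_mutex_unlock(&ram_state->bitmap_mutex);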

Thanks,

-- 
Peter Xu


