* [Qemu-devel] [PATCH v2] migration/ram: leave RAMBlock->bmap blank on allocating
@ 2019-06-04  0:28 Wei Yang
  2019-06-04  3:51 ` Peter Xu
  0 siblings, 1 reply; 3+ messages in thread
From: Wei Yang @ 2019-06-04  0:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Wei Yang, dgilbert, peterx, quintela

During migration, we sync the bitmap from ram_list.dirty_memory to
RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().

Since we set both RAMBlock.bmap and ram_list.dirty_memory to all 1s, the
first-round sync is meaningless, duplicated work.

Leaving RAMBlock->bmap blank on allocating would have a side effect on
migration_dirty_pages, since it is calculated from the result of
cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
set migration_dirty_pages to 0 in ram_state_init().

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

---
v2: add a comment explaining why RAMBlock.bmap is left clear
---
 migration/ram.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 4c60869226..e9a27ea5e6 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3181,12 +3181,7 @@ static int ram_state_init(RAMState **rsp)
     qemu_mutex_init(&(*rsp)->src_page_req_mutex);
     QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
 
-    /*
-     * Count the total number of pages used by ram blocks not including any
-     * gaps due to alignment or unplugs.
-     */
-    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
-
+    (*rsp)->migration_dirty_pages = 0;
     ram_state_reset(*rsp);
 
     return 0;
@@ -3201,8 +3196,15 @@ static void ram_list_init_bitmaps(void)
     if (ram_bytes_total()) {
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             pages = block->max_length >> TARGET_PAGE_BITS;
+            /*
+             * We leave RAMBlock.bmap clear here and they will be sync from
+             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in
+             * migration_bitmap_sync_precopy().
+             *
+             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to all 1
+             * in ram_block_add().
+             */
             block->bmap = bitmap_new(pages);
-            bitmap_set(block->bmap, 0, pages);
             if (migrate_postcopy_ram()) {
                 block->unsentmap = bitmap_new(pages);
                 bitmap_set(block->unsentmap, 0, pages);
-- 
2.19.1




* Re: [Qemu-devel] [PATCH v2] migration/ram: leave RAMBlock->bmap blank on allocating
  2019-06-04  0:28 [Qemu-devel] [PATCH v2] migration/ram: leave RAMBlock->bmap blank on allocating Wei Yang
@ 2019-06-04  3:51 ` Peter Xu
  2019-06-04  5:21   ` Wei Yang
  0 siblings, 1 reply; 3+ messages in thread
From: Peter Xu @ 2019-06-04  3:51 UTC (permalink / raw)
  To: Wei Yang; +Cc: qemu-devel, dgilbert, quintela

On Tue, Jun 04, 2019 at 08:28:10AM +0800, Wei Yang wrote:
> During migration, we sync the bitmap from ram_list.dirty_memory to
> RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
> 
> Since we set both RAMBlock.bmap and ram_list.dirty_memory to all 1s, the
> first-round sync is meaningless, duplicated work.
> 
> Leaving RAMBlock->bmap blank on allocating would have a side effect on
> migration_dirty_pages, since it is calculated from the result of
> cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> set migration_dirty_pages to 0 in ram_state_init().
> 
> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> 
> ---
> v2: add a comment explaining why RAMBlock.bmap is left clear
> ---
>  migration/ram.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 4c60869226..e9a27ea5e6 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3181,12 +3181,7 @@ static int ram_state_init(RAMState **rsp)
>      qemu_mutex_init(&(*rsp)->src_page_req_mutex);
>      QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
>  
> -    /*
> -     * Count the total number of pages used by ram blocks not including any
> -     * gaps due to alignment or unplugs.
> -     */
> -    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
> -

Since you've spent time discovering this relationship, we can also
comment on this too to hint future readers:

       /*
        * This must match with the initial values of dirty bitmap.
        * Currently we initialize the dirty bitmap to all zeros so
        * here the total dirty page count is zero.
        */

> +    (*rsp)->migration_dirty_pages = 0;
>      ram_state_reset(*rsp);
>  
>      return 0;
> @@ -3201,8 +3196,15 @@ static void ram_list_init_bitmaps(void)
>      if (ram_bytes_total()) {
>          RAMBLOCK_FOREACH_NOT_IGNORED(block) {
>              pages = block->max_length >> TARGET_PAGE_BITS;
> +            /*
> +             * We leave RAMBlock.bmap clear here and they will be sync from
> +             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in
> +             * migration_bitmap_sync_precopy().

(This sentence is a bit confusing to me)

> +             *
> +             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to all 1
> +             * in ram_block_add().

How about:

    The initial dirty bitmap for migration must be set with all ones
    to make sure we'll migrate every guest RAM page to destination.
    Here we didn't set RAMBlock.bmap simply because it is already set
    in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in ram_block_add,
    and that's where we'll sync the dirty bitmaps.  Here setting
    RAMBlock.bmap would be fine too but not necessary.

?

> +             */
>              block->bmap = bitmap_new(pages);
> -            bitmap_set(block->bmap, 0, pages);
>              if (migrate_postcopy_ram()) {
>                  block->unsentmap = bitmap_new(pages);
>                  bitmap_set(block->unsentmap, 0, pages);
> -- 
> 2.19.1
> 

Regards,

-- 
Peter Xu



* Re: [Qemu-devel] [PATCH v2] migration/ram: leave RAMBlock->bmap blank on allocating
  2019-06-04  3:51 ` Peter Xu
@ 2019-06-04  5:21   ` Wei Yang
  0 siblings, 0 replies; 3+ messages in thread
From: Wei Yang @ 2019-06-04  5:21 UTC (permalink / raw)
  To: Peter Xu; +Cc: quintela, Wei Yang, dgilbert, qemu-devel

On Tue, Jun 04, 2019 at 11:51:24AM +0800, Peter Xu wrote:
>On Tue, Jun 04, 2019 at 08:28:10AM +0800, Wei Yang wrote:
>> During migration, we sync the bitmap from ram_list.dirty_memory to
>> RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
>> 
>> Since we set both RAMBlock.bmap and ram_list.dirty_memory to all 1s, the
>> first-round sync is meaningless, duplicated work.
>> 
>> Leaving RAMBlock->bmap blank on allocating would have a side effect on
>> migration_dirty_pages, since it is calculated from the result of
>> cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
>> set migration_dirty_pages to 0 in ram_state_init().
>> 
>> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> 
>> ---
>> v2: add a comment explaining why RAMBlock.bmap is left clear
>> ---
>>  migration/ram.c | 16 +++++++++-------
>>  1 file changed, 9 insertions(+), 7 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 4c60869226..e9a27ea5e6 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -3181,12 +3181,7 @@ static int ram_state_init(RAMState **rsp)
>>      qemu_mutex_init(&(*rsp)->src_page_req_mutex);
>>      QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
>>  
>> -    /*
>> -     * Count the total number of pages used by ram blocks not including any
>> -     * gaps due to alignment or unplugs.
>> -     */
>> -    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>> -
>
>Since you've spent time discovering this relationship, we can also
>comment on this too to hint future readers:
>
>       /*
>        * This must match with the initial values of dirty bitmap.
>        * Currently we initialize the dirty bitmap to all zeros so
>        * here the total dirty page count is zero.
>        */
>

You are right.

>> +    (*rsp)->migration_dirty_pages = 0;
>>      ram_state_reset(*rsp);
>>  
>>      return 0;
>> @@ -3201,8 +3196,15 @@ static void ram_list_init_bitmaps(void)
>>      if (ram_bytes_total()) {
>>          RAMBLOCK_FOREACH_NOT_IGNORED(block) {
>>              pages = block->max_length >> TARGET_PAGE_BITS;
>> +            /*
>> +             * We leave RAMBlock.bmap clear here and they will be sync from
>> +             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in
>> +             * migration_bitmap_sync_precopy().
>
>(This sentence is a bit confusing to me)
>
>> +             *
>> +             * ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] is set to all 1
>> +             * in ram_block_add().
>
>How about:
>
>    The initial dirty bitmap for migration must be set with all ones
>    to make sure we'll migrate every guest RAM page to destination.
>    Here we didn't set RAMBlock.bmap simply because it is already set
>    in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in ram_block_add,
>    and that's where we'll sync the dirty bitmaps.  Here setting
>    RAMBlock.bmap would be fine too but not necessary.
>
>?
>

Sounds better :-)

>> +             */
>>              block->bmap = bitmap_new(pages);
>> -            bitmap_set(block->bmap, 0, pages);
>>              if (migrate_postcopy_ram()) {
>>                  block->unsentmap = bitmap_new(pages);
>>                  bitmap_set(block->unsentmap, 0, pages);
>> -- 
>> 2.19.1
>> 
>
>Regards,
>
>-- 
>Peter Xu

-- 
Wei Yang
Help you, Help me



end of thread, other threads:[~2019-06-04  5:32 UTC | newest]

Thread overview: 3+ messages
2019-06-04  0:28 [Qemu-devel] [PATCH v2] migration/ram: leave RAMBlock->bmap blank on allocating Wei Yang
2019-06-04  3:51 ` Peter Xu
2019-06-04  5:21   ` Wei Yang
