* [PATCH v2 0/3] Some modifications about ram_save_host_page()
@ 2021-03-01  8:21 Kunkun Jiang
  2021-03-01  8:21 ` [PATCH v2 1/3] migration/ram: Modify the code comment of ram_save_host_page() Kunkun Jiang
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Kunkun Jiang @ 2021-03-01  8:21 UTC (permalink / raw)
  To: David Edmondson, Juan Quintela, Dr . David Alan Gilbert,
	open list:All patches CC here
  Cc: Zenghui Yu, wanghaibin.wang, Keqian Zhu

Hi,

This series includes the following patches:
Patch 1-2:
- Modify the comment and the code of ram_save_host_page() so that they
  match each other (a simplified sketch of the loop in question follows
  below)

Patch 3:
- Optimize ram_save_host_page() by using migration_bitmap_find_dirty()
  to locate dirty pages
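
For readers who do not have the function in front of them, here is a
heavily simplified, self-contained C sketch of the loop shape that this
series documents and adjusts. It only models the control flow visible in
the diffs below; the names (test_and_clear_dirty, save_target_page, ...)
and the sizes are made up for illustration and none of this is QEMU code.

#include <stdbool.h>
#include <stdio.h>

#define TARGET_PAGES_PER_HOST_PAGE 8   /* stand-in for pagesize_bits */
#define BLOCK_PAGES 32                 /* stand-in for used_length in pages */

static bool dirty[BLOCK_PAGES] = { [3] = true, [5] = true, [12] = true };

/* Stand-in for migration_bitmap_clear_dirty(): test and clear one bit. */
static bool test_and_clear_dirty(unsigned long page)
{
    bool was_dirty = dirty[page];
    dirty[page] = false;
    return was_dirty;
}

/* Stand-in for ram_save_target_page(): "send" one dirty target page. */
static int save_target_page(unsigned long page)
{
    printf("send page %lu\n", page);
    return 1;
}

/*
 * Toy model of ram_save_host_page(): send dirty target pages from 'page'
 * up to the end of the current host page or the end of the block,
 * whichever comes first.  The real code may also rate-limit inside the
 * loop while it is in the middle of a huge page.
 */
static int save_host_page(unsigned long page)
{
    int pages = 0;

    do {
        if (test_and_clear_dirty(page)) {
            pages += save_target_page(page);
        }
        page++;
    } while ((page % TARGET_PAGES_PER_HOST_PAGE) && page < BLOCK_PAGES);

    return pages;
}

int main(void)
{
    /* Starting in the middle of a host page is valid. */
    printf("sent %d pages\n", save_host_page(2));
    return 0;
}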

History:

v1 -> v2:
- Modify ram_save_host_page() comment [David Edmondson]
- Remove 'goto' [David Edmondson]

Kunkun Jiang (3):
  migration/ram: Modify the code comment of ram_save_host_page()
  migration/ram: Modify ram_save_host_page() to match the comment
  migration/ram: Optimize ram_save_host_page()

 migration/ram.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

-- 
2.23.0




* [PATCH v2 1/3] migration/ram: Modify the code comment of ram_save_host_page()
  2021-03-01  8:21 [PATCH v2 0/3] Some modifications about ram_save_host_page() Kunkun Jiang
@ 2021-03-01  8:21 ` Kunkun Jiang
  2021-03-03  8:38   ` David Edmondson
  2021-03-01  8:21 ` [PATCH v2 2/3] migration/ram: Modify ram_save_host_page() to match the comment Kunkun Jiang
  2021-03-01  8:21 ` [PATCH v2 3/3] migration/ram: Optimize ram_save_host_page() Kunkun Jiang
  2 siblings, 1 reply; 9+ messages in thread
From: Kunkun Jiang @ 2021-03-01  8:21 UTC (permalink / raw)
  To: David Edmondson, Juan Quintela, Dr . David Alan Gilbert,
	open list:All patches CC here
  Cc: Zenghui Yu, wanghaibin.wang, Keqian Zhu

ram_save_host_page() has been modified several times since it was
introduced, but its comment has not been kept up to date. Update
the comment so that it describes ram_save_host_page() more clearly.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
---
 migration/ram.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 72143da0ac..24967cb970 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1970,15 +1970,13 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
 }
 
 /**
- * ram_save_host_page: save a whole host page
+ * ram_save_host_page: save a whole host page or the rest of a RAMBlock
  *
- * Starting at *offset send pages up to the end of the current host
- * page. It's valid for the initial offset to point into the middle of
- * a host page in which case the remainder of the hostpage is sent.
- * Only dirty target pages are sent. Note that the host page size may
- * be a huge page for this block.
- * The saving stops at the boundary of the used_length of the block
- * if the RAMBlock isn't a multiple of the host page size.
+ * Send dirty pages between pss->page and either the end of that page
+ * or the used_length of the RAMBlock, whichever is smaller.
+ *
+ * Note that if the host page is a huge page, pss->page may be in the
+ * middle of that page.
  *
  * Returns the number of pages written or negative on error
  *
-- 
2.23.0




* [PATCH v2 2/3] migration/ram: Modify ram_save_host_page() to match the comment
  2021-03-01  8:21 [PATCH v2 0/3] Some modifications about ram_save_host_page() Kunkun Jiang
  2021-03-01  8:21 ` [PATCH v2 1/3] migration/ram: Modify the code comment of ram_save_host_page() Kunkun Jiang
@ 2021-03-01  8:21 ` Kunkun Jiang
  2021-03-03  8:37   ` david.edmondson
  2021-03-01  8:21 ` [PATCH v2 3/3] migration/ram: Optimize ram_save_host_page() Kunkun Jiang
  2 siblings, 1 reply; 9+ messages in thread
From: Kunkun Jiang @ 2021-03-01  8:21 UTC (permalink / raw)
  To: David Edmondson, Juan Quintela, Dr . David Alan Gilbert,
	open list:All patches CC here
  Cc: Zenghui Yu, wanghaibin.wang, Keqian Zhu

According to the comment, migration_rate_limit() should be called
when the host page is a huge page. If it is not a huge page, the
call can be skipped to save time.
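
As a concrete illustration of the check added below (the 4 KiB and
2 MiB sizes here are example values, not something the patch assumes):
pagesize_bits is the host page size expressed in target pages, so it is
1 exactly when the host page is an ordinary page and the loop body runs
only once per call, and larger than 1 for huge pages, where rate
limiting in the middle of the page is worthwhile.

#include <stdio.h>

#define TARGET_PAGE_BITS 12   /* 4 KiB target pages, an example value */

/* pagesize_bits as computed in ram_save_host_page():
 * the host page size expressed in target pages. */
static size_t pagesize_bits(size_t host_page_size)
{
    return host_page_size >> TARGET_PAGE_BITS;
}

int main(void)
{
    /* Ordinary 4 KiB host page: one target page per host page, so there
     * is no "middle of a huge page" in which to rate-limit. */
    printf("4K host page: pagesize_bits = %zu\n", pagesize_bits(4 * 1024));

    /* 2 MiB huge page: 512 loop iterations per host page. */
    printf("2M host page: pagesize_bits = %zu\n",
           pagesize_bits(2 * 1024 * 1024));
    return 0;
}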

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
---
 migration/ram.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 24967cb970..3a9115b6dc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2014,7 +2014,9 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
         pages += tmppages;
         pss->page++;
         /* Allow rate limiting to happen in the middle of huge pages */
-        migration_rate_limit();
+        if (pagesize_bits > 1) {
+            migration_rate_limit();
+        }
     } while ((pss->page & (pagesize_bits - 1)) &&
              offset_in_ramblock(pss->block,
                                 ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
-- 
2.23.0




* [PATCH v2 3/3] migration/ram: Optimize ram_save_host_page()
  2021-03-01  8:21 [PATCH v2 0/3] Some modifications about ram_save_host_page() Kunkun Jiang
  2021-03-01  8:21 ` [PATCH v2 1/3] migration/ram: Modify the code comment of ram_save_host_page() Kunkun Jiang
  2021-03-01  8:21 ` [PATCH v2 2/3] migration/ram: Modify ram_save_host_page() to match the comment Kunkun Jiang
@ 2021-03-01  8:21 ` Kunkun Jiang
  2021-03-03  8:56   ` David Edmondson
  2 siblings, 1 reply; 9+ messages in thread
From: Kunkun Jiang @ 2021-03-01  8:21 UTC (permalink / raw)
  To: David Edmondson, Juan Quintela, Dr . David Alan Gilbert,
	open list:All patches CC here
  Cc: Zenghui Yu, wanghaibin.wang, Keqian Zhu

Starting from pss->page, ram_save_host_page() checks every page
and sends the dirty pages up to the end of the current host page
or the used_length boundary of the block. If the host page is a
huge page, checking each target page individually takes a lot of
time.

Use migration_bitmap_find_dirty() to jump straight to the next
dirty page instead, which improves performance.
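
To make the saving concrete, here is a self-contained sketch of the two
scanning strategies. It assumes, as the patch does, that
migration_bitmap_find_dirty() returns the index of the next dirty page
at or after the given position (or the end of the range if there is
none); the find_next_dirty() helper below is a toy stand-in written for
this example, not the QEMU implementation, and the 512-page huge page
is just an example size.

#include <stdbool.h>
#include <stdio.h>

#define PAGES_PER_HUGE_PAGE 512   /* e.g. 2 MiB host page, 4 KiB target pages */

/* Toy dirty bitmap: one flag per target page of a single huge page. */
static bool dirty[PAGES_PER_HUGE_PAGE];

/* Old approach: test every target page individually. */
static int count_checks_linear(void)
{
    int checks = 0;

    for (unsigned long page = 0; page < PAGES_PER_HUGE_PAGE; page++) {
        checks++;
        if (dirty[page]) {
            /* the real code would send the page here */
        }
    }
    return checks;
}

/* Toy stand-in for migration_bitmap_find_dirty(): next dirty page at or
 * after 'start', or PAGES_PER_HUGE_PAGE if there is none. */
static unsigned long find_next_dirty(unsigned long start)
{
    while (start < PAGES_PER_HUGE_PAGE && !dirty[start]) {
        start++;
    }
    return start;
}

/* New approach: jump from one dirty page to the next.  A real bitmap
 * search does this a word at a time, so the jump itself is cheap. */
static int count_sends_skipping(void)
{
    int sends = 0;

    for (unsigned long page = find_next_dirty(0);
         page < PAGES_PER_HUGE_PAGE;
         page = find_next_dirty(page + 1)) {
        sends++;   /* the real code would send the page here */
    }
    return sends;
}

int main(void)
{
    dirty[7] = dirty[100] = true;   /* a mostly clean huge page */

    printf("linear scan : %d per-page checks\n", count_checks_linear());
    printf("skip scan   : %d dirty pages visited\n", count_sends_skipping());
    return 0;
}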

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
---
 migration/ram.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 3a9115b6dc..a1374db356 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1991,6 +1991,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
     int tmppages, pages = 0;
     size_t pagesize_bits =
         qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
+    unsigned long hostpage_boundary =
+        QEMU_ALIGN_UP(pss->page + 1, pagesize_bits);
     unsigned long start_page = pss->page;
     int res;
 
@@ -2002,7 +2004,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
     do {
         /* Check the pages is dirty and if it is send it */
         if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
-            pss->page++;
+            pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
             continue;
         }
 
@@ -2012,16 +2014,16 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
         }
 
         pages += tmppages;
-        pss->page++;
+        pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
         /* Allow rate limiting to happen in the middle of huge pages */
         if (pagesize_bits > 1) {
             migration_rate_limit();
         }
-    } while ((pss->page & (pagesize_bits - 1)) &&
+    } while ((pss->page < hostpage_boundary) &&
              offset_in_ramblock(pss->block,
                                 ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
-    /* The offset we leave with is the last one we looked at */
-    pss->page--;
+    /* The offset we leave with is the min boundary of host page and block */
+    pss->page = MIN(pss->page, hostpage_boundary) - 1;
 
     res = ram_save_release_protection(rs, pss, start_page);
     return (res < 0 ? res : pages);
-- 
2.23.0




* Re: [PATCH v2 2/3] migration/ram: Modify ram_save_host_page() to match the comment
  2021-03-01  8:21 ` [PATCH v2 2/3] migration/ram: Modify ram_save_host_page() to match the comment Kunkun Jiang
@ 2021-03-03  8:37   ` david.edmondson
  0 siblings, 0 replies; 9+ messages in thread
From: david.edmondson @ 2021-03-03  8:37 UTC (permalink / raw)
  To: Kunkun Jiang, Juan Quintela, Dr . David Alan Gilbert,
	open list:All patches CC here
  Cc: Zenghui Yu, wanghaibin.wang, Keqian Zhu

On Monday, 2021-03-01 at 16:21:31 +08, Kunkun Jiang wrote:

> According to the comment, migration_rate_limit() should be called
> when the host page is a huge page. If it is not a huge page, the
> call can be skipped to save time.
>
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>

Reviewed-by: David Edmondson <david.edmondson@oracle.com>

> ---
>  migration/ram.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 24967cb970..3a9115b6dc 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2014,7 +2014,9 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>          pages += tmppages;
>          pss->page++;
>          /* Allow rate limiting to happen in the middle of huge pages */
> -        migration_rate_limit();
> +        if (pagesize_bits > 1) {
> +            migration_rate_limit();
> +        }
>      } while ((pss->page & (pagesize_bits - 1)) &&
>               offset_in_ramblock(pss->block,
>                                  ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
> -- 
> 2.23.0

dme.
-- 
Please don't stand so close to me.



* Re: [PATCH v2 1/3] migration/ram: Modify the code comment of ram_save_host_page()
  2021-03-01  8:21 ` [PATCH v2 1/3] migration/ram: Modify the code comment of ram_save_host_page() Kunkun Jiang
@ 2021-03-03  8:38   ` David Edmondson
  0 siblings, 0 replies; 9+ messages in thread
From: David Edmondson @ 2021-03-03  8:38 UTC (permalink / raw)
  To: Kunkun Jiang, Juan Quintela, Dr . David Alan Gilbert,
	open list:All patches CC here
  Cc: Zenghui Yu, wanghaibin.wang, Keqian Zhu

On Monday, 2021-03-01 at 16:21:30 +08, Kunkun Jiang wrote:

> ram_save_host_page() has been modified several times since it was
> introduced, but its comment has not been kept up to date. Update
> the comment so that it describes ram_save_host_page() more clearly.

I don't think that it's reasonable for me to send Reviewed-by for this,
given that I suggested the text.

Could someone else check the sense and correctness?

> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
> ---
>  migration/ram.c | 14 ++++++--------
>  1 file changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 72143da0ac..24967cb970 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1970,15 +1970,13 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>  }
>  
>  /**
> - * ram_save_host_page: save a whole host page
> + * ram_save_host_page: save a whole host page or the rest of a RAMBlock
>   *
> - * Starting at *offset send pages up to the end of the current host
> - * page. It's valid for the initial offset to point into the middle of
> - * a host page in which case the remainder of the hostpage is sent.
> - * Only dirty target pages are sent. Note that the host page size may
> - * be a huge page for this block.
> - * The saving stops at the boundary of the used_length of the block
> - * if the RAMBlock isn't a multiple of the host page size.
> + * Send dirty pages between pss->page and either the end of that page
> + * or the used_length of the RAMBlock, whichever is smaller.
> + *
> + * Note that if the host page is a huge page, pss->page may be in the
> + * middle of that page.
>   *
>   * Returns the number of pages written or negative on error
>   *
> -- 
> 2.23.0

dme.
-- 
Leaves are falling all around, it's time I was on my way.



* Re: [PATCH v2 3/3] migration/ram: Optimize ram_save_host_page()
  2021-03-01  8:21 ` [PATCH v2 3/3] migration/ram: Optimize ram_save_host_page() Kunkun Jiang
@ 2021-03-03  8:56   ` David Edmondson
  2021-03-03 11:47     ` Kunkun Jiang
  0 siblings, 1 reply; 9+ messages in thread
From: David Edmondson @ 2021-03-03  8:56 UTC (permalink / raw)
  To: Kunkun Jiang, Juan Quintela, Dr . David Alan Gilbert,
	open list:All patches CC here
  Cc: Zenghui Yu, wanghaibin.wang, Keqian Zhu

On Monday, 2021-03-01 at 16:21:32 +08, Kunkun Jiang wrote:

> Starting from pss->page, ram_save_host_page() checks every page
> and sends the dirty pages up to the end of the current host page
> or the used_length boundary of the block. If the host page is a
> huge page, checking each target page individually takes a lot of
> time.
>
> Use migration_bitmap_find_dirty() to jump straight to the next
> dirty page instead, which improves performance.

This is cleaner, thank you.

I was hoping to just invert the body of the loop - something like
(completely untested):

do {
  int pages_this_iteration = 0;

  /* Check if the page is dirty and, if so, send it. */
  if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
    pages_this_iteration = ram_save_target_page(rs, pss, last_stage);
    if (pages_this_iteration < 0) {
      return pages_this_iteration;
    }

    pages += pages_this_iteration;

    /*
     * Allow rate limiting to happen in the middle of huge pages if
     * the current iteration sent something.
     */
    if (pagesize_bits > 1 && pages_this_iteration > 0) {
      migration_rate_limit();
    }
  }
  pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
 } while ((pss->page < hostpage_boundary) &&
          offset_in_ramblock(pss->block,
                             ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
/* The offset we leave with is the min boundary of host page and block */
pss->page = MIN(pss->page, hostpage_boundary) - 1;

> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
> ---
>  migration/ram.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 3a9115b6dc..a1374db356 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1991,6 +1991,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>      int tmppages, pages = 0;
>      size_t pagesize_bits =
>          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> +    unsigned long hostpage_boundary =
> +        QEMU_ALIGN_UP(pss->page + 1, pagesize_bits);
>      unsigned long start_page = pss->page;
>      int res;
>  
> @@ -2002,7 +2004,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>      do {
>          /* Check the pages is dirty and if it is send it */
>          if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
> -            pss->page++;
> +            pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>              continue;
>          }
>  
> @@ -2012,16 +2014,16 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>          }
>  
>          pages += tmppages;
> -        pss->page++;
> +        pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>          /* Allow rate limiting to happen in the middle of huge pages */
>          if (pagesize_bits > 1) {
>              migration_rate_limit();
>          }
> -    } while ((pss->page & (pagesize_bits - 1)) &&
> +    } while ((pss->page < hostpage_boundary) &&
>               offset_in_ramblock(pss->block,
>                                  ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
> -    /* The offset we leave with is the last one we looked at */
> -    pss->page--;
> +    /* The offset we leave with is the min boundary of host page and block */
> +    pss->page = MIN(pss->page, hostpage_boundary) - 1;
>  
>      res = ram_save_release_protection(rs, pss, start_page);
>      return (res < 0 ? res : pages);
> -- 
> 2.23.0

dme.
-- 
Don't you know you're never going to get to France.



* Re: [PATCH v2 3/3] migration/ram: Optimize ram_save_host_page()
  2021-03-03  8:56   ` David Edmondson
@ 2021-03-03 11:47     ` Kunkun Jiang
  2021-03-03 14:55       ` David Edmondson
  0 siblings, 1 reply; 9+ messages in thread
From: Kunkun Jiang @ 2021-03-03 11:47 UTC (permalink / raw)
  To: David Edmondson, Juan Quintela, Dr . David Alan Gilbert,
	open list:All patches CC here
  Cc: Zenghui Yu, wanghaibin.wang, Keqian Zhu

On 2021/3/3 16:56, David Edmondson wrote:
> On Monday, 2021-03-01 at 16:21:32 +08, Kunkun Jiang wrote:
>
>> Starting from pss->page, ram_save_host_page() checks every page
>> and sends the dirty pages up to the end of the current host page
>> or the used_length boundary of the block. If the host page is a
>> huge page, checking each target page individually takes a lot of
>> time.
>>
>> Use migration_bitmap_find_dirty() to jump straight to the next
>> dirty page instead, which improves performance.
> This is cleaner, thank you.
>
> I was hoping to just invert the body of the loop - something like
> (completely untested):
Sorry for my misunderstanding.
I will improve it in the next version.
> do {
>    int pages_this_iteration = 0;
>
>    /* Check if the page is dirty and, if so, send it. */
>    if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
>      pages_this_iteration = ram_save_target_page(rs, pss, last_stage);
>      if (pages_this_iteration < 0) {
>        return pages_this_iteration;
>      }
>
>      pages += pages_this_iteration;
>
>      /*
>       * Allow rate limiting to happen in the middle of huge pages if
>       * the current iteration sent something.
>       */
>      if (pagesize_bits > 1 && pages_this_iteration > 0) {
>        migration_rate_limit();
>      }
I missed the case where pages_this_iteration is 0. 😅
>    }
>    pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>   } while ((pss->page < hostpage_boundary) &&
>            offset_in_ramblock(pss->block,
>                               ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
> /* The offset we leave with is the min boundary of host page and block */
> pss->page = MIN(pss->page, hostpage_boundary) - 1;

Best Regards.

Kunkun Jiang

>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
>> ---
>>   migration/ram.c | 12 +++++++-----
>>   1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 3a9115b6dc..a1374db356 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -1991,6 +1991,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>>       int tmppages, pages = 0;
>>       size_t pagesize_bits =
>>           qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
>> +    unsigned long hostpage_boundary =
>> +        QEMU_ALIGN_UP(pss->page + 1, pagesize_bits);
>>       unsigned long start_page = pss->page;
>>       int res;
>>   
>> @@ -2002,7 +2004,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>>       do {
>>           /* Check the pages is dirty and if it is send it */
>>           if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
>> -            pss->page++;
>> +            pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>>               continue;
>>           }
>>   
>> @@ -2012,16 +2014,16 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>>           }
>>   
>>           pages += tmppages;
>> -        pss->page++;
>> +        pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>>           /* Allow rate limiting to happen in the middle of huge pages */
>>           if (pagesize_bits > 1) {
>>               migration_rate_limit();
>>           }
>> -    } while ((pss->page & (pagesize_bits - 1)) &&
>> +    } while ((pss->page < hostpage_boundary) &&
>>                offset_in_ramblock(pss->block,
>>                                   ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
>> -    /* The offset we leave with is the last one we looked at */
>> -    pss->page--;
>> +    /* The offset we leave with is the min boundary of host page and block */
>> +    pss->page = MIN(pss->page, hostpage_boundary) - 1;
>>   
>>       res = ram_save_release_protection(rs, pss, start_page);
>>       return (res < 0 ? res : pages);
>> -- 
>> 2.23.0
> dme.





* Re: [PATCH v2 3/3] migration/ram: Optimize ram_save_host_page()
  2021-03-03 11:47     ` Kunkun Jiang
@ 2021-03-03 14:55       ` David Edmondson
  0 siblings, 0 replies; 9+ messages in thread
From: David Edmondson @ 2021-03-03 14:55 UTC (permalink / raw)
  To: Kunkun Jiang, Juan Quintela, Dr . David Alan Gilbert,
	open list:All patches CC here
  Cc: Zenghui Yu, wanghaibin.wang, Keqian Zhu

On Wednesday, 2021-03-03 at 19:47:20 +08, Kunkun Jiang wrote:

> On 2021/3/3 16:56, David Edmondson wrote:
>> On Monday, 2021-03-01 at 16:21:32 +08, Kunkun Jiang wrote:
>>
>>> Starting from pss->page, ram_save_host_page() checks every page
>>> and sends the dirty pages up to the end of the current host page
>>> or the used_length boundary of the block. If the host page is a
>>> huge page, checking each target page individually takes a lot of
>>> time.
>>>
>>> Use migration_bitmap_find_dirty() to jump straight to the next
>>> dirty page instead, which improves performance.
>> This is cleaner, thank you.
>>
>> I was hoping to just invert the body of the loop - something like
>> (completely untested):
> Sorry for my misunderstanding.

No, I explained myself poorly.

> I will improve it in the next version.
>> do {
>>    int pages_this_iteration = 0;
>>
>>    /* Check if the page is dirty and, if so, send it. */
>>    if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
>>      pages_this_iteration = ram_save_target_page(rs, pss, last_stage);
>>      if (pages_this_iteration < 0) {
>>        return pages_this_iteration;
>>      }
>>
>>      pages += pages_this_iteration;
>>
>>      /*
>>       * Allow rate limiting to happen in the middle of huge pages if
>>       * the current iteration sent something.
>>       */
>>      if (pagesize_bits > 1 && pages_this_iteration > 0) {
>>        migration_rate_limit();
>>      }
> I missed the case that the value of pages_this_iteration is 0. 😅

I don't think that your version was wrong, because it returned early
from the loop if there were no candidate pages.

>>    }
>>    pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>>   } while ((pss->page < hostpage_boundary) &&
>>            offset_in_ramblock(pss->block,
>>                               ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
>> /* The offset we leave with is the min boundary of host page and block */
>> pss->page = MIN(pss->page, hostpage_boundary) - 1;
>
> Best Regards.
>
> Kunkun Jiang
>
>>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>>> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
>>> ---
>>>   migration/ram.c | 12 +++++++-----
>>>   1 file changed, 7 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/migration/ram.c b/migration/ram.c
>>> index 3a9115b6dc..a1374db356 100644
>>> --- a/migration/ram.c
>>> +++ b/migration/ram.c
>>> @@ -1991,6 +1991,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>>>       int tmppages, pages = 0;
>>>       size_t pagesize_bits =
>>>           qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
>>> +    unsigned long hostpage_boundary =
>>> +        QEMU_ALIGN_UP(pss->page + 1, pagesize_bits);
>>>       unsigned long start_page = pss->page;
>>>       int res;
>>>   
>>> @@ -2002,7 +2004,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>>>       do {
>>>           /* Check the pages is dirty and if it is send it */
>>>           if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
>>> -            pss->page++;
>>> +            pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>>>               continue;
>>>           }
>>>   
>>> @@ -2012,16 +2014,16 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>>>           }
>>>   
>>>           pages += tmppages;
>>> -        pss->page++;
>>> +        pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>>>           /* Allow rate limiting to happen in the middle of huge pages */
>>>           if (pagesize_bits > 1) {
>>>               migration_rate_limit();
>>>           }
>>> -    } while ((pss->page & (pagesize_bits - 1)) &&
>>> +    } while ((pss->page < hostpage_boundary) &&
>>>                offset_in_ramblock(pss->block,
>>>                                   ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
>>> -    /* The offset we leave with is the last one we looked at */
>>> -    pss->page--;
>>> +    /* The offset we leave with is the min boundary of host page and block */
>>> +    pss->page = MIN(pss->page, hostpage_boundary) - 1;
>>>   
>>>       res = ram_save_release_protection(rs, pss, start_page);
>>>       return (res < 0 ? res : pages);
>>> -- 
>>> 2.23.0
>> dme.

dme.
-- 
Too much information, running through my brain.


