* [PATCH v3] migration: Count new_dirty instead of real_dirty
From: Keqian Zhu @ 2020-06-22  3:20 UTC
  To: qemu-devel, qemu-arm
  Cc: zhang.zhanghailiang, Juan Quintela, wanghaibin.wang, Chao Fan,
	Dr . David Alan Gilbert, jianjay.zhou, Paolo Bonzini, Keqian Zhu

real_dirty_pages becomes equal to the total RAM size after the dirty log
sync in ram_init_bitmaps. The reason is that the bitmap of the ramblock
is initialized to all set, so the old path counts every page as "real
dirty" at the beginning.

This causes a wrong dirty rate and false-positive throttling.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
Changelog:

v3:
 - Address Dave's comments.

v2:
 - Use new_dirty_pages instead of accu_dirty_pages.
 - Adjust commit messages.
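
For context on the "wrong dirty rate" above: auto-converge throttling is
driven by num_dirty_pages_period. A simplified sketch of the check done
at each bitmap sync (not the exact code; counters and extra conditions
are elided):

    /* Bytes dirtied in this period vs. bytes actually transferred. */
    uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
    uint64_t bytes_xfer_period = bytes_xfer_now - rs->bytes_xfer_prev;

    /*
     * If the guest dirties memory faster than we send it, slow the
     * guest down.  With the old path, the first sync counted the
     * all-set bitmap, so bytes_dirty_period covered all of RAM and
     * this could fire even though the guest had written nothing.
     */
    if (bytes_dirty_period > bytes_xfer_period / 2) {
        mig_throttle_guest_down();
    }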
---
 include/exec/ram_addr.h | 5 +----
 migration/ram.c         | 8 +++++---
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 7b5c24e928..3ef729a23c 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -442,8 +442,7 @@ static inline void cpu_physical_memory_clear_dirty_range(ram_addr_t start,
 static inline
 uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                                                ram_addr_t start,
-                                               ram_addr_t length,
-                                               uint64_t *real_dirty_pages)
+                                               ram_addr_t length)
 {
     ram_addr_t addr;
     unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
@@ -469,7 +468,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
             if (src[idx][offset]) {
                 unsigned long bits = atomic_xchg(&src[idx][offset], 0);
                 unsigned long new_dirty;
-                *real_dirty_pages += ctpopl(bits);
                 new_dirty = ~dest[k];
                 dest[k] |= bits;
                 new_dirty &= bits;
@@ -502,7 +500,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                         start + addr + offset,
                         TARGET_PAGE_SIZE,
                         DIRTY_MEMORY_MIGRATION)) {
-                *real_dirty_pages += 1;
                 long k = (start + addr) >> TARGET_PAGE_BITS;
                 if (!test_and_set_bit(k, dest)) {
                     num_dirty++;
diff --git a/migration/ram.c b/migration/ram.c
index 069b6e30bc..5554a7d2d8 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -859,9 +859,11 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
 /* Called with RCU critical section */
 static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb)
 {
-    rs->migration_dirty_pages +=
-        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length,
-                                              &rs->num_dirty_pages_period);
+    uint64_t new_dirty_pages =
+        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length);
+
+    rs->migration_dirty_pages += new_dirty_pages;
+    rs->num_dirty_pages_period += new_dirty_pages;
 }
 
 /**
-- 
2.19.1




* Re: [PATCH v3] migration: Count new_dirty instead of real_dirty
From: Dr. David Alan Gilbert @ 2020-07-03 14:20 UTC
  To: Keqian Zhu
  Cc: zhang.zhanghailiang, Juan Quintela, wanghaibin.wang, Chao Fan,
	qemu-devel, qemu-arm, jianjay.zhou, Paolo Bonzini

* Keqian Zhu (zhukeqian1@huawei.com) wrote:
> real_dirty_pages becomes equal to the total RAM size after the dirty log
> sync in ram_init_bitmaps. The reason is that the bitmap of the ramblock
> is initialized to all set, so the old path counts every page as "real
> dirty" at the beginning.
> 
> This causes a wrong dirty rate and false-positive throttling.
> 
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>

OK, 

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

and queued.

you might still want to look at migration_trigger_throttle and see if
you can stop the throttling while in the RAM bulk stage.
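
Something like this, perhaps (just an untested sketch; it assumes the
bulk-stage flag rs->ram_bulk_stage is still in scope there):

    static void migration_trigger_throttle(RAMState *rs)
    {
        /*
         * In the bulk stage every page still has to be sent at least
         * once, so the dirty rate measured here says nothing about how
         * fast the guest is really dirtying memory; don't throttle yet.
         */
        if (rs->ram_bulk_stage) {
            return;
        }

        /* ... existing dirty-rate checks / mig_throttle_guest_down() ... */
    }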

> ---
> Changelog:
> 
> v3:
>  - Address Dave's comments.
> 
> v2:
>  - Use new_dirty_pages instead of accu_dirty_pages.
>  - Adjust commit messages.
> ---
>  include/exec/ram_addr.h | 5 +----
>  migration/ram.c         | 8 +++++---
>  2 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 7b5c24e928..3ef729a23c 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -442,8 +442,7 @@ static inline void cpu_physical_memory_clear_dirty_range(ram_addr_t start,
>  static inline
>  uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>                                                 ram_addr_t start,
> -                                               ram_addr_t length,
> -                                               uint64_t *real_dirty_pages)
> +                                               ram_addr_t length)
>  {
>      ram_addr_t addr;
>      unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
> @@ -469,7 +468,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>              if (src[idx][offset]) {
>                  unsigned long bits = atomic_xchg(&src[idx][offset], 0);
>                  unsigned long new_dirty;
> -                *real_dirty_pages += ctpopl(bits);
>                  new_dirty = ~dest[k];
>                  dest[k] |= bits;
>                  new_dirty &= bits;
> @@ -502,7 +500,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>                          start + addr + offset,
>                          TARGET_PAGE_SIZE,
>                          DIRTY_MEMORY_MIGRATION)) {
> -                *real_dirty_pages += 1;
>                  long k = (start + addr) >> TARGET_PAGE_BITS;
>                  if (!test_and_set_bit(k, dest)) {
>                      num_dirty++;
> diff --git a/migration/ram.c b/migration/ram.c
> index 069b6e30bc..5554a7d2d8 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -859,9 +859,11 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>  /* Called with RCU critical section */
>  static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb)
>  {
> -    rs->migration_dirty_pages +=
> -        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length,
> -                                              &rs->num_dirty_pages_period);
> +    uint64_t new_dirty_pages =
> +        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length);
> +
> +    rs->migration_dirty_pages += new_dirty_pages;
> +    rs->num_dirty_pages_period += new_dirty_pages;
>  }
>  
>  /**
> -- 
> 2.19.1
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH v3] migration: Count new_dirty instead of real_dirty
From: zhukeqian @ 2020-07-06 11:53 UTC
  To: Dr. David Alan Gilbert
  Cc: zhang.zhanghailiang, Juan Quintela, wanghaibin.wang, Chao Fan,
	qemu-devel, qemu-arm, jianjay.zhou, Paolo Bonzini

Hi Dave,

On 2020/7/3 22:20, Dr. David Alan Gilbert wrote:
> * Keqian Zhu (zhukeqian1@huawei.com) wrote:
>> real_dirty_pages becomes equal to the total RAM size after the dirty log
>> sync in ram_init_bitmaps. The reason is that the bitmap of the ramblock
>> is initialized to all set, so the old path counts every page as "real
>> dirty" at the beginning.
>>
>> This causes a wrong dirty rate and false-positive throttling.
>>
>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> 
> OK, 
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> 
> and queued.
> 
> you might still want to look at migration_trigger_throttle and see if
> you can stop the throttling while in the RAM bulk stage.
Yes, I tested it and it worked well.

Thanks,
Keqian


