From: "Rao, Lei" <lei.rao@intel.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: "lukasstraub2@web.de" <lukasstraub2@web.de>,
	"lizhijian@cn.fujitsu.com" <lizhijian@cn.fujitsu.com>,
	"quintela@redhat.com" <quintela@redhat.com>,
	"jasowang@redhat.com" <jasowang@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"Zhang, Chen" <chen.zhang@intel.com>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>
Subject: RE: [PATCH v4 09/10] Add the function of colo_bitmap_clear_dirty.
Date: Fri, 26 Mar 2021 01:45:34 +0000	[thread overview]
Message-ID: <SN6PR11MB31038ED3C97A8B7B681543ADFD619@SN6PR11MB3103.namprd11.prod.outlook.com> (raw)
In-Reply-To: <YFzRbpcUyLOOYlj8@work-vm>


-----Original Message-----
From: Dr. David Alan Gilbert <dgilbert@redhat.com> 
Sent: Friday, March 26, 2021 2:08 AM
To: Rao, Lei <lei.rao@intel.com>
Cc: Zhang, Chen <chen.zhang@intel.com>; lizhijian@cn.fujitsu.com; jasowang@redhat.com; quintela@redhat.com; pbonzini@redhat.com; lukasstraub2@web.de; qemu-devel@nongnu.org
Subject: Re: [PATCH v4 09/10] Add the function of colo_bitmap_clear_dirty.

* leirao (lei.rao@intel.com) wrote:
> From: "Rao, Lei" <lei.rao@intel.com>
> 
> When we use a continuous dirty-memory copy to flush the ram cache on the
> secondary VM, we can also clear the bitmap for the contiguous run of
> dirty pages in one pass. This also reduces the VM stop time during
> checkpoint.
> 
> Signed-off-by: Lei Rao <lei.rao@intel.com>
> ---
>  migration/ram.c | 29 +++++++++++++++++++++++++----
>  1 file changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index a258466..ae1e659 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -855,6 +855,30 @@ unsigned long colo_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>      return first;
>  }
>  
> +/**
> + * colo_bitmap_clear_dirty: when we flush ram cache to ram, we will use
> + * continuous memory copy, so we can also clean up the bitmap of
> + * contiguous dirty memory.
> + */
> +static inline bool colo_bitmap_clear_dirty(RAMState *rs,
> +                                           RAMBlock *rb,
> +                                           unsigned long start,
> +                                           unsigned long num)
> +{
> +    bool ret = false;
> +    unsigned long i = 0;
> +
> +    qemu_mutex_lock(&rs->bitmap_mutex);

Please use QEMU_LOCK_GUARD(&rs->bitmap_mutex);

Will be changed in V5. Thanks.

> +    for (i = 0; i < num; i++) {
> +        ret = test_and_clear_bit(start + i, rb->bmap);
> +        if (ret) {
> +            rs->migration_dirty_pages--;
> +        }
> +    }
> +    qemu_mutex_unlock(&rs->bitmap_mutex);
> +    return ret;

This implementation is missing the clear_bmap code that migration_bitmap_clear_dirty has.
I think that's necessary now.

Are we sure there's any benefit in this?

Dave

There is such a note about clear_bmap in struct RAMBlock:
"On destination side, this should always be NULL, and the variable `clear_bmap_shift' is meaningless."
This means that clear_bmap is always NULL on the secondary VM, and flushing the ram cache to ram only ever happens on the secondary VM.
So, I think the clear_bmap code is unnecessary for COLO.
As for the benefits: when many dirty pages are flushed from the ram cache to ram, batching the clears reduces the number of lock acquisitions.

Lei

> +}
> +
>  static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>                                                  RAMBlock *rb,
>                                                  unsigned long page) 
> @@ -3700,7 +3724,6 @@ void colo_flush_ram_cache(void)
>      void *src_host;
>      unsigned long offset = 0;
>      unsigned long num = 0;
> -    unsigned long i = 0;
>  
>      memory_global_dirty_log_sync();
>      WITH_RCU_READ_LOCK_GUARD() {
> @@ -3722,9 +3745,7 @@ void colo_flush_ram_cache(void)
>                  num = 0;
>                  block = QLIST_NEXT_RCU(block, next);
>              } else {
> -                for (i = 0; i < num; i++) {
> -                    migration_bitmap_clear_dirty(ram_state, block, offset + i);
> -                }
> +                colo_bitmap_clear_dirty(ram_state, block, offset, num);
>                  dst_host = block->host
>                           + (((ram_addr_t)offset) << TARGET_PAGE_BITS);
>                  src_host = block->colo_cache
> --
> 1.8.3.1
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



Thread overview: 16+ messages
2021-03-25  2:24 [PATCH v4 00/10] Fixed some bugs and optimized some codes for COLO leirao
2021-03-25  2:24 ` [PATCH v4 01/10] Remove some duplicate trace code leirao
2021-03-25  2:24 ` [PATCH v4 02/10] Fix the qemu crash when guest shutdown during checkpoint leirao
2021-03-25  2:24 ` [PATCH v4 03/10] Optimize the function of filter_send leirao
2021-03-25  2:24 ` [PATCH v4 04/10] Remove migrate_set_block_enabled in checkpoint leirao
2021-03-25  2:24 ` [PATCH v4 05/10] Add a function named packet_new_nocopy for COLO leirao
2021-03-25  2:24 ` [PATCH v4 06/10] Add the function of colo_compare_cleanup leirao
2021-03-25  2:24 ` [PATCH v4 07/10] Reset the auto-converge counter at every checkpoint leirao
2021-03-25  2:24 ` [PATCH v4 08/10] Reduce the PVM stop time during Checkpoint leirao
2021-03-29 12:03   ` Dr. David Alan Gilbert
2021-03-25  2:24 ` [PATCH v4 09/10] Add the function of colo_bitmap_clear_dirty leirao
2021-03-25 18:07   ` Dr. David Alan Gilbert
2021-03-26  1:45     ` Rao, Lei [this message]
2021-03-29 11:31       ` Dr. David Alan Gilbert
2021-04-09  3:52         ` Rao, Lei
2021-03-25  2:24 ` [PATCH v4 10/10] Fixed calculation error of pkt->header_size in fill_pkt_tcp_info() leirao
