From: "Wang, Wei W" <wei.w.wang@intel.com>
To: David Hildenbrand <david@redhat.com>, Peter Xu <peterx@redhat.com>
Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>,
	Juan Quintela <quintela@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Leonardo Bras Soares Passos <lsoaresp@redhat.com>
Subject: RE: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Tue, 6 Jul 2021 09:41:09 +0000
Message-ID: <a01758f98b3f46f282f0d6974862680d@intel.com>
In-Reply-To: <a5877d58-d501-0ff6-676b-c98df44d1b6f@redhat.com>

On Monday, July 5, 2021 9:42 PM, David Hildenbrand wrote:
> On 03.07.21 04:53, Wang, Wei W wrote:
> > On Friday, July 2, 2021 3:07 PM, David Hildenbrand wrote:
> >> On 02.07.21 04:48, Wang, Wei W wrote:
> >>> On Thursday, July 1, 2021 10:22 PM, David Hildenbrand wrote:
> >>>> On 01.07.21 14:51, Peter Xu wrote:
> >>
> >> I think that clearly shows the issue.
> >>
> >> My theory I did not verify yet: Assume we have 1GB chunks in the clear bmap.
> >> Assume the VM reports all pages within a 1GB chunk as free (easy with
> >> a fresh VM). While processing hints, we will clear the bits from the
> >> dirty bmap in the RAMBlock. As we will never migrate any page of that
> >> 1GB chunk, we will not actually clear the dirty bitmap of the memory
> >> region. When re-syncing, we will set all bits in the dirty bmap
> >> again from the dirty bitmap in the memory region. Thus, many of our
> >> hints end up being mostly ignored. The smaller the clear bmap chunk,
> >> the more extreme the issue.
> >
> > OK, that looks possible. We need to clear the related bits from the
> > memory region bitmap before skipping the free pages. Could you try
> > with the below patch:
> 
> I did a quick test (with the memhog example) and it seems like it mostly works.
> However, we're now doing the bitmap clearing from another thread, racing
> with the migration thread. Especially:
> 
> 1. Racing with clear_bmap_set() via cpu_physical_memory_sync_dirty_bitmap()
> 2. Racing with migration_bitmap_clear_dirty()
> 
> So that might need some thought, if I'm not wrong.

I think this is similar to the non-clear_bmap case, where rb->bmap is set or cleared
by both the migration thread and qemu_guest_free_page_hint. For example, the migration
thread could find a bit set in rb->bmap before qemu_guest_free_page_hint has cleared it.
The result is that the free page gets transferred, which is unnecessary but causes no
correctness issue. In practice this race is very rare.
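
As a rough illustration of why this race is benign, here is a simplified
standalone model (not the actual QEMU code; the helper below only mimics
the spirit of QEMU's test_and_clear_bit()):

/* Simplified model of the benign race on rb->bmap (illustration only). */
#include <stdatomic.h>
#include <stdbool.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Atomically clear one bit and return its previous value. */
static bool test_and_clear_bit_model(unsigned long nr,
                                     _Atomic unsigned long *bmap)
{
    unsigned long mask = 1UL << (nr % BITS_PER_LONG);
    unsigned long old = atomic_fetch_and(&bmap[nr / BITS_PER_LONG], ~mask);
    return old & mask;
}

/* Migration thread: send a page only if its dirty bit was still set. */
static void migration_thread_visit(unsigned long page,
                                   _Atomic unsigned long *bmap)
{
    if (test_and_clear_bit_model(page, bmap)) {
        /* send_page(page): if the hint path loses the race, a page the
         * guest reported as free is sent anyway -- wasted work, no harm. */
    }
}

/* Free-page-hint path: drop the bit so the page is (usually) skipped. */
static void free_page_hint_model(unsigned long page,
                                 _Atomic unsigned long *bmap)
{
    test_and_clear_bit_model(page, bmap);
}

Whichever side clears the bit first wins; in this model the only bad
outcome is an unnecessary page transfer, never a missed dirty page.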

> 
> The simplest approach would be removing/freeing the clear_bmap via
> PRECOPY_NOTIFY_SETUP(), similar to
> precopy_enable_free_page_optimization() we had before. Of course, this will
> skip the clear_bmap optimization.

Not necessarily, I think. The two optimizations are essentially not mutually exclusive.
At least, they work well together now. If other implementation issues are reported,
we can look into them then.
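
For reference, the idea mentioned above (clearing the related bits from the
memory region's dirty bitmap before skipping free pages) could look roughly
like the sketch below for migration/ram.c. clear_bmap_test_and_clear() and
memory_region_clear_dirty_bitmap() are existing QEMU helpers; the wrapper
name and exact shape here are an assumption, not the actual patch:

/*
 * Sketch only: clear the memory region's dirty bitmap (and thereby the
 * KVM dirty log) for the clear_bmap chunk covering @page, so that the
 * next bitmap sync cannot resurrect bits we dropped for free pages.
 */
static void clear_mr_dirty_bitmap_for_page(RAMBlock *rb, unsigned long page)
{
    uint8_t shift;
    hwaddr size, start;

    /* A true return means this chunk still has a deferred clear
     * pending; only then is there anything to punch out. */
    if (!rb->clear_bmap || !clear_bmap_test_and_clear(rb, page)) {
        return;
    }

    shift = rb->clear_bmap_shift;
    size = 1ULL << (TARGET_PAGE_BITS + shift);
    start = QEMU_ALIGN_DOWN((ram_addr_t)page << TARGET_PAGE_BITS, size);
    memory_region_clear_dirty_bitmap(rb->mr, start, size);
}

qemu_guest_free_page_hint() would then call this for each clear_bmap chunk
touched by a hint, before clearing the corresponding bits in rb->bmap.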

Best,
Wei

Thread overview: 37+ messages
2021-06-30 20:08 [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty() Peter Xu
2021-07-01  4:42 ` Wang, Wei W
2021-07-01 12:51   ` Peter Xu
2021-07-01 14:21     ` David Hildenbrand
2021-07-02  2:48       ` Wang, Wei W
2021-07-02  7:06         ` David Hildenbrand
2021-07-03  2:53           ` Wang, Wei W
2021-07-05 13:41             ` David Hildenbrand
2021-07-06  9:41               ` Wang, Wei W [this message]
2021-07-06 10:05                 ` David Hildenbrand
2021-07-06 17:39                   ` Peter Xu
2021-07-07 12:45                     ` Wang, Wei W
2021-07-07 16:45                       ` Peter Xu
2021-07-07 23:25                         ` Wang, Wei W
2021-07-08  0:21                           ` Peter Xu
2021-07-06 17:47             ` Peter Xu
2021-07-07  8:34               ` Wang, Wei W
2021-07-07 16:54                 ` Peter Xu
2021-07-08  2:55                   ` Wang, Wei W
2021-07-08 18:10                     ` Peter Xu
2021-07-02  2:29     ` Wang, Wei W
2021-07-06 17:59       ` Peter Xu
2021-07-07  8:33         ` Wang, Wei W
2021-07-07 16:44           ` Peter Xu
2021-07-08  2:49             ` Wang, Wei W
2021-07-08 18:30               ` Peter Xu
2021-07-09  8:58                 ` Wang, Wei W
2021-07-09 14:48                   ` Peter Xu
2021-07-13  8:20                     ` Wang, Wei W
2021-07-03 16:31 ` Lukas Straub
2021-07-04 14:14   ` Lukas Straub
2021-07-06 18:37     ` Peter Xu
2021-07-13  8:40 ` Wang, Wei W
2021-07-13 10:22   ` David Hildenbrand
2021-07-14  5:03     ` Wang, Wei W
2021-07-13 15:59   ` Peter Xu
2021-07-14  5:04     ` Wang, Wei W
