From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Pankaj Gupta <pankaj.gupta@cloud.ionos.com>,
	Juan Quintela <quintela@redhat.com>,
	teawater <teawaterz@linux.alibaba.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-devel@nongnu.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Marek Kedzierski <mkedzier@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: Re: [PATCH v2 0/6] migration/ram: Optimize for virtio-mem via RamDiscardManager
Date: Fri, 23 Jul 2021 18:19:35 -0400
Message-ID: <YPtAd+JqfNeQqGib@t490s>
In-Reply-To: <ea9e9071-4ecb-9c28-9567-92585a18b4eb@redhat.com>

On Fri, Jul 23, 2021 at 08:41:40PM +0200, David Hildenbrand wrote:
> On 23.07.21 18:12, Peter Xu wrote:
> > On Thu, Jul 22, 2021 at 01:43:41PM +0200, David Hildenbrand wrote:
> > > > > a) In precopy code, always clearing all dirty bits from the bitmap that
> > > > >      correspond to discarded ranges, whenever we update the dirty bitmap. This
> > > > >      results in logically unplugged memory never getting migrated.
> > > > 
> > > > Have you seen cases where discarded areas are being marked as dirty?
> > > > That suggests something somewhere is writing to them and shouldn't be.
> > > 
> > > I have, due to the sub-optimal clear_bmap handling that is to be sorted out by
> > > 
> > > https://lkml.kernel.org/r/20210722083055.23352-1-wei.w.wang@intel.com
> > > 
> > > The issue there is rather that initially-dirty bits don't get cleared in the
> > > lower layers and keep popping up as dirty.
> > > 
> > > I did not try reproducing the issue of the postcopy recovery code setting
> > > discarded ranges dirty in the dirty bitmap, but from looking at the code,
> > > it's pretty clear that it would happen.
> > > 
> > > Apart from that, nothing should dirty that memory. Of course, malicious
> > > guests could trigger it for now, in which case we wouldn't catch it and
> > > would migrate such pages with postcopy, because the final bitmap sync in
> > > ram_postcopy_send_discard_bitmap() is currently performed without calling
> > > the notifiers.
> > 
> > I have the same concern as Dave: does it mean that we don't need to touch at
> > least ramblock_sync_dirty_bitmap in patch 3?
> 
> Yes, see the comment in patch #3:
> 
> "
> Note: If discarded ranges span complete clear_bmap chunks, we'll never
> clear the corresponding bits from clear_bmap and consequently never call
> memory_region_clear_dirty_bitmap on the affected regions. While this is
> perfectly fine, we're still synchronizing the bitmap of discarded ranges,
> for example, in
> ramblock_sync_dirty_bitmap()->cpu_physical_memory_sync_dirty_bitmap()
> but also during memory_global_dirty_log_sync().
> 
> In the future, it might make sense to never even synchronize the dirty log
> of these ranges, for example in KVM code, skipping discarded ranges
> completely.
> "
> 
> The KVM path might be even more interesting (with !dirty ring IIRC).
> 
> So that might certainly be worth looking into if we find it to be a real
> performance problem.

OK; hmm, then I feel like what's missing is that we don't keep the dirty bitmap
and the clear bitmap in sync - say, what if we also call
memory_region_clear_dirty_bitmap() when dropping the virtio-mem unplugged
ranges?

If discarded ranges are static during migration, the clear of the dirty log
only needs to happen once for them, at bitmap init time.  Then IIUC we don't
need to worry about unplugged memory anymore when syncing.
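
Roughly what I mean (a sketch only, against migration/ram.c internals and
assuming the ram_discard_manager_replay_discarded() helper from patch 1; the
function names below are made up), run once per ramblock when we init the
bitmaps:

static void dirty_bitmap_discard_section(MemoryRegionSection *section,
                                         void *opaque)
{
    RAMBlock *rb = section->mr->ram_block;
    const hwaddr offset = section->offset_within_region;
    const hwaddr size = int128_get64(section->size);

    /* Drop the bits for the discarded range from our dirty bitmap... */
    bitmap_clear(rb->bmap, offset >> TARGET_PAGE_BITS,
                 size >> TARGET_PAGE_BITS);
    /*
     * ...and clear the dirty log in the lower layers (e.g. KVM) too, so
     * these pages won't keep popping up as dirty in later syncs.
     */
    memory_region_clear_dirty_bitmap(section->mr, offset, size);
}

static void ramblock_clear_discarded_dirty_log(RAMBlock *rb)
{
    if (memory_region_has_ram_discard_manager(rb->mr)) {
        RamDiscardManager *rdm = memory_region_get_ram_discard_manager(rb->mr);

        ram_discard_manager_replay_discarded(rdm,
                                             dirty_bitmap_discard_section,
                                             NULL);
    }
}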

> 
> > 
> > Doing that for bitmap init and postcopy recovery looks right.
> > 
> > One other trivial comment: instead of touching up ram_dirty_bitmap_reload(),
> > IMHO it's simpler to set all 1's for the discarded memory in the dst
> > receivedmap; imagine multiple postcopy recoveries happened - with that, we
> > walk the discarded memory list only once for each migration.  Not a big
> > deal, though.
> 
> Right, but I decided to reuse
> ramblock_dirty_bitmap_exclude_discarded_pages() such that I can avoid yet
> another helper.

Yeah, that's okay.
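
For the record, the receivedmap variant I had in mind is roughly the below
(again only a sketch, with made-up names), run once when the destination sets
up its bitmaps, so ram_dirty_bitmap_reload() wouldn't need any change even
across multiple recoveries:

static void recvmap_mark_discarded_section(MemoryRegionSection *section,
                                           void *opaque)
{
    RAMBlock *rb = section->mr->ram_block;
    const hwaddr offset = section->offset_within_region;
    const size_t npages = int128_get64(section->size) >> TARGET_PAGE_BITS;

    /* Pretend the discarded pages have already been received/placed. */
    ramblock_recv_bitmap_set_range(rb, ramblock_ptr(rb, offset), npages);
}

static void ramblock_recvmap_mark_discarded(RAMBlock *rb)
{
    if (memory_region_has_ram_discard_manager(rb->mr)) {
        RamDiscardManager *rdm = memory_region_get_ram_discard_manager(rb->mr);

        ram_discard_manager_replay_discarded(rdm,
                                             recvmap_mark_discarded_section,
                                             NULL);
    }
}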

Thanks,

-- 
Peter Xu


