From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Pankaj Gupta <pankaj.gupta@cloud.ionos.com>,
	Juan Quintela <quintela@redhat.com>,
	teawater <teawaterz@linux.alibaba.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-devel@nongnu.org,
	Alex Williamson <alex.williamson@redhat.com>,
	Marek Kedzierski <mkedzier@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: Re: [PATCH v2 0/6] migration/ram: Optimize for virtio-mem via RamDiscardManager
Date: Tue, 27 Jul 2021 11:25:09 +0200	[thread overview]
Message-ID: <da54f4ab-2f20-a780-1a9c-b6f4c1c50969@redhat.com> (raw)
In-Reply-To: <YPtAd+JqfNeQqGib@t490s>

On 24.07.21 00:19, Peter Xu wrote:
> On Fri, Jul 23, 2021 at 08:41:40PM +0200, David Hildenbrand wrote:
>> On 23.07.21 18:12, Peter Xu wrote:
>>> On Thu, Jul 22, 2021 at 01:43:41PM +0200, David Hildenbrand wrote:
>>>>>> a) In precopy code, always clearing all dirty bits from the bitmap that
>>>>>>       correspond to discarded ranges, whenever we update the dirty bitmap. This
>>>>>>       results in logically unplugged memory never getting migrated.
>>>>>
>>>>> Have you seen cases where discarded areas are being marked as dirty?
>>>>> That suggests something somewhere is writing to them and shouldn't be.
>>>>
>>>> I have, due to sub-optimal clear_bmap handling that is to be sorted out by
>>>>
>>>> https://lkml.kernel.org/r/20210722083055.23352-1-wei.w.wang@intel.com
>>>>
>>>> The issue there is rather that initially dirty bits don't get cleared in the
>>>> lower layers and keep popping up as dirty.
>>>>
>>>> I did not try reproducing the issue of the postcopy recovery code setting
>>>> discarded ranges dirty in the dirty bitmap, but from looking at the code,
>>>> it's pretty clear that it would happen.
>>>>
>>>> Apart from that, nothing should dirty that memory. Of course, malicious
>>>> guests could trigger it for now, in which case we wouldn't catch it and
>>>> would migrate such pages with postcopy, because the final bitmap sync in
>>>> ram_postcopy_send_discard_bitmap() is currently performed without calling
>>>> notifiers.
>>>
>>> I have the same concern as Dave: does it mean that we don't need to touch at
>>> least ramblock_sync_dirty_bitmap in patch 3?
>>
>> Yes, see the comment in patch #3:
>>
>> "
>> Note: If discarded ranges span complete clear_bmap chunks, we'll never
>> clear the corresponding bits from clear_bmap and consequently never call
>> memory_region_clear_dirty_bitmap on the affected regions. While this is
>> perfectly fine, we're still synchronizing the bitmap of discarded ranges,
>> for example, in
>> ramblock_sync_dirty_bitmap()->cpu_physical_memory_sync_dirty_bitmap()
>> but also during memory_global_dirty_log_sync().
>>
>> In the future, it might make sense to never even synchronize the dirty log
>> of these ranges, for example in KVM code, skipping discarded ranges
>> completely.
>> "
>>
>> The KVM path might be even more interesting (with !dirty ring IIRC).
>>
>> So that might certainly be worth looking into if we find it to be a real
>> performance problem.
> 
> OK; hmm, then I feel like what's missing is that we don't keep the dirty bitmap
> and the clear bitmap in sync - say, what if we do memory_region_clear_dirty_bitmap()
> when dropping the virtio-mem unplugged ranges too?

Is it a problem that we leave clear_bmap set and actually never clear
some ranges? I don't think so. To me, this feels like the right thing to
do: there is no need to clear something (in QEMU, in KVM) that nobody
cares about.

IMHO, the real optimization should be to not even sync discarded ranges
(neither from the accelerator nor from the memory region), skipping these
ranges completely (no sync, no clear). With what you propose, we might
end up calling into KVM to clear bitmaps of ranges we are not interested
in, no?
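
To illustrate the direction I have in mind, here is a rough, self-contained
sketch (the names and the single-word bitmap are made up for illustration
and do not match the actual QEMU sync path) of a sync that only walks
populated sections, so discarded ranges are neither synced nor cleared:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t offset;  /* in pages */
    uint64_t npages;
} PopulatedSection;

/* Stand-in for fetching + clearing the accelerator (e.g. KVM) dirty log
 * for one range; only here would we actually call into the kernel. */
static uint64_t fake_accel_sync_and_clear(uint64_t offset, uint64_t npages)
{
    return ((1ULL << npages) - 1) << offset; /* pretend it's all dirty */
}

/* Sync only populated sections; discarded ranges are never synced nor
 * cleared, so their bits in the dirty bitmap simply stay 0. */
static void sync_populated_only(uint64_t *dirty_bitmap,
                                const PopulatedSection *sections, int n)
{
    for (int i = 0; i < n; i++) {
        *dirty_bitmap |= fake_accel_sync_and_clear(sections[i].offset,
                                                   sections[i].npages);
    }
}

int main(void)
{
    /* two populated sections; everything in between is discarded */
    PopulatedSection populated[] = { { 0, 8 }, { 32, 8 } };
    uint64_t dirty_bitmap = 0;

    sync_populated_only(&dirty_bitmap, populated, 2);
    printf("dirty bitmap: 0x%016" PRIx64 "\n", dirty_bitmap);
    return 0;
}

With something like that, there is simply nothing left to sync or clear for
discarded parts, neither in QEMU nor in KVM.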

> 
> If discarded ranges are static during migration, clearing the dirty log
> should happen once for them at bitmap init time.  Then IIUC, when syncing,
> we don't need to worry about unplugged memory anymore.

Again, I'm not sure why we want to clear something we don't care about.


There are 3 cases to handle, I think:

1) Initially, when the whole bitmap is set to 1, we want to exclude all
discarded ranges.

2) Whenever we sync the bitmap, we don't want discarded ranges to end up
set dirty (e.g., from bits that are still or again dirty in KVM or the
memory region).

3) When reloading the bitmap during postcopy error recovery.


I think for 1) and 3) we seem to agree that clearing the discarded 
ranges from the dirty bitmap is conceptually the right thing.

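For 1) and 3), that fixup essentially boils down to replaying the discarded
ranges and clearing the corresponding bits in the dirty bitmap. A minimal,
self-contained sketch of the idea (again, all helper names and the
single-word bitmap are invented for illustration; the real code walks
MemoryRegionSections via the RamDiscardManager):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t offset;   /* in pages */
    uint64_t npages;
} DiscardedSection;

/* Stand-in for replaying discarded ranges via a RamDiscardManager-like
 * object and invoking a callback for each discarded section. */
static void replay_discarded(const DiscardedSection *sections, int n,
                             void (*cb)(uint64_t, uint64_t, void *),
                             void *opaque)
{
    for (int i = 0; i < n; i++) {
        cb(sections[i].offset, sections[i].npages, opaque);
    }
}

/* Clear all bits of the dirty bitmap that fall into a discarded range. */
static void clear_discarded_bits(uint64_t offset, uint64_t npages,
                                 void *opaque)
{
    uint64_t *bitmap = opaque;

    *bitmap &= ~(((1ULL << npages) - 1) << offset);
}

int main(void)
{
    uint64_t dirty_bitmap = ~0ULL;               /* 1): all dirty at init */
    DiscardedSection discarded[] = { { 8, 24 }, { 40, 24 } };

    replay_discarded(discarded, 2, clear_discarded_bits, &dirty_bitmap);
    printf("dirty bitmap after fixup: 0x%016" PRIx64 "\n", dirty_bitmap);
    return 0;
}

The important property is that, after the replay, discarded pages can never
be considered dirty, no matter what the bitmap contained before.
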

For 2) I see 3 options:


a) Sync everything, fix up the dirty bitmap, never clear the dirty log of
discarded parts. It's fairly simple and straightforward, as I can
simply reuse the existing helper. Something that's discarded will never
be dirty, not even if a misbehaving guest touches memory it shouldn't.
[this patch]

b) Sync only populated parts, no need to fix up the dirty bitmap, never
clear the dirty log of discarded parts. It's a bit more complicated but
achieves the same goal as a). [optimization I propose for the future]

c) Sync everything, don't fix up the dirty bitmap, clear the dirty log of
discarded parts initially. There are ways we might still migrate
discarded ranges, for example, if a misbehaving guest touches memory it
shouldn't. [what you propose]


Is my understanding correct? Are there any reasons why we should choose c)
over b) long term, or c) over a) short term?

Thanks!

-- 
Thanks,

David / dhildenb




Thread overview: 40+ messages
2021-07-21  9:27 [PATCH v2 0/6] migration/ram: Optimize for virtio-mem via RamDiscardManager David Hildenbrand
2021-07-21  9:27 ` [PATCH v2 1/6] memory: Introduce replay_discarded callback for RamDiscardManager David Hildenbrand
2021-07-23 16:34   ` Peter Xu
2021-07-21  9:27 ` [PATCH v2 2/6] virtio-mem: Implement replay_discarded RamDiscardManager callback David Hildenbrand
2021-07-23 16:34   ` Peter Xu
2021-07-21  9:27 ` [PATCH v2 3/6] migration/ram: Handle RAMBlocks with a RamDiscardManager on the migration source David Hildenbrand
2021-07-21  9:27 ` [PATCH v2 4/6] virtio-mem: Drop precopy notifier David Hildenbrand
2021-07-23 16:34   ` Peter Xu
2021-07-21  9:27 ` [PATCH v2 5/6] migration/postcopy: Handle RAMBlocks with a RamDiscardManager on the destination David Hildenbrand
2021-07-23 16:34   ` Peter Xu
2021-07-23 18:36     ` David Hildenbrand
2021-07-23 18:52       ` Peter Xu
2021-07-23 19:01         ` David Hildenbrand
2021-07-23 22:10           ` Peter Xu
2021-07-29 12:14             ` David Hildenbrand
2021-07-29 15:52               ` Peter Xu
2021-07-29 16:15                 ` David Hildenbrand
2021-07-29 19:20                   ` Peter Xu
2021-07-29 19:22                     ` David Hildenbrand
2021-07-21  9:27 ` [PATCH v2 6/6] migration/ram: Handle RAMBlocks with a RamDiscardManager on background snapshots David Hildenbrand
2021-07-23 16:37   ` Peter Xu
2021-07-22 11:29 ` [PATCH v2 0/6] migration/ram: Optimize for virtio-mem via RamDiscardManager Dr. David Alan Gilbert
2021-07-22 11:43   ` David Hildenbrand
2021-07-23 16:12     ` Peter Xu
2021-07-23 18:41       ` David Hildenbrand
2021-07-23 22:19         ` Peter Xu
2021-07-27  9:25           ` David Hildenbrand [this message]
2021-07-27 17:10             ` Peter Xu
2021-07-28 17:39               ` David Hildenbrand
2021-07-28 19:42                 ` Peter Xu
2021-07-28 19:46                   ` David Hildenbrand
2021-07-28 20:19                     ` Peter Xu
2021-07-29  8:14                       ` David Hildenbrand
2021-07-29 16:12                         ` Peter Xu
2021-07-29 16:19                           ` David Hildenbrand
2021-07-29 19:32                             ` Peter Xu
2021-07-29 19:39                               ` David Hildenbrand
2021-07-29 20:00                                 ` Peter Xu
2021-07-29 20:06                                   ` David Hildenbrand
2021-07-29 20:28                                     ` Peter Xu
