From: David Hildenbrand <david@redhat.com>
To: "Philippe Mathieu-Daudé" <philmd@redhat.com>, qemu-devel@nongnu.org
Cc: Alex Williamson <alex.williamson@redhat.com>,
Eduardo Habkost <ehabkost@redhat.com>,
Juan Quintela <quintela@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
Peter Xu <peterx@redhat.com>,
Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>,
Pankaj Gupta <pankaj.gupta@cloud.ionos.com>,
teawater <teawaterz@linux.alibaba.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Marek Kedzierski <mkedzier@redhat.com>,
Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: Re: [PATCH v3 7/7] migration/ram: Handle RAMBlocks with a RamDiscardManager on background snapshots
Date: Thu, 5 Aug 2021 10:11:09 +0200 [thread overview]
Message-ID: <265427ef-ea74-e352-8148-7e4353af6ceb@redhat.com> (raw)
In-Reply-To: <fd43555b-5661-33a5-a4da-2a38939704f7@redhat.com>
On 05.08.21 10:04, Philippe Mathieu-Daudé wrote:
> On 7/30/21 10:52 AM, David Hildenbrand wrote:
>> We already never migrate memory corresponding to ranges discarded by
>> the RamDiscardManager responsible for the mapped memory region of the
>> RAMBlock.
>>
>> virtio-mem uses this mechanism to logically unplug parts of a RAMBlock.
>> Right now, we still populate zeropages for the whole usable part of the
>> RAMBlock, which is undesired because:
>>
>> 1. Even populating the shared zeropage will result in memory getting
>>    consumed for page tables.
>> 2. Memory backends without a shared zeropage (like hugetlbfs and shmem)
>>    will populate an actual, fresh page, resulting in unintended
>>    memory consumption.
>>
>> Discarded ("logically unplugged") parts have to remain discarded. As
>> these pages are never part of the migration stream, there is no need
>> to reliably track their modifications via userfaultfd WP.
>>
>> Further, any writes to these ranges by the VM are invalid and the
>> behavior is undefined.
>>
>> Note that Linux only supports userfaultfd WP on private anonymous memory
>> for now, which usually results in the shared zeropage getting populated.
>> The issue will become more relevant once userfaultfd WP supports shmem
>> and hugetlb.
>>
>> Acked-by: Peter Xu <peterx@redhat.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> migration/ram.c | 53 +++++++++++++++++++++++++++++++++++++++++--------
>> 1 file changed, 45 insertions(+), 8 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 01cea01774..fd5949734e 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -1639,6 +1639,28 @@ out:
>>      return ret;
>>  }
>>  
>> +static inline void populate_range(RAMBlock *block, hwaddr offset, hwaddr size)
>> +{
>> +    char *ptr = (char *) block->host;
>> +    const hwaddr end = offset + size;
>> +
>> +    for (; offset < end; offset += qemu_real_host_page_size) {
>> +        char tmp = *(ptr + offset);
>> +
>> +        /* Don't optimize the read out */
>> +        asm volatile("" : "+r" (tmp));
>> +    }
>
> This template is now used 3 times, a good opportunity to extract it as
> an (inline?) helper.
>
Can you point me at the other users?
Isn't populate_range() the inline helper you are looking for? :)
--
Thanks,
David / dhildenb