From: David Hildenbrand <david@redhat.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Marcel Apfelbaum" <mapfelba@redhat.com>,
	"Eduardo Habkost" <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Michal Privoznik" <mprivozn@redhat.com>,
	"Richard Henderson" <richard.henderson@linaro.org>,
	qemu-devel@nongnu.org, "Peter Xu" <peterx@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	"Greg Kurz" <groug@kaod.org>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Murilo Opsfelder Araujo" <muriloo@linux.ibm.com>,
	"Igor Mammedov" <imammedo@redhat.com>,
	"Nitesh Lal" <nilal@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@redhat.com>
Subject: Re: [PATCH v7 09/15] util/mmap-alloc: Support RAM_NORESERVE via MAP_NORESERVE under Linux
Date: Tue, 4 May 2021 13:04:17 +0200
Message-ID: <e72359da-918c-df2d-c541-c1fcf7e3c7d5@redhat.com>
In-Reply-To: <YJEiz4E+Gk/fqWBo@redhat.com>

On 04.05.21 12:32, Daniel P. Berrangé wrote:
> On Tue, May 04, 2021 at 12:21:25PM +0200, David Hildenbrand wrote:
>> On 04.05.21 12:09, Daniel P. Berrangé wrote:
>>> On Wed, Apr 28, 2021 at 03:37:48PM +0200, David Hildenbrand wrote:
>>>> Let's support RAM_NORESERVE via MAP_NORESERVE on Linux. The flag has no
>>>> effect on most shared mappings - except for hugetlbfs and anonymous memory.
>>>>
>>>> Linux man page:
>>>>     "MAP_NORESERVE: Do not reserve swap space for this mapping. When swap
>>>>     space is reserved, one has the guarantee that it is possible to modify
>>>>     the mapping. When swap space is not reserved one might get SIGSEGV
>>>>     upon a write if no physical memory is available. See also the discussion
>>>>     of the file /proc/sys/vm/overcommit_memory in proc(5). In kernels before
>>>>     2.6, this flag had effect only for private writable mappings."
>>>>
>>>> Note that the "guarantee" part is wrong with memory overcommit in Linux.
>>>>
>>>> Also, in Linux hugetlbfs is treated differently - we configure reservation
>>>> of huge pages from the pool, not reservation of swap space (huge pages
>>>> cannot be swapped).
>>>>
>>>> The rough behavior is [1]:
>>>> a) !Hugetlbfs:
>>>>
>>>>     1) Without MAP_NORESERVE *or* with memory overcommit under Linux
>>>>        disabled ("/proc/sys/vm/overcommit_memory == 2"), the following
>>>>        accounting/reservation happens:
>>>>         For a file backed map
>>>>          SHARED or READ-only - 0 cost (the file is the map not swap)
>>>>          PRIVATE WRITABLE - size of mapping per instance
>>>>
>>>>         For an anonymous or /dev/zero map
>>>>          SHARED   - size of mapping
>>>>          PRIVATE READ-only - 0 cost (but of little use)
>>>>          PRIVATE WRITABLE - size of mapping per instance
>>>>
>>>>     2) With MAP_NORESERVE, no accounting/reservation happens.
>>>>
>>>> b) Hugetlbfs:
>>>>
>>>>     1) Without MAP_NORESERVE, huge pages are reserved.
>>>>
>>>>     2) With MAP_NORESERVE, no huge pages are reserved.
>>>>
>>>> Note: With "/proc/sys/vm/overcommit_memory == 0", we were already able
>>>> to configure it for !hugetlbfs globally; this toggle now allows
>>>> configuring it in a more fine-grained way, not for the whole system.
>>>>
>>>> The target use case is virtio-mem, which dynamically exposes memory
>>>> inside a large, sparse memory area to the VM.
>>>
>>> Can you explain this use case in more real-world terms, as I'm not
>>> understanding what a mgmt app would actually do with this in
>>> practice?
>>
>> Let's consider huge pages for simplicity. Assume you have 128 free huge
>> pages in your hypervisor that you want to dynamically assign to VMs.
>>
>> Further assume you have two VMs running. A workflow could look like
>>
>> 1. Assign all huge pages to VM 0
>> 2. Reassign 64 huge pages to VM 1
>> 3. Reassign another 32 huge pages to VM 1
>> 4. Reassign 16 huge pages to VM 0
>> 5. ...
>>
>> Basically what we're used to doing with "ordinary" memory.
> 
> What does this look like in terms of the memory backend configuration
> when you boot VM 0 and VM 1 ?
> 
> Are you saying that we boot both VMs with
> 
>     -object hostmem-memfd,size=128G,hugetlb=yes,hugetlbsize=1G,reserve=off
> 
> and then we have another property set on 'virtio-mem' to tell it
> how much/little of that 128G to actually give to the guest?
> How do we change that at runtime?

Roughly, yes. We only special-case memory backends managed by virtio-mem devices.

An advanced example for a single VM could look like this:

sudo build/qemu-system-x86_64 \
	... \
	-m 4G,maxmem=64G \
	-smp sockets=2,cores=2 \
	-object hostmem-memfd,id=bmem0,size=2G,hugetlb=yes,hugetlbsize=2M \
	-numa node,nodeid=0,cpus=0-1,memdev=bmem0 \
	-object hostmem-memfd,id=bmem1,size=2G,hugetlb=yes,hugetlbsize=2M \
	-numa node,nodeid=1,cpus=2-3,memdev=bmem1 \
	... \
	-object hostmem-memfd,id=mem0,size=30G,hugetlb=yes,hugetlbsize=2M,reserve=off \
	-device virtio-mem-pci,id=vmem0,memdev=mem0,node=0,requested-size=0G \
	-object hostmem-memfd,id=mem1,size=30G,hugetlb=yes,hugetlbsize=2M,reserve=off \
	-device virtio-mem-pci,id=vmem1,memdev=mem1,node=1,requested-size=0G \
	... \

We can request a size change by adjusting the "requested-size" property (e.g., via qom-set)
and observe the current size by reading the "size" property (e.g., via qom-get). Think of
it as an advanced, device-local memory balloon combined with the concept of memory hotplug.
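
For example, to grow the first virtio-mem device to 16 GiB and observe how much
memory it currently provides, something like the following monitor commands
should do (a sketch based on the device ids from the command line above; output
omitted):

(qemu) qom-set vmem0 requested-size 16G
(qemu) qom-get vmem0 size

The guest driver then plugs or unplugs device blocks until "size" converges
towards "requested-size".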


I suggest taking a look at the libvirt virtio-mem implementation
-- I don't think it's upstream yet:

https://lkml.kernel.org/r/cover.1615982004.git.mprivozn@redhat.com

I'm CCing Michal -- I already gave him a note upfront about which additional
properties we might see for memory backends (e.g., reserve, managed-size)
and for virtio-mem devices (e.g., iothread, prealloc, reserve, prot).
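
On the Linux side, "reserve=off" essentially boils down to passing MAP_NORESERVE
to mmap(). A minimal, self-contained sketch for the hugetlb memfd case -- my
illustration only: the name "mem0" and the sizes are made up, glibc >= 2.27 is
assumed for memfd_create(), and the actual QEMU logic lives in util/mmap-alloc.c:

#define _GNU_SOURCE
#include <err.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MFD_HUGE_2MB
#define MFD_HUGE_2MB (21 << 26)        /* from <linux/memfd.h> */
#endif

int main(void)
{
    const off_t size = 30LL << 30;     /* 30 GiB, mostly sparse */
    int fd = memfd_create("mem0", MFD_HUGETLB | MFD_HUGE_2MB);

    if (fd < 0)
        err(1, "memfd_create");
    if (ftruncate(fd, size))
        err(1, "ftruncate");

    /*
     * MAP_NORESERVE: don't reserve 15360 huge pages from the pool
     * upfront; pages are only consumed from the pool (and might be
     * missing) once they are actually touched.
     */
    void *addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_NORESERVE, fd, 0);
    if (addr == MAP_FAILED)
        err(1, "mmap");
    return 0;
}

Without MAP_NORESERVE, the mmap() would either reserve the full 15360 huge
pages or fail right away if the pool doesn't have 30 GiB worth of free pages.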

> 
> 
>> For that to work with virtio-mem, you'll have to disable reservation of huge
>> pages for the virtio-mem managed memory region.
>>
>> (preallocation of huge pages in virtio-mem to protect from user mistakes is a
>> separate work item)
>>
>> reserve=off will be the default for virtio-mem, and actual
>> reservation/preallocation will be done within virtio-mem. There could be use
>> for "reserve=off" for virtio-balloon use cases as well, but I'd like to
>> exclude that from the discussion for now.
> 
> The hostmem backend defaults are independent of frontend usage, so when you
> say reserve=off is the default for virtio-mem, are you expecting a mgmt
> app like libvirt to specify that?

Sorry, yes exactly; only for the memory backend managed by a virtio-mem device.

-- 
Thanks,

David / dhildenb



