From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"Julien Grall" <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH DO NOT APPLY] docs: Document allocator properties and the rubric for using them
Date: Fri, 12 Mar 2021 14:32:09 +0000	[thread overview]
Message-ID: <63895FAD-B848-461D-8A31-E6C9973B6726@citrix.com> (raw)
In-Reply-To: <b225be0f-3eed-426e-8829-6e7c57cd7635@suse.com>



> On Feb 16, 2021, at 3:29 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> On 16.02.2021 11:28, George Dunlap wrote:
>> --- /dev/null
>> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
>> @@ -0,0 +1,118 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Xenheap memory allocation functions
>> +===================================
>> +
>> +In general Xen contains two pools (or "heaps") of memory: the *xen
>> +heap* and the *dom heap*.  Please see the comment at the top of
>> +``xen/common/page_alloc.c`` for the canonical explanation.
>> +
>> +This document describes the various functions available to allocate
>> +memory from the xen heap: their properties and rules for when they should be
>> +used.
> 
> Irrespective of your subsequent indication of you disliking the
> proposal (which I understand only affects the guidelines further
> down anyway) I'd like to point out that vmalloc() does not
> allocate from the Xen heap. Therefore a benefit of always
> recommending use of xvmalloc() would be that the function could
> fall back to vmalloc() (and hence the larger domain heap) when
> xmalloc() failed.

OK, that’s good to know.

So just trying to think this through: address space is the limiting factor for how big the xenheap can be, right?  Presumably “vmap” space is also limited, and will be much smaller?  So in a sense the “fallback” is less about getting more memory than about using up that extra little bit of virtual address space?

That raises another question:  Are there times when it’s advantageous to specify which heap to allocate from?  If there are good reasons for allocations to be in the xenheap or in the domheap / vmap area, then the guidelines should probably say that as well.

And, of course, will the whole concept of the xenheap / domheap split go away if we ever get rid of the 1:1 map?

> 
>> +TLDR guidelines
>> +---------------
>> +
>> +* By default, ``xvmalloc`` (or its helper cognates) should be used
>> +  unless you know you have specific properties that need to be met.
>> +
>> +* If you need memory which needs to be physically contiguous, and may
>> +  be larger than ``PAGE_SIZE``...
>> +  
>> +  - ...and is order 2, use ``alloc_xenheap_pages``.
>> +    
>> +  - ...and is not order 2, use ``xmalloc`` (or its helper cognates)..
> 
> ITYM "an exact power of 2 number of pages"?

Yes, I’ll fix that.

> 
>> +* If you don't need memory to be physically contiguous, and know the
>> +  allocation will always be larger than ``PAGE_SIZE``, you may use
>> +  ``vmalloc`` (or one of its helper cognates).
>> +
>> +* If you know that allocation will always be less than ``PAGE_SIZE``,
>> +  you may use ``xmalloc``.
> 
> As per Julien's and your own replies, this wants to be "minimum
> possible page size", which of course depends on where in the
> tree the piece of code is to live. (It would be "maximum
> possible page size" in the earlier paragraph.)

I’ll see if I can clarify this.

> 
>> +Properties of various allocation functions
>> +------------------------------------------
>> +
>> +Ultimately, the underlying allocator for all of these functions is
>> +``alloc_xenheap_pages``.  They differ on several different properties:
>> +
>> +1. What underlying allocation sizes are.  This in turn has an effect
>> +   on:
>> +
>> +   - How much memory is wasted when requested size doesn't match
>> +
>> +   - How such allocations are affected by memory fragmentation
>> +
>> +   - How such allocations affect memory fragmentation
>> +
>> +2. Whether the underlying pages are physically contiguous
>> +
>> +3. Whether allocation and deallocation require the cost of mapping and
>> +   unmapping
>> +
>> +``alloc_xenheap_pages`` will allocate a physically contiguous set of
>> +pages on orders of 2.  No mapping or unmapping is done.
> 
> That's the case today, but meant to change rather sooner than later
> (when the 1:1 map disappears).

Is that the kind of thing we want to add into this document?  I suppose it would be good to make the guidelines now such that they produce code which is as easy as possible to adapt to the new way of doing things.

 -George
