From: George Dunlap <George.Dunlap@citrix.com>
To: Julien Grall <julien@xen.org>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH DO NOT APPLY] docs: Document allocator properties and the rubric for using them
Date: Tue, 9 Mar 2021 11:36:04 +0000
Message-ID: <F89758EE-0377-4F7C-9741-57364B7D4BCD@citrix.com>
In-Reply-To: <9d63df30-6de7-4ea8-1e38-d70318b4b7bb@xen.org>



> On Mar 6, 2021, at 8:03 PM, Julien Grall <julien@xen.org> wrote:
> 
> Hi George,
> 
> On 16/02/2021 11:17, George Dunlap wrote:
>>> On Feb 16, 2021, at 11:16 AM, George Dunlap <george.dunlap@citrix.com> wrote:
>>> 
>>> 
>>> 
>>>> On Feb 16, 2021, at 10:55 AM, Julien Grall <julien@xen.org> wrote:
>>>> 
>>>> Hi George,
>>>> 
>>>> On 16/02/2021 10:28, George Dunlap wrote:
>>>>> Document the properties of the various allocators and lay out a clear
>>>>> rubric for when to use each.
>>>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>>>>> ---
>>>>> This doc is my understanding of the properties of the current
>>>>> allocators (alloc_xenheap_pages, xmalloc, and vmalloc), and of Jan's
>>>>> proposed new wrapper, xvmalloc.
>>>>> xmalloc, vmalloc, and xvmalloc were designed more or less to mirror
>>>>> similar functions in Linux (kmalloc, vmalloc, and kvmalloc
>>>>> respectively).
>>>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> CC: Jan Beulich <jbeulich@suse.com>
>>>>> CC: Roger Pau Monne <roger.pau@citrix.com>
>>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>>> CC: Julien Grall <julien@xen.org>
>>>>> ---
>>>>> .../memory-allocation-functions.rst           | 118 ++++++++++++++++++
>>>>> 1 file changed, 118 insertions(+)
>>>>> create mode 100644 docs/hypervisor-guide/memory-allocation-functions.rst
>>>>> diff --git a/docs/hypervisor-guide/memory-allocation-functions.rst b/docs/hypervisor-guide/memory-allocation-functions.rst
>>>>> new file mode 100644
>>>>> index 0000000000..15aa2a1a65
>>>>> --- /dev/null
>>>>> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
>>>>> @@ -0,0 +1,118 @@
>>>>> +.. SPDX-License-Identifier: CC-BY-4.0
>>>>> +
>>>>> +Xenheap memory allocation functions
>>>>> +===================================
>>>>> +
>>>>> +In general Xen contains two pools (or "heaps") of memory: the *xen
>>>>> +heap* and the *dom heap*.  Please see the comment at the top of
>>>>> +``xen/common/page_alloc.c`` for the canonical explanation.
>>>>> +
>>>>> +This document describes the various functions available to allocate
>>>>> +memory from the xen heap: their properties and rules for when they should be
>>>>> +used.
>>>>> +
>>>>> +
>>>>> +TLDR guidelines
>>>>> +---------------
>>>>> +
>>>>> +* By default, ``xvmalloc`` (or its helper cognates) should be used
>>>>> +  unless you know you have specific properties that need to be met.
>>>>> +
>>>>> +* If you need memory which needs to be physically contiguous, and may
>>>>> +  be larger than ``PAGE_SIZE``...
>>>>> +
>>>>> +  - ...and is a power-of-two number of pages, use ``alloc_xenheap_pages``.
>>>>> +
>>>>> +  - ...and is not a power-of-two number of pages, use ``xmalloc`` (or its helper cognates).
>>>>> +
>>>>> +* If you don't need memory to be physically contiguous, and know the
>>>>> +  allocation will always be larger than ``PAGE_SIZE``, you may use
>>>>> +  ``vmalloc`` (or one of its helper cognates).
>>>>> +
>>>>> +* If you know that allocation will always be less than ``PAGE_SIZE``,
>>>>> +  you may use ``xmalloc``.
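
[For concreteness, the rubric above maps onto calls roughly as in the sketch below.  This is illustrative only: the function and sizes are made up, and the default case merely names Jan's proposed xvmalloc rather than calling it, since its exact interface is what this thread is about.]

/* Sketch only: nothing here is real Xen code beyond the allocator calls. */
#include <xen/mm.h>      /* alloc_xenheap_pages(), free_xenheap_pages(), PAGE_SIZE */
#include <xen/vmap.h>    /* vmalloc(), vfree() */
#include <xen/xmalloc.h> /* xmalloc_bytes(), xfree() */

static void demo_allocations(void)
{
    /*
     * Default case: no special requirements known up front.  Per the rubric
     * this would use Jan's proposed xvmalloc (or a cognate); it is only a
     * proposal at this point, so it is not called here.
     */

    /* Physically contiguous, a power-of-two number of pages. */
    void *ring = alloc_xenheap_pages(1 /* order, i.e. 2 pages */, 0 /* memflags */);

    /* Physically contiguous, but not a power-of-two size. */
    void *blob = xmalloc_bytes(3 * PAGE_SIZE);

    /* Only needs to be virtually contiguous, known to exceed PAGE_SIZE. */
    void *table = vmalloc(6 * PAGE_SIZE);

    /* Known to always fit within a single page. */
    void *small = xmalloc_bytes(64);

    /* NULL checks and real error handling are elided to keep this short. */
    xfree(small);
    vfree(table);
    xfree(blob);
    free_xenheap_pages(ring, 1);
}

Note that each allocation has to be paired with the matching free function; they are not interchangeable.
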
>>>> 
>>>> AFAICT, the determining factor is PAGE_SIZE. This is a single value on x86 (4KB), but on other architectures it may be one of several values.
>>>> 
>>>> For instance, on Arm, this could be 4KB, 16KB, or 64KB (note that only the first is supported in Xen so far).
>>>> 
>>>> For Arm and common code, it feels to me we can't make a clear decision based on PAGE_SIZE. Instead, I continue to think that the decision should only be based on physical vs virtually contiguous.
>>>> 
>>>> We can then add further rules for x86-specific code if the maintainers want.
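
[A minimal sketch of the point about page sizes; the CONFIG_* symbols below are hypothetical, Xen has no such options today:]

/*
 * Illustrative only: Xen currently supports 4KiB pages exclusively, and the
 * CONFIG_ARM_16K_PAGES / CONFIG_ARM_64K_PAGES symbols do not exist.
 */
#if defined(CONFIG_ARM_64K_PAGES)        /* hypothetical */
# define DEMO_PAGE_SHIFT 16
#elif defined(CONFIG_ARM_16K_PAGES)      /* hypothetical */
# define DEMO_PAGE_SHIFT 14
#else
# define DEMO_PAGE_SHIFT 12              /* the only size Xen supports today */
#endif
#define DEMO_PAGE_SIZE (1UL << DEMO_PAGE_SHIFT)

/* A structure that fits in a 16KiB or 64KiB page but not in a 4KiB one. */
struct widget {
    unsigned char buf[6000];
};

/*
 * "struct widget always fits in a page" is true in two of the three
 * configurations above and false in the third, so common code cannot pick
 * xmalloc() on that basis at build time; the portable question is whether
 * the allocation needs to be physically contiguous at all.
 */
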
>>> 
>>> Sorry, my second mail was somewhat delayed — my intent was: 1) post the document I’d agreed to write, and 2) say why I think the proposal is a bad idea. :-)
> 
> No worries, I jumped into the discussion too quickly :).
> 
>>> 
>>> Re page size — the vast majority of the time we’re talking about “knowing” that the size is less than 4k.  If we’re confident that no architecture will ever have a page size less than 4k, then we know that all allocations less than 4k will always be less than PAGE_SIZE.  Obviously larger page sizes then become an issue.
>>> 
>>> But in any case — unless we have BUG_ON(size > PAGE_SIZE), we’re going to need a fallback, which is going to cost one precious conditional, making the whole exercise pointless.
>> Er, just in case it wasn’t clear — I agree with this:
>>>> I continue to think that the decision should only be based on physical vs virtually contiguous.
> 
> We have two opposing proposals with no clear way to reconcile them. Should we request a vote on the two proposals?

Let me write up an alternate proposal with Jan’s feedback; then if he still thinks his way is better we can vote.
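
(For reference, the fallback mentioned above is just a size test; a rough sketch follows, not Jan's actual xvmalloc implementation, but the shape any size-based wrapper ends up with unless callers can guarantee size <= PAGE_SIZE or we BUG_ON() otherwise:)

#include <xen/mm.h>      /* PAGE_SIZE */
#include <xen/vmap.h>    /* vmalloc() */
#include <xen/xmalloc.h> /* xmalloc_bytes() */

static void *xvmalloc_bytes_sketch(size_t size)
{
    if ( size <= PAGE_SIZE )         /* the "one precious conditional" */
        return xmalloc_bytes(size);  /* small enough: physically contiguous anyway */

    return vmalloc(size);            /* large: settle for virtually contiguous */
}

(The matching free needs a similar test to decide between xfree() and vfree().)
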

 -George


Thread overview: 11+ messages
2021-02-16 10:28 [PATCH DO NOT APPLY] docs: Document allocator properties and the rubric for using them George Dunlap
2021-02-16 10:55 ` Julien Grall
2021-02-16 11:16   ` George Dunlap
2021-02-16 11:17     ` George Dunlap
2021-03-06 20:03       ` Julien Grall
2021-03-09 11:36         ` George Dunlap [this message]
2021-02-16 10:58 ` George Dunlap
2021-02-16 15:29 ` Jan Beulich
2021-03-12 14:32   ` George Dunlap
2021-03-12 15:19     ` George Dunlap
2021-03-12 16:15     ` Jan Beulich
