From: Jan Beulich <jbeulich@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: julien@xen.org, Wei Liu <wl@xen.org>,
	andrew.cooper3@citrix.com,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	Volodymyr_Babchuk@epam.com
Subject: Re: [PATCH 05/12] xen: introduce reserve_heap_pages
Date: Mon, 4 May 2020 11:16:39 +0200	[thread overview]
Message-ID: <224e04ed-e790-d963-f74a-d600677c4413@suse.com> (raw)
In-Reply-To: <alpine.DEB.2.21.2004300907060.28941@sstabellini-ThinkPad-T480s>

On 30.04.2020 18:21, Stefano Stabellini wrote:
> On Thu, 30 Apr 2020, Jan Beulich wrote:
>> On 30.04.2020 00:46, Stefano Stabellini wrote:
>>> On Fri, 17 Apr 2020, Jan Beulich wrote:
>>>> On 15.04.2020 03:02, Stefano Stabellini wrote:
>>>>> Introduce a function named reserve_heap_pages (similar to
>>>>> alloc_heap_pages) that allocates a requested memory range. Call
>>>>> __alloc_heap_pages for the implementation.
>>>>>
>>>>> Change __alloc_heap_pages so that the original page doesn't get
>>>>> modified, giving back unneeded memory top to bottom rather than bottom
>>>>> to top.
>>>>
>>>> While it may be less of a problem within a zone, doing so is
>>>> against our general "return high pages first" policy.
>>>
>>> Is this something you'd be OK with anyway?
>>
>> As a last resort, maybe. But it really depends on why it needs to be
>> this way.
>>
>>> If not, do you have a suggestion on how to do it better? I couldn't find
>>> a nice way to do it without code duplication, or a big nasty 'if' in the
>>> middle of the function.
>>
>> I'd first need to understand the problem to solve.
> 
> OK, I'll make an example.
> 
> reserve_heap_pages wants to reserve the range 0x10000000 - 0x20000000.
> 
> reserve_heap_pages gets the struct page_info for 0x10000000 and calls
> alloc_pages_from_buddy to allocate a range of order 28.
> 
> alloc_pages_from_buddy realizes that the free memory area starting from
> 0x10000000 is actually of order 30, even larger than the requested order
> of 28. The free area is 0x10000000 - 0x50000000.
> 
> Instead of just allocating an order-28 range starting from 0x10000000,
> alloc_pages_from_buddy would allocate the "top" order-28 chunk of the
> free area. So it would allocate 0x40000000 - 0x50000000, returning
> 0x40000000.
> 
> Of course, this doesn't work for reserve_heap_pages.
> 
> 
> This patch changes the behavior of alloc_pages_from_buddy so that in a
> situation like the above, it would return 0x10000000 - 0x20000000
> (leaving the rest of the free area unallocated).

So what if then, for the same order-30 area (really order-18 if I
assume you name addresses, not frame numbers), a reservation request came
in for the highest order-28 sub-region? You'd again be screwed if
you relied on which part of a larger buddy gets returned by the
lower level function you call. I can't help thinking that basing
reservation on allocation functions can't really be made to work for
all possible cases. Instead reservation requests need to check that
the requested range is free _and_ split the potentially larger
range according to the request.
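
Roughly along these lines (an untested sketch only; apart from
mfn_to_page() the helper names - page_state_is_free(),
split_buddy_around(), mark_range_allocated() - are made up for
illustration, they're not existing allocator functions):

    static struct page_info *reserve_range(unsigned long smfn,
                                           unsigned int order)
    {
        struct page_info *pg = mfn_to_page(_mfn(smfn));
        unsigned long i;

        /* Refuse unless every page of the requested range is free. */
        for ( i = 0; i < (1UL << order); i++ )
            if ( !page_state_is_free(pg + i) )
                return NULL;

        /*
         * The range may be covered by one or more larger buddies. Take
         * those off their free lists and split them at the requested
         * boundaries, returning the pieces below and above the range
         * to the free lists.
         */
        split_buddy_around(smfn, order);

        /* Only now mark exactly the requested pages as allocated. */
        mark_range_allocated(pg, order);

        return pg;
    }

The essential difference to going through alloc_pages_from_buddy is
that the split is driven by the requested boundaries, not by whichever
end of a larger buddy the generic allocation path prefers to hand out.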

Jan


Thread overview: 67+ messages
2020-04-15  1:02 [PATCH 0/12] direct-map DomUs Stefano Stabellini
2020-04-15  1:02 ` [PATCH 01/12] xen: introduce xen_dom_flags Stefano Stabellini
2020-04-15  9:12   ` Jan Beulich
2020-04-15 13:26     ` Julien Grall
2020-04-29 23:57     ` Stefano Stabellini
2020-04-15  1:02 ` [PATCH 02/12] xen/arm: introduce arch_xen_dom_flags and direct_map Stefano Stabellini
2020-04-15 10:27   ` Jan Beulich
2020-04-15 11:27     ` Andrew Cooper
2020-04-30  0:34     ` Stefano Stabellini
2020-04-15  1:02 ` [PATCH 03/12] xen/arm: introduce 1:1 mapping for domUs Stefano Stabellini
2020-04-15 13:36   ` Julien Grall
2020-05-01  1:26     ` Stefano Stabellini
2020-05-01  8:30       ` Julien Grall
2020-05-09  0:07         ` Stefano Stabellini
2020-05-09  9:56           ` Julien Grall
2020-04-15  1:02 ` [PATCH 04/12] xen: split alloc_heap_pages in two halves for reusability Stefano Stabellini
2020-04-15 11:22   ` Wei Liu
2020-04-17 10:02   ` Jan Beulich
2020-04-29 23:09     ` Stefano Stabellini
2020-04-15  1:02 ` [PATCH 05/12] xen: introduce reserve_heap_pages Stefano Stabellini
2020-04-15 13:24   ` Julien Grall
2020-04-17 10:11   ` Jan Beulich
2020-04-29 22:46     ` Stefano Stabellini
2020-04-30  6:29       ` Jan Beulich
2020-04-30 16:21         ` Stefano Stabellini
2020-05-04  9:16           ` Jan Beulich [this message]
2020-04-30 14:51       ` Julien Grall
2020-04-30 17:00         ` Stefano Stabellini
2020-04-30 18:27           ` Julien Grall
2020-05-12  1:10             ` Stefano Stabellini
2020-05-12  8:57               ` Julien Grall
2020-04-15  1:02 ` [PATCH 06/12] xen/arm: reserve 1:1 memory for direct_map domUs Stefano Stabellini
2020-04-15 13:38   ` Julien Grall
2020-04-15  1:02 ` [PATCH 07/12] xen/arm: new vgic: rename vgic_cpu/dist_base to c/dbase Stefano Stabellini
2020-04-15 13:41   ` Julien Grall
2020-04-15  1:02 ` [PATCH 08/12] xen/arm: if is_domain_direct_mapped use native addresses for GICv2 Stefano Stabellini
2020-04-15 14:00   ` Julien Grall
2020-05-01  1:26     ` Stefano Stabellini
2020-05-01  8:23       ` Julien Grall
2020-05-09  0:06         ` Stefano Stabellini
2020-04-15  1:02 ` [PATCH 09/12] xen/arm: if is_domain_direct_mapped use native addresses for GICv3 Stefano Stabellini
2020-04-15 14:09   ` Julien Grall
2020-05-01  1:31     ` Stefano Stabellini
2020-05-01  8:40       ` Julien Grall
2020-05-09  0:06         ` Stefano Stabellini
2020-04-15  1:02 ` [PATCH 10/12] xen/arm: if is_domain_direct_mapped use native UART address for vPL011 Stefano Stabellini
2020-04-15 14:11   ` Julien Grall
2020-05-01  1:26     ` Stefano Stabellini
2020-05-01  8:09       ` Julien Grall
2020-05-09  0:07         ` Stefano Stabellini
2020-05-09 10:11           ` Julien Grall
2020-05-11 22:58             ` Stefano Stabellini
2020-04-15  1:02 ` [PATCH 11/12] xen/arm: if xen_force don't try to setup the IOMMU Stefano Stabellini
2020-04-15 14:12   ` Julien Grall
2020-04-29 21:55     ` Stefano Stabellini
2020-04-30 13:51       ` Julien Grall
2020-05-01  1:25         ` Stefano Stabellini
2020-04-15  1:02 ` [PATCH 12/12] xen/arm: call iomem_permit_access for passthrough devices Stefano Stabellini
2020-04-15 14:18   ` Julien Grall
2020-04-29 20:47     ` Stefano Stabellini
2020-04-30 13:01       ` Julien Grall
2020-05-24 14:12         ` Julien Grall
2020-05-26 16:46           ` Stefano Stabellini
2020-05-27 18:09             ` Julien Grall
2020-04-16  8:59 ` [PATCH 0/12] direct-map DomUs Julien Grall
2020-04-29 20:16   ` Stefano Stabellini
2020-04-30 12:54     ` Julien Grall
