From: Julien Grall <julien.grall@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [PATCH 2/6] tools: arm: allocate superpages to guests.
Date: Wed, 11 Jun 2014 13:16:29 +0100	[thread overview]
Message-ID: <5398489D.2040805@linaro.org> (raw)
In-Reply-To: <1402487707.16332.18.camel@kazak.uk.xensource.com>

On 06/11/2014 12:55 PM, Ian Campbell wrote:
> On Wed, 2014-06-11 at 12:50 +0100, Julien Grall wrote:
>> On 06/10/2014 04:16 PM, Ian Campbell wrote:
>>>>> +/*  >0: success, *nr_pfns set to number actually populated
>>>>> + *   0: didn't try with this pfn shift (e.g. misaligned base etc)
>>>>> + *  <0: ERROR
>>>>> + */
>>>>> +static int populate_one_size(struct xc_dom_image *dom, int pfn_shift,
>>>>> +                             xen_pfn_t base_pfn, xen_pfn_t *nr_pfns)
>>>>> +{
>>>>> +    uint64_t mask = ((uint64_t)1<<(pfn_shift))-1;
>>>>> +    uint64_t next_mask = ((uint64_t)1<<(LPAE_SHIFT))-1;
>>>>> +    int nr, i, count = min_t(int, 1024*1024, *nr_pfns >> pfn_shift);
>>>>
>>>> Stupid question: where does the 1024*1024 come from?
>>>
>>> It was in the original as a backstop on allocsz. It would correspond to
>>> 4GB worth of 4K pages, I suppose.
>>
>> Ah ok. I hadn't paid attention to it.
>>
>>>
>>>>> +    xen_pfn_t extents[count];
>>>>
>>>> If I follow the count definition, it's possible to allocate 1024*1024
>>>> xen_pfn_t (about 8MB) on the stack.
>>>
>>> userspace stack isn't all that precious but 8MB does seem a little
>>> excessive. Previously this was using the already allocated host p2m so
>>> it wasn't an issue, but that doesn't work for allocations of page >4K.
>>>
>>> The tradeoff is that smaller batches mean it will take longer.
>>>
>>> I don't think it would be unreasonable to reduce this to be e.g. 1GB
>>> worth of entries (256*1024) or 2MB of stack.
>>
>> It seems the default stack size is 8MB; I'm wondering whether there are
>> use cases where this limit is smaller.
> 
> I think we effectively assume it is larger than any amount we would
> practically use.
> 
>> Is there any issue to allocate this variable with malloc?
> 
> Just more book keeping code really.

Looking at the code, it doesn't seem too difficult to switch to malloc and
handle the error.

I would prefer to use malloc rather than the stack, so we avoid assuming
that the stack is quite big, even though this variable should never be very
large. It wouldn't be mad for an OS to use a 1MB stack (or even less).

Anyway, it seems the x86 libxc code uses the same trick for superpages...
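For illustration, a minimal sketch of what the heap-allocated variant could look like. This is not the actual libxc patch: alloc_extents, its signature, and the typedef are hypothetical stand-ins; only the 1024*1024 batch cap and the <0-on-error convention come from the quoted code above.

```c
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t xen_pfn_t;   /* stand-in for the real libxc type */

/* Hypothetical rework of populate_one_size()'s extents array:
 * heap-allocate the batch instead of using a variable-length stack
 * array, so a full batch (1024*1024 entries, ~8MB of xen_pfn_t)
 * cannot overflow a small thread stack.
 * Returns 0 on success (caller must free *extents_out), <0 on error,
 * matching the existing error convention. */
static int alloc_extents(xen_pfn_t nr_pfns, int pfn_shift,
                         xen_pfn_t **extents_out, int *count_out)
{
    /* Cap the batch, as the original does with min_t(). Do the
     * comparison in 64 bits before narrowing to int. */
    uint64_t n = nr_pfns >> pfn_shift;
    int count = n > 1024*1024 ? 1024*1024 : (int)n;

    xen_pfn_t *extents = calloc(count, sizeof(*extents));
    if (!extents)
        return -1;            /* <0: ERROR */

    *extents_out = extents;
    *count_out = count;
    return 0;
}
```

The only extra bookkeeping versus the stack array is the NULL check and a free() at the end of the caller's populate loop.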

Regards,

-- 
Julien Grall

