From: Jan Beulich <jbeulich@suse.com>
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Wei Chen <Wei.Chen@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: Re: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages
Date: Tue, 6 Jul 2021 08:53:49 +0200	[thread overview]
Message-ID: <caa11a54-acb6-928d-de3a-8e081a7c3d34@suse.com> (raw)
In-Reply-To: <VE1PR08MB5215E2802F3DE22F1F244023F71B9@VE1PR08MB5215.eurprd08.prod.outlook.com>

On 06.07.2021 07:58, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Thursday, June 10, 2021 6:23 PM
>>
>> On 07.06.2021 04:43, Penny Zheng wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -1065,6 +1065,75 @@ static struct page_info *alloc_heap_pages(
>>>      return pg;
>>>  }
>>>
>>> +#ifdef CONFIG_STATIC_ALLOCATION
>>> +/*
>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory.
>>> + * It is the equivalent of alloc_heap_pages for static memory  */
>>> +static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
>>> +                                               mfn_t smfn,
>>> +                                               unsigned int memflags)
>>> +{
>>> +    bool need_tlbflush = false;
>>> +    uint32_t tlbflush_timestamp = 0;
>>> +    unsigned long i;
>>> +    struct page_info *pg;
>>> +
>>> +    /* For now, it only supports allocating at specified address. */
>>> +    if ( !mfn_valid(smfn) || !nr_mfns )
>>> +    {
>>> +        printk(XENLOG_ERR
>>> +               "Invalid %lu static memory starting at %"PRI_mfn"\n",
>>
>> Reading a log containing e.g. "Invalid 0 static memory starting at ..." I don't
>> think I would recognize that the "0" is the count of pages.
> 
> Sure. How about "try to allocate out of range page %"PRI_mfn"\n"?

This still doesn't convey _both_ parts of the if() that would cause
the log message to be issued.
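Something naming both parts would work, e.g. (wording merely a sketch)

    printk(XENLOG_ERR
           "Invalid static memory request: %lu pages starting at MFN %"PRI_mfn"\n",
           nr_mfns, mfn_x(smfn));

so that a zero count and an unsuitable starting MFN are both recognizable
from the log.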

>>> +               nr_mfns, mfn_x(smfn));
>>> +        return NULL;
>>> +    }
>>> +    pg = mfn_to_page(smfn);
>>> +
>>> +    for ( i = 0; i < nr_mfns; i++ )
>>> +    {
>>> +        /*
>>> +         * Reference count must continuously be zero for free pages
>>> +         * of static memory(PGC_reserved).
>>> +         */
>>> +        ASSERT(pg[i].count_info & PGC_reserved);
>>
>> What logic elsewhere guarantees that this will hold? ASSERT()s are to verify
>> that assumptions are met. But I don't think you can sensibly assume the caller
>> knows the range is reserved (and free), or else you could get away without any
>> allocation function.
> 
> The caller shall only call alloc_staticmem_pages when it knows the range is
> reserved; for example, alloc_staticmem_pages is currently only called from
> alloc_domstatic_pages.

If the caller knows the static ranges, this isn't really "allocation".
I.e. I then question the need for "allocating" in the first place.

>>> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
>>> +        {
>>> +            printk(XENLOG_ERR
>>> +                   "Reference count must continuously be zero for free pages"
>>> +                   "pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x\n",
>>> +                   i, mfn_x(page_to_mfn(pg + i)),
>>> +                   pg[i].count_info, pg[i].tlbflush_timestamp);
>>> +            BUG();
>>> +        }
>>
>> The same applies here at least until proper locking gets added, which I guess is
>> happening in the next patch when really it would need to happen right here.
>>
> 
> Ok, I will combine the two commits and add the locking here.
> I thought the content of this commit was getting a bit large, which is why I
> made adding the proper lock a separate patch. 😉
>  
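To illustrate what "right here" would mean - a minimal sketch only, assuming the
heap_lock already used by alloc_heap_pages() in this file is also what is meant
to protect these pages:

    spin_lock(&heap_lock);

    for ( i = 0; i < nr_mfns; i++ )
    {
        /* count_info checks and state updates of the static pages */
    }

    spin_unlock(&heap_lock);

Whether that's the right lock (and the right granularity) is of course part of
what the locking change needs to establish.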
>> Furthermore I don't see why you don't fold ASSERT() and if into
>>
>>         if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
>>
>> After all PGC_reserved is not similar to PGC_need_scrub, which
>> alloc_heap_pages() masks out the way you also have it here.
>>
> 
> I understand that you prefer the if() condition to be phrased as follows:
>  	if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
> I agree that PGC_reserved shall be treated the same way as PGC_state_free.
> Hmmm, as for why I don't fold the ASSERT(): do you mean
> ASSERT(pg[i].count_info == (PGC_state_free | PGC_reserved))?

No. By converting to the suggested construct the ASSERT() disappears
by way of folding _into_ the if().
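I.e. just (sketch)

    if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
    {
        /* the (adjusted) printk() and BUG() from above */
    }

with no separate ASSERT() ahead of it.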

>> As to the printk() - the extra verbosity compared to the original isn't helpful or
>> necessary imo. The message needs to be distinguishable from the other one,
>> yes, so it would better mention "static" in some way. But the prefix you have is
>> too long for my taste (and lacks a separating blank anyway).
>>
> 
> If you don't like the extra verbosity, maybe just
> " Static pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x.\n"?

Something along these lines, yes, but I wonder how difficult it is
to take the original message and insert "static" at a suitable place.
Any part you omit would again want justifying. Personally I'd go with
"pg[%u] static MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n" unless any
of the parts are provably pointless to log for static pages.
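As a sketch, assuming the operands simply mirror those of the existing message
in alloc_heap_pages() (and with i narrowed to unsigned int to match %u):

    printk(XENLOG_ERR
           "pg[%u] static MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n",
           i, mfn_x(page_to_mfn(pg + i)),
           pg[i].count_info, pg[i].v.free.order,
           pg[i].u.inuse.type_info, pg[i].tlbflush_timestamp);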

>>> @@ -2434,6 +2512,57 @@ struct page_info *alloc_domheap_pages(
>>>      return pg;
>>>  }
>>>
>>> +#ifdef CONFIG_STATIC_ALLOCATION
>>> +/*
>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory,
>>> + * then assign them to one specific domain #d.
>>> + * It is the equivalent of alloc_domheap_pages for static memory.
>>> + */
>>> +struct page_info *alloc_domstatic_pages(
>>> +        struct domain *d, unsigned long nr_mfns, mfn_t smfn,
>>> +        unsigned int memflags)
>>> +{
>>> +    struct page_info *pg = NULL;
>>> +    unsigned long dma_size;
>>> +
>>> +    ASSERT(!in_irq());
>>> +
>>> +    if ( !dma_bitsize )
>>> +        memflags &= ~MEMF_no_dma;
>>> +    else
>>> +    {
>>> +        if ( (dma_bitsize - PAGE_SHIFT) > 0 )
>>> +        {
>>> +            dma_size = 1ul << (dma_bitsize - PAGE_SHIFT);
>>> +            /* Starting address shall meet the DMA limitation. */
>>> +            if ( mfn_x(smfn) < dma_size )
>>> +                return NULL;
>>
>> I think I did ask this on v1 already: Why the first page? Static memory regions,
>> unlike buddy allocator zones, can cross power-of-2 address boundaries. Hence
>> it ought to be the last page that gets checked for fitting address width
>> restriction requirements.
>>
>> And then - is this necessary at all? Shouldn't "pre-defined by configuration
>> using physical address ranges" imply the memory designated for a guest fits its
>> DMA needs?
>>
> 
> Hmmm, in my understanding, this is the DMA restriction when using the buddy allocator:
>     else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
>         pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
> dma_zone restricts the lowest buddy allocator zone to allocate from, so I am
> thinking that here it shall likewise restrict the first page.
> 
> imo, if this is left entirely to the user's configuration, the DMA restriction
> could also end up being missed?

Did you read my earlier reply? Again: The difference is that ordinary
allocations (buddies) can't cross zone boundaries. Hence it is
irrelevant if you check DMA properties on the first or last page - both
will have the same number of significant bits. The same is - afaict -
not true for static allocation ranges.
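To illustrate with made-up numbers (standalone snippet, nothing Xen-specific
about it): a buddy of order 4 at MFN 0x30 covers 0x30-0x3f and stays below the
0x40 boundary, while a static range of 16 pages at MFN 0x38 covers 0x38-0x47
and crosses it.

    #include <stdio.h>

    /* Number of significant bits of a non-zero value. */
    static int bits(unsigned long v)
    {
        return 8 * sizeof(v) - __builtin_clzl(v);
    }

    int main(void)
    {
        unsigned long smfn = 0x38, nr_mfns = 16;
        unsigned long last = smfn + nr_mfns - 1;

        /* Prints 6 bits for the first MFN and 7 bits for the last one. */
        printf("first %#lx -> %d bits, last %#lx -> %d bits\n",
               smfn, bits(smfn), last, bits(last));
        return 0;
    }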

Of course, as expressed before, a question is whether DMA suitability
needs checking in the first place for static allocations: I'd view it
as mis-configuration if a domain was provided memory it can't really
use properly.

Jan




Thread overview: 49+ messages
2021-06-07  2:43 [PATCH V2 0/9] Domain on Static Allocation Penny Zheng
2021-06-07  2:43 ` [PATCH 1/9] xen/arm: introduce domain " Penny Zheng
2021-06-07  2:43 ` [PATCH 2/9] xen/arm: introduce PGC_reserved Penny Zheng
2021-06-30 17:44   ` Julien Grall
2021-07-05  3:09     ` Penny Zheng
2021-06-07  2:43 ` [PATCH 3/9] xen/arm: introduce CONFIG_STATIC_ALLOCATION Penny Zheng
2021-06-07  6:17   ` Jan Beulich
2021-06-30 17:45   ` Julien Grall
2021-07-05  3:16     ` Penny Zheng
2021-06-07  2:43 ` [PATCH 4/9] xen/arm: static memory initialization Penny Zheng
2021-06-10  9:35   ` Jan Beulich
2021-06-30 17:46     ` Julien Grall
2021-07-05  5:22       ` Penny Zheng
2021-07-05  7:14         ` Penny Zheng
2021-07-05  7:50           ` Jan Beulich
2021-07-05  9:19             ` Penny Zheng
2021-07-05  7:48         ` Jan Beulich
2021-06-30 18:09   ` Julien Grall
2021-07-05  7:28     ` Penny Zheng
2021-07-06  9:09       ` Julien Grall
2021-07-06  9:20         ` Penny Zheng
2021-07-06  9:26           ` Julien Grall
2021-06-07  2:43 ` [PATCH 5/9] xen: introduce assign_pages_nr Penny Zheng
2021-06-10  9:49   ` Jan Beulich
2021-06-30 18:29     ` Julien Grall
2021-07-01  8:26       ` Jan Beulich
2021-07-01  9:24         ` Julien Grall
2021-07-01 10:13           ` Jan Beulich
2021-06-07  2:43 ` [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages Penny Zheng
2021-06-10 10:23   ` Jan Beulich
2021-07-06  5:58     ` Penny Zheng
2021-07-06  6:53       ` Jan Beulich [this message]
2021-07-06  9:39         ` Julien Grall
2021-07-06  9:59           ` Jan Beulich
2021-07-06 10:31             ` Julien Grall
2021-07-08  9:09         ` Penny Zheng
2021-07-08 10:06           ` Jan Beulich
2021-07-08 11:07             ` Penny Zheng
2021-06-07  2:43 ` [PATCH 7/9] xen/arm: take care of concurrency on static memory allocation Penny Zheng
2021-06-10 10:53   ` Jan Beulich
2021-06-07  2:43 ` [PATCH 8/9] xen/arm: check `xen,static-mem` property during domain construction Penny Zheng
2021-07-03 13:26   ` Julien Grall
2021-07-06  6:31     ` Penny Zheng
2021-07-06  6:57       ` Jan Beulich
2021-07-06  7:35         ` Penny Zheng
2021-07-06  9:22       ` Julien Grall
2021-06-07  2:43 ` [PATCH 9/9] xen/arm: introduce allocate_static_memory Penny Zheng
2021-07-03 14:18   ` Julien Grall
2021-07-06  7:30     ` Penny Zheng
