From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Wei Chen <Wei.Chen@arm.com>
Subject: RE: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages
Date: Thu, 8 Jul 2021 11:07:18 +0000	[thread overview]
Message-ID: <VE1PR08MB5215EC648B15AF57D2E7051EF7199@VE1PR08MB5215.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <886da191-602d-5dc6-8a4c-777aed90fb09@suse.com>

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Thursday, July 8, 2021 6:06 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>;
> xen-devel@lists.xenproject.org; sstabellini@kernel.org;
> Julien Grall <julien@xen.org>; Wei Chen <Wei.Chen@arm.com>
> Subject: Re: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and
> alloc_domstatic_pages
> 
> On 08.07.2021 11:09, Penny Zheng wrote:
> > Hi Jan
> >
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, July 6, 2021 2:54 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; xen-devel@lists.xenproject.org;
> >> sstabellini@kernel.org; Julien Grall <julien@xen.org>
> >> Subject: Re: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and
> >> alloc_domstatic_pages
> >>
> >> On 06.07.2021 07:58, Penny Zheng wrote:
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: Thursday, June 10, 2021 6:23 PM
> >>>>
> >>>> On 07.06.2021 04:43, Penny Zheng wrote:
> >>>>> --- a/xen/common/page_alloc.c
> >>>>> +++ b/xen/common/page_alloc.c
> >>>>> @@ -1065,6 +1065,75 @@ static struct page_info *alloc_heap_pages(
> >>>>>      return pg;
> >>>>>  }
> >>>>>
> >>>>> +#ifdef CONFIG_STATIC_ALLOCATION
> >>>>> +/*
> >>>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static
> >>>>> + * memory. It is the equivalent of alloc_heap_pages for static memory.
> >>>>> + */
> >>>>> +static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
> >>>>> +                                               mfn_t smfn,
> >>>>> +                                               unsigned int memflags)
> >>>>> +{
> >>>>> +    bool need_tlbflush = false;
> >>>>> +    uint32_t tlbflush_timestamp = 0;
> >>>>> +    unsigned long i;
> >>>>> +    struct page_info *pg;
> >>>>> +
> >>>>> +    /* For now, it only supports allocating at specified address. */
> >>>>> +    if ( !mfn_valid(smfn) || !nr_mfns )
> >>>>> +    {
> >>>>> +        printk(XENLOG_ERR
> >>>>> +               "Invalid %lu static memory starting at %"PRI_mfn"\n",
> >>>>
> >>>> Reading a log containing e.g. "Invalid 0 static memory starting at
> >>>> ..." I don't think I would recognize that the "0" is the count of pages.
> >>>
> >>> Sure. How about "try to allocate out of range page %"PRI_mfn"\n"?
> >>
> >> This still doesn't convey _both_ parts of the if() that would cause
> >> the log message to be issued.
> >>
> >
> > Sorry. How about
> > "
> >         printk(XENLOG_ERR
> >                "Either out-of-range static memory starting at %"PRI_mfn""
> >                "or invalid number of pages: %ul.\n",
> >                mfn_x(smfn), nr_mfns);
> > "
> 
> I'm sorry - while now you convey both aspects, the message has become too
> verbose. What's wrong with "Invalid static memory request: ... pages at ...\"?
> But I wonder anyway if a log message is appropriate here in the first place.
> 

Sorry for my poor English. :/
I just have a habit (maybe not a good one) of printing error messages. If you
think the log itself is verbose here, I'll remove it. ;)
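For reference, a single line along the shape Jan suggests could look like the sketch below. It is stand-alone for illustration only: XENLOG_ERR and PRI_mfn are stand-in definitions so it builds outside the Xen tree, and the helper name is hypothetical (in the real allocator this would just be the printk() call under the same if()):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-ins for Xen's macros, so the sketch compiles outside the tree. */
#define XENLOG_ERR "(XEN) "
#define PRI_mfn    "05lx"

/*
 * Format one terse line covering both failure causes at once
 * (bad starting MFN and/or a zero page count).
 */
static int format_bad_static_request(char *buf, size_t len,
                                     unsigned long nr_mfns,
                                     unsigned long smfn)
{
    return snprintf(buf, len,
                    XENLOG_ERR
                    "Invalid static memory request: %lu pages at %"PRI_mfn"\n",
                    nr_mfns, smfn);
}
```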

> >>>>> @@ -2434,6 +2512,57 @@ struct page_info *alloc_domheap_pages(
> >>>>>      return pg;
> >>>>>  }
> >>>>>
> >>>>> +#ifdef CONFIG_STATIC_ALLOCATION
> >>>>> +/*
> >>>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static
> >>>>> + * memory, then assign them to one specific domain #d.
> >>>>> + * It is the equivalent of alloc_domheap_pages for static memory.
> >>>>> + */
> >>>>> +struct page_info *alloc_domstatic_pages(
> >>>>> +        struct domain *d, unsigned long nr_mfns, mfn_t smfn,
> >>>>> +        unsigned int memflags)
> >>>>> +{
> >>>>> +    struct page_info *pg = NULL;
> >>>>> +    unsigned long dma_size;
> >>>>> +
> >>>>> +    ASSERT(!in_irq());
> >>>>> +
> >>>>> +    if ( !dma_bitsize )
> >>>>> +        memflags &= ~MEMF_no_dma;
> >>>>> +    else
> >>>>> +    {
> >>>>> +        if ( (dma_bitsize - PAGE_SHIFT) > 0 )
> >>>>> +        {
> >>>>> +            dma_size = 1ul << (dma_bitsize - PAGE_SHIFT);
> >>>>> +            /* Starting address shall meet the DMA limitation. */
> >>>>> +            if ( mfn_x(smfn) < dma_size )
> >>>>> +                return NULL;
> >>>>
> >>>> I think I did ask this on v1 already: Why the first page? Static
> >>>> memory regions, unlike buddy allocator zones, can cross power-of-2
> >>>> address boundaries. Hence it ought to be the last page that gets
> >>>> checked for fitting address width restriction requirements.
> >>>>
> >>>> And then - is this necessary at all? Shouldn't "pre-defined by
> >>>> configuration using physical address ranges" imply the memory
> >>>> designated for a guest fits its DMA needs?
> >>>>
> >>>
> >>> Hmmm, in my understanding, here is the DMA restriction when using the
> >>> buddy allocator:
> >>>     else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
> >>>         pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
> >>> dma_zone restricts the starting buddy allocator zone, so I am thinking
> >>> that, here, it shall restrict the first page.
> >>>
> >>> imo, if we let the user define the ranges, the DMA restriction could
> >>> also be missed?
> >>
> >> Did you read my earlier reply? Again: The difference is that ordinary
> >> allocations (buddies) can't cross zone boundaries. Hence it is
> >> irrelevant if you check DMA properties on the first or last page -
> >> both will have the same number of significant bits. The same is -
> >> afaict - not true for static allocation ranges.
> >>
> >
> > True.
> >
> > Ordinary allocations (buddies) can't cross zone boundaries, so I
> > understand that, following the logic in "alloc_heap_pages(dma_zone + 1,
> > zone_hi, order, memflags, d);", the pages with the smallest addresses
> > shall be allocated from "dma_zone + 1". Like you said, it is irrelevant
> > whether you check DMA properties on the first or last page, but imo,
> > no matter first or last page, both shall be larger than 2^(dma_zone + 1).
> >
> > Taking 32 as dma_bitsize, the memory with this DMA restriction
> > allocated by "alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags,
> > d);" shall be at least above 4G.
> 
> DMA restrictions are always "needs to be no more than N bits".
> 
> > That's why I kept comparing against the first page of the static
> > allocation: I was following that "more than" logic here.
> >
> > But you're right. I did a little investigation into the Arm DMA
> > limitation: still taking dma_bitsize=32 as an example, we want the
> > actually allocated memory to be smaller than 4G, not more than.
> > So I think the logic behind the code line "alloc_heap_pages(dma_zone
> > + 1, zone_hi, order, memflags, d);" is not right for Arm, and it shall
> > be changed to "alloc_heap_pages(zone_lo, dma_zone + 1, order, memflags,
> > d);" as a correction.
> 
> But this step is to _avoid_ the DMA-reserved part of the heap.

Thanks for the explanation!

I totally misunderstood the usage of dma_bitsize. It turns out to designate the
DMA-reserved part of the heap. Putting the DMA limitation on zone_lo is
absolutely right on Arm as well.
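Jan's earlier point about boundaries can be made concrete with a small sketch: a buddy of order n starts 2^n-aligned, so its first and last MFN agree in all high bits and can never straddle a power-of-2 boundary, while an arbitrary static range can. The helper below is hypothetical and stand-alone, purely to illustrate that distinction:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Return whether [first_mfn, first_mfn + nr_mfns) straddles the
 * power-of-2 boundary at bit #boundary_bit.  For an aligned buddy this
 * is never the case (first and last MFN share all bits above its
 * order); for a static range it can be, which is why the address-width
 * check must look at the LAST page of the range, not the first.
 */
static bool crosses_pow2_boundary(unsigned long first_mfn,
                                  unsigned long nr_mfns,
                                  unsigned int boundary_bit)
{
    unsigned long last_mfn = first_mfn + nr_mfns - 1;

    return (first_mfn >> boundary_bit) != (last_mfn >> boundary_bit);
}
```

For example, an order-4 buddy at MFN 0x10 stays within any boundary at bit 5 or above, whereas a two-page static range starting at MFN 0xFFFFF crosses the bit-20 boundary (the 4G line with a 4K page size) even though its first page is below it.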

> The caller requests address restricted memory by passing suitable memflags. If
> the request doesn't require access to the DMA-reserved part of the heap
> (dma_zone < zone_hi) we first try to get memory from there. Only if that fails
> will we fall back and try taking memory from the lower region. IOW the
> problem with your code is more fundamental: You use dma_bitsize when
> really you ought to extract the caller's requested restriction (if any) from
> memflags. I would assume that dma_bitsize is orthogonal to static memory, i.e.
> you don't need to try to preserve low memory.
> 

True. There is no need to preserve low memory for static allocations.
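What "extract the caller's requested restriction from memflags" might look like, as a stand-alone sketch: the _MEMF_bits shift mirrors Xen's MEMF_bits() encoding, but the helper name, the mfn_t stand-ins, and PAGE_SHIFT=12 are assumptions made so the fragment compiles outside the tree. Note it checks the last page of the range, per Jan's earlier remark:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins mirroring Xen's definitions, for illustration only. */
#define PAGE_SHIFT  12
#define _MEMF_bits  24

typedef struct { unsigned long m; } mfn_t;
static inline unsigned long mfn_x(mfn_t m) { return m.m; }
static inline mfn_t _mfn(unsigned long m) { mfn_t x = { m }; return x; }

/*
 * Honour the caller's address-width restriction (MEMF_bits-style),
 * instead of dma_bitsize: every page of the static range, in
 * particular the LAST one, must be addressable within 'bits' bits.
 */
static bool staticmem_addr_ok(mfn_t smfn, unsigned long nr_mfns,
                              unsigned int memflags)
{
    unsigned int bits = (memflags >> _MEMF_bits) & 0xff;

    if ( !bits )
        return true;   /* no restriction requested by the caller */
    if ( bits <= PAGE_SHIFT )
        return false;  /* cannot express even a single page */

    /* The last MFN must be below 2^(bits - PAGE_SHIFT). */
    return (mfn_x(smfn) + nr_mfns - 1) < (1UL << (bits - PAGE_SHIFT));
}
```

With a 32-bit restriction and 4K pages the limit is MFN 0x100000, so a range ending at MFN 0xFFFFF passes while one reaching 0x100000 is rejected.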

> Jan

Penny Zheng


Thread overview: 49+ messages
2021-06-07  2:43 [PATCH V2 0/9] Domain on Static Allocation Penny Zheng
2021-06-07  2:43 ` [PATCH 1/9] xen/arm: introduce domain " Penny Zheng
2021-06-07  2:43 ` [PATCH 2/9] xen/arm: introduce PGC_reserved Penny Zheng
2021-06-30 17:44   ` Julien Grall
2021-07-05  3:09     ` Penny Zheng
2021-06-07  2:43 ` [PATCH 3/9] xen/arm: introduce CONFIG_STATIC_ALLOCATION Penny Zheng
2021-06-07  6:17   ` Jan Beulich
2021-06-30 17:45   ` Julien Grall
2021-07-05  3:16     ` Penny Zheng
2021-06-07  2:43 ` [PATCH 4/9] xen/arm: static memory initialization Penny Zheng
2021-06-10  9:35   ` Jan Beulich
2021-06-30 17:46     ` Julien Grall
2021-07-05  5:22       ` Penny Zheng
2021-07-05  7:14         ` Penny Zheng
2021-07-05  7:50           ` Jan Beulich
2021-07-05  9:19             ` Penny Zheng
2021-07-05  7:48         ` Jan Beulich
2021-06-30 18:09   ` Julien Grall
2021-07-05  7:28     ` Penny Zheng
2021-07-06  9:09       ` Julien Grall
2021-07-06  9:20         ` Penny Zheng
2021-07-06  9:26           ` Julien Grall
2021-06-07  2:43 ` [PATCH 5/9] xen: introduce assign_pages_nr Penny Zheng
2021-06-10  9:49   ` Jan Beulich
2021-06-30 18:29     ` Julien Grall
2021-07-01  8:26       ` Jan Beulich
2021-07-01  9:24         ` Julien Grall
2021-07-01 10:13           ` Jan Beulich
2021-06-07  2:43 ` [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages Penny Zheng
2021-06-10 10:23   ` Jan Beulich
2021-07-06  5:58     ` Penny Zheng
2021-07-06  6:53       ` Jan Beulich
2021-07-06  9:39         ` Julien Grall
2021-07-06  9:59           ` Jan Beulich
2021-07-06 10:31             ` Julien Grall
2021-07-08  9:09         ` Penny Zheng
2021-07-08 10:06           ` Jan Beulich
2021-07-08 11:07             ` Penny Zheng [this message]
2021-06-07  2:43 ` [PATCH 7/9] xen/arm: take care of concurrency on static memory allocation Penny Zheng
2021-06-10 10:53   ` Jan Beulich
2021-06-07  2:43 ` [PATCH 8/9] xen/arm: check `xen,static-mem` property during domain construction Penny Zheng
2021-07-03 13:26   ` Julien Grall
2021-07-06  6:31     ` Penny Zheng
2021-07-06  6:57       ` Jan Beulich
2021-07-06  7:35         ` Penny Zheng
2021-07-06  9:22       ` Julien Grall
2021-06-07  2:43 ` [PATCH 9/9] xen/arm: introduce allocate_static_memory Penny Zheng
2021-07-03 14:18   ` Julien Grall
2021-07-06  7:30     ` Penny Zheng
