From: "Jan Beulich" <JBeulich@suse.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	andrei.semenov@bertin.fr, Wei Liu <wei.liu2@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 2/2] x86/dom0: improve paging memory usage calculations
Date: Tue, 11 Dec 2018 09:21:29 -0700
Message-ID: <5C0FE4090200007800205338@prv1-mh.provo.novell.com>
In-Reply-To: <20181211153651.kgltzqwy5cbj5rpq@mac>

>>> On 11.12.18 at 16:36, <roger.pau@citrix.com> wrote:
> On Tue, Dec 11, 2018 at 08:19:34AM -0700, Jan Beulich wrote:
>> >>> On 05.12.18 at 15:55, <roger.pau@citrix.com> wrote:
>> > +unsigned long __init dom0_hap_pages(const struct domain *d,
>> > +                                    unsigned long nr_pages)
>> > +{
>> > +    /*
>> > +     * Attempt to account for at least some of the MMIO regions by adding the
>> > +     * size of the holes in the memory map to the amount of pages to map. Note
>> > +     * this will obviously not account for MMIO regions that are past the last
>> > +     * RAM range in the memory map.
>> > +     */
>> > +    nr_pages += max_page - total_pages;
>> > +    /*
>> > +     * Approximate the memory required for the HAP/IOMMU page tables by
>> > +     * pessimistically assuming each page will consume an 8-byte page table
>> > +     * entry.
>> > +     */
>> > +    return DIV_ROUND_UP(nr_pages * 8, PAGE_SIZE << PAGE_ORDER_4K);
>> 
>> With enough memory handed to Dom0 the memory needed for
>> L2 and higher page tables will matter as well.
> 
> The above calculation assumes all chunks will be mapped with 4KB
> entries, but that is very unlikely, so there's some slack left
> over to cover the higher-level page tables.

Right, but there's no dependency on 2M and/or 1G pages being
available, nor does the comment give any hint towards that
implication.
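
To put numbers on it (mine, just for illustration): mapping 4GiB
with 4k entries alone needs the full 2048 L1 table pages the
formula accounts for, plus 4 L2 tables, an L3, and an L4 on top.
Only once 2M or 1G mappings get used does the estimate gain any
slack to absorb those higher levels.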

> If that doesn't seem enough, I can add some extra space
> here, maybe a +5% or +10%?

A percentage won't do, imo. From the memory map it should
be clear how many L2, L3, and L4 tables are going to be needed.
We do such a calculation in the PV case as well, after all.
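
Something along these lines is what I'd expect (a rough sketch
only; the name is made up, and it ignores the extra tables which
fragmentation from holes in the memory map can add per level):

static unsigned long __init dom0_upper_table_pages(unsigned long nr_pages)
{
    /* 512 8-byte entries fit in one 4k page table at every level. */
    unsigned long tables = DIV_ROUND_UP(nr_pages, 512); /* L1 tables */
    unsigned long extra = 0;
    unsigned int lvl;

    for ( lvl = 2; lvl <= 4; lvl++ )
    {
        /* Tables at one level become entries at the next one up. */
        tables = DIV_ROUND_UP(tables, 512);
        extra += tables;
    }

    return extra;
}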

>> I'm anyway having difficulty seeing why HAP and shadow would
>> have to use different calculations, especially since shadow relies
>> on the same P2M code that HAP uses in the AMD/SVM case.
> 
> For one, shadow needs to take the number of vCPUs into account,
> while HAP doesn't.

Yes, and as said - adding that shadow-specific amount on top of
the generic calculation would seem better to me.
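
I.e. structurally something like this (names made up, merely to
show the shape I have in mind):

    /* Generic P2M/page-table estimate, valid for both modes, ... */
    avail -= dom0_paging_pages(d, nr_pages);
    /* ... plus the per-vCPU shadow overhead only where it applies. */
    if ( opt_dom0_shadow )
        avail -= dom0_shadow_vcpu_pages(d);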

>> Plus, as iirc was said by someone else already, I don't think we
>> can (continue to) neglect the MMIO space needs for MMCFG
>> and PCI devices, especially with devices having multi-Gb BARs.
> 
> Well, there's the comment above that notes this approach only takes
> into account the holes in the memory map as regions to be mapped. This
> can be improved later on, but I think the important point here is to
> know where these numbers come from in order to tweak them in the future.

You've given this same argument to Wei before. I agree the
calculation adjustments are an improvement even without
taking that other aspect into consideration, but I'm not happy
to see an important portion left out. What if the sum of all
BARs exceeds the amount of RAM? What if enough BARs are
so undesirably placed that every one of them needs a full
separate chain of L4, L3, L2, and L1 entries?
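
(Such a worst-case BAR costs up to three fresh table pages, one
each at L3, L2, and L1, beyond the shared L4, before even counting
the L1 entries needed for the BAR's own size.)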

>> > +        else
>> > +            avail -= dom0_shadow_pages(d, nr_pages) +
>> > +                     dom0_hap_pages(d, nr_pages);
>> >      }
>> 
>> Doesn't dom0_shadow_pages() (mean to) already include the
>> amount needed for the P2M?
> 
> libxl code mentions: "plus 1 page per MiB of RAM for the P2M map," so
> I guess the shadow calculation takes into account the memory used by
> the IOMMU page tables?

I think that comment refers to the P2M needs, not the IOMMU ones.
Iirc in shadow mode the IOMMU uses separate page tables, albeit I
don't recall why that is when the P2M is really only used by software
in that case.
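
For reference, iirc the libxl calculation that comment sits in is
roughly this (quoting from memory, so the details may be off):

    /* 256 pages (1MiB) per vCPU, plus 1 page per MiB of RAM for
     * the P2M map, plus 1 page per MiB of RAM to shadow resident
     * processes; the result is in KiB. */
    shadow_memkb = 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));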

Jan



