From: "Jan Beulich" <JBeulich@suse.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	andrei.semenov@bertin.fr, Wei Liu <wei.liu2@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 2/2] x86/dom0: improve paging memory usage calculations
Date: Tue, 11 Dec 2018 08:19:34 -0700	[thread overview]
Message-ID: <5C0FD5860200007800205220@prv1-mh.provo.novell.com> (raw)
In-Reply-To: <20181205145500.11989-3-roger.pau@citrix.com>

>>> On 05.12.18 at 15:55, <roger.pau@citrix.com> wrote:
> +unsigned long __init dom0_hap_pages(const struct domain *d,
> +                                    unsigned long nr_pages)
> +{
> +    /*
> +     * Attempt to account for at least some of the MMIO regions by adding the
> +     * size of the holes in the memory map to the amount of pages to map. Note
> +     * this will obviously not account for MMIO regions that are past the last
> +     * RAM range in the memory map.
> +     */
> +    nr_pages += max_page - total_pages;
> +    /*
> +     * Approximate the memory required for the HAP/IOMMU page tables by
> +     * pessimistically assuming each page will consume a 8 byte page table
> +     * entry.
> +     */
> +    return DIV_ROUND_UP(nr_pages * 8, PAGE_SIZE << PAGE_ORDER_4K);

With enough memory handed to Dom0, the memory needed for the L2 and
higher level page tables will matter as well.
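
Just as a rough sketch of what I mean (not meant as actual patch code;
hap_pt_pages() is a made-up name, and I'm assuming plain 4-level
paging with 512 entries per table):

static unsigned long __init hap_pt_pages(unsigned long nr_pages)
{
    unsigned long tables = 0;
    unsigned int level;

    /*
     * The leaf level needs nr_pages / 512 tables, the level above it
     * nr_pages / 512^2, and so on, with every table occupying one
     * page.
     */
    for ( level = 1; level <= 4; level++ )
    {
        nr_pages = DIV_ROUND_UP(nr_pages, 512);
        tables += nr_pages;
    }

    return tables;
}

Each level above the leaf one only contributes a further factor of 512
less, but for a Dom0 in the terabyte range the L2 tables alone already
amount to a few megabytes.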

I'm anyway having difficulty seeing why HAP and shadow would
have to use different calculations, the more so since shadow relies
on the same P2M code that HAP uses in the AMD/SVM case.

Plus, as someone else said already (iirc), I don't think we can
continue to neglect the MMIO space needs for MMCFG and PCI devices,
especially with devices having multi-GB BARs.
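
Just to put a number on the MMCFG side (again only an illustration;
mmcfg_map_pages() and its nr_segments parameter are invented for the
example, and BAR sizes would need a separate walk of the device list):

/*
 * A single ECAM segment covers 256 buses * 32 devices * 8 functions
 * * 4K of config space = 256MB, i.e. 64k 4K pages to map.
 */
static unsigned long __init mmcfg_map_pages(unsigned int nr_segments)
{
    return nr_segments * ((256UL << 20) >> PAGE_SHIFT);
}

At 8 bytes of leaf PTE per page that's only 512k of page table memory
per segment, so it's really the BARs which are the big unknown.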

> +}
> +
> +

No double blank lines please.

> @@ -324,8 +342,13 @@ unsigned long __init dom0_compute_nr_pages(
>          if ( !need_paging )
>              break;
>  
> -        /* Reserve memory for shadow or HAP. */
> -        avail -= dom0_shadow_pages(d, nr_pages);
> +        /* Reserve memory for CPU and IOMMU page tables. */
> +        if ( paging_mode_hap(d) )
> +            avail -= dom0_hap_pages(d, nr_pages) *
> +                     (iommu_hap_pt_share ? 1 : 2);

Use "<< !iommu_hap_pt_share" instead?

> +        else
> +            avail -= dom0_shadow_pages(d, nr_pages) +
> +                     dom0_hap_pages(d, nr_pages);
>      }

Doesn't dom0_shadow_pages() (mean to) already include the
amount needed for the P2M?

Jan