From: "Jan Beulich" <JBeulich@suse.com>
To: Sergey Dyasli <sergey.dyasli@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xen.org, Wei Liu <wei.liu2@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH v1] x86/dom0: use MEMF_no_scrub for Dom0 RAM allocation
Date: Tue, 20 Nov 2018 10:16:57 -0700	[thread overview]
Message-ID: <5BF4418902000078001FE36A@prv1-mh.provo.novell.com> (raw)
In-Reply-To: <20181120170055.16309-1-sergey.dyasli@citrix.com>

>>> On 20.11.18 at 18:00, <sergey.dyasli@citrix.com> wrote:
> Now that idle scrub is the default option, all memory is marked as dirty
> and alloc_domheap_pages() will do eager scrubbing by default. This can
> lead to longer Dom0 construction and potentially to a watchdog timeout,
> especially on older H/W (e.g. Harpertown).
> 
> Pass MEMF_no_scrub to optimise this process since there is little point
> in scrubbing memory for Dom0 RAM.

Good idea.

> --- a/xen/arch/x86/pv/dom0_build.c
> +++ b/xen/arch/x86/pv/dom0_build.c
> @@ -239,7 +239,8 @@ static struct page_info * __init alloc_chunk(struct domain *d,
>          order = last_order;
>      else if ( max_pages & (max_pages - 1) )
>          --order;
> -    while ( (page = alloc_domheap_pages(d, order, dom0_memflags)) == NULL )
> +    while ( (page = alloc_domheap_pages(d, order, dom0_memflags |
> +                                                  MEMF_no_scrub)) == NULL )
>          if ( order-- == 0 )
>              break;
>      if ( page )
> @@ -265,7 +266,7 @@ static struct page_info * __init alloc_chunk(struct domain *d,
>  
>          if ( d->tot_pages + (1 << order) > d->max_pages )
>              continue;
> -        pg2 = alloc_domheap_pages(d, order, MEMF_exact_node);
> +        pg2 = alloc_domheap_pages(d, order, MEMF_exact_node | MEMF_no_scrub);
>          if ( pg2 > page )
>          {
>              free_domheap_pages(page, free_order);

There are quite a few more allocations further up from here. Any reason
you don't convert those as well, especially since some of them even
clear_page() what they've just allocated?

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview: 3 messages
2018-11-20 17:00 [PATCH v1] x86/dom0: use MEMF_no_scrub for Dom0 RAM allocation Sergey Dyasli
2018-11-20 17:16 ` Jan Beulich [this message]
2018-11-21  9:22   ` Sergey Dyasli
