* [PATCH v1] x86/dom0: use MEMF_no_scrub for Dom0 RAM allocation
@ 2018-11-20 17:00 Sergey Dyasli
  2018-11-20 17:16 ` Jan Beulich
  0 siblings, 1 reply; 3+ messages in thread
From: Sergey Dyasli @ 2018-11-20 17:00 UTC
  To: xen-devel
  Cc: Sergey Dyasli, Roger Pau Monné, Wei Liu, Jan Beulich, Andrew Cooper

Now that idle scrub is the default option, all memory is marked as dirty
and alloc_domheap_pages() will do eager scrubbing by default. This can
lead to longer Dom0 construction and potentially to a watchdog timeout,
especially on older H/W (e.g. Harpertown).

Pass MEMF_no_scrub to optimise this process since there is little point
in scrubbing memory for Dom0 RAM.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/dom0_build.c | 2 +-
 xen/arch/x86/pv/dom0_build.c  | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 3e29cd30b8..12c20a4b66 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -101,7 +101,7 @@ static int __init pvh_populate_memory_range(struct domain *d,
         unsigned int range_order = get_order_from_pages(nr_pages + 1);
 
         order = min(range_order ? range_order - 1 : 0, order);
-        page = alloc_domheap_pages(d, order, dom0_memflags);
+        page = alloc_domheap_pages(d, order, dom0_memflags | MEMF_no_scrub);
         if ( page == NULL )
         {
             if ( order == 0 && dom0_memflags )
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index dc3c1e1202..f50a36c1f3 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -239,7 +239,8 @@ static struct page_info * __init alloc_chunk(struct domain *d,
         order = last_order;
     else if ( max_pages & (max_pages - 1) )
         --order;
-    while ( (page = alloc_domheap_pages(d, order, dom0_memflags)) == NULL )
+    while ( (page = alloc_domheap_pages(d, order, dom0_memflags |
+                                                  MEMF_no_scrub)) == NULL )
         if ( order-- == 0 )
             break;
     if ( page )
@@ -265,7 +266,7 @@ static struct page_info * __init alloc_chunk(struct domain *d,
 
         if ( d->tot_pages + (1 << order) > d->max_pages )
             continue;
-        pg2 = alloc_domheap_pages(d, order, MEMF_exact_node);
+        pg2 = alloc_domheap_pages(d, order, MEMF_exact_node | MEMF_no_scrub);
         if ( pg2 > page )
         {
             free_domheap_pages(page, free_order);
-- 
2.17.1
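
For context, the eager-scrubbing behaviour the commit message refers to can be
sketched roughly as below. This is an illustrative outline only, not the real
alloc_domheap_pages()/alloc_heap_pages() code: take_pages_from_heap() and
page_is_dirty() are hypothetical stand-ins, while MEMF_no_scrub and
scrub_one_page() are real Xen identifiers.

/*
 * Sketch: with idle scrubbing enabled, pages taken from the heap may
 * still be dirty, and the allocator scrubs them synchronously before
 * handing them out -- unless the caller passes MEMF_no_scrub.  That
 * synchronous scrub is the cost the patch avoids for Dom0 RAM.
 */
static struct page_info *alloc_sketch(struct domain *d, unsigned int order,
                                      unsigned int memflags)
{
    struct page_info *pg = take_pages_from_heap(d, order); /* hypothetical */
    unsigned long i;

    if ( pg && !(memflags & MEMF_no_scrub) )
        for ( i = 0; i < (1ul << order); i++ )
            if ( page_is_dirty(&pg[i]) )  /* hypothetical dirty check */
                scrub_one_page(&pg[i]);   /* zeroes the page contents */

    return pg;
}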


* Re: [PATCH v1] x86/dom0: use MEMF_no_scrub for Dom0 RAM allocation
  2018-11-20 17:00 [PATCH v1] x86/dom0: use MEMF_no_scrub for Dom0 RAM allocation Sergey Dyasli
@ 2018-11-20 17:16 ` Jan Beulich
  2018-11-21  9:22   ` Sergey Dyasli
  0 siblings, 1 reply; 3+ messages in thread
From: Jan Beulich @ 2018-11-20 17:16 UTC
  To: Sergey Dyasli; +Cc: Andrew Cooper, xen-devel, Wei Liu, Roger Pau Monne

>>> On 20.11.18 at 18:00, <sergey.dyasli@citrix.com> wrote:
> Now that idle scrub is the default option, all memory is marked as dirty
> and alloc_domheap_pages() will do eager scrubbing by default. This can
> lead to longer Dom0 construction and potentially to a watchdog timeout,
> especially on older H/W (e.g. Harpertown).
> 
> Pass MEMF_no_scrub to optimise this process since there is little point
> in scrubbing memory for Dom0 RAM.

Good idea.

> --- a/xen/arch/x86/pv/dom0_build.c
> +++ b/xen/arch/x86/pv/dom0_build.c
> @@ -239,7 +239,8 @@ static struct page_info * __init alloc_chunk(struct domain *d,
>          order = last_order;
>      else if ( max_pages & (max_pages - 1) )
>          --order;
> -    while ( (page = alloc_domheap_pages(d, order, dom0_memflags)) == NULL )
> +    while ( (page = alloc_domheap_pages(d, order, dom0_memflags |
> +                                                  MEMF_no_scrub)) == NULL )
>          if ( order-- == 0 )
>              break;
>      if ( page )
> @@ -265,7 +266,7 @@ static struct page_info * __init alloc_chunk(struct domain *d,
>  
>          if ( d->tot_pages + (1 << order) > d->max_pages )
>              continue;
> -        pg2 = alloc_domheap_pages(d, order, MEMF_exact_node);
> +        pg2 = alloc_domheap_pages(d, order, MEMF_exact_node | MEMF_no_scrub);
>          if ( pg2 > page )
>          {
>              free_domheap_pages(page, free_order);

There are quite a few more allocations up from here. Any reason
you don't convert those as well, especially since some of them even
clear_page() what they've just allocated?

Jan
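
The pattern Jan is pointing at looks roughly like the sketch below. It is not
a quote from dom0_build.c -- the surrounding context is invented -- but
alloc_domheap_pages(), __map_domain_page(), clear_page() and
unmap_domain_page() are real Xen interfaces:

/*
 * Sketch: a page whose contents are overwritten immediately after
 * allocation.  Any eager scrub done inside alloc_domheap_pages() would
 * be wasted work here, so passing MEMF_no_scrub is safe for such call
 * sites as well.
 */
struct page_info *pg = alloc_domheap_pages(d, 0, MEMF_no_scrub);
void *ptr;

if ( pg == NULL )
    return -ENOMEM;

ptr = __map_domain_page(pg);
clear_page(ptr);            /* overwrites whatever was in the page anyway */
unmap_domain_page(ptr);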




* Re: [PATCH v1] x86/dom0: use MEMF_no_scrub for Dom0 RAM allocation
  2018-11-20 17:16 ` Jan Beulich
@ 2018-11-21  9:22   ` Sergey Dyasli
  0 siblings, 0 replies; 3+ messages in thread
From: Sergey Dyasli @ 2018-11-21  9:22 UTC
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, Sergey Dyasli, xen-devel

On 20/11/2018 17:16, Jan Beulich wrote:
>>>> On 20.11.18 at 18:00, <sergey.dyasli@citrix.com> wrote:
>> Now that idle scrub is the default option, all memory is marked as dirty
>> and alloc_domheap_pages() will do eager scrubbing by default. This can
>> lead to longer Dom0 construction and potentially to a watchdog timeout,
>> especially on older H/W (e.g. Harpertown).
>>
>> Pass MEMF_no_scrub to optimise this process since there is little point
>> in scrubbing memory for Dom0 RAM.
> 
> Good idea.
> 
>> --- a/xen/arch/x86/pv/dom0_build.c
>> +++ b/xen/arch/x86/pv/dom0_build.c
>> @@ -239,7 +239,8 @@ static struct page_info * __init alloc_chunk(struct domain *d,
>>          order = last_order;
>>      else if ( max_pages & (max_pages - 1) )
>>          --order;
>> -    while ( (page = alloc_domheap_pages(d, order, dom0_memflags)) == NULL )
>> +    while ( (page = alloc_domheap_pages(d, order, dom0_memflags |
>> +                                                  MEMF_no_scrub)) == NULL )
>>          if ( order-- == 0 )
>>              break;
>>      if ( page )
>> @@ -265,7 +266,7 @@ static struct page_info * __init alloc_chunk(struct domain *d,
>>  
>>          if ( d->tot_pages + (1 << order) > d->max_pages )
>>              continue;
>> -        pg2 = alloc_domheap_pages(d, order, MEMF_exact_node);
>> +        pg2 = alloc_domheap_pages(d, order, MEMF_exact_node | MEMF_no_scrub);
>>          if ( pg2 > page )
>>          {
>>              free_domheap_pages(page, free_order);
> 
> There are quite a few more allocations up from here. Any reason
> you don't convert those as well, especially since some of them even
> clear_page() what they've just allocated?

Dom0 RAM just happens to be the largest allocation. But yes, it should be
safe to use MEMF_no_scrub in every alloc_domheap_pages() call during Dom0
construction. I'll send an updated patch after some testing.

--
Thanks,
Sergey
