* [PATCH v1] mm/page_alloc: fix MEMF_no_dma allocations for single NUMA
@ 2019-01-07 11:27 Sergey Dyasli
2019-01-07 12:05 ` Jan Beulich
0 siblings, 1 reply; 3+ messages in thread
From: Sergey Dyasli @ 2019-01-07 11:27 UTC (permalink / raw)
To: xen-devel
Cc: Sergey Dyasli, Wei Liu, George Dunlap, Andrew Cooper,
Julien Grall, Jan Beulich, Boris Ostrovsky, Roger Pau Monné
Currently dma_bitsize is zero by default on single NUMA node machines.
This makes all alloc_domheap_pages() calls with MEMF_no_dma return NULL.
There is only one user of MEMF_no_dma: dom0_memflags, which is used
during memory allocation for Dom0. A failing allocation with the default
dom0_memflags is especially severe in the PV Dom0 case: it makes
alloc_chunk() fall back to a suboptimal 2MB allocation algorithm that
searches for higher memory addresses.
This can lead to an NMI watchdog timeout during PV Dom0 construction
on some machines, which can be worked around by manually specifying
"dma_bits" on Xen's command line.
Fix the issue by initialising dma_bitsize even on single NUMA node machines.
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien.grall@arm.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
xen/common/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index e591601f9c..4515282c27 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1863,7 +1863,7 @@ void __init end_boot_allocator(void)
nr_bootmem_regions = 0;
init_heap_pages(virt_to_page(bootmem_region_list), 1);
- if ( !dma_bitsize && (num_online_nodes() > 1) )
+ if ( !dma_bitsize )
dma_bitsize = arch_get_dma_bitsize();
printk("Domain heap initialised");
--
2.17.1
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
* Re: [PATCH v1] mm/page_alloc: fix MEMF_no_dma allocations for single NUMA
2019-01-07 11:27 [PATCH v1] mm/page_alloc: fix MEMF_no_dma allocations for single NUMA Sergey Dyasli
@ 2019-01-07 12:05 ` Jan Beulich
2019-01-08 11:09 ` Sergey Dyasli
0 siblings, 1 reply; 3+ messages in thread
From: Jan Beulich @ 2019-01-07 12:05 UTC (permalink / raw)
To: Sergey Dyasli
Cc: Wei Liu, George Dunlap, Andrew Cooper, xen-devel, Julien Grall,
Boris Ostrovsky, Roger Pau Monne
>>> On 07.01.19 at 12:27, <sergey.dyasli@citrix.com> wrote:
> Currently dma_bitsize is zero by default on single NUMA node machines.
> This makes all alloc_domheap_pages() calls with MEMF_no_dma return NULL.
>
> There is only one user of MEMF_no_dma: dom0_memflags, which is used
> during memory allocation for Dom0. A failing allocation with the default
> dom0_memflags is especially severe in the PV Dom0 case: it makes
> alloc_chunk() fall back to a suboptimal 2MB allocation algorithm that
> searches for higher memory addresses.
>
> This can lead to an NMI watchdog timeout during PV Dom0 construction
> on some machines, which can be worked around by manually specifying
> "dma_bits" on Xen's command line.
>
> Fix the issue by initialising dma_bitsize even on single NUMA machines.
I've not yet looked at why exactly this was done for multi-node
systems only, but in any event this change renders somewhat
stale the comment next to the dma_bitsize definition.
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1863,7 +1863,7 @@ void __init end_boot_allocator(void)
> nr_bootmem_regions = 0;
> init_heap_pages(virt_to_page(bootmem_region_list), 1);
>
> - if ( !dma_bitsize && (num_online_nodes() > 1) )
> + if ( !dma_bitsize )
> dma_bitsize = arch_get_dma_bitsize();
Did you consider the alternative of leaving this code alone and
instead doing
if ( !dma_bitsize )
memflags &= ~MEMF_no_dma;
else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
in alloc_domheap_pages(), which would also address the same
issue in the case of arch_get_dma_bitsize() returning zero?
Jan
* Re: [PATCH v1] mm/page_alloc: fix MEMF_no_dma allocations for single NUMA
2019-01-07 12:05 ` Jan Beulich
@ 2019-01-08 11:09 ` Sergey Dyasli
0 siblings, 0 replies; 3+ messages in thread
From: Sergey Dyasli @ 2019-01-08 11:09 UTC (permalink / raw)
To: Jan Beulich
Cc: Wei Liu, George Dunlap, Andrew Cooper, xen-devel, Julien Grall,
Boris Ostrovsky, Roger Pau Monne
On 07/01/2019 12:05, Jan Beulich wrote:
>>>> On 07.01.19 at 12:27, <sergey.dyasli@citrix.com> wrote:
>> Currently dma_bitsize is zero by default on single NUMA node machines.
>> This makes all alloc_domheap_pages() calls with MEMF_no_dma return NULL.
>>
>> There is only one user of MEMF_no_dma: dom0_memflags, which is used
>> during memory allocation for Dom0. A failing allocation with the default
>> dom0_memflags is especially severe in the PV Dom0 case: it makes
>> alloc_chunk() fall back to a suboptimal 2MB allocation algorithm that
>> searches for higher memory addresses.
>>
>> This can lead to an NMI watchdog timeout during PV Dom0 construction
>> on some machines, which can be worked around by manually specifying
>> "dma_bits" on Xen's command line.
>>
>> Fix the issue by initialising dma_bitsize even on single NUMA machines.
>
> I've not yet looked at why exactly this was done for multi-node
> systems only, but in any event this change renders somewhat
> stale the comment next to the dma_bitsize definition.
>
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -1863,7 +1863,7 @@ void __init end_boot_allocator(void)
>> nr_bootmem_regions = 0;
>> init_heap_pages(virt_to_page(bootmem_region_list), 1);
>>
>> - if ( !dma_bitsize && (num_online_nodes() > 1) )
>> + if ( !dma_bitsize )
>> dma_bitsize = arch_get_dma_bitsize();
>
> Did you consider the alternative of leaving this code alone and
> instead doing
>
> if ( !dma_bitsize )
> memflags &= ~MEMF_no_dma;
> else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
> pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
>
> in alloc_domheap_pages(), which would also address the same
> issue in the case of arch_get_dma_bitsize() returning zero?
I like this suggestion. There is no point in MEMF_no_dma if the DMA zone
doesn't exist, so the flag can simply be ignored. The allocator should
prefer higher memory addresses first, so DMA (<4G) memory shouldn't end
up allocated as Dom0 RAM (provided there is enough memory above 4G).
I'll prepare a v2 soon.
Thanks,
Sergey