[v3,2/2] arm64: mm: reserve CMA and crashkernel in ZONE_DMA32
diff mbox series

Message ID 20191107095611.18429-3-nsaenzjulienne@suse.de
State New, archived
Series
  • arm64: Fix CMA/crashkernel reservation

Commit Message

Nicolas Saenz Julienne Nov. 7, 2019, 9:56 a.m. UTC
With the introduction of ZONE_DMA in arm64 we moved the default CMA and
crashkernel reservations into that area. This caused a regression on big
machines that need large CMA and crashkernel reservations. Note that
ZONE_DMA is only 1 GB in size.

Restore the previous behavior, as the vast majority of devices are fine
with reserving these in ZONE_DMA32. The ones that need them in ZONE_DMA
will configure it explicitly.

Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
---
 arch/arm64/mm/init.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Jon Masters March 22, 2021, 6:34 p.m. UTC | #1
Hi Nicolas,

On 11/7/19 4:56 AM, Nicolas Saenz Julienne wrote:
> With the introduction of ZONE_DMA in arm64 we moved the default CMA and
> crashkernel reservations into that area. This caused a regression on big
> machines that need large CMA and crashkernel reservations. Note that
> ZONE_DMA is only 1 GB in size.
> 
> Restore the previous behavior, as the vast majority of devices are fine
> with reserving these in ZONE_DMA32. The ones that need them in ZONE_DMA
> will configure it explicitly.
> 
> Reported-by: Qian Cai <cai@lca.pw>
> Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
> ---
>   arch/arm64/mm/init.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> [...]

Can we get a bit more of a backstory about what the regression was on 
larger machines? If the 32-bit DMA region is too small, but the machine 
otherwise has plenty of memory, the crashkernel reservation will fail. 
Most e.g. enterprise users aren't going to respond to that situation by 
determining the placement manually, they'll just not have a crashkernel.

Jon.
Jon Masters March 22, 2021, 6:40 p.m. UTC | #2
On 3/22/21 2:34 PM, Jon Masters wrote:
> Hi Nicolas,
> 
> On 11/7/19 4:56 AM, Nicolas Saenz Julienne wrote:
>> [...]
> 
> Can we get a bit more of a backstory about what the regression was on 
> larger machines? If the 32-bit DMA region is too small, but the machine 
> otherwise has plenty of memory, the crashkernel reservation will fail. 
> Most e.g. enterprise users aren't going to respond to that situation by 
> determining the placement manually, they'll just not have a crashkernel.

Nevermind, looks like Catalin already changed this logic in Jan 2021 by 
removing arm64_dma32_phys_limit and I'm out of date.

Jon.
Nicolas Saenz Julienne March 22, 2021, 6:48 p.m. UTC | #3
On Mon, 2021-03-22 at 14:40 -0400, Jon Masters wrote:
> On 3/22/21 2:34 PM, Jon Masters wrote:
> > Hi Nicolas,
> > 
> > On 11/7/19 4:56 AM, Nicolas Saenz Julienne wrote:
> > > [...]
> > 
> > Can we get a bit more of a backstory about what the regression was on 
> > larger machines? If the 32-bit DMA region is too small, but the machine 
> > otherwise has plenty of memory, the crashkernel reservation will fail. 
> > Most e.g. enterprise users aren't going to respond to that situation by 
> > determining the placement manually, they'll just not have a crashkernel.
> 
> Nevermind, looks like Catalin already changed this logic in Jan 2021 by 
> removing arm64_dma32_phys_limit and I'm out of date.

Also see this series (already merged):

https://lore.kernel.org/linux-arm-kernel/20201119175400.9995-1-nsaenzjulienne@suse.de/

Regards,
Nicolas

Patch

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 580d1052ac34..8385d3c0733f 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -88,7 +88,7 @@ static void __init reserve_crashkernel(void)
 
 	if (crash_base == 0) {
 		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
+		crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
 				crash_size, SZ_2M);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
@@ -454,7 +454,7 @@ void __init arm64_memblock_init(void)
 
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 
-	dma_contiguous_reserve(arm64_dma_phys_limit ? : arm64_dma32_phys_limit);
+	dma_contiguous_reserve(arm64_dma32_phys_limit);
 }
 
 void __init bootmem_init(void)