* [PATCH] powerpc/mm: Move CMA reservations after initmem_init()
@ 2022-06-16 12:00 ` Michael Ellerman
0 siblings, 0 replies; 8+ messages in thread
From: Michael Ellerman @ 2022-06-16 12:00 UTC (permalink / raw)
To: linuxppc-dev; +Cc: aneesh.kumar, ziy, linux-mm
After commit 11ac3e87ce09 ("mm: cma: use pageblock_order as the single
alignment") there is an error at boot about the KVM CMA reservation
failing, eg:
kvm_cma_reserve: reserving 6553 MiB for global area
cma: Failed to reserve 6553 MiB
That makes it impossible to start KVM guests using the hash MMU with
more than 2G of memory, because the VM is unable to allocate a large
enough region for the hash page table, eg:
$ qemu-system-ppc64 -enable-kvm -M pseries -m 4G ...
qemu-system-ppc64: Failed to allocate KVM HPT of order 25: Cannot allocate memory
Aneesh pointed out that this happens because when kvm_cma_reserve() is
called, pageblock_order has not been initialised yet, and is still zero,
causing the checks in cma_init_reserved_mem() against
CMA_MIN_ALIGNMENT_PAGES to fail.
Fix it by moving the call to kvm_cma_reserve() after initmem_init(). The
pageblock_order is initialised in sparse_init() which is called from
initmem_init().
Also move the hugetlb CMA reservation.
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
arch/powerpc/kernel/setup-common.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index eb0077b302e2..1a02629ec70b 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -935,12 +935,6 @@ void __init setup_arch(char **cmdline_p)
/* Print various info about the machine that has been gathered so far. */
print_system_info();
- /* Reserve large chunks of memory for use by CMA for KVM. */
- kvm_cma_reserve();
-
- /* Reserve large chunks of memory for us by CMA for hugetlb */
- gigantic_hugetlb_cma_reserve();
-
klp_init_thread_info(&init_task);
setup_initial_init_mm(_stext, _etext, _edata, _end);
@@ -955,6 +949,13 @@ void __init setup_arch(char **cmdline_p)
initmem_init();
+ /*
+ * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
+ * be called after initmem_init(), so that pageblock_order is initialised.
+ */
+ kvm_cma_reserve();
+ gigantic_hugetlb_cma_reserve();
+
early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
if (ppc_md.setup_arch)
--
2.35.3
* Re: [PATCH] powerpc/mm: Move CMA reservations after initmem_init()
2022-06-16 12:00 ` Michael Ellerman
@ 2022-06-16 13:07 ` Aneesh Kumar K.V
-1 siblings, 0 replies; 8+ messages in thread
From: Aneesh Kumar K.V @ 2022-06-16 13:07 UTC (permalink / raw)
To: Michael Ellerman, linuxppc-dev; +Cc: ziy, linux-mm
Michael Ellerman <mpe@ellerman.id.au> writes:
> After commit 11ac3e87ce09 ("mm: cma: use pageblock_order as the single
> alignment") there is an error at boot about the KVM CMA reservation
> failing, eg:
>
> kvm_cma_reserve: reserving 6553 MiB for global area
> cma: Failed to reserve 6553 MiB
>
> That makes it impossible to start KVM guests using the hash MMU with
> more than 2G of memory, because the VM is unable to allocate a large
> enough region for the hash page table, eg:
>
> $ qemu-system-ppc64 -enable-kvm -M pseries -m 4G ...
> qemu-system-ppc64: Failed to allocate KVM HPT of order 25: Cannot allocate memory
>
> Aneesh pointed out that this happens because when kvm_cma_reserve() is
> called, pageblock_order has not been initialised yet, and is still zero,
> causing the checks in cma_init_reserved_mem() against
> CMA_MIN_ALIGNMENT_PAGES to fail.
>
> Fix it by moving the call to kvm_cma_reserve() after initmem_init(). The
> pageblock_order is initialised in sparse_init() which is called from
> initmem_init().
>
> Also move the hugetlb CMA reservation.
>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
> arch/powerpc/kernel/setup-common.c | 13 +++++++------
> 1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index eb0077b302e2..1a02629ec70b 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -935,12 +935,6 @@ void __init setup_arch(char **cmdline_p)
> /* Print various info about the machine that has been gathered so far. */
> print_system_info();
>
> - /* Reserve large chunks of memory for use by CMA for KVM. */
> - kvm_cma_reserve();
> -
> - /* Reserve large chunks of memory for us by CMA for hugetlb */
> - gigantic_hugetlb_cma_reserve();
> -
> klp_init_thread_info(&init_task);
>
> setup_initial_init_mm(_stext, _etext, _edata, _end);
> @@ -955,6 +949,13 @@ void __init setup_arch(char **cmdline_p)
>
> initmem_init();
>
> + /*
> + * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
> + * be called after initmem_init(), so that pageblock_order is initialised.
> + */
> + kvm_cma_reserve();
> + gigantic_hugetlb_cma_reserve();
> +
> early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
>
> if (ppc_md.setup_arch)
> --
> 2.35.3
* Re: [PATCH] powerpc/mm: Move CMA reservations after initmem_init()
2022-06-16 12:00 ` Michael Ellerman
@ 2022-06-16 13:33 ` Zi Yan
-1 siblings, 0 replies; 8+ messages in thread
From: Zi Yan @ 2022-06-16 13:33 UTC (permalink / raw)
To: Michael Ellerman; +Cc: linuxppc-dev, aneesh.kumar, linux-mm
On 16 Jun 2022, at 8:00, Michael Ellerman wrote:
> After commit 11ac3e87ce09 ("mm: cma: use pageblock_order as the single
> alignment") there is an error at boot about the KVM CMA reservation
> failing, eg:
>
> kvm_cma_reserve: reserving 6553 MiB for global area
> cma: Failed to reserve 6553 MiB
>
> That makes it impossible to start KVM guests using the hash MMU with
> more than 2G of memory, because the VM is unable to allocate a large
> enough region for the hash page table, eg:
>
> $ qemu-system-ppc64 -enable-kvm -M pseries -m 4G ...
> qemu-system-ppc64: Failed to allocate KVM HPT of order 25: Cannot allocate memory
>
> Aneesh pointed out that this happens because when kvm_cma_reserve() is
> called, pageblock_order has not been initialised yet, and is still zero,
> causing the checks in cma_init_reserved_mem() against
> CMA_MIN_ALIGNMENT_PAGES to fail.
>
> Fix it by moving the call to kvm_cma_reserve() after initmem_init(). The
> pageblock_order is initialised in sparse_init() which is called from
> initmem_init().
>
> Also move the hugetlb CMA reservation.
>
> Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
> arch/powerpc/kernel/setup-common.c | 13 +++++++------
> 1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index eb0077b302e2..1a02629ec70b 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -935,12 +935,6 @@ void __init setup_arch(char **cmdline_p)
> /* Print various info about the machine that has been gathered so far. */
> print_system_info();
>
> - /* Reserve large chunks of memory for use by CMA for KVM. */
> - kvm_cma_reserve();
> -
> - /* Reserve large chunks of memory for us by CMA for hugetlb */
> - gigantic_hugetlb_cma_reserve();
> -
> klp_init_thread_info(&init_task);
>
> setup_initial_init_mm(_stext, _etext, _edata, _end);
> @@ -955,6 +949,13 @@ void __init setup_arch(char **cmdline_p)
>
> initmem_init();
>
> + /*
> + * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
> + * be called after initmem_init(), so that pageblock_order is initialised.
> + */
> + kvm_cma_reserve();
> + gigantic_hugetlb_cma_reserve();
> +
> early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
>
> if (ppc_md.setup_arch)
> --
> 2.35.3
Thank you for the fix.
Reviewed-by: Zi Yan <ziy@nvidia.com>
--
Best Regards,
Yan, Zi
* Re: [PATCH] powerpc/mm: Move CMA reservations after initmem_init()
2022-06-16 12:00 ` Michael Ellerman
@ 2022-06-26 0:28 ` Michael Ellerman
-1 siblings, 0 replies; 8+ messages in thread
From: Michael Ellerman @ 2022-06-26 0:28 UTC (permalink / raw)
To: linuxppc-dev, Michael Ellerman; +Cc: aneesh.kumar, linux-mm, ziy
On Thu, 16 Jun 2022 22:00:33 +1000, Michael Ellerman wrote:
> After commit 11ac3e87ce09 ("mm: cma: use pageblock_order as the single
> alignment") there is an error at boot about the KVM CMA reservation
> failing, eg:
>
> kvm_cma_reserve: reserving 6553 MiB for global area
> cma: Failed to reserve 6553 MiB
>
> [...]
Applied to powerpc/fixes.
[1/1] powerpc/mm: Move CMA reservations after initmem_init()
https://git.kernel.org/powerpc/c/6cf06c17e94f26c290fd3370a5c36514ae15ac43
cheers