* [PATCH v2 resend 0/3] arm64: mm: Do not defer reserve_crashkernel()
@ 2022-03-31 7:40 ` Kefeng Wang
0 siblings, 0 replies; 23+ messages in thread
From: Kefeng Wang @ 2022-03-31 7:40 UTC (permalink / raw)
To: catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: vijayb, f.fainelli, Kefeng Wang
Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
platforms with no DMA memory zones") lets the kernel benefit from
block mappings (BLOCK_MAPPINGS); we can do more when ZONE_DMA or
ZONE_DMA32 is enabled.
1) Don't defer reserve_crashkernel() if only ZONE_DMA32 is enabled
2) Don't defer reserve_crashkernel() if ZONE_DMA is enabled with the
dma_force_32bit kernel parameter (newly added)
A UnixBench run comparing block mapping and page mapping:
----------------+------------------+-------------------
| block mapping | page mapping
----------------+------------------+-------------------
Process Creation| 5,030.7 | 4,711.8
(in unixbench) | |
----------------+------------------+-------------------
note: RODATA_FULL_DEFAULT_ENABLED is not enabled
v2 resend:
- fix build error reported by the kernel test robot (lkp)
v2:
- update patch 1 per comments from Vijay and Florian, and add Vijay's Reviewed-by
- add new patch2
Kefeng Wang (3):
arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32
arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
arm64: mm: Cleanup useless parameters in zone_sizes_init()
arch/arm64/include/asm/kexec.h | 1 +
arch/arm64/mm/init.c | 59 +++++++++++++++++++++++++---------
arch/arm64/mm/mmu.c | 6 ++--
3 files changed, 46 insertions(+), 20 deletions(-)
--
2.26.2
* [PATCH v2 resend 1/3] arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32
2022-03-31 7:40 ` Kefeng Wang
@ 2022-03-31 7:40 ` Kefeng Wang
0 siblings, 0 replies; 23+ messages in thread
From: Kefeng Wang @ 2022-03-31 7:40 UTC (permalink / raw)
To: catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: vijayb, f.fainelli, Kefeng Wang, Pasha Tatashin
The kernel can benefit from block mappings (BLOCK_MAPPINGS), see
commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
platforms with no DMA memory zones"). If only ZONE_DMA32 is enabled,
set arm64_dma_phys_limit to max_zone_phys(32) earlier, in
arm64_memblock_init(), so that platforms with just the ZONE_DMA32
config enabled also benefit.
Cc: Vijay Balakrishna <vijayb@linux.microsoft.com>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
arch/arm64/mm/init.c | 23 +++++++++++++----------
arch/arm64/mm/mmu.c | 6 ++----
2 files changed, 15 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 8ac25f19084e..fb01eb489fa9 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -65,8 +65,9 @@ EXPORT_SYMBOL(memstart_addr);
* Memory reservation for crash kernel either done early or deferred
* depending on DMA memory zones configs (ZONE_DMA) --
*
- * In absence of ZONE_DMA configs arm64_dma_phys_limit initialized
- * here instead of max_zone_phys(). This lets early reservation of
+ * In absence of ZONE_DMA and ZONE_DMA32 configs arm64_dma_phys_limit
+ * initialized here and if only with ZONE_DMA32 arm64_dma_phys_limit
+ * initialised to dma32_phys_limit. This lets early reservation of
* crash kernel memory which has a dependency on arm64_dma_phys_limit.
* Reserving memory early for crash kernel allows linear creation of block
* mappings (greater than page-granularity) for all the memory bank rangs.
@@ -84,6 +85,7 @@ EXPORT_SYMBOL(memstart_addr);
* Note: Page-granularity mapppings are necessary for crash kernel memory
* range for shrinking its size via /sys/kernel/kexec_crash_size interface.
*/
+static phys_addr_t __ro_after_init dma32_phys_limit;
#if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
phys_addr_t __ro_after_init arm64_dma_phys_limit;
#else
@@ -160,11 +162,10 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
static void __init zone_sizes_init(unsigned long min, unsigned long max)
{
unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
- unsigned int __maybe_unused acpi_zone_dma_bits;
- unsigned int __maybe_unused dt_zone_dma_bits;
- phys_addr_t __maybe_unused dma32_phys_limit = max_zone_phys(32);
-
#ifdef CONFIG_ZONE_DMA
+ unsigned int acpi_zone_dma_bits;
+ unsigned int dt_zone_dma_bits;
+
acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
@@ -173,8 +174,6 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
#endif
#ifdef CONFIG_ZONE_DMA32
max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
- if (!arm64_dma_phys_limit)
- arm64_dma_phys_limit = dma32_phys_limit;
#endif
max_zone_pfns[ZONE_NORMAL] = max;
@@ -336,8 +335,12 @@ void __init arm64_memblock_init(void)
early_init_fdt_scan_reserved_mem();
- if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
+ dma32_phys_limit = max_zone_phys(32);
+ if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
+ if (IS_ENABLED(CONFIG_ZONE_DMA32))
+ arm64_dma_phys_limit = dma32_phys_limit;
reserve_crashkernel();
+ }
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
}
@@ -385,7 +388,7 @@ void __init bootmem_init(void)
* request_standard_resources() depends on crashkernel's memory being
* reserved, so do it here.
*/
- if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
+ if (IS_ENABLED(CONFIG_ZONE_DMA))
reserve_crashkernel();
memblock_dump_all();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 626ec32873c6..23734481318a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -529,8 +529,7 @@ static void __init map_mem(pgd_t *pgdp)
#ifdef CONFIG_KEXEC_CORE
if (crash_mem_map) {
- if (IS_ENABLED(CONFIG_ZONE_DMA) ||
- IS_ENABLED(CONFIG_ZONE_DMA32))
+ if (IS_ENABLED(CONFIG_ZONE_DMA))
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
else if (crashk_res.end)
memblock_mark_nomap(crashk_res.start,
@@ -571,8 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
* through /sys/kernel/kexec_crash_size interface.
*/
#ifdef CONFIG_KEXEC_CORE
- if (crash_mem_map &&
- !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
+ if (crash_mem_map && !IS_ENABLED(CONFIG_ZONE_DMA)) {
if (crashk_res.end) {
__map_memblock(pgdp, crashk_res.start,
crashk_res.end + 1,
--
2.26.2
* [PATCH v2 resend 2/3] arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
2022-03-31 7:40 ` Kefeng Wang
@ 2022-03-31 7:40 ` Kefeng Wang
0 siblings, 0 replies; 23+ messages in thread
From: Kefeng Wang @ 2022-03-31 7:40 UTC (permalink / raw)
To: catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: vijayb, f.fainelli, Kefeng Wang
arm64 enables ZONE_DMA by default, and with ZONE_DMA the crash kernel
memory reservation is deferred until the DMA zone memory range size
initialization performed in zone_sizes_init(). Most platforms,
however, end up with a 32-bit zone_dma_bits, so add a dma_force_32bit
kernel parameter (available when ZONE_DMA is enabled) and initialize
arm64_dma_phys_limit to dma32_phys_limit in arm64_memblock_init()
when dma_force_32bit is set. This lets the crash kernel be reserved
earlier and allows the linear map to be created with block mappings.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
arch/arm64/include/asm/kexec.h | 1 +
arch/arm64/mm/init.c | 42 ++++++++++++++++++++++++++--------
arch/arm64/mm/mmu.c | 4 ++--
3 files changed, 36 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 9839bfc163d7..8bea40aea359 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -95,6 +95,7 @@ void cpu_soft_restart(unsigned long el2_switch, unsigned long entry,
unsigned long arg0, unsigned long arg1,
unsigned long arg2);
#endif
+bool crashkernel_could_early_reserve(void);
#define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index fb01eb489fa9..0aafa9181607 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -66,7 +66,8 @@ EXPORT_SYMBOL(memstart_addr);
* depending on DMA memory zones configs (ZONE_DMA) --
*
* In absence of ZONE_DMA and ZONE_DMA32 configs arm64_dma_phys_limit
- * initialized here and if only with ZONE_DMA32 arm64_dma_phys_limit
+ * initialized here, and if only with ZONE_DMA32 or if with ZONE_DMA
+ * and dma_force_32bit kernel parameter, the arm64_dma_phys_limit is
* initialised to dma32_phys_limit. This lets early reservation of
* crash kernel memory which has a dependency on arm64_dma_phys_limit.
* Reserving memory early for crash kernel allows linear creation of block
@@ -92,6 +93,27 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
#endif
+static bool __ro_after_init arm64_dma_force_32bit;
+#ifdef CONFIG_ZONE_DMA
+static int __init arm64_dma_force_32bit_setup(char *p)
+{
+ zone_dma_bits = 32;
+ arm64_dma_force_32bit = true;
+
+ return 0;
+}
+early_param("dma_force_32bit", arm64_dma_force_32bit_setup);
+#endif
+
+bool __init crashkernel_could_early_reserve(void)
+{
+ if (!IS_ENABLED(CONFIG_ZONE_DMA))
+ return true;
+ if (arm64_dma_force_32bit)
+ return true;
+ return false;
+}
+
/*
* reserve_crashkernel() - reserves memory for crash kernel
*
@@ -163,12 +185,14 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
{
unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
#ifdef CONFIG_ZONE_DMA
- unsigned int acpi_zone_dma_bits;
- unsigned int dt_zone_dma_bits;
+ if (!arm64_dma_force_32bit) {
+ unsigned int acpi_zone_dma_bits;
+ unsigned int dt_zone_dma_bits;
- acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
- dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
- zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
+ acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
+ dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
+ zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
+ }
arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
#endif
@@ -336,8 +360,8 @@ void __init arm64_memblock_init(void)
early_init_fdt_scan_reserved_mem();
dma32_phys_limit = max_zone_phys(32);
- if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
- if (IS_ENABLED(CONFIG_ZONE_DMA32))
+ if (crashkernel_could_early_reserve()) {
+ if (IS_ENABLED(CONFIG_ZONE_DMA32) || arm64_dma_force_32bit)
arm64_dma_phys_limit = dma32_phys_limit;
reserve_crashkernel();
}
@@ -388,7 +412,7 @@ void __init bootmem_init(void)
* request_standard_resources() depends on crashkernel's memory being
* reserved, so do it here.
*/
- if (IS_ENABLED(CONFIG_ZONE_DMA))
+ if (!crashkernel_could_early_reserve())
reserve_crashkernel();
memblock_dump_all();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 23734481318a..8f7e8452d906 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -529,7 +529,7 @@ static void __init map_mem(pgd_t *pgdp)
#ifdef CONFIG_KEXEC_CORE
if (crash_mem_map) {
- if (IS_ENABLED(CONFIG_ZONE_DMA))
+ if (!crashkernel_could_early_reserve())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
else if (crashk_res.end)
memblock_mark_nomap(crashk_res.start,
@@ -570,7 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
* through /sys/kernel/kexec_crash_size interface.
*/
#ifdef CONFIG_KEXEC_CORE
- if (crash_mem_map && !IS_ENABLED(CONFIG_ZONE_DMA)) {
+ if (crash_mem_map && crashkernel_could_early_reserve()) {
if (crashk_res.end) {
__map_memblock(pgdp, crashk_res.start,
crashk_res.end + 1,
--
2.26.2
* [PATCH v2 resend 3/3] arm64: mm: Cleanup useless parameters in zone_sizes_init()
2022-03-31 7:40 ` Kefeng Wang
@ 2022-03-31 7:40 ` Kefeng Wang
0 siblings, 0 replies; 23+ messages in thread
From: Kefeng Wang @ 2022-03-31 7:40 UTC (permalink / raw)
To: catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: vijayb, f.fainelli, Kefeng Wang
Use max_pfn directly for 'max', and nothing uses 'min', so drop both parameters.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
arch/arm64/mm/init.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 0aafa9181607..80e9ff37b697 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -181,7 +181,7 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
return min(zone_mask, memblock_end_of_DRAM() - 1) + 1;
}
-static void __init zone_sizes_init(unsigned long min, unsigned long max)
+static void __init zone_sizes_init(void)
{
unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
#ifdef CONFIG_ZONE_DMA
@@ -199,7 +199,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
#ifdef CONFIG_ZONE_DMA32
max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
#endif
- max_zone_pfns[ZONE_NORMAL] = max;
+ max_zone_pfns[ZONE_NORMAL] = max_pfn;
free_area_init(max_zone_pfns);
}
@@ -401,7 +401,7 @@ void __init bootmem_init(void)
* done after the fixed reservations
*/
sparse_init();
- zone_sizes_init(min, max);
+ zone_sizes_init();
/*
* Reserve the CMA area after arm64_dma_phys_limit was initialised.
--
2.26.2
* Re: [PATCH v2 resend 2/3] arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
2022-03-31 7:40 ` Kefeng Wang
@ 2022-03-31 16:14 ` kernel test robot
-1 siblings, 0 replies; 23+ messages in thread
From: kernel test robot @ 2022-03-31 16:14 UTC (permalink / raw)
To: Kefeng Wang, catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: kbuild-all, vijayb, f.fainelli, Kefeng Wang
Hi Kefeng,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on next-20220330]
[cannot apply to arm64/for-next/core v5.17 v5.17-rc8 v5.17-rc7 v5.17]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/arm64-mm-Do-not-defer-reserve_crashkernel/20220331-152839
base: a67ba3cf9551f8c92d5ec9d7eae1aadbb9127b57
config: arm64-buildonly-randconfig-r001-20220331 (https://download.01.org/0day-ci/archive/20220401/202204010040.RUk6NuNS-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/970ec526bd69287a4eb9838600aaf66c46fde350
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Kefeng-Wang/arm64-mm-Do-not-defer-reserve_crashkernel/20220331-152839
git checkout 970ec526bd69287a4eb9838600aaf66c46fde350
# save the config file to linux build tree
mkdir build_dir
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arm64 SHELL=/bin/bash arch/arm64/mm/
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
>> arch/arm64/mm/init.c:108:13: warning: no previous prototype for 'crashkernel_could_early_reserve' [-Wmissing-prototypes]
108 | bool __init crashkernel_could_early_reserve(void)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
vim +/crashkernel_could_early_reserve +108 arch/arm64/mm/init.c
107
> 108 bool __init crashkernel_could_early_reserve(void)
109 {
110 if (!IS_ENABLED(CONFIG_ZONE_DMA))
111 return true;
112 if (arm64_dma_force_32bit)
113 return true;
114 return false;
115 }
116
--
0-DAY CI Kernel Test Service
https://01.org/lkp
* Re: [PATCH v2 resend 2/3] arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
2022-03-31 16:14 ` kernel test robot
@ 2022-04-01 5:17 ` Kefeng Wang
-1 siblings, 0 replies; 23+ messages in thread
From: Kefeng Wang @ 2022-04-01 5:17 UTC (permalink / raw)
To: kernel test robot, catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: kbuild-all, vijayb, f.fainelli
On 2022/4/1 0:14, kernel test robot wrote:
> Hi Kefeng,
>
> Thank you for the patch! Perhaps something to improve:
>
> [auto build test WARNING on next-20220330]
> [cannot apply to arm64/for-next/core v5.17 v5.17-rc8 v5.17-rc7 v5.17]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/arm64-mm-Do-not-defer-reserve_crashkernel/20220331-152839
> base: a67ba3cf9551f8c92d5ec9d7eae1aadbb9127b57
> config: arm64-buildonly-randconfig-r001-20220331 (https://download.01.org/0day-ci/archive/20220401/202204010040.RUk6NuNS-lkp@intel.com/config)
> compiler: aarch64-linux-gcc (GCC) 11.2.0
> reproduce (this is a W=1 build):
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # https://github.com/intel-lab-lkp/linux/commit/970ec526bd69287a4eb9838600aaf66c46fde350
> git remote add linux-review https://github.com/intel-lab-lkp/linux
> git fetch --no-tags linux-review Kefeng-Wang/arm64-mm-Do-not-defer-reserve_crashkernel/20220331-152839
> git checkout 970ec526bd69287a4eb9838600aaf66c46fde350
> # save the config file to linux build tree
> mkdir build_dir
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arm64 SHELL=/bin/bash arch/arm64/mm/
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
>
> All warnings (new ones prefixed by >>):
>
>>> arch/arm64/mm/init.c:108:13: warning: no previous prototype for 'crashkernel_could_early_reserve' [-Wmissing-prototypes]
> 108 | bool __init crashkernel_could_early_reserve(void)
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Let's wait for some feedback; adding <asm/kexec.h> to init.c could
silence the warning.
Thanks.
>
> vim +/crashkernel_could_early_reserve +108 arch/arm64/mm/init.c
>
> 107
> > 108 bool __init crashkernel_could_early_reserve(void)
> 109 {
> 110 if (!IS_ENABLED(CONFIG_ZONE_DMA))
> 111 return true;
> 112 if (arm64_dma_force_32bit)
> 113 return true;
> 114 return false;
> 115 }
> 116
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v2 resend 1/3] arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32
2022-03-31 7:40 ` Kefeng Wang
@ 2022-04-01 15:59 ` Vijay Balakrishna
-1 siblings, 0 replies; 23+ messages in thread
From: Vijay Balakrishna @ 2022-04-01 15:59 UTC (permalink / raw)
To: Kefeng Wang, catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: f.fainelli, Pasha Tatashin
On 3/31/2022 12:40 AM, Kefeng Wang wrote:
> The kernel could be benefit due to BLOCK_MAPPINGS, see commit
> 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
> platforms with no DMA memory zones"), if only with ZONE_DMA32,
> set arm64_dma_phys_limit to max_zone_phys(32) earlier in
> arm64_memblock_init(), so platforms with just ZONE_DMA32 config
> enabled will be benefit.
nit --
- "will be benefit" => will benefit
On further look, I feel we can get away without the dma32_phys_limit
static global and replace it with two separate calls to
max_zone_phys(32). Just a thought. If you decide to keep it, a better
location would be immediately above the arm64_memblock_init()
definition, where it is initialized; that would improve code readability.
Thanks,
Vijay
>
> Cc: Vijay Balakrishna <vijayb@linux.microsoft.com>
> Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
> Cc: Will Deacon <will@kernel.org>
> Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> arch/arm64/mm/init.c | 23 +++++++++++++----------
> arch/arm64/mm/mmu.c | 6 ++----
> 2 files changed, 15 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 8ac25f19084e..fb01eb489fa9 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -65,8 +65,9 @@ EXPORT_SYMBOL(memstart_addr);
> * Memory reservation for crash kernel either done early or deferred
> * depending on DMA memory zones configs (ZONE_DMA) --
> *
> - * In absence of ZONE_DMA configs arm64_dma_phys_limit initialized
> - * here instead of max_zone_phys(). This lets early reservation of
> + * In absence of ZONE_DMA and ZONE_DMA32 configs arm64_dma_phys_limit
> + * initialized here and if only with ZONE_DMA32 arm64_dma_phys_limit
> + * initialised to dma32_phys_limit. This lets early reservation of
> * crash kernel memory which has a dependency on arm64_dma_phys_limit.
> * Reserving memory early for crash kernel allows linear creation of block
> * mappings (greater than page-granularity) for all the memory bank rangs.
> @@ -84,6 +85,7 @@ EXPORT_SYMBOL(memstart_addr);
> * Note: Page-granularity mapppings are necessary for crash kernel memory
> * range for shrinking its size via /sys/kernel/kexec_crash_size interface.
> */
> +static phys_addr_t __ro_after_init dma32_phys_limit;
> #if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
> phys_addr_t __ro_after_init arm64_dma_phys_limit;
> #else
> @@ -160,11 +162,10 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
> static void __init zone_sizes_init(unsigned long min, unsigned long max)
> {
> unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
> - unsigned int __maybe_unused acpi_zone_dma_bits;
> - unsigned int __maybe_unused dt_zone_dma_bits;
> - phys_addr_t __maybe_unused dma32_phys_limit = max_zone_phys(32);
> -
> #ifdef CONFIG_ZONE_DMA
> + unsigned int acpi_zone_dma_bits;
> + unsigned int dt_zone_dma_bits;
> +
> acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
> dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
> zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
> @@ -173,8 +174,6 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
> #endif
> #ifdef CONFIG_ZONE_DMA32
> max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
> - if (!arm64_dma_phys_limit)
> - arm64_dma_phys_limit = dma32_phys_limit;
> #endif
> max_zone_pfns[ZONE_NORMAL] = max;
>
> @@ -336,8 +335,12 @@ void __init arm64_memblock_init(void)
>
> early_init_fdt_scan_reserved_mem();
>
> - if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
> + dma32_phys_limit = max_zone_phys(32);
> + if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
> + if (IS_ENABLED(CONFIG_ZONE_DMA32))
> + arm64_dma_phys_limit = dma32_phys_limit;
> reserve_crashkernel();
> + }
>
> high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
> }
> @@ -385,7 +388,7 @@ void __init bootmem_init(void)
> * request_standard_resources() depends on crashkernel's memory being
> * reserved, so do it here.
> */
> - if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
> + if (IS_ENABLED(CONFIG_ZONE_DMA))
> reserve_crashkernel();
>
> memblock_dump_all();
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 626ec32873c6..23734481318a 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -529,8 +529,7 @@ static void __init map_mem(pgd_t *pgdp)
>
> #ifdef CONFIG_KEXEC_CORE
> if (crash_mem_map) {
> - if (IS_ENABLED(CONFIG_ZONE_DMA) ||
> - IS_ENABLED(CONFIG_ZONE_DMA32))
> + if (IS_ENABLED(CONFIG_ZONE_DMA))
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> else if (crashk_res.end)
> memblock_mark_nomap(crashk_res.start,
> @@ -571,8 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
> * through /sys/kernel/kexec_crash_size interface.
> */
> #ifdef CONFIG_KEXEC_CORE
> - if (crash_mem_map &&
> - !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
> + if (crash_mem_map && !IS_ENABLED(CONFIG_ZONE_DMA)) {
> if (crashk_res.end) {
> __map_memblock(pgdp, crashk_res.start,
> crashk_res.end + 1,
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v2 resend 3/3] arm64: mm: Cleanup useless parameters in zone_sizes_init()
2022-03-31 7:40 ` Kefeng Wang
@ 2022-04-01 17:05 ` Vijay Balakrishna
-1 siblings, 0 replies; 23+ messages in thread
From: Vijay Balakrishna @ 2022-04-01 17:05 UTC (permalink / raw)
To: Kefeng Wang, catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: f.fainelli
On 3/31/2022 12:40 AM, Kefeng Wang wrote:
> Directly use max_pfn for max and no one use min, kill them.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Looks good. Reference to dma32_phys_limit in zone_sizes_init() depends
on what you do with my comment [1].
[1]
https://lore.kernel.org/all/69c1e722-33ea-95cf-de84-aed3022cb042@linux.microsoft.com/
Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
> ---
> arch/arm64/mm/init.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 0aafa9181607..80e9ff37b697 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -181,7 +181,7 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
> return min(zone_mask, memblock_end_of_DRAM() - 1) + 1;
> }
>
> -static void __init zone_sizes_init(unsigned long min, unsigned long max)
> +static void __init zone_sizes_init(void)
> {
> unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
> #ifdef CONFIG_ZONE_DMA
> @@ -199,7 +199,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
> #ifdef CONFIG_ZONE_DMA32
> max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
> #endif
> - max_zone_pfns[ZONE_NORMAL] = max;
> + max_zone_pfns[ZONE_NORMAL] = max_pfn;
>
> free_area_init(max_zone_pfns);
> }
> @@ -401,7 +401,7 @@ void __init bootmem_init(void)
> * done after the fixed reservations
> */
> sparse_init();
> - zone_sizes_init(min, max);
> + zone_sizes_init();
>
> /*
> * Reserve the CMA area after arm64_dma_phys_limit was initialised.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v2 resend 2/3] arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
2022-03-31 7:40 ` Kefeng Wang
@ 2022-04-01 22:09 ` Vijay Balakrishna
-1 siblings, 0 replies; 23+ messages in thread
From: Vijay Balakrishna @ 2022-04-01 22:09 UTC (permalink / raw)
To: Kefeng Wang, catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: f.fainelli
On 3/31/2022 12:40 AM, Kefeng Wang wrote:
> ARM64 enable ZONE_DMA by default, and with ZONE_DMA crash kernel
> memory reservation is delayed until DMA zone memory range size
> initilazation performed in zone_sizes_init(), but for most platforms
> use 32bit dma_zone_bits, so add dma_force_32bit kernel parameter
> if ZONE_DMA enabled, and initialize arm64_dma_phys_limit to
> dma32_phys_limit in arm64_memblock_init() if dma_force_32bit
> is setup, this could let the crash kernel reservation earlier,
> and allows linear creation with block mapping.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
I don't see any problem with the approach. I hope you or someone can
test to make sure there are no surprises on RPi4 with the proposed
change. My understanding on RPi4 is --
- both ZONE_DMA and ZONE_DMA32 are enabled
- one wouldn't use dma_force_32bit kernel parameter
- crashkernel_could_early_reserve() would return false to preserve the
late reservation of crash kernel memory
nit --
- consider renaming crashkernel_could_early_reserve() =>
crashkernel_early_reserve()
Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
> ---
> arch/arm64/include/asm/kexec.h | 1 +
> arch/arm64/mm/init.c | 42 ++++++++++++++++++++++++++--------
> arch/arm64/mm/mmu.c | 4 ++--
> 3 files changed, 36 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
> index 9839bfc163d7..8bea40aea359 100644
> --- a/arch/arm64/include/asm/kexec.h
> +++ b/arch/arm64/include/asm/kexec.h
> @@ -95,6 +95,7 @@ void cpu_soft_restart(unsigned long el2_switch, unsigned long entry,
> unsigned long arg0, unsigned long arg1,
> unsigned long arg2);
> #endif
> +bool crashkernel_could_early_reserve(void);
>
> #define ARCH_HAS_KIMAGE_ARCH
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index fb01eb489fa9..0aafa9181607 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -66,7 +66,8 @@ EXPORT_SYMBOL(memstart_addr);
> * depending on DMA memory zones configs (ZONE_DMA) --
> *
> * In absence of ZONE_DMA and ZONE_DMA32 configs arm64_dma_phys_limit
> - * initialized here and if only with ZONE_DMA32 arm64_dma_phys_limit
> + * initialized here, and if only with ZONE_DMA32 or if with ZONE_DMA
> + * and dma_force_32bit kernel parameter, the arm64_dma_phys_limit is
> * initialised to dma32_phys_limit. This lets early reservation of
> * crash kernel memory which has a dependency on arm64_dma_phys_limit.
> * Reserving memory early for crash kernel allows linear creation of block
> @@ -92,6 +93,27 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
> phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
> #endif
>
> +static bool __ro_after_init arm64_dma_force_32bit;
> +#ifdef CONFIG_ZONE_DMA
> +static int __init arm64_dma_force_32bit_setup(char *p)
> +{
> + zone_dma_bits = 32;
> + arm64_dma_force_32bit = true;
> +
> + return 0;
> +}
> +early_param("dma_force_32bit", arm64_dma_force_32bit_setup);
> +#endif
> +
> +bool __init crashkernel_could_early_reserve(void)
> +{
> + if (!IS_ENABLED(CONFIG_ZONE_DMA))
> + return true;
> + if (arm64_dma_force_32bit)
> + return true;
> + return false;
> +}
> +
> /*
> * reserve_crashkernel() - reserves memory for crash kernel
> *
> @@ -163,12 +185,14 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
> {
> unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
> #ifdef CONFIG_ZONE_DMA
> - unsigned int acpi_zone_dma_bits;
> - unsigned int dt_zone_dma_bits;
> + if (!arm64_dma_force_32bit) {
> + unsigned int acpi_zone_dma_bits;
> + unsigned int dt_zone_dma_bits;
>
> - acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
> - dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
> - zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
> + acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
> + dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
> + zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
> + }
> arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
> max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
> #endif
> @@ -336,8 +360,8 @@ void __init arm64_memblock_init(void)
> early_init_fdt_scan_reserved_mem();
>
> dma32_phys_limit = max_zone_phys(32);
> - if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
> - if (IS_ENABLED(CONFIG_ZONE_DMA32))
> + if (crashkernel_could_early_reserve()) {
> + if (IS_ENABLED(CONFIG_ZONE_DMA32) || arm64_dma_force_32bit)
> arm64_dma_phys_limit = dma32_phys_limit;
> reserve_crashkernel();
> }
> @@ -388,7 +412,7 @@ void __init bootmem_init(void)
> * request_standard_resources() depends on crashkernel's memory being
> * reserved, so do it here.
> */
> - if (IS_ENABLED(CONFIG_ZONE_DMA))
> + if (!crashkernel_could_early_reserve())
> reserve_crashkernel();
>
> memblock_dump_all();
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 23734481318a..8f7e8452d906 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -529,7 +529,7 @@ static void __init map_mem(pgd_t *pgdp)
>
> #ifdef CONFIG_KEXEC_CORE
> if (crash_mem_map) {
> - if (IS_ENABLED(CONFIG_ZONE_DMA))
> + if (!crashkernel_could_early_reserve())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> else if (crashk_res.end)
> memblock_mark_nomap(crashk_res.start,
> @@ -570,7 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
> * through /sys/kernel/kexec_crash_size interface.
> */
> #ifdef CONFIG_KEXEC_CORE
> - if (crash_mem_map && !IS_ENABLED(CONFIG_ZONE_DMA)) {
> + if (crash_mem_map && crashkernel_could_early_reserve()) {
> if (crashk_res.end) {
> __map_memblock(pgdp, crashk_res.start,
> crashk_res.end + 1,
^ permalink raw reply [flat|nested] 23+ messages in thread
> - zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
> + acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
> + dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
> + zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
> + }
> arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
> max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
> #endif
> @@ -336,8 +360,8 @@ void __init arm64_memblock_init(void)
> early_init_fdt_scan_reserved_mem();
>
> dma32_phys_limit = max_zone_phys(32);
> - if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
> - if (IS_ENABLED(CONFIG_ZONE_DMA32))
> + if (crashkernel_could_early_reserve()) {
> + if (IS_ENABLED(CONFIG_ZONE_DMA32) || arm64_dma_force_32bit)
> arm64_dma_phys_limit = dma32_phys_limit;
> reserve_crashkernel();
> }
> @@ -388,7 +412,7 @@ void __init bootmem_init(void)
> * request_standard_resources() depends on crashkernel's memory being
> * reserved, so do it here.
> */
> - if (IS_ENABLED(CONFIG_ZONE_DMA))
> + if (!crashkernel_could_early_reserve())
> reserve_crashkernel();
>
> memblock_dump_all();
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 23734481318a..8f7e8452d906 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -529,7 +529,7 @@ static void __init map_mem(pgd_t *pgdp)
>
> #ifdef CONFIG_KEXEC_CORE
> if (crash_mem_map) {
> - if (IS_ENABLED(CONFIG_ZONE_DMA))
> + if (!crashkernel_could_early_reserve())
> flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> else if (crashk_res.end)
> memblock_mark_nomap(crashk_res.start,
> @@ -570,7 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
> * through /sys/kernel/kexec_crash_size interface.
> */
> #ifdef CONFIG_KEXEC_CORE
> - if (crash_mem_map && !IS_ENABLED(CONFIG_ZONE_DMA)) {
> + if (crash_mem_map && crashkernel_could_early_reserve()) {
> if (crashk_res.end) {
> __map_memblock(pgdp, crashk_res.start,
> crashk_res.end + 1,
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v2 resend 3/3] arm64: mm: Cleanup useless parameters in zone_sizes_init()
2022-04-01 17:05 ` Vijay Balakrishna
@ 2022-04-11 8:22 ` Kefeng Wang
0 siblings, 0 replies; 23+ messages in thread
From: Kefeng Wang @ 2022-04-11 8:22 UTC (permalink / raw)
To: Vijay Balakrishna, catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: f.fainelli
On 2022/4/2 1:05, Vijay Balakrishna wrote:
>
>
> On 3/31/2022 12:40 AM, Kefeng Wang wrote:
>> Directly use max_pfn for max and no one use min, kill them.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> Looks good. Reference to dma32_phys_limit in zone_sizes_init()
> depends on what you do with my comment [1].
>
> [1]
> https://lore.kernel.org/all/69c1e722-33ea-95cf-de84-aed3022cb042@linux.microsoft.com/
Ok, will drop dma32_phys_limit and directly use max_zone_phys(32).
>
> Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
>
Thanks.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v2 resend 2/3] arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
2022-04-01 22:09 ` Vijay Balakrishna
@ 2022-04-11 8:28 ` Kefeng Wang
0 siblings, 0 replies; 23+ messages in thread
From: Kefeng Wang @ 2022-04-11 8:28 UTC (permalink / raw)
To: Vijay Balakrishna, catalin.marinas, will, linux-arm-kernel, linux-kernel
Cc: f.fainelli
On 2022/4/2 6:09, Vijay Balakrishna wrote:
>
>
> On 3/31/2022 12:40 AM, Kefeng Wang wrote:
>> arm64 enables ZONE_DMA by default, and with ZONE_DMA the crash kernel
>> memory reservation is deferred until the DMA zone memory range size
>> initialization is performed in zone_sizes_init(). However, most
>> platforms use a 32-bit zone_dma_bits, so add a dma_force_32bit kernel
>> parameter when ZONE_DMA is enabled, and initialize arm64_dma_phys_limit
>> to dma32_phys_limit in arm64_memblock_init() if dma_force_32bit is set.
>> This allows the crash kernel to be reserved earlier, which in turn
>> allows the linear map to be created with block mappings.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> I don't see any problem with the approach. Hope you or someone can
> test to make sure no surprises on RPi4 with the proposed change. I do
> understand on RPi4 --
>
> - both ZONE_DMA and ZONE_DMA32 are enabled
> - one wouldn't use dma_force_32bit kernel parameter
> - crashkernel_could_early_reserve() would return false to preserve
> late reserve of crash kernel memory
>
I don't have an RPi4; I tested the following cases on qemu:
1) only with ZONE_DMA
1.1) only with ZONE_DMA and with dma_force_32bit
2) only with ZONE_DMA32
3) with ZONE_DMA and ZONE_DMA32
3.1) with ZONE_DMA and ZONE_DMA32 and with dma_force_32bit
> nit --
> - consider renaming crashkernel_could_early_reserve() =>
> crashkernel_early_reserve()
>
Sure.
> Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
>
Thanks.
^ permalink raw reply [flat|nested] 23+ messages in thread
end of thread, other threads:[~2022-04-11 8:29 UTC | newest]
Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-31 7:40 [PATCH v2 resend 0/3] arm64: mm: Do not defer reserve_crashkernel() Kefeng Wang
2022-03-31 7:40 ` Kefeng Wang
2022-03-31 7:40 ` [PATCH v2 resend 1/3] arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32 Kefeng Wang
2022-03-31 7:40 ` Kefeng Wang
2022-04-01 15:59 ` Vijay Balakrishna
2022-04-01 15:59 ` Vijay Balakrishna
2022-03-31 7:40 ` [PATCH v2 resend 2/3] arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit Kefeng Wang
2022-03-31 7:40 ` Kefeng Wang
2022-03-31 16:14 ` kernel test robot
2022-03-31 16:14 ` kernel test robot
2022-04-01 5:17 ` Kefeng Wang
2022-04-01 5:17 ` Kefeng Wang
2022-04-01 5:17 ` Kefeng Wang
2022-04-01 22:09 ` Vijay Balakrishna
2022-04-01 22:09 ` Vijay Balakrishna
2022-04-11 8:28 ` Kefeng Wang
2022-04-11 8:28 ` Kefeng Wang
2022-03-31 7:40 ` [PATCH v2 resend 3/3] arm64: mm: Cleanup useless parameters in zone_sizes_init() Kefeng Wang
2022-03-31 7:40 ` Kefeng Wang
2022-04-01 17:05 ` Vijay Balakrishna
2022-04-01 17:05 ` Vijay Balakrishna
2022-04-11 8:22 ` Kefeng Wang
2022-04-11 8:22 ` Kefeng Wang