* [PATCH v3 0/3] arm64: mm: Do not defer reserve_crashkernel()
@ 2022-04-11  9:24 ` Kefeng Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-04-11  9:24 UTC (permalink / raw)
  To: catalin.marinas, will, linux-arm-kernel, linux-kernel
  Cc: vijayb, f.fainelli, Kefeng Wang

Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
platforms with no DMA memory zones") lets the kernel benefit from
block mappings. We could do more when ZONE_DMA or ZONE_DMA32 is
enabled:

1) Don't defer reserve_crashkernel() if only ZONE_DMA32
2) Don't defer reserve_crashkernel() if ZONE_DMA with dma_force_32bit
   kernel parameter(newly added)

Here is another case showing the benefit of the block mapping.

The UnixBench result below compares block mapping with page mapping:
----------------+------------------+-------------------
        	| block mapping    |   page mapping    
----------------+------------------+-------------------
Process Creation|  5,030.7         |    4,711.8       
(in unixbench)  |                  |                   
----------------+------------------+-------------------

note: RODATA_FULL_DEFAULT_ENABLED is not enabled

v3:
- rename crashkernel_could_early_reserve() to crashkernel_early_reserve()
- drop dma32_phys_limit, directly use max_zone_phys(32)
- fix no previous prototype issue
- add RB of Vijay to patch2/3
v2 resend:
- fix build error reported by lkp
v2:
- update patch1 according to Vijay and Florian, and RB of Vijay
- add new patch2

Kefeng Wang (3):
  arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32
  arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
  arm64: mm: Cleanup useless parameters in zone_sizes_init()

 arch/arm64/include/asm/kexec.h |  1 +
 arch/arm64/mm/init.c           | 60 ++++++++++++++++++++++++----------
 arch/arm64/mm/mmu.c            |  6 ++--
 3 files changed, 46 insertions(+), 21 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 16+ messages in thread


* [PATCH v3 1/3] arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32
  2022-04-11  9:24 ` Kefeng Wang
@ 2022-04-11  9:24   ` Kefeng Wang
  -1 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-04-11  9:24 UTC (permalink / raw)
  To: catalin.marinas, will, linux-arm-kernel, linux-kernel
  Cc: vijayb, f.fainelli, Kefeng Wang, Pasha Tatashin

The kernel can benefit from block mappings, see commit
031495635b46 ("arm64: Do not defer reserve_crashkernel() for
platforms with no DMA memory zones"). If only ZONE_DMA32 is
enabled, set arm64_dma_phys_limit to max_zone_phys(32) earlier,
in arm64_memblock_init(), so that platforms with just the
ZONE_DMA32 config enabled benefit as well.

Cc: Vijay Balakrishna <vijayb@linux.microsoft.com>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm64/mm/init.c | 23 ++++++++++++-----------
 arch/arm64/mm/mmu.c  |  6 ++----
 2 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e7b1550e2fc..897de41102d9 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -65,8 +65,9 @@ EXPORT_SYMBOL(memstart_addr);
  * Memory reservation for crash kernel either done early or deferred
  * depending on DMA memory zones configs (ZONE_DMA) --
  *
- * In absence of ZONE_DMA configs arm64_dma_phys_limit initialized
- * here instead of max_zone_phys().  This lets early reservation of
+ * In absence of ZONE_DMA and ZONE_DMA32 configs arm64_dma_phys_limit
+ * initialized here and if only with ZONE_DMA32 arm64_dma_phys_limit
+ * initialised to max_zone_phys(32). This lets early reservation of
  * crash kernel memory which has a dependency on arm64_dma_phys_limit.
  * Reserving memory early for crash kernel allows linear creation of block
  * mappings (greater than page-granularity) for all the memory bank rangs.
@@ -160,11 +161,10 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
 static void __init zone_sizes_init(unsigned long min, unsigned long max)
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES]  = {0};
-	unsigned int __maybe_unused acpi_zone_dma_bits;
-	unsigned int __maybe_unused dt_zone_dma_bits;
-	phys_addr_t __maybe_unused dma32_phys_limit = max_zone_phys(32);
-
 #ifdef CONFIG_ZONE_DMA
+	unsigned int acpi_zone_dma_bits;
+	unsigned int dt_zone_dma_bits;
+
 	acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
 	dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
 	zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
@@ -172,9 +172,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
 #endif
 #ifdef CONFIG_ZONE_DMA32
-	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
-	if (!arm64_dma_phys_limit)
-		arm64_dma_phys_limit = dma32_phys_limit;
+	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_phys(32));
 #endif
 	max_zone_pfns[ZONE_NORMAL] = max;
 
@@ -336,8 +334,11 @@ void __init arm64_memblock_init(void)
 
 	early_init_fdt_scan_reserved_mem();
 
-	if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
+	if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
+		if (IS_ENABLED(CONFIG_ZONE_DMA32))
+			arm64_dma_phys_limit = max_zone_phys(32);
 		reserve_crashkernel();
+	}
 
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 }
@@ -385,7 +386,7 @@ void __init bootmem_init(void)
 	 * request_standard_resources() depends on crashkernel's memory being
 	 * reserved, so do it here.
 	 */
-	if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
+	if (IS_ENABLED(CONFIG_ZONE_DMA))
 		reserve_crashkernel();
 
 	memblock_dump_all();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 626ec32873c6..23734481318a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -529,8 +529,7 @@ static void __init map_mem(pgd_t *pgdp)
 
 #ifdef CONFIG_KEXEC_CORE
 	if (crash_mem_map) {
-		if (IS_ENABLED(CONFIG_ZONE_DMA) ||
-		    IS_ENABLED(CONFIG_ZONE_DMA32))
+		if (IS_ENABLED(CONFIG_ZONE_DMA))
 			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 		else if (crashk_res.end)
 			memblock_mark_nomap(crashk_res.start,
@@ -571,8 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
 	 * through /sys/kernel/kexec_crash_size interface.
 	 */
 #ifdef CONFIG_KEXEC_CORE
-	if (crash_mem_map &&
-	    !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
+	if (crash_mem_map && !IS_ENABLED(CONFIG_ZONE_DMA)) {
 		if (crashk_res.end) {
 			__map_memblock(pgdp, crashk_res.start,
 				       crashk_res.end + 1,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread


* [PATCH v3 2/3] arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
  2022-04-11  9:24 ` Kefeng Wang
@ 2022-04-11  9:24   ` Kefeng Wang
  -1 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-04-11  9:24 UTC (permalink / raw)
  To: catalin.marinas, will, linux-arm-kernel, linux-kernel
  Cc: vijayb, f.fainelli, Kefeng Wang

arm64 enables ZONE_DMA by default, and with ZONE_DMA the crash
kernel memory reservation is delayed until the DMA zone memory
range initialization performed in zone_sizes_init(). However, most
platforms use a 32-bit zone_dma_bits, so add a dma_force_32bit
kernel parameter (effective when ZONE_DMA is enabled) and
initialize arm64_dma_phys_limit to max_zone_phys(32) in
arm64_memblock_init() when the parameter is set. This lets the
crash kernel be reserved earlier and allows linear map creation
with block mappings.

Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm64/include/asm/kexec.h |  1 +
 arch/arm64/mm/init.c           | 43 +++++++++++++++++++++++++++-------
 arch/arm64/mm/mmu.c            |  4 ++--
 3 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 9839bfc163d7..624865d1cc71 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -95,6 +95,7 @@ void cpu_soft_restart(unsigned long el2_switch, unsigned long entry,
 		      unsigned long arg0, unsigned long arg1,
 		      unsigned long arg2);
 #endif
+bool crashkernel_early_reserve(void);
 
 #define ARCH_HAS_KIMAGE_ARCH
 
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 897de41102d9..18b0031eadd0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -35,6 +35,7 @@
 #include <asm/boot.h>
 #include <asm/fixmap.h>
 #include <asm/kasan.h>
+#include <asm/kexec.h>
 #include <asm/kernel-pgtable.h>
 #include <asm/kvm_host.h>
 #include <asm/memory.h>
@@ -66,7 +67,8 @@ EXPORT_SYMBOL(memstart_addr);
  * depending on DMA memory zones configs (ZONE_DMA) --
  *
  * In absence of ZONE_DMA and ZONE_DMA32 configs arm64_dma_phys_limit
- * initialized here and if only with ZONE_DMA32 arm64_dma_phys_limit
+ * initialized here, and if only with ZONE_DMA32 or if with ZONE_DMA
+ * and dma_force_32bit kernel parameter, the arm64_dma_phys_limit is
  * initialised to max_zone_phys(32). This lets early reservation of
  * crash kernel memory which has a dependency on arm64_dma_phys_limit.
  * Reserving memory early for crash kernel allows linear creation of block
@@ -91,6 +93,27 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
 #endif
 
+static bool __ro_after_init arm64_dma_force_32bit;
+#ifdef CONFIG_ZONE_DMA
+static int __init arm64_dma_force_32bit_setup(char *p)
+{
+	zone_dma_bits = 32;
+	arm64_dma_force_32bit = true;
+
+	return 0;
+}
+early_param("dma_force_32bit", arm64_dma_force_32bit_setup);
+#endif
+
+bool __init crashkernel_early_reserve(void)
+{
+	if (!IS_ENABLED(CONFIG_ZONE_DMA))
+		return true;
+	if (arm64_dma_force_32bit)
+		return true;
+	return false;
+}
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -162,12 +185,14 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES]  = {0};
 #ifdef CONFIG_ZONE_DMA
-	unsigned int acpi_zone_dma_bits;
-	unsigned int dt_zone_dma_bits;
+	if (!arm64_dma_force_32bit) {
+		unsigned int acpi_zone_dma_bits;
+		unsigned int dt_zone_dma_bits;
 
-	acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
-	dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
-	zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
+		acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
+		dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
+		zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
+	}
 	arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
 #endif
@@ -334,8 +359,8 @@ void __init arm64_memblock_init(void)
 
 	early_init_fdt_scan_reserved_mem();
 
-	if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
-		if (IS_ENABLED(CONFIG_ZONE_DMA32))
+	if (crashkernel_early_reserve()) {
+		if (IS_ENABLED(CONFIG_ZONE_DMA32) || arm64_dma_force_32bit)
 			arm64_dma_phys_limit = max_zone_phys(32);
 		reserve_crashkernel();
 	}
@@ -386,7 +411,7 @@ void __init bootmem_init(void)
 	 * request_standard_resources() depends on crashkernel's memory being
 	 * reserved, so do it here.
 	 */
-	if (IS_ENABLED(CONFIG_ZONE_DMA))
+	if (!crashkernel_early_reserve())
 		reserve_crashkernel();
 
 	memblock_dump_all();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 23734481318a..46b626025b78 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -529,7 +529,7 @@ static void __init map_mem(pgd_t *pgdp)
 
 #ifdef CONFIG_KEXEC_CORE
 	if (crash_mem_map) {
-		if (IS_ENABLED(CONFIG_ZONE_DMA))
+		if (!crashkernel_early_reserve())
 			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 		else if (crashk_res.end)
 			memblock_mark_nomap(crashk_res.start,
@@ -570,7 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
 	 * through /sys/kernel/kexec_crash_size interface.
 	 */
 #ifdef CONFIG_KEXEC_CORE
-	if (crash_mem_map && !IS_ENABLED(CONFIG_ZONE_DMA)) {
+	if (crash_mem_map && crashkernel_early_reserve()) {
 		if (crashk_res.end) {
 			__map_memblock(pgdp, crashk_res.start,
 				       crashk_res.end + 1,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread


* [PATCH v3 3/3] arm64: mm: Cleanup useless parameters in zone_sizes_init()
  2022-04-11  9:24 ` Kefeng Wang
@ 2022-04-11  9:24   ` Kefeng Wang
  -1 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-04-11  9:24 UTC (permalink / raw)
  To: catalin.marinas, will, linux-arm-kernel, linux-kernel
  Cc: vijayb, f.fainelli, Kefeng Wang

Use max_pfn directly for max, and nothing uses min, so drop both
parameters.

Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm64/mm/init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 18b0031eadd0..788413fc2ecf 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -181,7 +181,7 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
 	return min(zone_mask, memblock_end_of_DRAM() - 1) + 1;
 }
 
-static void __init zone_sizes_init(unsigned long min, unsigned long max)
+static void __init zone_sizes_init(void)
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES]  = {0};
 #ifdef CONFIG_ZONE_DMA
@@ -199,7 +199,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 #ifdef CONFIG_ZONE_DMA32
 	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_phys(32));
 #endif
-	max_zone_pfns[ZONE_NORMAL] = max;
+	max_zone_pfns[ZONE_NORMAL] = max_pfn;
 
 	free_area_init(max_zone_pfns);
 }
@@ -400,7 +400,7 @@ void __init bootmem_init(void)
 	 * done after the fixed reservations
 	 */
 	sparse_init();
-	zone_sizes_init(min, max);
+	zone_sizes_init();
 
 	/*
 	 * Reserve the CMA area after arm64_dma_phys_limit was initialised.
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread


* Re: [PATCH v3 0/3] arm64: mm: Do not defer reserve_crashkernel()
  2022-04-11  9:24 ` Kefeng Wang
@ 2022-04-27 12:41   ` Kefeng Wang
  -1 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-04-27 12:41 UTC (permalink / raw)
  To: catalin.marinas, will, linux-arm-kernel, linux-kernel; +Cc: vijayb, f.fainelli

Hi Catalin and Will, any comments, thanks.

On 2022/4/11 17:24, Kefeng Wang wrote:
> Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
> platforms with no DMA memory zones"), this lets the kernel benifit
> due to BLOCK_MAPPINGS, we could do more if ZONE_DMA and ZONE_DMA32
> enabled.
>
> 1) Don't defer reserve_crashkernel() if only ZONE_DMA32
> 2) Don't defer reserve_crashkernel() if ZONE_DMA with dma_force_32bit
>     kernel parameter(newly added)
>
> Here is another case to show the benefit of the block mapping.
>
> Unixbench benchmark result shows between the block mapping and page mapping.
> ----------------+------------------+-------------------
>          	| block mapping    |   page mapping
> ----------------+------------------+-------------------
> Process Creation|  5,030.7         |    4,711.8
> (in unixbench)  |                  |
> ----------------+------------------+-------------------
>
> note: RODATA_FULL_DEFAULT_ENABLED is not enabled
>
> v3:
> - renaming crashkernel_could_early_reserve() to crashkernel_early_reserve()
> - drop dma32_phys_limit, directly use max_zone_phys(32)
> - fix no previous prototype issue
> - add RB of Vijay to patch2/3
> v2 resend:
> - fix build error reported-by lkp
> v2:
> - update patch1 according to Vijay and Florian, and RB of Vijay
> - add new patch2
>
> Kefeng Wang (3):
>    arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32
>    arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
>    arm64: mm: Cleanup useless parameters in zone_sizes_init()
>
>   arch/arm64/include/asm/kexec.h |  1 +
>   arch/arm64/mm/init.c           | 60 ++++++++++++++++++++++++----------
>   arch/arm64/mm/mmu.c            |  6 ++--
>   3 files changed, 46 insertions(+), 21 deletions(-)
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 0/3] arm64: mm: Do not defer reserve_crashkernel()
@ 2022-04-27 12:41   ` Kefeng Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-04-27 12:41 UTC (permalink / raw)
  To: catalin.marinas, will, linux-arm-kernel, linux-kernel; +Cc: vijayb, f.fainelli

Hi Catalin and Will, any comments, thanks.

On 2022/4/11 17:24, Kefeng Wang wrote:
> Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
> platforms with no DMA memory zones"), this lets the kernel benifit
> due to BLOCK_MAPPINGS, we could do more if ZONE_DMA and ZONE_DMA32
> enabled.
>
> 1) Don't defer reserve_crashkernel() if only ZONE_DMA32
> 2) Don't defer reserve_crashkernel() if ZONE_DMA with dma_force_32bit
>     kernel parameter(newly added)
>
> Here is another case to show the benefit of the block mapping.
>
> Unixbench benchmark result shows between the block mapping and page mapping.
> ----------------+------------------+-------------------
>          	| block mapping    |   page mapping
> ----------------+------------------+-------------------
> Process Creation|  5,030.7         |    4,711.8
> (in unixbench)  |                  |
> ----------------+------------------+-------------------
>
> note: RODATA_FULL_DEFAULT_ENABLED is not enabled
>
> v3:
> - rename crashkernel_could_early_reserve() to crashkernel_early_reserve()
> - drop dma32_phys_limit, directly use max_zone_phys(32)
> - fix "no previous prototype" warning
> - add Reviewed-by from Vijay to patches 2 and 3
> v2 resend:
> - fix build error reported by lkp
> v2:
> - update patch 1 per comments from Vijay and Florian, and add Vijay's Reviewed-by
> - add new patch2
>
> Kefeng Wang (3):
>    arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32
>    arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
>    arm64: mm: Cleanup useless parameters in zone_sizes_init()
>
>   arch/arm64/include/asm/kexec.h |  1 +
>   arch/arm64/mm/init.c           | 60 ++++++++++++++++++++++++----------
>   arch/arm64/mm/mmu.c            |  6 ++--
>   3 files changed, 46 insertions(+), 21 deletions(-)
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 0/3] arm64: mm: Do not defer reserve_crashkernel()
  2022-04-11  9:24 ` Kefeng Wang
@ 2022-05-03 18:20   ` Catalin Marinas
  -1 siblings, 0 replies; 16+ messages in thread
From: Catalin Marinas @ 2022-05-03 18:20 UTC (permalink / raw)
  To: Kefeng Wang; +Cc: will, linux-arm-kernel, linux-kernel, vijayb, f.fainelli

On Mon, Apr 11, 2022 at 05:24:52PM +0800, Kefeng Wang wrote:
> Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
> platforms with no DMA memory zones") lets the kernel benefit from
> block mappings; we could do more when ZONE_DMA or ZONE_DMA32 is
> enabled.
> 
> 1) Don't defer reserve_crashkernel() if only ZONE_DMA32 is enabled
> 2) Don't defer reserve_crashkernel() if ZONE_DMA is enabled with the
>    dma_force_32bit kernel parameter (newly added)

I'm not really keen on a new kernel parameter for this. But even with
such a parameter, there is another series that allows crashkernel
reservations above ZONE_DMA32, so that would also need
NO_BLOCK_MAPPINGS, at least initially. I think there was a proposal to
do the high reservation first and only defer the low one in ZONE_DMA,
but I suggested we get the reservations sorted first and look at
optimisations later.

If hardware is so bad with page mappings, I think we need to look at
different ways to enable the block mappings, e.g. some safe
break-before-make change of the mappings, or maybe switching to another
TTBR1 during boot.

Does FEAT_BBM level 2 allow us to change the block size without
break-before-make? I think that can still trigger a TLB conflict abort;
maybe we can trap it and invalidate the TLBs (the conflict should be on
the linear map, not where the kernel image is mapped).

-- 
Catalin

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 0/3] arm64: mm: Do not defer reserve_crashkernel()
  2022-05-03 18:20   ` Catalin Marinas
@ 2022-05-05  3:04     ` Kefeng Wang
  -1 siblings, 0 replies; 16+ messages in thread
From: Kefeng Wang @ 2022-05-05  3:04 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: will, linux-arm-kernel, linux-kernel, vijayb, f.fainelli


On 2022/5/4 2:20, Catalin Marinas wrote:
> On Mon, Apr 11, 2022 at 05:24:52PM +0800, Kefeng Wang wrote:
>> Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
>> platforms with no DMA memory zones") lets the kernel benefit from
>> block mappings; we could do more when ZONE_DMA or ZONE_DMA32 is
>> enabled.
>>
>> 1) Don't defer reserve_crashkernel() if only ZONE_DMA32 is enabled
>> 2) Don't defer reserve_crashkernel() if ZONE_DMA is enabled with the
>>     dma_force_32bit kernel parameter (newly added)
> I'm not really keen on a new kernel parameter for this. But even with
> such parameter, there is another series that allows crashkernel
> reservations above ZONE_DMA32, so that would also need
> NO_BLOCK_MAPPINGS, at least initially. I think there was a proposal to
> do the high reservation first and only defer the low one in ZONE_DMA but
> suggested we get the reservations sorted first and look at optimisations
> later.
OK, we could look at it again after the series "support reserving
crashkernel above 4G on arm64 kdump" lands.

Patch 3 is a small cleanup, could you pick it up?

> If hardware is so bad with page mappings, I think we need to look at
> different ways to enable the block mappings, e.g. some safe break
> before make change of the mappings or maybe switching to another TTBR1
> during boot.
>
> Does FEAT_BBM level 2 allow us to change the block size without a break
> before make? I think that can still trigger a TLB conflict abort, maybe
> we can trap it and invalidate the TLBs (the conflict should be on the
> linear map not where the kernel image is mapped).

Block mappings are better than page mappings in some test cases
(UnixBench, boot time, MySQL, maybe more). KFENCE also forces the
linear map down to page mappings. If there is a new way that lets us
enable block mappings, that would be great.
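To make the policy argued for in this series concrete, here is a minimal
standalone model of when reserve_crashkernel() could run early (keeping the
linear map as block mappings). The struct and flag names are illustrative
stand-ins, not the actual kernel symbols; only crashkernel_early_reserve()
matches a name mentioned in the v3 changelog:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the early-reservation decision. Early reservation
 * is possible whenever the crash kernel's addressing limit is already known
 * before the devicetree has been parsed. */
struct mm_config {
	bool zone_dma;        /* stand-in for CONFIG_ZONE_DMA */
	bool zone_dma32;      /* stand-in for CONFIG_ZONE_DMA32 */
	bool dma_force_32bit; /* stand-in for the new kernel parameter */
};

static bool crashkernel_early_reserve(const struct mm_config *c)
{
	/* No DMA zones at all: commit 031495635b46 already reserves early. */
	if (!c->zone_dma && !c->zone_dma32)
		return true;
	/* Patch 1: only ZONE_DMA32, whose limit is a fixed 4GB. */
	if (!c->zone_dma)
		return true;
	/* Patch 2: ZONE_DMA present but forced to a 32-bit limit. */
	if (c->dma_force_32bit)
		return true;
	/* Otherwise the ZONE_DMA limit depends on the devicetree: defer. */
	return false;
}
```

The final defer branch corresponds to the existing behaviour, where
deferring costs block mappings because the reserved region must later be
unmapped at page granularity.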

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: (subset) [PATCH v3 0/3] arm64: mm: Do not defer reserve_crashkernel()
  2022-04-11  9:24 ` Kefeng Wang
@ 2022-05-05  8:26   ` Catalin Marinas
  -1 siblings, 0 replies; 16+ messages in thread
From: Catalin Marinas @ 2022-05-05  8:26 UTC (permalink / raw)
  To: will, linux-kernel, linux-arm-kernel, Kefeng Wang; +Cc: f.fainelli, vijayb

On Mon, 11 Apr 2022 17:24:52 +0800, Kefeng Wang wrote:
> Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
> platforms with no DMA memory zones") lets the kernel benefit from
> block mappings; we could do more when ZONE_DMA or ZONE_DMA32 is
> enabled.
> 
> 1) Don't defer reserve_crashkernel() if only ZONE_DMA32 is enabled
> 2) Don't defer reserve_crashkernel() if ZONE_DMA is enabled with the
>    dma_force_32bit kernel parameter (newly added)
> 
> [...]

Applied to arm64 (for-next/misc), thanks!

[3/3] arm64: mm: Cleanup useless parameters in zone_sizes_init()
      https://git.kernel.org/arm64/c/f41ef4c2ee99

-- 
Catalin


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2022-05-05  8:27 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-11  9:24 [PATCH v3 0/3] arm64: mm: Do not defer reserve_crashkernel() Kefeng Wang
2022-04-11  9:24 ` [PATCH v3 1/3] arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32 Kefeng Wang
2022-04-11  9:24 ` [PATCH v3 2/3] arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit Kefeng Wang
2022-04-11  9:24 ` [PATCH v3 3/3] arm64: mm: Cleanup useless parameters in zone_sizes_init() Kefeng Wang
2022-04-27 12:41 ` [PATCH v3 0/3] arm64: mm: Do not defer reserve_crashkernel() Kefeng Wang
2022-05-03 18:20 ` Catalin Marinas
2022-05-05  3:04   ` Kefeng Wang
2022-05-05  8:26 ` (subset) [PATCH v3 0/3] arm64: mm: Do not defer reserve_crashkernel() Catalin Marinas
