* [PATCH -next 0/3] arm64: support page mapping percpu first chunk allocator
@ 2021-07-05 11:14 ` Kefeng Wang
  0 siblings, 0 replies; 30+ messages in thread
From: Kefeng Wang @ 2021-07-05 11:14 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	Dmitry Vyukov
  Cc: linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, Kefeng Wang

The percpu embedded first chunk allocator is the first choice, but it
can fail on ARM64, e.g.:
  "percpu: max_distance=0x5fcfdc640000 too large for vmalloc space 0x781fefff0000"
  "percpu: max_distance=0x600000540000 too large for vmalloc space 0x7dffb7ff0000"
  "percpu: max_distance=0x5fff9adb0000 too large for vmalloc space 0x5dffb7ff0000"

and then we hit "WARNING: CPU: 15 PID: 461 at vmalloc.c:3087 pcpu_get_vm_areas+0x488/0x838"
and the system fails to boot.
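
Those messages come from the check in pcpu_embed_first_chunk() that warns
when the first chunk's units would span more than 75% of the vmalloc
space; roughly, as in mm/percpu.c of this period (quoted for illustration,
details may differ):

	/* warn if maximum distance is further than 75% of vmalloc space */
	if (max_distance > VMALLOC_TOTAL * 3 / 4) {
		pr_warn("max_distance=0x%lx too large for vmalloc space 0x%lx\n",
			max_distance, VMALLOC_TOTAL);
#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
		/* a page-mapped fallback exists, so fail and let the caller retry */
		rc = -EINVAL;
		goto out_free;
#endif
	}

Without NEED_PER_CPU_PAGE_FIRST_CHUNK the over-spread chunk is kept
anyway, and pcpu_get_vm_areas() later emits the WARNING above.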

Implement the page mapping percpu first chunk allocator as a fallback to
the embedding allocator to make the system more robust.

Also fix a crash when both NEED_PER_CPU_PAGE_FIRST_CHUNK and KASAN_VMALLOC are enabled.

Tested on an ARM64 QEMU guest with the "percpu_alloc=page" cmdline, based on next-20210630.
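
The page allocator can be forced with the existing "percpu_alloc=page"
kernel parameter; an illustrative QEMU invocation (not the exact test
setup) would be:

  qemu-system-aarch64 -M virt -cpu max -smp 8 -m 2G -nographic \
	-kernel Image -append "console=ttyAMA0 percpu_alloc=page"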

Kefeng Wang (3):
  vmalloc: Choose a better start address in vm_area_register_early()
  arm64: Support page mapping percpu first chunk allocator
  kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC

 arch/arm64/Kconfig         |  4 ++
 arch/arm64/mm/kasan_init.c | 18 +++++++++
 drivers/base/arch_numa.c   | 82 +++++++++++++++++++++++++++++++++-----
 include/linux/kasan.h      |  2 +
 mm/kasan/init.c            |  5 +++
 mm/vmalloc.c               |  9 +++--
 6 files changed, 107 insertions(+), 13 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH -next 1/3] vmalloc: Choose a better start address in vm_area_register_early()
  2021-07-05 11:14 ` Kefeng Wang
@ 2021-07-05 11:14   ` Kefeng Wang
  -1 siblings, 0 replies; 30+ messages in thread
From: Kefeng Wang @ 2021-07-05 11:14 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	Dmitry Vyukov
  Cc: linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, Kefeng Wang

Some fixed locations in the vmalloc area are reserved on ARM (see
iotable_init()) and ARM64 (see map_kernel()). pcpu_page_first_chunk()
calls vm_area_register_early(), which uses VMALLOC_START as the start
address of the vmap area; this can conflict with those reserved ranges
and trigger a BUG_ON in vm_area_add_early().
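
For reference, the overlap check that fires is in vm_area_add_early();
roughly, as in mm/vmalloc.c of this period (quoted for illustration):

void __init vm_area_add_early(struct vm_struct *vm)
{
	struct vm_struct *tmp, **p;

	BUG_ON(vmap_initialized);
	for (p = &vmlist; (tmp = *p) != NULL; p = &tmp->next) {
		if (tmp->addr >= vm->addr) {
			/* the new area must not overlap an existing one */
			BUG_ON(tmp->addr < vm->addr + vm->size);
			break;
		} else
			BUG_ON(tmp->addr + tmp->size > vm->addr);
	}
	vm->next = *p;
	*p = vm;
}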

Choose the end of the last existing range in vmlist as the start
address instead of VMALLOC_START to avoid the BUG_ON.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/vmalloc.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d5cd52805149..a98cf97f032f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2238,12 +2238,14 @@ void __init vm_area_add_early(struct vm_struct *vm)
  */
 void __init vm_area_register_early(struct vm_struct *vm, size_t align)
 {
-	static size_t vm_init_off __initdata;
+	unsigned long vm_start = VMALLOC_START;
+	struct vm_struct *tmp;
 	unsigned long addr;
 
-	addr = ALIGN(VMALLOC_START + vm_init_off, align);
-	vm_init_off = PFN_ALIGN(addr + vm->size) - VMALLOC_START;
+	for (tmp = vmlist; tmp; tmp = tmp->next)
+		vm_start = (unsigned long)tmp->addr + tmp->size;
 
+	addr = ALIGN(vm_start, align);
 	vm->addr = (void *)addr;
 
 	vm_area_add_early(vm);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH -next 2/3] arm64: Support page mapping percpu first chunk allocator
  2021-07-05 11:14 ` Kefeng Wang
@ 2021-07-05 11:14   ` Kefeng Wang
  -1 siblings, 0 replies; 30+ messages in thread
From: Kefeng Wang @ 2021-07-05 11:14 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	Dmitry Vyukov
  Cc: linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, Kefeng Wang

The percpu embedded first chunk allocator is the first choice, but it
can fail on ARM64, e.g.:
  "percpu: max_distance=0x5fcfdc640000 too large for vmalloc space 0x781fefff0000"
  "percpu: max_distance=0x600000540000 too large for vmalloc space 0x7dffb7ff0000"
  "percpu: max_distance=0x5fff9adb0000 too large for vmalloc space 0x5dffb7ff0000"

and then we hit many "WARNING: CPU: 15 PID: 461 at vmalloc.c:3087 pcpu_get_vm_areas+0x488/0x838"
splats and the system fails to boot.

Implement the page mapping percpu first chunk allocator as a fallback to
the embedding allocator to make the system more robust.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm64/Kconfig       |  4 ++
 drivers/base/arch_numa.c | 82 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 76 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e07e7de9ac49..a4e410bcdacf 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1045,6 +1045,10 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
 	def_bool y
 	depends on NUMA
 
+config NEED_PER_CPU_PAGE_FIRST_CHUNK
+	def_bool y
+	depends on NUMA
+
 source "kernel/Kconfig.hz"
 
 config ARCH_SPARSEMEM_ENABLE
diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
index 4cc4e117727d..563b2013b75a 100644
--- a/drivers/base/arch_numa.c
+++ b/drivers/base/arch_numa.c
@@ -14,6 +14,7 @@
 #include <linux/of.h>
 
 #include <asm/sections.h>
+#include <asm/pgalloc.h>
 
 struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
 EXPORT_SYMBOL(node_data);
@@ -168,22 +169,83 @@ static void __init pcpu_fc_free(void *ptr, size_t size)
 	memblock_free_early(__pa(ptr), size);
 }
 
+#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
+static void __init pcpu_populate_pte(unsigned long addr)
+{
+	pgd_t *pgd = pgd_offset_k(addr);
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d)) {
+		pud_t *new;
+
+		new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+		if (!new)
+			goto err_alloc;
+		p4d_populate(&init_mm, p4d, new);
+	}
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud)) {
+		pmd_t *new;
+
+		new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+		if (!new)
+			goto err_alloc;
+		pud_populate(&init_mm, pud, new);
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (!pmd_present(*pmd)) {
+		pte_t *new;
+
+		new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+		if (!new)
+			goto err_alloc;
+		pmd_populate_kernel(&init_mm, pmd, new);
+	}
+
+	return;
+
+err_alloc:
+	panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
+	      __func__, PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
+}
+#endif
+
 void __init setup_per_cpu_areas(void)
 {
 	unsigned long delta;
 	unsigned int cpu;
-	int rc;
+	int rc = -EINVAL;
+
+	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
+		/*
+		 * Always reserve area for module percpu variables.  That's
+		 * what the legacy allocator did.
+		 */
+		rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
+					    PERCPU_DYNAMIC_RESERVE, PAGE_SIZE,
+					    pcpu_cpu_distance,
+					    pcpu_fc_alloc, pcpu_fc_free);
+#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
+		if (rc < 0)
+			pr_warn("PERCPU: %s allocator failed (%d), falling back to page size\n",
+				   pcpu_fc_names[pcpu_chosen_fc], rc);
+#endif
+	}
 
-	/*
-	 * Always reserve area for module percpu variables.  That's
-	 * what the legacy allocator did.
-	 */
-	rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
-				    PERCPU_DYNAMIC_RESERVE, PAGE_SIZE,
-				    pcpu_cpu_distance,
-				    pcpu_fc_alloc, pcpu_fc_free);
+#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
+	if (rc < 0)
+		rc = pcpu_page_first_chunk(PERCPU_MODULE_RESERVE,
+					   pcpu_fc_alloc,
+					   pcpu_fc_free,
+					   pcpu_populate_pte);
+#endif
 	if (rc < 0)
-		panic("Failed to initialize percpu areas.");
+		panic("Failed to initialize percpu areas (err=%d).", rc);
 
 	delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
 	for_each_possible_cpu(cpu)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-05 11:14 ` Kefeng Wang
@ 2021-07-05 11:14   ` Kefeng Wang
  -1 siblings, 0 replies; 30+ messages in thread
From: Kefeng Wang @ 2021-07-05 11:14 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	Dmitry Vyukov
  Cc: linux-arm-kernel, linux-kernel, kasan-dev, linux-mm, Kefeng Wang

With KASAN_VMALLOC and NEED_PER_CPU_PAGE_FIRST_CHUNK both enabled, the kernel crashes:

Unable to handle kernel paging request at virtual address ffff7000028f2000
...
swapper pgtable: 64k pages, 48-bit VAs, pgdp=0000000042440000
[ffff7000028f2000] pgd=000000063e7c0003, p4d=000000063e7c0003, pud=000000063e7c0003, pmd=000000063e7b0003, pte=0000000000000000
Internal error: Oops: 96000007 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 5.13.0-rc4-00003-gc6e6e28f3f30-dirty #62
Hardware name: linux,dummy-virt (DT)
pstate: 200000c5 (nzCv daIF -PAN -UAO -TCO BTYPE=--)
pc : kasan_check_range+0x90/0x1a0
lr : memcpy+0x88/0xf4
sp : ffff80001378fe20
...
Call trace:
 kasan_check_range+0x90/0x1a0
 pcpu_page_first_chunk+0x3f0/0x568
 setup_per_cpu_areas+0xb8/0x184
 start_kernel+0x8c/0x328

The vm area registered by vm_area_register_early() has no KASAN shadow
memory yet, so the instrumented memcpy() in pcpu_page_first_chunk()
faults when KASAN checks the destination (see the call trace above).
Add a new kasan_populate_early_vm_area_shadow() function to populate
the vm area's shadow memory and fix the issue.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm64/mm/kasan_init.c | 18 ++++++++++++++++++
 include/linux/kasan.h      |  2 ++
 mm/kasan/init.c            |  5 +++++
 mm/vmalloc.c               |  1 +
 4 files changed, 26 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 61b52a92b8b6..c295a256c573 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -287,6 +287,24 @@ static void __init kasan_init_depth(void)
 	init_task.kasan_depth = 0;
 }
 
+#ifdef CONFIG_KASAN_VMALLOC
+void __init __weak kasan_populate_early_vm_area_shadow(void *start,
+						       unsigned long size)
+{
+	unsigned long shadow_start, shadow_end;
+
+	if (!is_vmalloc_or_module_addr(start))
+		return;
+
+	shadow_start = (unsigned long)kasan_mem_to_shadow(start);
+	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
+	shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
+	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
+	kasan_map_populate(shadow_start, shadow_end,
+			   early_pfn_to_nid(virt_to_pfn(start)));
+}
+#endif
+
 void __init kasan_init(void)
 {
 	kasan_init_shadow();
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 5310e217bd74..79d3895b0240 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -49,6 +49,8 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
 int kasan_populate_early_shadow(const void *shadow_start,
 				const void *shadow_end);
 
+void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+
 static inline void *kasan_mem_to_shadow(const void *addr)
 {
 	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index cc64ed6858c6..d39577d088a1 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 	return 0;
 }
 
+void __init __weak kasan_populate_early_vm_area_shadow(void *start,
+						       unsigned long size)
+{
+}
+
 static void kasan_free_pte(pte_t *pte_start, pmd_t *pmd)
 {
 	pte_t *pte;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a98cf97f032f..f19e07314ee5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2249,6 +2249,7 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align)
 	vm->addr = (void *)addr;
 
 	vm_area_add_early(vm);
+	kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
 }
 
 static void vmap_init_free_space(void)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-05 11:14   ` Kefeng Wang
  (?)
@ 2021-07-05 14:10     ` kernel test robot
  -1 siblings, 0 replies; 30+ messages in thread
From: kernel test robot @ 2021-07-05 14:10 UTC (permalink / raw)
  To: Kefeng Wang, Catalin Marinas, Will Deacon, Andrey Ryabinin,
	Andrey Konovalov, Dmitry Vyukov
  Cc: kbuild-all, linux-arm-kernel, linux-kernel, kasan-dev, linux-mm,
	Kefeng Wang

[-- Attachment #1: Type: text/plain, Size: 2590 bytes --]

Hi Kefeng,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on next-20210701]

url:    https://github.com/0day-ci/linux/commits/Kefeng-Wang/arm64-support-page-mapping-percpu-first-chunk-allocator/20210705-190907
base:    fb0ca446157a86b75502c1636b0d81e642fe6bf1
config: i386-randconfig-a015-20210705 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/5f6b5a402ed3e390563ddbddf12973470fd4886d
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Kefeng-Wang/arm64-support-page-mapping-percpu-first-chunk-allocator/20210705-190907
        git checkout 5f6b5a402ed3e390563ddbddf12973470fd4886d
        # save the attached .config to linux build tree
        make W=1 ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   mm/vmalloc.c: In function 'vm_area_register_early':
>> mm/vmalloc.c:2252:2: error: implicit declaration of function 'kasan_populate_early_vm_area_shadow' [-Werror=implicit-function-declaration]
    2252 |  kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
         |  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +/kasan_populate_early_vm_area_shadow +2252 mm/vmalloc.c

  2226	
  2227	/**
  2228	 * vm_area_register_early - register vmap area early during boot
  2229	 * @vm: vm_struct to register
  2230	 * @align: requested alignment
  2231	 *
  2232	 * This function is used to register kernel vm area before
  2233	 * vmalloc_init() is called.  @vm->size and @vm->flags should contain
  2234	 * proper values on entry and other fields should be zero.  On return,
  2235	 * vm->addr contains the allocated address.
  2236	 *
  2237	 * DO NOT USE THIS FUNCTION UNLESS YOU KNOW WHAT YOU'RE DOING.
  2238	 */
  2239	void __init vm_area_register_early(struct vm_struct *vm, size_t align)
  2240	{
  2241		unsigned long vm_start = VMALLOC_START;
  2242		struct vm_struct *tmp;
  2243		unsigned long addr;
  2244	
  2245		for (tmp = vmlist; tmp; tmp = tmp->next)
  2246			vm_start = (unsigned long)tmp->addr + tmp->size;
  2247	
  2248		addr = ALIGN(vm_start, align);
  2249		vm->addr = (void *)addr;
  2250	
  2251		vm_area_add_early(vm);
> 2252		kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
  2253	}
  2254	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 41494 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-05 11:14   ` Kefeng Wang
@ 2021-07-05 15:04     ` Marco Elver
  -1 siblings, 0 replies; 30+ messages in thread
From: Marco Elver @ 2021-07-05 15:04 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	Dmitry Vyukov, linux-arm-kernel, linux-kernel, kasan-dev,
	linux-mm, Daniel Axtens

On Mon, Jul 05, 2021 at 07:14PM +0800, Kefeng Wang wrote:
[...]
> +#ifdef CONFIG_KASAN_VMALLOC
> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
> +						       unsigned long size)

This should probably not be __weak, otherwise you now have 2 __weak
functions.
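
A minimal sketch of the intended pattern (names follow this patch, layout
is illustrative only): one generic __weak stub, overridden by a strong
arch definition without __weak:

	/* mm/kasan/init.c: generic no-op fallback */
	void __init __weak kasan_populate_early_vm_area_shadow(void *start,
							       unsigned long size)
	{
	}

	/* arch/arm64/mm/kasan_init.c: strong definition, overrides the stub */
	void __init kasan_populate_early_vm_area_shadow(void *start,
							unsigned long size)
	{
		/* ... populate the shadow for [start, start + size) ... */
	}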

> +{
> +	unsigned long shadow_start, shadow_end;
> +
> +	if (!is_vmalloc_or_module_addr(start))
> +		return;
> +
> +	shadow_start = (unsigned long)kasan_mem_to_shadow(start);
> +	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
> +	shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
> +	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
> +	kasan_map_populate(shadow_start, shadow_end,
> +			   early_pfn_to_nid(virt_to_pfn(start)));
> +}
> +#endif

This function looks quite generic -- would any of this also apply to
other architectures? I see that ppc and sparc at least also define
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK.

>  void __init kasan_init(void)
>  {
>  	kasan_init_shadow();
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 5310e217bd74..79d3895b0240 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -49,6 +49,8 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
>  int kasan_populate_early_shadow(const void *shadow_start,
>  				const void *shadow_end);
>  
> +void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
> +
>  static inline void *kasan_mem_to_shadow(const void *addr)
>  {
>  	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
> index cc64ed6858c6..d39577d088a1 100644
> --- a/mm/kasan/init.c
> +++ b/mm/kasan/init.c
> @@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
>  	return 0;
>  }
>  
> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
> +						       unsigned long size)
> +{
> +}

I'm just wondering if this could be a generic function, perhaps with an
appropriate IS_ENABLED() check of a generic Kconfig option
(CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK ?) to short-circuit it, if it's
not only an arm64 problem.

But I haven't looked much further, so would appeal to you to either
confirm or reject this idea.

Thanks,
-- Marco

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-05 11:14   ` Kefeng Wang
  (?)
@ 2021-07-05 17:15     ` kernel test robot
  -1 siblings, 0 replies; 30+ messages in thread
From: kernel test robot @ 2021-07-05 17:15 UTC (permalink / raw)
  To: Kefeng Wang, Catalin Marinas, Will Deacon, Andrey Ryabinin,
	Andrey Konovalov, Dmitry Vyukov
  Cc: clang-built-linux, kbuild-all, linux-arm-kernel, linux-kernel,
	kasan-dev, linux-mm, Kefeng Wang

[-- Attachment #1: Type: text/plain, Size: 2860 bytes --]

Hi Kefeng,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on next-20210701]

url:    https://github.com/0day-ci/linux/commits/Kefeng-Wang/arm64-support-page-mapping-percpu-first-chunk-allocator/20210705-190907
base:    fb0ca446157a86b75502c1636b0d81e642fe6bf1
config: powerpc-randconfig-r011-20210705 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 3f9bf9f42a9043e20c6d2a74dd4f47a90a7e2b41)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install powerpc cross compiling tool for clang build
        # apt-get install binutils-powerpc-linux-gnu
        # https://github.com/0day-ci/linux/commit/5f6b5a402ed3e390563ddbddf12973470fd4886d
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Kefeng-Wang/arm64-support-page-mapping-percpu-first-chunk-allocator/20210705-190907
        git checkout 5f6b5a402ed3e390563ddbddf12973470fd4886d
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> mm/vmalloc.c:2252:2: error: implicit declaration of function 'kasan_populate_early_vm_area_shadow' [-Werror,-Wimplicit-function-declaration]
           kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
           ^
   1 error generated.


vim +/kasan_populate_early_vm_area_shadow +2252 mm/vmalloc.c

  2226	
  2227	/**
  2228	 * vm_area_register_early - register vmap area early during boot
  2229	 * @vm: vm_struct to register
  2230	 * @align: requested alignment
  2231	 *
  2232	 * This function is used to register kernel vm area before
  2233	 * vmalloc_init() is called.  @vm->size and @vm->flags should contain
  2234	 * proper values on entry and other fields should be zero.  On return,
  2235	 * vm->addr contains the allocated address.
  2236	 *
  2237	 * DO NOT USE THIS FUNCTION UNLESS YOU KNOW WHAT YOU'RE DOING.
  2238	 */
  2239	void __init vm_area_register_early(struct vm_struct *vm, size_t align)
  2240	{
  2241		unsigned long vm_start = VMALLOC_START;
  2242		struct vm_struct *tmp;
  2243		unsigned long addr;
  2244	
  2245		for (tmp = vmlist; tmp; tmp = tmp->next)
  2246			vm_start = (unsigned long)tmp->addr + tmp->size;
  2247	
  2248		addr = ALIGN(vm_start, align);
  2249		vm->addr = (void *)addr;
  2250	
  2251		vm_area_add_early(vm);
> 2252		kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
  2253	}
  2254	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 34053 bytes --]

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-05 15:04     ` Marco Elver
@ 2021-07-06  0:04       ` Daniel Axtens
  -1 siblings, 0 replies; 30+ messages in thread
From: Daniel Axtens @ 2021-07-06  0:04 UTC (permalink / raw)
  To: Marco Elver, Kefeng Wang
  Cc: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	Dmitry Vyukov, linux-arm-kernel, linux-kernel, kasan-dev,
	linux-mm

Hi,

Marco Elver <elver@google.com> writes:

> On Mon, Jul 05, 2021 at 07:14PM +0800, Kefeng Wang wrote:
> [...]
>> +#ifdef CONFIG_KASAN_VMALLOC
>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>> +						       unsigned long size)
>
> This should probably not be __weak, otherwise you now have 2 __weak
> functions.
>
>> +{
>> +	unsigned long shadow_start, shadow_end;
>> +
>> +	if (!is_vmalloc_or_module_addr(start))
>> +		return;
>> +
>> +	shadow_start = (unsigned long)kasan_mem_to_shadow(start);
>> +	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
>> +	shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
>> +	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
>> +	kasan_map_populate(shadow_start, shadow_end,
>> +			   early_pfn_to_nid(virt_to_pfn(start)));
>> +}
>> +#endif
>
> This function looks quite generic -- would any of this also apply to
> other architectures? I see that ppc and sparc at least also define
> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK.

So I checked with my latest KASAN ppc64 series and my code also breaks
in a very similar way if you boot with percpu_alloc=page. It's not
something I knew about or tested with before!

Unfortunately kasan_map_populate - despite having a very
generic-sounding name - is actually arm64 specific. I don't know if
kasan_populate_early_shadow (which is generic) would be able to fill the
role or not. If we could keep it generic that would be better.

It looks like arm64 does indeed populate the kasan_early_shadow_p{te,md..}
values, but I don't really understand what it's doing - is it possible
to use the generic kasan_populate_early_shadow on arm64?

If so, should we put the call inside of vm_area_register_early?

Kind regards,
Daniel

>
>>  void __init kasan_init(void)
>>  {
>>  	kasan_init_shadow();
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 5310e217bd74..79d3895b0240 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -49,6 +49,8 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
>>  int kasan_populate_early_shadow(const void *shadow_start,
>>  				const void *shadow_end);
>>  
>> +void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
>> +
>>  static inline void *kasan_mem_to_shadow(const void *addr)
>>  {
>>  	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
>> index cc64ed6858c6..d39577d088a1 100644
>> --- a/mm/kasan/init.c
>> +++ b/mm/kasan/init.c
>> @@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
>>  	return 0;
>>  }
>>  
>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>> +						       unsigned long size)
>> +{
>> +}
>
> I'm just wondering if this could be a generic function, perhaps with an
> appropriate IS_ENABLED() check of a generic Kconfig option
> (CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK ?) to short-circuit it, if it's
> not only an arm64 problem.
>
> But I haven't looked much further, so would appeal to you to either
> confirm or reject this idea.
>
> Thanks,
> -- Marco

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
@ 2021-07-06  0:04       ` Daniel Axtens
  0 siblings, 0 replies; 30+ messages in thread
From: Daniel Axtens @ 2021-07-06  0:04 UTC (permalink / raw)
  To: Marco Elver, Kefeng Wang
  Cc: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	Dmitry Vyukov, linux-arm-kernel, linux-kernel, kasan-dev,
	linux-mm

Hi,

Marco Elver <elver@google.com> writes:

> On Mon, Jul 05, 2021 at 07:14PM +0800, Kefeng Wang wrote:
> [...]
>> +#ifdef CONFIG_KASAN_VMALLOC
>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>> +						       unsigned long size)
>
> This should probably not be __weak, otherwise you now have 2 __weak
> functions.
>
>> +{
>> +	unsigned long shadow_start, shadow_end;
>> +
>> +	if (!is_vmalloc_or_module_addr(start))
>> +		return;
>> +
>> +	shadow_start = (unsigned long)kasan_mem_to_shadow(start);
>> +	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
>> +	shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
>> +	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
>> +	kasan_map_populate(shadow_start, shadow_end,
>> +			   early_pfn_to_nid(virt_to_pfn(start)));
>> +}
>> +#endif
>
> This function looks quite generic -- would any of this also apply to
> other architectures? I see that ppc and sparc at least also define
> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK.

So I checked with my latest KASAN ppc64 series and my code also breaks
in a very similar way if you boot with percpu_alloc=page. It's not
something I knew about or tested with before!

Unfortunately kasan_map_populate - despite having a very
generic-sounding name - is actually arm64 specific. I don't know if
kasan_populate_early_shadow (which is generic) would be able to fill the
role or not. If we could keep it generic that would be better.

It looks like arm64 does indeed populate the kasan_early_shadow_p{te,md..}
values, but I don't really understand what it's doing - is it possible
to use the generic kasan_populate_early_shadow on arm64?

If so, should we put the call inside of vm_area_register_early?

Kind regards,
Daniel

>
>>  void __init kasan_init(void)
>>  {
>>  	kasan_init_shadow();
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 5310e217bd74..79d3895b0240 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -49,6 +49,8 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
>>  int kasan_populate_early_shadow(const void *shadow_start,
>>  				const void *shadow_end);
>>  
>> +void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
>> +
>>  static inline void *kasan_mem_to_shadow(const void *addr)
>>  {
>>  	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
>> index cc64ed6858c6..d39577d088a1 100644
>> --- a/mm/kasan/init.c
>> +++ b/mm/kasan/init.c
>> @@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
>>  	return 0;
>>  }
>>  
>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>> +						       unsigned long size)
>> +{
>> +}
>
> I'm just wondering if this could be a generic function, perhaps with an
> appropriate IS_ENABLED() check of a generic Kconfig option
> (CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK ?) to short-circuit it, if it's
> not only an arm64 problem.
>
> But I haven't looked much further, so would appeal to you to either
> confirm or reject this idea.
>
> Thanks,
> -- Marco

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-06  0:04       ` Daniel Axtens
@ 2021-07-06  0:05         ` Daniel Axtens
  -1 siblings, 0 replies; 30+ messages in thread
From: Daniel Axtens @ 2021-07-06  0:05 UTC (permalink / raw)
  To: Marco Elver, Kefeng Wang
  Cc: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	Dmitry Vyukov, linux-arm-kernel, linux-kernel, kasan-dev,
	linux-mm


> If so, should we put the call inside of vm_area_register_early?
Ah, we already do this. Sorry. My other questions remain.
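
For reference, this is roughly where the series ends up putting the call
(abridged from mm/vmalloc.c from memory plus the hunks in this series, so
treat the exact body as a sketch rather than the final diff):

void __init vm_area_register_early(struct vm_struct *vm, size_t align)
{
	static size_t vm_init_off __initdata;
	unsigned long addr;

	/* patch 1/3 reworks this start address selection; simplified here */
	addr = ALIGN(VMALLOC_START + vm_init_off, align);
	vm_init_off = PFN_ALIGN(addr + vm->size) - VMALLOC_START;

	vm->addr = (void *)addr;
	vm_area_add_early(vm);

	/* added by patch 3/3: populate real shadow for the early vm area */
	kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
}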

Kind regards,
Daniel

>
> Kind regards,
> Daniel
>
>>
>>>  void __init kasan_init(void)
>>>  {
>>>  	kasan_init_shadow();
>>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>>> index 5310e217bd74..79d3895b0240 100644
>>> --- a/include/linux/kasan.h
>>> +++ b/include/linux/kasan.h
>>> @@ -49,6 +49,8 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
>>>  int kasan_populate_early_shadow(const void *shadow_start,
>>>  				const void *shadow_end);
>>>  
>>> +void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
>>> +
>>>  static inline void *kasan_mem_to_shadow(const void *addr)
>>>  {
>>>  	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>>> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
>>> index cc64ed6858c6..d39577d088a1 100644
>>> --- a/mm/kasan/init.c
>>> +++ b/mm/kasan/init.c
>>> @@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
>>>  	return 0;
>>>  }
>>>  
>>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>>> +						       unsigned long size)
>>> +{
>>> +}
>>
>> I'm just wondering if this could be a generic function, perhaps with an
>> appropriate IS_ENABLED() check of a generic Kconfig option
>> (CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK ?) to short-circuit it, if it's
>> not only an arm64 problem.
>>
>> But I haven't looked much further, so would appeal to you to either
>> confirm or reject this idea.
>>
>> Thanks,
>> -- Marco

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-05 15:04     ` Marco Elver
  (?)
  (?)
@ 2021-07-06  4:07     ` Kefeng Wang
  2021-07-16  5:06       ` Kefeng Wang
  -1 siblings, 1 reply; 30+ messages in thread
From: Kefeng Wang @ 2021-07-06  4:07 UTC (permalink / raw)
  To: Marco Elver
  Cc: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	Dmitry Vyukov, linux-arm-kernel, linux-kernel, kasan-dev,
	linux-mm, Daniel Axtens


Hi Marco and Dmitry,

On 2021/7/5 23:04, Marco Elver wrote:
> On Mon, Jul 05, 2021 at 07:14PM +0800, Kefeng Wang wrote:
> [...]
>> +#ifdef CONFIG_KASAN_VMALLOC
>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>> +						       unsigned long size)
> This should probably not be __weak, otherwise you now have 2 __weak
> functions.
Indeed, I overlooked that.
>
>> +{
>> +	unsigned long shadow_start, shadow_end;
>> +
>> +	if (!is_vmalloc_or_module_addr(start))
>> +		return;
>> +
>> +	shadow_start = (unsigned long)kasan_mem_to_shadow(start);
>> +	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
>> +	shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
>> +	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
>> +	kasan_map_populate(shadow_start, shadow_end,
>> +			   early_pfn_to_nid(virt_to_pfn(start)));
>> +}
>> +#endif
> This function looks quite generic -- would any of this also apply to
> other architectures? I see that ppc and sparc at least also define
> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK.

I can't test ppc/sparc, and of those two only ppc supports KASAN_VMALLOC.

I checked x86: it also supports CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK,

so it looks like this issue exists on x86 and ppc as well.

>
>>   void __init kasan_init(void)
>>   {
>>   	kasan_init_shadow();
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 5310e217bd74..79d3895b0240 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -49,6 +49,8 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
>>   int kasan_populate_early_shadow(const void *shadow_start,
>>   				const void *shadow_end);
>>   
>> +void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
>> +
>>   static inline void *kasan_mem_to_shadow(const void *addr)
>>   {
>>   	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
>> index cc64ed6858c6..d39577d088a1 100644
>> --- a/mm/kasan/init.c
>> +++ b/mm/kasan/init.c
>> @@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
>>   	return 0;
>>   }
>>   
>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>> +						       unsigned long size)
>> +{
>> +}
> I'm just wondering if this could be a generic function, perhaps with an
> appropriate IS_ENABLED() check of a generic Kconfig option
> (CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK ?) to short-circuit it, if it's
> not only an arm64 problem.

kasan_map_populate() is an arm64-specific function; x86 has kasan_shallow_populate_pgds() and
ppc has kasan_init_shadow_page_tables(), so it looks like those architectures would need to do
the same thing as arm64 does.

We can't use kasan_populate_early_shadow() here: that function maps the whole early shadow range
to a single page of zeroes (kasan_early_shadow_page) and write-protects it with pte_wrprotect(),
see zero_pte_populate(), right?
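
Roughly, the generic zero-page path looks like this (paraphrased from memory
of mm/kasan/init.c, not verbatim), which is why any later write to that
shadow, e.g. from kasan_unpoison(), faults:

static int __ref zero_pte_populate(pmd_t *pmd, unsigned long addr,
				   unsigned long end)
{
	pte_t *pte = pte_offset_kernel(pmd, addr);
	pte_t zero_pte;

	/* every shadow PTE points at the same shared page of zeroes */
	zero_pte = pfn_pte(PFN_DOWN(__pa_symbol(kasan_early_shadow_page)),
			   PAGE_KERNEL);
	/* and the mapping is write-protected, so the shadow is read-only */
	zero_pte = pte_wrprotect(zero_pte);

	while (addr + PAGE_SIZE <= end) {
		set_pte_at(&init_mm, addr, pte, zero_pte);
		addr += PAGE_SIZE;
		pte = pte_offset_kernel(pmd, addr);
	}

	return 0;
}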

Also, I tried this: it crashes on arm64 when kasan_map_populate() is changed to kasan_populate_early_shadow():

Unable to handle kernel write to read-only memory at virtual address ffff700002938000
...
Call trace:
  __memset+0x16c/0x1c0
  kasan_unpoison+0x34/0x6c
  kasan_unpoison_vmalloc+0x2c/0x3c
  __get_vm_area_node.constprop.0+0x13c/0x240
  __vmalloc_node_range+0xf4/0x4f0
  __vmalloc_node+0x80/0x9c
  init_IRQ+0xe8/0x130
  start_kernel+0x188/0x360
  __primary_switched+0xc0/0xc8


>
> But I haven't looked much further, so would appeal to you to either
> confirm or reject this idea.
>
> Thanks,
> -- Marco
> .
>


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-05 14:10     ` kernel test robot
  (?)
@ 2021-07-06  4:12       ` Kefeng Wang
  -1 siblings, 0 replies; 30+ messages in thread
From: Kefeng Wang @ 2021-07-06  4:12 UTC (permalink / raw)
  To: kernel test robot, Catalin Marinas, Will Deacon, Andrey Ryabinin,
	Andrey Konovalov, Dmitry Vyukov
  Cc: kbuild-all, linux-arm-kernel, linux-kernel, kasan-dev, linux-mm


On 2021/7/5 22:10, kernel test robot wrote:
> Hi Kefeng,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on next-20210701]
>
> url:    https://github.com/0day-ci/linux/commits/Kefeng-Wang/arm64-support-page-mapping-percpu-first-chunk-allocator/20210705-190907
> base:    fb0ca446157a86b75502c1636b0d81e642fe6bf1
> config: i386-randconfig-a015-20210705 (attached as .config)
> compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
> reproduce (this is a W=1 build):
>          # https://github.com/0day-ci/linux/commit/5f6b5a402ed3e390563ddbddf12973470fd4886d
>          git remote add linux-review https://github.com/0day-ci/linux
>          git fetch --no-tags linux-review Kefeng-Wang/arm64-support-page-mapping-percpu-first-chunk-allocator/20210705-190907
>          git checkout 5f6b5a402ed3e390563ddbddf12973470fd4886d
>          # save the attached .config to linux build tree
>          make W=1 ARCH=i386
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
>
> All errors (new ones prefixed by >>):
>
>     mm/vmalloc.c: In function 'vm_area_register_early':
>>> mm/vmalloc.c:2252:2: error: implicit declaration of function 'kasan_populate_early_vm_area_shadow' [-Werror=implicit-function-declaration]
A stub function should be added for the case when KASAN is not enabled (see the sketch below), thanks.
>      2252 |  kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
>           |  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>     cc1: some warnings being treated as errors
>
>
> vim +/kasan_populate_early_vm_area_shadow +2252 mm/vmalloc.c
>
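
Something along these lines in include/linux/kasan.h should do it (only a
sketch; the exact Kconfig guard, CONFIG_KASAN_VMALLOC vs a broader KASAN
option, still needs to be confirmed in v2):

#ifdef CONFIG_KASAN_VMALLOC
void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
#else
/* no-op stub so builds without KASAN_VMALLOC (e.g. this i386 randconfig)
 * can still call it from vm_area_register_early()
 */
static inline void kasan_populate_early_vm_area_shadow(void *start,
							unsigned long size)
{
}
#endif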

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-06  4:07     ` Kefeng Wang
@ 2021-07-16  5:06       ` Kefeng Wang
  2021-07-16  7:41           ` Marco Elver
  0 siblings, 1 reply; 30+ messages in thread
From: Kefeng Wang @ 2021-07-16  5:06 UTC (permalink / raw)
  To: Marco Elver, Dmitry Vyukov
  Cc: Catalin Marinas, Will Deacon, Andrey Ryabinin, Andrey Konovalov,
	linux-arm-kernel, linux-kernel, kasan-dev, linux-mm,
	Daniel Axtens


Hi Marco and Dmitry, any comments on the reply below? Thanks.

On 2021/7/6 12:07, Kefeng Wang wrote:
>
> Hi Marco and Dmitry,
>
> On 2021/7/5 23:04, Marco Elver wrote:
>> On Mon, Jul 05, 2021 at 07:14PM +0800, Kefeng Wang wrote:
>> [...]
>>> +#ifdef CONFIG_KASAN_VMALLOC
>>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>>> +						       unsigned long size)
>> This should probably not be __weak, otherwise you now have 2 __weak
>> functions.
> Indeed, I overlooked that.
>>> +{
>>> +	unsigned long shadow_start, shadow_end;
>>> +
>>> +	if (!is_vmalloc_or_module_addr(start))
>>> +		return;
>>> +
>>> +	shadow_start = (unsigned long)kasan_mem_to_shadow(start);
>>> +	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
>>> +	shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
>>> +	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
>>> +	kasan_map_populate(shadow_start, shadow_end,
>>> +			   early_pfn_to_nid(virt_to_pfn(start)));
>>> +}
>>> +#endif
>> This function looks quite generic -- would any of this also apply to
>> other architectures? I see that ppc and sparc at least also define
>> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK.
>
> I can't test ppc/sparc, and of those two only ppc supports KASAN_VMALLOC.
>
> I checked x86: it also supports CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK,
>
> so it looks like this issue exists on x86 and ppc as well.
>
>>>   void __init kasan_init(void)
>>>   {
>>>   	kasan_init_shadow();
>>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>>> index 5310e217bd74..79d3895b0240 100644
>>> --- a/include/linux/kasan.h
>>> +++ b/include/linux/kasan.h
>>> @@ -49,6 +49,8 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
>>>   int kasan_populate_early_shadow(const void *shadow_start,
>>>   				const void *shadow_end);
>>>   
>>> +void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
>>> +
>>>   static inline void *kasan_mem_to_shadow(const void *addr)
>>>   {
>>>   	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>>> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
>>> index cc64ed6858c6..d39577d088a1 100644
>>> --- a/mm/kasan/init.c
>>> +++ b/mm/kasan/init.c
>>> @@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
>>>   	return 0;
>>>   }
>>>   
>>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>>> +						       unsigned long size)
>>> +{
>>> +}
>> I'm just wondering if this could be a generic function, perhaps with an
>> appropriate IS_ENABLED() check of a generic Kconfig option
>> (CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK ?) to short-circuit it, if it's
>> not only an arm64 problem.
>
> kasan_map_populate() is an arm64-specific function; x86 has kasan_shallow_populate_pgds() and
> ppc has kasan_init_shadow_page_tables(), so it looks like those architectures would need to do
> the same thing as arm64 does.
>
> We can't use kasan_populate_early_shadow() here: that function maps the whole early shadow range
> to a single page of zeroes (kasan_early_shadow_page) and write-protects it with pte_wrprotect(),
> see zero_pte_populate(), right?
>
> Also, I tried this: it crashes on arm64 when kasan_map_populate() is changed to kasan_populate_early_shadow():
>
> Unable to handle kernel write to read-only memory at virtual address ffff700002938000
> ...
> Call trace:
>   __memset+0x16c/0x1c0
>   kasan_unpoison+0x34/0x6c
>   kasan_unpoison_vmalloc+0x2c/0x3c
>   __get_vm_area_node.constprop.0+0x13c/0x240
>   __vmalloc_node_range+0xf4/0x4f0
>   __vmalloc_node+0x80/0x9c
>   init_IRQ+0xe8/0x130
>   start_kernel+0x188/0x360
>   __primary_switched+0xc0/0xc8
>
>
>> But I haven't looked much further, so would appeal to you to either
>> confirm or reject this idea.
>>
>> Thanks,
>> -- Marco
>> .
>>


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-16  5:06       ` Kefeng Wang
  2021-07-16  7:41           ` Marco Elver
@ 2021-07-16  7:41           ` Marco Elver
  0 siblings, 0 replies; 30+ messages in thread
From: Marco Elver @ 2021-07-16  7:41 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Dmitry Vyukov, Catalin Marinas, Will Deacon, Andrey Ryabinin,
	Andrey Konovalov, linux-arm-kernel, linux-kernel, kasan-dev,
	linux-mm, Daniel Axtens

On Fri, 16 Jul 2021 at 07:06, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
> Hi Marco and Dmitry, any comments about the following replay, thanks.

Can you clarify the question? I've been waiting for v2.

I think you said that this will remain arm64 specific and the existing
generic kasan_populate_early_shadow() doesn't work.

If there's nothing else that needs resolving, please go ahead and send
v2 (the __weak comment still needs resolving).

Thanks,
-- Marco

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
  2021-07-16  7:41           ` Marco Elver
@ 2021-07-17  2:40             ` Kefeng Wang
  -1 siblings, 0 replies; 30+ messages in thread
From: Kefeng Wang @ 2021-07-17  2:40 UTC (permalink / raw)
  To: Marco Elver
  Cc: Dmitry Vyukov, Catalin Marinas, Will Deacon, Andrey Ryabinin,
	Andrey Konovalov, linux-arm-kernel, linux-kernel, kasan-dev,
	linux-mm, Daniel Axtens


On 2021/7/16 15:41, Marco Elver wrote:
> On Fri, 16 Jul 2021 at 07:06, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>> Hi Marco and Dmitry, any comments about the following replay, thanks.
> Can you clarify the question? I've been waiting for v2.
>
> I think you said that this will remain arm64 specific and the existing
> generic kasan_populate_early_shadow() doesn't work.

Yes, I can't find a generic way to solve the issue. If there is no
better way, I will send a new version (fixing the build error and
addressing the __weak comment), roughly as sketched below.
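
For the __weak part, the plan is roughly the following (a sketch based on
this series; v2 may differ):

/* mm/kasan/init.c: the generic no-op default keeps __weak */
void __init __weak kasan_populate_early_vm_area_shadow(void *start,
						       unsigned long size)
{
}

/* arch/arm64/mm/kasan_init.c: arch override with __weak dropped, so there
 * is only one weak definition and the arm64 one always takes precedence
 */
void __init kasan_populate_early_vm_area_shadow(void *start,
						unsigned long size)
{
	unsigned long shadow_start, shadow_end;

	if (!is_vmalloc_or_module_addr(start))
		return;

	shadow_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start),
				  PAGE_SIZE);
	shadow_end = ALIGN((unsigned long)kasan_mem_to_shadow(start + size),
			   PAGE_SIZE);
	kasan_map_populate(shadow_start, shadow_end,
			   early_pfn_to_nid(virt_to_pfn(start)));
}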

>
> If there's nothing else that needs resolving, please go ahead and send
> v2 (the __weak comment still needs resolving).
Thanks, will do.
>
> Thanks,
> -- Marco
> .
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2021-07-17  2:42 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-05 11:14 [PATCH -next 0/3] arm64: support page mapping percpu first chunk allocator Kefeng Wang
2021-07-05 11:14 ` Kefeng Wang
2021-07-05 11:14 ` [PATCH -next 1/3] vmalloc: Choose a better start address in vm_area_register_early() Kefeng Wang
2021-07-05 11:14   ` Kefeng Wang
2021-07-05 11:14 ` [PATCH -next 2/3] arm64: Support page mapping percpu first chunk allocator Kefeng Wang
2021-07-05 11:14   ` Kefeng Wang
2021-07-05 11:14 ` [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC Kefeng Wang
2021-07-05 11:14   ` Kefeng Wang
2021-07-05 14:10   ` kernel test robot
2021-07-05 14:10     ` kernel test robot
2021-07-05 14:10     ` kernel test robot
2021-07-06  4:12     ` Kefeng Wang
2021-07-06  4:12       ` Kefeng Wang
2021-07-06  4:12       ` Kefeng Wang
2021-07-05 15:04   ` Marco Elver
2021-07-05 15:04     ` Marco Elver
2021-07-06  0:04     ` Daniel Axtens
2021-07-06  0:04       ` Daniel Axtens
2021-07-06  0:05       ` Daniel Axtens
2021-07-06  0:05         ` Daniel Axtens
2021-07-06  4:07     ` Kefeng Wang
2021-07-16  5:06       ` Kefeng Wang
2021-07-16  7:41         ` Marco Elver
2021-07-16  7:41           ` Marco Elver
2021-07-16  7:41           ` Marco Elver
2021-07-17  2:40           ` Kefeng Wang
2021-07-17  2:40             ` Kefeng Wang
2021-07-05 17:15   ` kernel test robot
2021-07-05 17:15     ` kernel test robot
2021-07-05 17:15     ` kernel test robot

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.