* [PATCH 00/13] kmsan: Enable on powerpc
@ 2023-12-14  5:55 Nicholas Miehlbradt
  2023-12-14  5:55 ` [PATCH 01/13] kmsan: Export kmsan_handle_dma Nicholas Miehlbradt
                   ` (13 more replies)
  0 siblings, 14 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

This series provides the minimal support for the Kernel Memory Sanitizer
(KMSAN) on powerpc pseries LE guests. KMSAN is a tool which detects uses
of uninitialized memory. Currently KMSAN is Clang-only.
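
As a purely illustrative sketch (not part of this series), this is the
class of bug KMSAN reports; the function below is hypothetical:

	static int example_uninit_read(void)
	{
		int flags;		/* never written */

		if (flags & 0x1)	/* KMSAN: uninitialized value used here */
			return 1;
		return 0;
	}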

The Clang support for powerpc has not yet been merged; the pull request
can be found at [1].

In addition to this series, there are a number of changes required in
generic kmsan code. These changes are already on mailing lists as part of
the series implementing KMSAN for s390 [2]. This series is intended to be
rebased on top of the s390 series.

In addition, I found a bug in the RTC driver used on powerpc. I have
sent a fix for this in a separate series [3].

With this series and the two series mentioned above, I can successfully
boot pseries le defconfig without KMSAN warnings. I have not tested other
powerpc platforms.

[1] https://github.com/llvm/llvm-project/pull/73611
[2] https://lore.kernel.org/linux-mm/20231121220155.1217090-1-iii@linux.ibm.com/
[3] https://lore.kernel.org/linux-rtc/20231129073647.2624497-1-nicholas@linux.ibm.com/

Nicholas Miehlbradt (13):
  kmsan: Export kmsan_handle_dma
  hvc: Fix use of uninitialized array in udbg_hvc_putc
  powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory
  powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled
  powerpc: Unpoison buffers populated by hcalls
  powerpc/pseries/nvram: Unpoison buffer populated by rtas_call
  powerpc/kprobes: Unpoison instruction in kprobe struct
  powerpc: Unpoison pt_regs
  powerpc: Disable KMSAN checks on functions which walk the stack
  powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
  powerpc: Implement architecture specific KMSAN interface
  powerpc/string: Add KMSAN support
  powerpc: Enable KMSAN on powerpc

 arch/powerpc/Kconfig                          |  3 +-
 arch/powerpc/include/asm/book3s/64/pgtable.h  | 42 +++++++++++++++
 arch/powerpc/include/asm/interrupt.h          |  2 +
 arch/powerpc/include/asm/kmsan.h              | 51 +++++++++++++++++++
 arch/powerpc/include/asm/string.h             | 18 ++++++-
 arch/powerpc/kernel/Makefile                  |  2 +
 arch/powerpc/kernel/irq_64.c                  |  2 +
 arch/powerpc/kernel/kprobes.c                 |  2 +
 arch/powerpc/kernel/module.c                  |  2 +-
 arch/powerpc/kernel/process.c                 |  6 +--
 arch/powerpc/kernel/stacktrace.c              | 10 ++--
 arch/powerpc/kernel/vdso/Makefile             |  1 +
 arch/powerpc/lib/Makefile                     |  2 +
 arch/powerpc/lib/mem_64.S                     |  5 +-
 arch/powerpc/lib/memcpy_64.S                  |  2 +
 arch/powerpc/perf/callchain.c                 |  2 +-
 arch/powerpc/platforms/pseries/hvconsole.c    |  2 +
 arch/powerpc/platforms/pseries/nvram.c        |  4 ++
 arch/powerpc/purgatory/Makefile               |  1 +
 arch/powerpc/sysdev/xive/spapr.c              |  3 ++
 drivers/tty/hvc/hvc_vio.c                     |  2 +-
 mm/kmsan/hooks.c                              |  1 +
 .../selftests/powerpc/copyloops/asm/kmsan.h   |  0
 .../powerpc/copyloops/linux/export.h          |  1 +
 24 files changed, 152 insertions(+), 14 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kmsan.h
 create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h

-- 
2.40.1



* [PATCH 01/13] kmsan: Export kmsan_handle_dma
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2024-02-19 19:37   ` Christophe Leroy
  2023-12-14  5:55 ` [PATCH 02/13] hvc: Fix use of uninitialized array in udbg_hvc_putc Nicholas Miehlbradt
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

kmsan_handle_dma is required by virtio drivers. Export kmsan_handle_dma
so that the drivers can be compiled as modules.
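
As a hedged illustration of why the export matters (not from this
series, helper name made up): a driver built as a module that calls the
hook cannot be loaded without the export, since the symbol stays
unresolved.

	#include <linux/kmsan.h>		/* declares kmsan_handle_dma() */
	#include <linux/dma-direction.h>	/* DMA_TO_DEVICE */

	/* Sketch: a modular driver marking a buffer it hands to a device */
	static void demo_mark_dma(struct page *page, size_t offset, size_t len)
	{
		kmsan_handle_dma(page, offset, len, DMA_TO_DEVICE);
	}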

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 mm/kmsan/hooks.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 7a30274b893c..3532d9275ca5 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -358,6 +358,7 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
 		size -= to_go;
 	}
 }
+EXPORT_SYMBOL(kmsan_handle_dma);
 
 void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
 			 enum dma_data_direction dir)
-- 
2.40.1



* [PATCH 02/13] hvc: Fix use of uninitialized array in udbg_hvc_putc
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
  2023-12-14  5:55 ` [PATCH 01/13] kmsan: Export kmsan_handle_dma Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  8:36   ` Christophe Leroy
  2023-12-14  5:55 ` [PATCH 03/13] powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory Nicholas Miehlbradt
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

All elements of bounce_buffer are eventually read and passed to the
hypervisor so it should probably be fully initialized.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 drivers/tty/hvc/hvc_vio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
index 736b230f5ec0..1e88bfcdde20 100644
--- a/drivers/tty/hvc/hvc_vio.c
+++ b/drivers/tty/hvc/hvc_vio.c
@@ -227,7 +227,7 @@ static const struct hv_ops hvterm_hvsi_ops = {
 static void udbg_hvc_putc(char c)
 {
 	int count = -1;
-	unsigned char bounce_buffer[16];
+	unsigned char bounce_buffer[16] = { 0 };
 
 	if (!hvterm_privs[0])
 		return;
-- 
2.40.1



* [PATCH 03/13] powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
  2023-12-14  5:55 ` [PATCH 01/13] kmsan: Export kmsan_handle_dma Nicholas Miehlbradt
  2023-12-14  5:55 ` [PATCH 02/13] hvc: Fix use of uninitialized array in udbg_hvc_putc Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  5:55 ` [PATCH 04/13] powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled Nicholas Miehlbradt
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

Other sanitizers are already disabled for these objects, so disable
KMSAN too.

prom_init.o can only reference a limited set of external symbols. KMSAN
adds additional references which are not permitted, so disable
sanitization for it.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/kernel/Makefile      | 2 ++
 arch/powerpc/kernel/vdso/Makefile | 1 +
 arch/powerpc/purgatory/Makefile   | 1 +
 3 files changed, 4 insertions(+)

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 2919433be355..78ea441f7e18 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -61,6 +61,8 @@ KCSAN_SANITIZE_btext.o := n
 KCSAN_SANITIZE_paca.o := n
 KCSAN_SANITIZE_setup_64.o := n
 
+KMSAN_SANITIZE_prom_init.o := n
+
 #ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
 # Remove stack protector to avoid triggering unneeded stack canary
 # checks due to randomize_kstack_offset.
diff --git a/arch/powerpc/kernel/vdso/Makefile b/arch/powerpc/kernel/vdso/Makefile
index 0c7d82c270c3..86fa6ff1ee51 100644
--- a/arch/powerpc/kernel/vdso/Makefile
+++ b/arch/powerpc/kernel/vdso/Makefile
@@ -52,6 +52,7 @@ KCOV_INSTRUMENT := n
 UBSAN_SANITIZE := n
 KASAN_SANITIZE := n
 KCSAN_SANITIZE := n
+KMSAN_SANITIZE := n
 
 ccflags-y := -fno-common -fno-builtin
 ldflags-y := -Wl,--hash-style=both -nostdlib -shared -z noexecstack $(CLANG_FLAGS)
diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
index 78473d69cd2b..4b267061bf84 100644
--- a/arch/powerpc/purgatory/Makefile
+++ b/arch/powerpc/purgatory/Makefile
@@ -2,6 +2,7 @@
 
 KASAN_SANITIZE := n
 KCSAN_SANITIZE := n
+KMSAN_SANITIZE := n
 
 targets += trampoline_$(BITS).o purgatory.ro
 
-- 
2.40.1



* [PATCH 04/13] powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (2 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 03/13] powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  8:42   ` Christophe Leroy
  2023-12-14  5:55 ` [PATCH 05/13] powerpc: Unpoison buffers populated by hcalls Nicholas Miehlbradt
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

Word sized accesses may read uninitialized data when optimizing loads.
Disable this optimization when KMSAN is enabled to prevent false
positives.
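
As a hedged sketch of the pattern behind the false positives (the names
and sizes here are illustrative, not taken from the kernel's
word-at-a-time helpers):

	#include <linux/string.h>
	#include <linux/types.h>

	/*
	 * A 5-byte name stored in an 8-byte slot: a word-at-a-time compare
	 * loads all 8 bytes, including 3 that were never written. Harmless,
	 * but KMSAN flags the comparison as a use of uninitialized memory.
	 */
	static bool demo_word_compare(const char *src)
	{
		char name[8];
		unsigned long word;

		memcpy(name, src, 5);			/* bytes 5..7 stay uninitialized */
		memcpy(&word, name, sizeof(word));	/* whole-word load */
		return word == 0;			/* KMSAN warning here */
	}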

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6f105ee4f3cf..e33e3250c478 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -182,7 +182,7 @@ config PPC
 	select BUILDTIME_TABLE_SORT
 	select CLONE_BACKWARDS
 	select CPUMASK_OFFSTACK			if NR_CPUS >= 8192
-	select DCACHE_WORD_ACCESS		if PPC64 && CPU_LITTLE_ENDIAN
+	select DCACHE_WORD_ACCESS		if PPC64 && CPU_LITTLE_ENDIAN && !KMSAN
 	select DMA_OPS_BYPASS			if PPC64
 	select DMA_OPS				if PPC64
 	select DYNAMIC_FTRACE			if FUNCTION_TRACER
-- 
2.40.1



* [PATCH 05/13] powerpc: Unpoison buffers populated by hcalls
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (3 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 04/13] powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  5:55 ` [PATCH 06/13] powerpc/pseries/nvram: Unpoison buffer populated by rtas_call Nicholas Miehlbradt
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

plpar_hcall provides the hypervisor with a buffer where return data is
placed. The hypervisor initializes this buffer, but the writes are not
visible to KMSAN, so unpoison it manually.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/platforms/pseries/hvconsole.c | 2 ++
 arch/powerpc/sysdev/xive/spapr.c           | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/hvconsole.c b/arch/powerpc/platforms/pseries/hvconsole.c
index 1ac52963e08b..7ad66acd5db8 100644
--- a/arch/powerpc/platforms/pseries/hvconsole.c
+++ b/arch/powerpc/platforms/pseries/hvconsole.c
@@ -13,6 +13,7 @@
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <linux/errno.h>
+#include <linux/kmsan-checks.h>
 #include <asm/hvcall.h>
 #include <asm/hvconsole.h>
 #include <asm/plpar_wrappers.h>
@@ -32,6 +33,7 @@ int hvc_get_chars(uint32_t vtermno, char *buf, int count)
 	unsigned long *lbuf = (unsigned long *)buf;
 
 	ret = plpar_hcall(H_GET_TERM_CHAR, retbuf, vtermno);
+	kmsan_unpoison_memory(retbuf, sizeof(retbuf));
 	lbuf[0] = be64_to_cpu(retbuf[1]);
 	lbuf[1] = be64_to_cpu(retbuf[2]);
 
diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
index e45419264391..a9f48a336e4d 100644
--- a/arch/powerpc/sysdev/xive/spapr.c
+++ b/arch/powerpc/sysdev/xive/spapr.c
@@ -20,6 +20,7 @@
 #include <linux/mm.h>
 #include <linux/delay.h>
 #include <linux/libfdt.h>
+#include <linux/kmsan-checks.h>
 
 #include <asm/machdep.h>
 #include <asm/prom.h>
@@ -191,6 +192,8 @@ static long plpar_int_get_source_info(unsigned long flags,
 		return rc;
 	}
 
+	kmsan_unpoison_memory(retbuf, sizeof(retbuf));
+
 	*src_flags = retbuf[0];
 	*eoi_page  = retbuf[1];
 	*trig_page = retbuf[2];
-- 
2.40.1



* [PATCH 06/13] powerpc/pseries/nvram: Unpoison buffer populated by rtas_call
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (4 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 05/13] powerpc: Unpoison buffers populated by hcalls Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  5:55 ` [PATCH 07/13] powerpc/kprobes: Unpoison instruction in kprobe struct Nicholas Miehlbradt
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

rtas_call provides a buffer where the return data is placed. RTAS
initializes the buffer, but the writes are not visible to KMSAN, so
unpoison it manually.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/platforms/pseries/nvram.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/nvram.c b/arch/powerpc/platforms/pseries/nvram.c
index 8130c37962c0..21a27d459347 100644
--- a/arch/powerpc/platforms/pseries/nvram.c
+++ b/arch/powerpc/platforms/pseries/nvram.c
@@ -14,6 +14,7 @@
 #include <linux/ctype.h>
 #include <linux/uaccess.h>
 #include <linux/of.h>
+#include <linux/kmsan-checks.h>
 #include <asm/nvram.h>
 #include <asm/rtas.h>
 #include <asm/machdep.h>
@@ -41,6 +42,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index)
 	int done;
 	unsigned long flags;
 	char *p = buf;
+	size_t l;
 
 
 	if (nvram_size == 0 || nvram_fetch == RTAS_UNKNOWN_SERVICE)
@@ -53,6 +55,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index)
 	if (i + count > nvram_size)
 		count = nvram_size - i;
 
+	l = count;
 	spin_lock_irqsave(&nvram_lock, flags);
 
 	for (; count != 0; count -= len) {
@@ -73,6 +76,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index)
 	}
 
 	spin_unlock_irqrestore(&nvram_lock, flags);
+	kmsan_unpoison_memory(buf, l);
 	
 	*index = i;
 	return p - buf;
-- 
2.40.1



* [PATCH 07/13] powerpc/kprobes: Unpoison instruction in kprobe struct
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (5 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 06/13] powerpc/pseries/nvram: Unpoison buffer populated by rtas_call Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-15  7:51   ` Naveen N Rao
  2023-12-14  5:55 ` [PATCH 08/13] powerpc: Unpoison pt_regs Nicholas Miehlbradt
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

KMSAN does not unpoison the ainsn field of a kprobe struct correctly.
Manually unpoison it to prevent false positives.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/kernel/kprobes.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index b20ee72e873a..1cbec54f2b6a 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -27,6 +27,7 @@
 #include <asm/sections.h>
 #include <asm/inst.h>
 #include <linux/uaccess.h>
+#include <linux/kmsan-checks.h>
 
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -179,6 +180,7 @@ int arch_prepare_kprobe(struct kprobe *p)
 
 	if (!ret) {
 		patch_instruction(p->ainsn.insn, insn);
+		kmsan_unpoison_memory(p->ainsn.insn, sizeof(kprobe_opcode_t));
 		p->opcode = ppc_inst_val(insn);
 	}
 
-- 
2.40.1



* [PATCH 08/13] powerpc: Unpoison pt_regs
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (6 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 07/13] powerpc/kprobes: Unpoison instruction in kprobe struct Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  5:55 ` [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack Nicholas Miehlbradt
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

pt_regs is initialized by ppc_save_regs, which is implemented in
assembly and therefore does not mark the struct as initialized. Unpoison
it so that it does not generate false positives.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/include/asm/interrupt.h | 2 ++
 arch/powerpc/kernel/irq_64.c         | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index a4196ab1d016..a9bb09633689 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -68,6 +68,7 @@
 
 #include <linux/context_tracking.h>
 #include <linux/hardirq.h>
+#include <linux/kmsan.h>
 #include <asm/cputime.h>
 #include <asm/firmware.h>
 #include <asm/ftrace.h>
@@ -170,6 +171,7 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs)
 		__hard_RI_enable();
 	}
 	/* Enable MSR[RI] early, to support kernel SLB and hash faults */
+	kmsan_unpoison_entry_regs(regs);
 #endif
 
 	if (!arch_irq_disabled_regs(regs))
diff --git a/arch/powerpc/kernel/irq_64.c b/arch/powerpc/kernel/irq_64.c
index 938e66829eae..3d441f1b8c49 100644
--- a/arch/powerpc/kernel/irq_64.c
+++ b/arch/powerpc/kernel/irq_64.c
@@ -45,6 +45,7 @@
 #include <linux/vmalloc.h>
 #include <linux/pgtable.h>
 #include <linux/static_call.h>
+#include <linux/kmsan.h>
 
 #include <linux/uaccess.h>
 #include <asm/interrupt.h>
@@ -117,6 +118,7 @@ static __no_kcsan void __replay_soft_interrupts(void)
 	local_paca->irq_happened |= PACA_IRQ_REPLAYING;
 
 	ppc_save_regs(&regs);
+	kmsan_unpoison_entry_regs(&regs);
 	regs.softe = IRQS_ENABLED;
 	regs.msr |= MSR_EE;
 
-- 
2.40.1



* [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (7 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 08/13] powerpc: Unpoison pt_regs Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  9:00   ` Christophe Leroy
  2023-12-15  9:02   ` Aneesh Kumar K.V
  2023-12-14  5:55 ` [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap Nicholas Miehlbradt
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

Functions which walk the stack read parts of the stack which cannot be
instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
these functions to prevent false positives.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/kernel/process.c    |  6 +++---
 arch/powerpc/kernel/stacktrace.c | 10 ++++++----
 arch/powerpc/perf/callchain.c    |  2 +-
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 392404688cec..3dc88143c3b2 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2276,9 +2276,9 @@ static bool empty_user_regs(struct pt_regs *regs, struct task_struct *tsk)
 
 static int kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
 
-void __no_sanitize_address show_stack(struct task_struct *tsk,
-				      unsigned long *stack,
-				      const char *loglvl)
+void __no_sanitize_address __no_kmsan_checks show_stack(struct task_struct *tsk,
+							unsigned long *stack,
+							const char *loglvl)
 {
 	unsigned long sp, ip, lr, newsp;
 	int count = 0;
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index e6a958a5da27..369b8b2a1bcd 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -24,8 +24,9 @@
 
 #include <asm/paca.h>
 
-void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
-					   struct task_struct *task, struct pt_regs *regs)
+void __no_sanitize_address __no_kmsan_checks
+	arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+			struct task_struct *task, struct pt_regs *regs)
 {
 	unsigned long sp;
 
@@ -62,8 +63,9 @@ void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry,
  *
  * If the task is not 'current', the caller *must* ensure the task is inactive.
  */
-int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
-						   void *cookie, struct task_struct *task)
+int __no_sanitize_address __no_kmsan_checks
+	arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
+				 struct task_struct *task)
 {
 	unsigned long sp;
 	unsigned long newsp;
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 6b4434dd0ff3..c7610b38e9b8 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -40,7 +40,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
 	return 0;
 }
 
-void __no_sanitize_address
+void __no_sanitize_address __no_kmsan_checks
 perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
 {
 	unsigned long sp, next_sp;
-- 
2.40.1



* [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (8 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  9:17   ` Christophe Leroy
  2023-12-15  9:27   ` Aneesh Kumar K.V
  2023-12-14  5:55 ` [PATCH 11/13] powerpc: Implement architecture specific KMSAN interface Nicholas Miehlbradt
                   ` (3 subsequent siblings)
  13 siblings, 2 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

Splits the vmalloc region into four. The first quarter is the new
vmalloc region, the second is used to store shadow metadata and the
third is used to store origin metadata. The fourth quarter is unused.

Do the same for the ioremap region.

Module data is stored in the vmalloc region so alias the modules
metadata addresses to the respective vmalloc metadata addresses. Define
MODULES_VADDR and MODULES_END to the start and end of the vmalloc
region.

Since MODULES_VADDR was previously only defined on ppc32 targets,
existing checks for whether this macro is defined need to be updated to
also check for defined(CONFIG_PPC32).
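
As a rough sketch of the arithmetic this layout enables (the real
lookup lives in the generic KMSAN code; the helper below is
hypothetical):

	/* Map a vmalloc address to its shadow byte using the new constants. */
	static void *demo_vmalloc_shadow(unsigned long addr)
	{
		if (addr < VMALLOC_START || addr >= VMALLOC_END)
			return NULL;
		return (void *)(KMSAN_VMALLOC_SHADOW_START + (addr - VMALLOC_START));
	}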

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 42 ++++++++++++++++++++
 arch/powerpc/kernel/module.c                 |  2 +-
 2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb77eddca54b..b3a02b8d96e3 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -249,7 +249,38 @@ enum pgtable_index {
 extern unsigned long __vmalloc_start;
 extern unsigned long __vmalloc_end;
 #define VMALLOC_START	__vmalloc_start
+
+#ifndef CONFIG_KMSAN
 #define VMALLOC_END	__vmalloc_end
+#else
+/*
+ * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
+ * are used to keep the metadata for virtual pages. The memory formerly
+ * belonging to vmalloc area is now laid out as follows:
+ *
+ * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
+ * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
+ *              KMSAN_VMALLOC_SHADOW_START+VMALLOC_LEN - vmalloc area shadow
+ * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
+ *              KMSAN_VMALLOC_ORIGIN_START+VMALLOC_LEN - vmalloc area origins
+ * 4th quarter: unused
+ */
+#define VMALLOC_LEN ((__vmalloc_end - __vmalloc_start) >> 2)
+#define VMALLOC_END (VMALLOC_START + VMALLOC_LEN)
+
+#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
+#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_END + VMALLOC_LEN)
+
+/*
+ * Module metadata is stored in the corresponding vmalloc metadata regions
+ */
+#define KMSAN_MODULES_SHADOW_START	KMSAN_VMALLOC_SHADOW_START
+#define KMSAN_MODULES_ORIGIN_START	KMSAN_VMALLOC_ORIGIN_START
+#endif /* CONFIG_KMSAN */
+
+#define MODULES_VADDR VMALLOC_START
+#define MODULES_END VMALLOC_END
+#define MODULES_LEN		(MODULES_END - MODULES_VADDR)
 
 static inline unsigned int ioremap_max_order(void)
 {
@@ -264,7 +295,18 @@ extern unsigned long __kernel_io_start;
 extern unsigned long __kernel_io_end;
 #define KERN_VIRT_START __kernel_virt_start
 #define KERN_IO_START  __kernel_io_start
+#ifndef CONFIG_KMSAN
 #define KERN_IO_END __kernel_io_end
+#else
+/*
+ * In KMSAN builds IO space is 4 times smaller, the remaining space is used to
+ * store metadata. See comment for vmalloc regions above.
+ */
+#define KERN_IO_LEN             ((__kernel_io_end - __kernel_io_start) >> 2)
+#define KERN_IO_END             (KERN_IO_START + KERN_IO_LEN)
+#define KERN_IO_SHADOW_START    KERN_IO_END
+#define KERN_IO_ORIGIN_START    (KERN_IO_SHADOW_START + KERN_IO_LEN)
+#endif /* !CONFIG_KMSAN */
 
 extern struct page *vmemmap;
 extern unsigned long pci_io_base;
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index f6d6ae0a1692..5043b959ad4d 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -107,7 +107,7 @@ __module_alloc(unsigned long size, unsigned long start, unsigned long end, bool
 
 void *module_alloc(unsigned long size)
 {
-#ifdef MODULES_VADDR
+#if defined(MODULES_VADDR) && defined(CONFIG_PPC32)
 	unsigned long limit = (unsigned long)_etext - SZ_32M;
 	void *ptr = NULL;
 
-- 
2.40.1



* [PATCH 11/13] powerpc: Implement architecture specific KMSAN interface
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (9 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  9:20   ` Christophe Leroy
  2023-12-14  5:55 ` [PATCH 12/13] powerpc/string: Add KMSAN support Nicholas Miehlbradt
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

arch_kmsan_get_meta_or_null finds the metadata addresses for addresses
in the ioremap region, which is mapped separately on powerpc.

kmsan_virt_addr_valid is the same as virt_addr_valid except that it
omits the check that addr is less than high_memory, since this function
can be called on addresses higher than this.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/include/asm/kmsan.h | 44 ++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)
 create mode 100644 arch/powerpc/include/asm/kmsan.h

diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
new file mode 100644
index 000000000000..bc84f6ff2ee9
--- /dev/null
+++ b/arch/powerpc/include/asm/kmsan.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * powerpc KMSAN support.
+ *
+ */
+
+#ifndef _ASM_POWERPC_KMSAN_H
+#define _ASM_POWERPC_KMSAN_H
+
+#ifndef __ASSEMBLY__
+#ifndef MODULE
+
+#include <linux/mmzone.h>
+#include <asm/page.h>
+#include <asm/book3s/64/pgtable.h>
+
+/*
+ * Functions below are declared in the header to make sure they are inlined.
+ * They all are called from kmsan_get_metadata() for every memory access in
+ * the kernel, so speed is important here.
+ */
+
+/*
+ * No powerpc specific metadata locations
+ */
+static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
+{
+	unsigned long addr64 = (unsigned long)addr, off;
+	if (KERN_IO_START <= addr64 && addr64 < KERN_IO_END) {
+		off = addr64 - KERN_IO_START;
+		return (void *)off + (is_origin ? KERN_IO_ORIGIN_START : KERN_IO_SHADOW_START);
+	} else {
+		return 0;
+	}
+}
+
+static inline bool kmsan_virt_addr_valid(void *addr)
+{
+	return (unsigned long)addr >= PAGE_OFFSET && pfn_valid(virt_to_pfn(addr));
+}
+
+#endif /* !MODULE */
+#endif /* !__ASSEMBLY__ */
+#endif /* _ASM_POWERPC_KMSAN_H */
-- 
2.40.1



* [PATCH 12/13] powerpc/string: Add KMSAN support
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (10 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 11/13] powerpc: Implement architecture specific KMSAN interface Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  9:25   ` Christophe Leroy
  2023-12-14  5:55 ` [PATCH 13/13] powerpc: Enable KMSAN on powerpc Nicholas Miehlbradt
  2024-02-20  6:39 ` [PATCH 00/13] kmsan: Enable " Christophe Leroy
  13 siblings, 1 reply; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

KMSAN expects functions __mem{set,cpy,move} so add aliases pointing to
the respective functions.

Disable use of the architecture-specific memset{16,32,64} to ensure
that metadata is correctly updated, and of strn{cpy,cmp} and
mem{chr,cmp}, which are implemented in assembly and therefore cannot be
instrumented to propagate/check metadata.

Alias calls to mem{set,cpy,move} to __msan_mem{set,cpy,move} in
instrumented code to correctly propagate metadata.
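
As a hedged illustration of what the aliasing means for instrumented
translation units (the function below is made up):

	#include <linux/string.h>

	static void demo_clear(void *buf, size_t len)
	{
		/*
		 * In a file built with __SANITIZE_MEMORY__ the #define turns
		 * this into __msan_memset(buf, 0, len), which updates the
		 * shadow/origin metadata and then performs the real memset.
		 */
		memset(buf, 0, len);
	}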

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/include/asm/kmsan.h               |  7 +++++++
 arch/powerpc/include/asm/string.h              | 18 ++++++++++++++++--
 arch/powerpc/lib/Makefile                      |  2 ++
 arch/powerpc/lib/mem_64.S                      |  5 ++++-
 arch/powerpc/lib/memcpy_64.S                   |  2 ++
 .../selftests/powerpc/copyloops/asm/kmsan.h    |  0
 .../selftests/powerpc/copyloops/linux/export.h |  1 +
 7 files changed, 32 insertions(+), 3 deletions(-)
 create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h

diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
index bc84f6ff2ee9..fc59dc24e170 100644
--- a/arch/powerpc/include/asm/kmsan.h
+++ b/arch/powerpc/include/asm/kmsan.h
@@ -7,6 +7,13 @@
 #ifndef _ASM_POWERPC_KMSAN_H
 #define _ASM_POWERPC_KMSAN_H
 
+#ifdef CONFIG_KMSAN
+#define EXPORT_SYMBOL_KMSAN(fn) SYM_FUNC_ALIAS(__##fn, fn) \
+				EXPORT_SYMBOL(__##fn)
+#else
+#define EXPORT_SYMBOL_KMSAN(fn)
+#endif
+
 #ifndef __ASSEMBLY__
 #ifndef MODULE
 
diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index 60ba22770f51..412626ce619b 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -4,7 +4,7 @@
 
 #ifdef __KERNEL__
 
-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
 #define __HAVE_ARCH_STRNCPY
 #define __HAVE_ARCH_STRNCMP
 #define __HAVE_ARCH_MEMCHR
@@ -56,8 +56,22 @@ void *__memmove(void *to, const void *from, __kernel_size_t n);
 #endif /* CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX */
 #endif /* CONFIG_KASAN */
 
+#ifdef CONFIG_KMSAN
+
+void *__memset(void *s, int c, __kernel_size_t count);
+void *__memcpy(void *to, const void *from, __kernel_size_t n);
+void *__memmove(void *to, const void *from, __kernel_size_t n);
+
+#ifdef __SANITIZE_MEMORY__
+#include <linux/kmsan_string.h>
+#define memset __msan_memset
+#define memcpy __msan_memcpy
+#define memmove __msan_memmove
+#endif
+#endif /* CONFIG_KMSAN */
+
 #ifdef CONFIG_PPC64
-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
 #define __HAVE_ARCH_MEMSET32
 #define __HAVE_ARCH_MEMSET64
 
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 51ad0397c17a..fc3ea3eebbd6 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -32,9 +32,11 @@ obj-y += code-patching.o feature-fixups.o pmem.o
 obj-$(CONFIG_CODE_PATCHING_SELFTEST) += test-code-patching.o
 
 ifndef CONFIG_KASAN
+ifndef CONFIG_KMSAN
 obj-y	+=	string.o memcmp_$(BITS).o
 obj-$(CONFIG_PPC32)	+= strlen_32.o
 endif
+endif
 
 obj-$(CONFIG_PPC32)	+= div64.o copy_32.o crtsavres.o
 
diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
index 6fd06cd20faa..a55f2fac49b3 100644
--- a/arch/powerpc/lib/mem_64.S
+++ b/arch/powerpc/lib/mem_64.S
@@ -9,8 +9,9 @@
 #include <asm/errno.h>
 #include <asm/ppc_asm.h>
 #include <asm/kasan.h>
+#include <asm/kmsan.h>
 
-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
 _GLOBAL(__memset16)
 	rlwimi	r4,r4,16,0,15
 	/* fall through */
@@ -96,6 +97,7 @@ _GLOBAL_KASAN(memset)
 	blr
 EXPORT_SYMBOL(memset)
 EXPORT_SYMBOL_KASAN(memset)
+EXPORT_SYMBOL_KMSAN(memset)
 
 _GLOBAL_TOC_KASAN(memmove)
 	cmplw	0,r3,r4
@@ -140,3 +142,4 @@ _GLOBAL(backwards_memcpy)
 	b	1b
 EXPORT_SYMBOL(memmove)
 EXPORT_SYMBOL_KASAN(memmove)
+EXPORT_SYMBOL_KMSAN(memmove)
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
index b5a67e20143f..1657861618cc 100644
--- a/arch/powerpc/lib/memcpy_64.S
+++ b/arch/powerpc/lib/memcpy_64.S
@@ -8,6 +8,7 @@
 #include <asm/asm-compat.h>
 #include <asm/feature-fixups.h>
 #include <asm/kasan.h>
+#include <asm/kmsan.h>
 
 #ifndef SELFTEST_CASE
 /* For big-endian, 0 == most CPUs, 1 == POWER6, 2 == Cell */
@@ -228,3 +229,4 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 #endif
 EXPORT_SYMBOL(memcpy)
 EXPORT_SYMBOL_KASAN(memcpy)
+EXPORT_SYMBOL_KMSAN(memcpy)
diff --git a/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h b/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/tools/testing/selftests/powerpc/copyloops/linux/export.h b/tools/testing/selftests/powerpc/copyloops/linux/export.h
index e6b80d5fbd14..6379624bbf9b 100644
--- a/tools/testing/selftests/powerpc/copyloops/linux/export.h
+++ b/tools/testing/selftests/powerpc/copyloops/linux/export.h
@@ -2,3 +2,4 @@
 #define EXPORT_SYMBOL(x)
 #define EXPORT_SYMBOL_GPL(x)
 #define EXPORT_SYMBOL_KASAN(x)
+#define EXPORT_SYMBOL_KMSAN(x)
-- 
2.40.1



* [PATCH 13/13] powerpc: Enable KMSAN on powerpc
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (11 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 12/13] powerpc/string: Add KMSAN support Nicholas Miehlbradt
@ 2023-12-14  5:55 ` Nicholas Miehlbradt
  2023-12-14  9:27   ` Christophe Leroy
  2024-02-20  6:39 ` [PATCH 00/13] kmsan: Enable " Christophe Leroy
  13 siblings, 1 reply; 30+ messages in thread
From: Nicholas Miehlbradt @ 2023-12-14  5:55 UTC (permalink / raw)
  To: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

Enable KMSAN in the Kconfig.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e33e3250c478..71cc7d2a0a72 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -217,6 +217,7 @@ config PPC
 	select HAVE_ARCH_KASAN_VMALLOC		if HAVE_ARCH_KASAN
 	select HAVE_ARCH_KCSAN
 	select HAVE_ARCH_KFENCE			if ARCH_SUPPORTS_DEBUG_PAGEALLOC
+        select HAVE_ARCH_KMSAN                  if PPC64
 	select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
 	select HAVE_ARCH_WITHIN_STACK_FRAMES
 	select HAVE_ARCH_KGDB
-- 
2.40.1



* Re: [PATCH 02/13] hvc: Fix use of uninitialized array in udbg_hvc_putc
  2023-12-14  5:55 ` [PATCH 02/13] hvc: Fix use of uninitialized array in udbg_hvc_putc Nicholas Miehlbradt
@ 2023-12-14  8:36   ` Christophe Leroy
  2023-12-21 12:09     ` Michael Ellerman
  0 siblings, 1 reply; 30+ messages in thread
From: Christophe Leroy @ 2023-12-14  8:36 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> All elements of bounce_buffer are eventually read and passed to the
> hypervisor so it should probably be fully initialized.

should or shall ?

> 
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>

Should there be a Fixes: tag ?

> ---
>   drivers/tty/hvc/hvc_vio.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
> index 736b230f5ec0..1e88bfcdde20 100644
> --- a/drivers/tty/hvc/hvc_vio.c
> +++ b/drivers/tty/hvc/hvc_vio.c
> @@ -227,7 +227,7 @@ static const struct hv_ops hvterm_hvsi_ops = {
>   static void udbg_hvc_putc(char c)
>   {
>   	int count = -1;
> -	unsigned char bounce_buffer[16];
> +	unsigned char bounce_buffer[16] = { 0 };

Why 16 while we have a count of 1 in the call to hvterm_raw_put_chars() ?

>   
>   	if (!hvterm_privs[0])
>   		return;


* Re: [PATCH 04/13] powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled
  2023-12-14  5:55 ` [PATCH 04/13] powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled Nicholas Miehlbradt
@ 2023-12-14  8:42   ` Christophe Leroy
  0 siblings, 0 replies; 30+ messages in thread
From: Christophe Leroy @ 2023-12-14  8:42 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> Word sized accesses may read uninitialized data when optimizing loads.
> Disable this optimization when KMSAN is enabled to prevent false
> positives.
> 
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
> ---
>   arch/powerpc/Kconfig | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 6f105ee4f3cf..e33e3250c478 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -182,7 +182,7 @@ config PPC
>   	select BUILDTIME_TABLE_SORT
>   	select CLONE_BACKWARDS
>   	select CPUMASK_OFFSTACK			if NR_CPUS >= 8192
> -	select DCACHE_WORD_ACCESS		if PPC64 && CPU_LITTLE_ENDIAN
> +	select DCACHE_WORD_ACCESS		if PPC64 && CPU_LITTLE_ENDIAN && !KMSAN
>   	select DMA_OPS_BYPASS			if PPC64
>   	select DMA_OPS				if PPC64
>   	select DYNAMIC_FTRACE			if FUNCTION_TRACER


Seems like all arches do this. Maybe a better approach would be to
define a HAVE_DCACHE_WORD_ACCESS symbol that is selected by arches, and
then have the core part select DCACHE_WORD_ACCESS when
HAVE_DCACHE_WORD_ACCESS && !KMSAN.
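
Roughly something like this (untested sketch of that suggestion; the
symbol name and placement are only indicative):

	# arch side, e.g. arch/powerpc/Kconfig:
	select HAVE_DCACHE_WORD_ACCESS		if PPC64 && CPU_LITTLE_ENDIAN

	# generic side, where DCACHE_WORD_ACCESS is defined today:
	config HAVE_DCACHE_WORD_ACCESS
		bool

	config DCACHE_WORD_ACCESS
		def_bool y
		depends on HAVE_DCACHE_WORD_ACCESS && !KMSAN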

Christophe


* Re: [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack
  2023-12-14  5:55 ` [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack Nicholas Miehlbradt
@ 2023-12-14  9:00   ` Christophe Leroy
  2024-01-10  4:16     ` Nicholas Miehlbradt
  2023-12-15  9:02   ` Aneesh Kumar K.V
  1 sibling, 1 reply; 30+ messages in thread
From: Christophe Leroy @ 2023-12-14  9:00 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> Functions which walk the stack read parts of the stack which cannot be
> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
> these functions to prevent false positives.

Do other architectures have to do it as well ?

I don't see it for show_stack(); is that a specific need for powerpc ?

> 
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
> ---
>   arch/powerpc/kernel/process.c    |  6 +++---
>   arch/powerpc/kernel/stacktrace.c | 10 ++++++----
>   arch/powerpc/perf/callchain.c    |  2 +-
>   3 files changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 392404688cec..3dc88143c3b2 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -2276,9 +2276,9 @@ static bool empty_user_regs(struct pt_regs *regs, struct task_struct *tsk)
>   
>   static int kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
>   
> -void __no_sanitize_address show_stack(struct task_struct *tsk,
> -				      unsigned long *stack,
> -				      const char *loglvl)
> +void __no_sanitize_address __no_kmsan_checks show_stack(struct task_struct *tsk,
> +							unsigned long *stack,
> +							const char *loglvl)
>   {
>   	unsigned long sp, ip, lr, newsp;
>   	int count = 0;
> diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
> index e6a958a5da27..369b8b2a1bcd 100644
> --- a/arch/powerpc/kernel/stacktrace.c
> +++ b/arch/powerpc/kernel/stacktrace.c
> @@ -24,8 +24,9 @@
>   
>   #include <asm/paca.h>
>   
> -void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> -					   struct task_struct *task, struct pt_regs *regs)
> +void __no_sanitize_address __no_kmsan_checks
> +	arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> +			struct task_struct *task, struct pt_regs *regs)
>   {
>   	unsigned long sp;
>   
> @@ -62,8 +63,9 @@ void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry,
>    *
>    * If the task is not 'current', the caller *must* ensure the task is inactive.
>    */
> -int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
> -						   void *cookie, struct task_struct *task)
> +int __no_sanitize_address __no_kmsan_checks
> +	arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
> +				 struct task_struct *task)
>   {
>   	unsigned long sp;
>   	unsigned long newsp;
> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> index 6b4434dd0ff3..c7610b38e9b8 100644
> --- a/arch/powerpc/perf/callchain.c
> +++ b/arch/powerpc/perf/callchain.c
> @@ -40,7 +40,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
>   	return 0;
>   }
>   
> -void __no_sanitize_address
> +void __no_sanitize_address __no_kmsan_checks
>   perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
>   {
>   	unsigned long sp, next_sp;


* Re: [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
  2023-12-14  5:55 ` [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap Nicholas Miehlbradt
@ 2023-12-14  9:17   ` Christophe Leroy
  2024-01-10  3:54     ` Nicholas Miehlbradt
  2023-12-15  9:27   ` Aneesh Kumar K.V
  1 sibling, 1 reply; 30+ messages in thread
From: Christophe Leroy @ 2023-12-14  9:17 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> Splits the vmalloc region into four. The first quarter is the new
> vmalloc region, the second is used to store shadow metadata and the
> third is used to store origin metadata. The fourth quarter is unused.
> 
> Do the same for the ioremap region.
> 
> Module data is stored in the vmalloc region so alias the modules
> metadata addresses to the respective vmalloc metadata addresses. Define
> MODULES_VADDR and MODULES_END to the start and end of the vmalloc
> region.
> 
> Since MODULES_VADDR was previously only defined on ppc32 targets,
> existing checks for whether this macro is defined need to be updated to
> also check for defined(CONFIG_PPC32).

Why ?

In your case MODULES_VADDR is above PAGE_OFFSET so there should be no 
difference.

Christophe

> 
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/book3s/64/pgtable.h | 42 ++++++++++++++++++++
>   arch/powerpc/kernel/module.c                 |  2 +-
>   2 files changed, 43 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index cb77eddca54b..b3a02b8d96e3 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -249,7 +249,38 @@ enum pgtable_index {
>   extern unsigned long __vmalloc_start;
>   extern unsigned long __vmalloc_end;
>   #define VMALLOC_START	__vmalloc_start
> +
> +#ifndef CONFIG_KMSAN
>   #define VMALLOC_END	__vmalloc_end
> +#else
> +/*
> + * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
> + * are used to keep the metadata for virtual pages. The memory formerly
> + * belonging to vmalloc area is now laid out as follows:
> + *
> + * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
> + * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
> + *              KMSAN_VMALLOC_SHADOW_START+VMALLOC_LEN - vmalloc area shadow
> + * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
> + *              KMSAN_VMALLOC_ORIGIN_START+VMALLOC_LEN - vmalloc area origins
> + * 4th quarter: unused
> + */
> +#define VMALLOC_LEN ((__vmalloc_end - __vmalloc_start) >> 2)
> +#define VMALLOC_END (VMALLOC_START + VMALLOC_LEN)
> +
> +#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
> +#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_END + VMALLOC_LEN)
> +
> +/*
> + * Module metadata is stored in the corresponding vmalloc metadata regions
> + */
> +#define KMSAN_MODULES_SHADOW_START	KMSAN_VMALLOC_SHADOW_START
> +#define KMSAN_MODULES_ORIGIN_START	KMSAN_VMALLOC_ORIGIN_START
> +#endif /* CONFIG_KMSAN */
> +
> +#define MODULES_VADDR VMALLOC_START
> +#define MODULES_END VMALLOC_END
> +#define MODULES_LEN		(MODULES_END - MODULES_VADDR)
>   
>   static inline unsigned int ioremap_max_order(void)
>   {
> @@ -264,7 +295,18 @@ extern unsigned long __kernel_io_start;
>   extern unsigned long __kernel_io_end;
>   #define KERN_VIRT_START __kernel_virt_start
>   #define KERN_IO_START  __kernel_io_start
> +#ifndef CONFIG_KMSAN
>   #define KERN_IO_END __kernel_io_end
> +#else
> +/*
> + * In KMSAN builds IO space is 4 times smaller, the remaining space is used to
> + * store metadata. See comment for vmalloc regions above.
> + */
> +#define KERN_IO_LEN             ((__kernel_io_end - __kernel_io_start) >> 2)
> +#define KERN_IO_END             (KERN_IO_START + KERN_IO_LEN)
> +#define KERN_IO_SHADOW_START    KERN_IO_END
> +#define KERN_IO_ORIGIN_START    (KERN_IO_SHADOW_START + KERN_IO_LEN)
> +#endif /* !CONFIG_KMSAN */
>   
>   extern struct page *vmemmap;
>   extern unsigned long pci_io_base;
> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> index f6d6ae0a1692..5043b959ad4d 100644
> --- a/arch/powerpc/kernel/module.c
> +++ b/arch/powerpc/kernel/module.c
> @@ -107,7 +107,7 @@ __module_alloc(unsigned long size, unsigned long start, unsigned long end, bool
>   
>   void *module_alloc(unsigned long size)
>   {
> -#ifdef MODULES_VADDR
> +#if defined(MODULES_VADDR) && defined(CONFIG_PPC32)
>   	unsigned long limit = (unsigned long)_etext - SZ_32M;
>   	void *ptr = NULL;
>   


* Re: [PATCH 11/13] powerpc: Implement architecture specific KMSAN interface
  2023-12-14  5:55 ` [PATCH 11/13] powerpc: Implement architecture specific KMSAN interface Nicholas Miehlbradt
@ 2023-12-14  9:20   ` Christophe Leroy
  0 siblings, 0 replies; 30+ messages in thread
From: Christophe Leroy @ 2023-12-14  9:20 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> arch_kmsan_get_meta_or_null finds the metadata addresses for addresses
> in the ioremap region, which is mapped separately on powerpc.
> 
> kmsan_virt_addr_valid is the same as virt_addr_valid except that it
> omits the check that addr is less than high_memory, since this function
> can be called on addresses higher than this.
> 
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/kmsan.h | 44 ++++++++++++++++++++++++++++++++
>   1 file changed, 44 insertions(+)
>   create mode 100644 arch/powerpc/include/asm/kmsan.h
> 
> diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
> new file mode 100644
> index 000000000000..bc84f6ff2ee9
> --- /dev/null
> +++ b/arch/powerpc/include/asm/kmsan.h
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * powerpc KMSAN support.
> + *
> + */
> +
> +#ifndef _ASM_POWERPC_KMSAN_H
> +#define _ASM_POWERPC_KMSAN_H
> +
> +#ifndef __ASSEMBLY__
> +#ifndef MODULE
> +
> +#include <linux/mmzone.h>
> +#include <asm/page.h>
> +#include <asm/book3s/64/pgtable.h>
> +
> +/*
> + * Functions below are declared in the header to make sure they are inlined.
> + * They all are called from kmsan_get_metadata() for every memory access in
> + * the kernel, so speed is important here.
> + */
> +
> +/*
> + * No powerpc specific metadata locations
> + */
> +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> +{
> +	unsigned long addr64 = (unsigned long)addr, off;

Missing blank line.

> +	if (KERN_IO_START <= addr64 && addr64 < KERN_IO_END) {

off is only used in that block, so it should be declared here. It can
be done as a single line (followed by a blank line too):

	unsigned long off = addr64 - KERN_IO_START;

> +		off = addr64 - KERN_IO_START;
> +		return (void *)off + (is_origin ? KERN_IO_ORIGIN_START : KERN_IO_SHADOW_START);
> +	} else {
> +		return 0;
> +	}
> +}
> +
> +static inline bool kmsan_virt_addr_valid(void *addr)
> +{
> +	return (unsigned long)addr >= PAGE_OFFSET && pfn_valid(virt_to_pfn(addr));
> +}
> +
> +#endif /* !MODULE */
> +#endif /* !__ASSEMBLY__ */
> +#endif /* _ASM_POWERPC_KMSAN_H */


* Re: [PATCH 12/13] powerpc/string: Add KMSAN support
  2023-12-14  5:55 ` [PATCH 12/13] powerpc/string: Add KMSAN support Nicholas Miehlbradt
@ 2023-12-14  9:25   ` Christophe Leroy
  2024-01-10  4:09     ` Nicholas Miehlbradt
  0 siblings, 1 reply; 30+ messages in thread
From: Christophe Leroy @ 2023-12-14  9:25 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> KMSAN expects functions __mem{set,cpy,move} so add aliases pointing to
> the respective functions.
> 
> Disable use of the architecture-specific memset{16,32,64} to ensure
> that metadata is correctly updated, and of strn{cpy,cmp} and
> mem{chr,cmp}, which are implemented in assembly and therefore cannot be
> instrumented to propagate/check metadata.
> 
> Alias calls to mem{set,cpy,move} to __msan_mem{set,cpy,move} in
> instrumented code to correctly propagate metadata.
> 
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/kmsan.h               |  7 +++++++
>   arch/powerpc/include/asm/string.h              | 18 ++++++++++++++++--
>   arch/powerpc/lib/Makefile                      |  2 ++
>   arch/powerpc/lib/mem_64.S                      |  5 ++++-
>   arch/powerpc/lib/memcpy_64.S                   |  2 ++
>   .../selftests/powerpc/copyloops/asm/kmsan.h    |  0
>   .../selftests/powerpc/copyloops/linux/export.h |  1 +
>   7 files changed, 32 insertions(+), 3 deletions(-)
>   create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
> 
> diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
> index bc84f6ff2ee9..fc59dc24e170 100644
> --- a/arch/powerpc/include/asm/kmsan.h
> +++ b/arch/powerpc/include/asm/kmsan.h
> @@ -7,6 +7,13 @@
>   #ifndef _ASM_POWERPC_KMSAN_H
>   #define _ASM_POWERPC_KMSAN_H
>   
> +#ifdef CONFIG_KMSAN
> +#define EXPORT_SYMBOL_KMSAN(fn) SYM_FUNC_ALIAS(__##fn, fn) \
> +				EXPORT_SYMBOL(__##fn)
> +#else
> +#define EXPORT_SYMBOL_KMSAN(fn)
> +#endif
> +
>   #ifndef __ASSEMBLY__
>   #ifndef MODULE
>   
> diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
> index 60ba22770f51..412626ce619b 100644
> --- a/arch/powerpc/include/asm/string.h
> +++ b/arch/powerpc/include/asm/string.h
> @@ -4,7 +4,7 @@
>   
>   #ifdef __KERNEL__
>   
> -#ifndef CONFIG_KASAN
> +#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
>   #define __HAVE_ARCH_STRNCPY
>   #define __HAVE_ARCH_STRNCMP
>   #define __HAVE_ARCH_MEMCHR
> @@ -56,8 +56,22 @@ void *__memmove(void *to, const void *from, __kernel_size_t n);
>   #endif /* CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX */
>   #endif /* CONFIG_KASAN */
>   
> +#ifdef CONFIG_KMSAN
> +
> +void *__memset(void *s, int c, __kernel_size_t count);
> +void *__memcpy(void *to, const void *from, __kernel_size_t n);
> +void *__memmove(void *to, const void *from, __kernel_size_t n);
> +

The same is done for KASAN, can't you reuse it ?

> +#ifdef __SANITIZE_MEMORY__
> +#include <linux/kmsan_string.h>
> +#define memset __msan_memset
> +#define memcpy __msan_memcpy
> +#define memmove __msan_memmove
> +#endif

Will that work as you wish ?
What about the calls to memset() or memcpy() emitted directly by GCC ?
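
For instance (illustrative only, not from this series), the compiler
can lower a struct assignment to a memcpy() call that never goes
through the #define:

	struct big { char data[256]; };

	static void demo_copy(struct big *dst, const struct big *src)
	{
		*dst = *src;	/* may be emitted as a call to memcpy() */
	}
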

> +#endif /* CONFIG_KMSAN */
> +
>   #ifdef CONFIG_PPC64
> -#ifndef CONFIG_KASAN
> +#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
>   #define __HAVE_ARCH_MEMSET32
>   #define __HAVE_ARCH_MEMSET64
>   
> diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
> index 51ad0397c17a..fc3ea3eebbd6 100644
> --- a/arch/powerpc/lib/Makefile
> +++ b/arch/powerpc/lib/Makefile
> @@ -32,9 +32,11 @@ obj-y += code-patching.o feature-fixups.o pmem.o
>   obj-$(CONFIG_CODE_PATCHING_SELFTEST) += test-code-patching.o
>   
>   ifndef CONFIG_KASAN
> +ifndef CONFIG_KMSAN
>   obj-y	+=	string.o memcmp_$(BITS).o
>   obj-$(CONFIG_PPC32)	+= strlen_32.o
>   endif
> +endif
>   
>   obj-$(CONFIG_PPC32)	+= div64.o copy_32.o crtsavres.o
>   
> diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
> index 6fd06cd20faa..a55f2fac49b3 100644
> --- a/arch/powerpc/lib/mem_64.S
> +++ b/arch/powerpc/lib/mem_64.S
> @@ -9,8 +9,9 @@
>   #include <asm/errno.h>
>   #include <asm/ppc_asm.h>
>   #include <asm/kasan.h>
> +#include <asm/kmsan.h>
>   
> -#ifndef CONFIG_KASAN
> +#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
>   _GLOBAL(__memset16)
>   	rlwimi	r4,r4,16,0,15
>   	/* fall through */
> @@ -96,6 +97,7 @@ _GLOBAL_KASAN(memset)
>   	blr
>   EXPORT_SYMBOL(memset)
>   EXPORT_SYMBOL_KASAN(memset)
> +EXPORT_SYMBOL_KMSAN(memset)
>   
>   _GLOBAL_TOC_KASAN(memmove)
>   	cmplw	0,r3,r4
> @@ -140,3 +142,4 @@ _GLOBAL(backwards_memcpy)
>   	b	1b
>   EXPORT_SYMBOL(memmove)
>   EXPORT_SYMBOL_KASAN(memmove)
> +EXPORT_SYMBOL_KMSAN(memmove)
> diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
> index b5a67e20143f..1657861618cc 100644
> --- a/arch/powerpc/lib/memcpy_64.S
> +++ b/arch/powerpc/lib/memcpy_64.S
> @@ -8,6 +8,7 @@
>   #include <asm/asm-compat.h>
>   #include <asm/feature-fixups.h>
>   #include <asm/kasan.h>
> +#include <asm/kmsan.h>
>   
>   #ifndef SELFTEST_CASE
>   /* For big-endian, 0 == most CPUs, 1 == POWER6, 2 == Cell */
> @@ -228,3 +229,4 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
>   #endif
>   EXPORT_SYMBOL(memcpy)
>   EXPORT_SYMBOL_KASAN(memcpy)
> +EXPORT_SYMBOL_KMSAN(memcpy)
> diff --git a/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h b/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
> new file mode 100644
> index 000000000000..e69de29bb2d1
> diff --git a/tools/testing/selftests/powerpc/copyloops/linux/export.h b/tools/testing/selftests/powerpc/copyloops/linux/export.h
> index e6b80d5fbd14..6379624bbf9b 100644
> --- a/tools/testing/selftests/powerpc/copyloops/linux/export.h
> +++ b/tools/testing/selftests/powerpc/copyloops/linux/export.h
> @@ -2,3 +2,4 @@
>   #define EXPORT_SYMBOL(x)
>   #define EXPORT_SYMBOL_GPL(x)
>   #define EXPORT_SYMBOL_KASAN(x)
> +#define EXPORT_SYMBOL_KMSAN(x)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 13/13] powerpc: Enable KMSAN on powerpc
  2023-12-14  5:55 ` [PATCH 13/13] powerpc: Enable KMSAN on powerpc Nicholas Miehlbradt
@ 2023-12-14  9:27   ` Christophe Leroy
  0 siblings, 0 replies; 30+ messages in thread
From: Christophe Leroy @ 2023-12-14  9:27 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
> Enable KMSAN in the Kconfig.
> 
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
> ---
>   arch/powerpc/Kconfig | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index e33e3250c478..71cc7d2a0a72 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -217,6 +217,7 @@ config PPC
>   	select HAVE_ARCH_KASAN_VMALLOC		if HAVE_ARCH_KASAN
>   	select HAVE_ARCH_KCSAN
>   	select HAVE_ARCH_KFENCE			if ARCH_SUPPORTS_DEBUG_PAGEALLOC
> +        select HAVE_ARCH_KMSAN                  if PPC64

You said in the cover letter that you are doing this for "pseries le guests".

Will it also work on BE, and on nohash/64 ?

>   	select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
>   	select HAVE_ARCH_WITHIN_STACK_FRAMES
>   	select HAVE_ARCH_KGDB

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 07/13] powerpc/kprobes: Unpoison instruction in kprobe struct
  2023-12-14  5:55 ` [PATCH 07/13] powerpc/kprobes: Unpoison instruction in kprobe struct Nicholas Miehlbradt
@ 2023-12-15  7:51   ` Naveen N Rao
  0 siblings, 0 replies; 30+ messages in thread
From: Naveen N Rao @ 2023-12-15  7:51 UTC (permalink / raw)
  To: Nicholas Miehlbradt
  Cc: glider, elver, dvyukov, akpm, mpe, npiggin, christophe.leroy,
	linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel

On Thu, Dec 14, 2023 at 05:55:33AM +0000, Nicholas Miehlbradt wrote:
> KMSAN does not unpoison the ainsn field of a kprobe struct correctly.
> Manually unpoison it to prevent false positives.
> 
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
> ---
>  arch/powerpc/kernel/kprobes.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index b20ee72e873a..1cbec54f2b6a 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -27,6 +27,7 @@
>  #include <asm/sections.h>
>  #include <asm/inst.h>
>  #include <linux/uaccess.h>
> +#include <linux/kmsan-checks.h>
>  
>  DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
>  DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> @@ -179,6 +180,7 @@ int arch_prepare_kprobe(struct kprobe *p)
>  
>  	if (!ret) {
>  		patch_instruction(p->ainsn.insn, insn);
> +		kmsan_unpoison_memory(p->ainsn.insn, sizeof(kprobe_opcode_t));

kprobe_opcode_t is u32, but we could be probing a prefixed instruction.  
You can pass the instruction length through ppc_inst_len(insn).
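
For illustration, the adjusted hunk might look like this (a sketch only, 
assuming ppc_inst_len() is used to get the byte length of the decoded 
instruction):

	if (!ret) {
		patch_instruction(p->ainsn.insn, insn);
		/* cover the suffix word of a prefixed instruction too */
		kmsan_unpoison_memory(p->ainsn.insn, ppc_inst_len(insn));
	}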


- Naveen

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack
  2023-12-14  5:55 ` [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack Nicholas Miehlbradt
  2023-12-14  9:00   ` Christophe Leroy
@ 2023-12-15  9:02   ` Aneesh Kumar K.V
  1 sibling, 0 replies; 30+ messages in thread
From: Aneesh Kumar K.V @ 2023-12-15  9:02 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin,
	christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

Nicholas Miehlbradt <nicholas@linux.ibm.com> writes:

> Functions which walk the stack read parts of the stack which cannot be
> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
> these functions to prevent false positives.
>

Is the annotation needed to avoid uninitialized access checks when
reading parts of the stack? Can you provide a false positive example for
the commit message?

-aneesh

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
  2023-12-14  5:55 ` [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap Nicholas Miehlbradt
  2023-12-14  9:17   ` Christophe Leroy
@ 2023-12-15  9:27   ` Aneesh Kumar K.V
  1 sibling, 0 replies; 30+ messages in thread
From: Aneesh Kumar K.V @ 2023-12-15  9:27 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin,
	christophe.leroy
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel,
	Nicholas Miehlbradt

Nicholas Miehlbradt <nicholas@linux.ibm.com> writes:

> Splits the vmalloc region into four. The first quarter is the new
> vmalloc region, the second is used to store shadow metadata and the
> third is used to store origin metadata. The fourth quarter is unused.
>

Do we support KMSAN for both hash and radix? If hash is not supported,
can we then use radix.h for these changes?

> Do the same for the ioremap region.
>
> Module data is stored in the vmalloc region so alias the modules
> metadata addresses to the respective vmalloc metadata addresses. Define
> MODULES_VADDR and MODULES_END to the start and end of the vmalloc
> region.
>
> Since MODULES_VADDR was previously only defined on ppc32 targets, checks
> for whether this macro is defined need to be updated to include
> defined(CONFIG_PPC32).

-aneesh

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 02/13] hvc: Fix use of uninitialized array in udbg_hvc_putc
  2023-12-14  8:36   ` Christophe Leroy
@ 2023-12-21 12:09     ` Michael Ellerman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Ellerman @ 2023-12-21 12:09 UTC (permalink / raw)
  To: Christophe Leroy, Nicholas Miehlbradt, glider, elver, dvyukov,
	akpm, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@csgroup.eu> writes:
> On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
>> All elements of bounce_buffer are eventually read and passed to the
>> hypervisor so it should probably be fully initialized.
>
> should or shall ?
>
>> 
>> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
>
> Should there be a Fixes: tag ?
>
>> ---
>>   drivers/tty/hvc/hvc_vio.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
>> index 736b230f5ec0..1e88bfcdde20 100644
>> --- a/drivers/tty/hvc/hvc_vio.c
>> +++ b/drivers/tty/hvc/hvc_vio.c
>> @@ -227,7 +227,7 @@ static const struct hv_ops hvterm_hvsi_ops = {
>>   static void udbg_hvc_putc(char c)
>>   {
>>   	int count = -1;
>> -	unsigned char bounce_buffer[16];
>> +	unsigned char bounce_buffer[16] = { 0 };
>
> Why 16 while we have a count of 1 in the call to hvterm_raw_put_chars() ?

Because hvterm_raw_put_chars() calls hvc_put_chars(), which requires a
16-byte buffer: it passes the buffer directly to firmware, which expects
a 16-byte buffer.

It's a pretty horrible calling convention, but I guess it's to avoid
needing another bounce buffer inside hvc_put_chars().

We should probably do the change below, to at least document the
interface better.

cheers


diff --git a/arch/powerpc/include/asm/hvconsole.h b/arch/powerpc/include/asm/hvconsole.h
index ccb2034506f0..0ee7ed019e23 100644
--- a/arch/powerpc/include/asm/hvconsole.h
+++ b/arch/powerpc/include/asm/hvconsole.h
@@ -22,7 +22,7 @@
  * parm is included to conform to put_chars() function pointer template
  */
 extern int hvc_get_chars(uint32_t vtermno, char *buf, int count);
-extern int hvc_put_chars(uint32_t vtermno, const char *buf, int count);
+extern int hvc_put_chars(uint32_t vtermno, const char buf[16], int count);

 /* Provided by HVC VIO */
 void hvc_vio_init_early(void);
diff --git a/arch/powerpc/platforms/pseries/hvconsole.c b/arch/powerpc/platforms/pseries/hvconsole.c
index 1ac52963e08b..c40a82e49d59 100644
--- a/arch/powerpc/platforms/pseries/hvconsole.c
+++ b/arch/powerpc/platforms/pseries/hvconsole.c
@@ -52,7 +52,7 @@ EXPORT_SYMBOL(hvc_get_chars);
  *     firmware. Must be at least 16 bytes, even if count is less than 16.
  * @count: Send this number of characters.
  */
-int hvc_put_chars(uint32_t vtermno, const char *buf, int count)
+int hvc_put_chars(uint32_t vtermno, const char buf[16], int count)
 {
        unsigned long *lbuf = (unsigned long *) buf;
        long ret;
diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
index 736b230f5ec0..011b239a7e52 100644
--- a/drivers/tty/hvc/hvc_vio.c
+++ b/drivers/tty/hvc/hvc_vio.c
@@ -115,7 +115,7 @@ static int hvterm_raw_get_chars(uint32_t vtermno, char *buf, int count)
  *       you are sending fewer chars.
  * @count: number of chars to send.
  */
-static int hvterm_raw_put_chars(uint32_t vtermno, const char *buf, int count)
+static int hvterm_raw_put_chars(uint32_t vtermno, const char buf[16], int count)
 {
        struct hvterm_priv *pv = hvterm_privs[vtermno];
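
For reference, a caller that honours this convention looks roughly like 
the following (a sketch, not the in-tree udbg_hvc_putc()):

/*
 * Even for a single character the buffer handed to hvc_put_chars() must
 * be 16 bytes, since the hcall consumes it as two 64-bit words. Zeroing
 * it up front also means every byte the firmware reads is initialized,
 * which is what the KMSAN fix in this patch is about.
 */
static void put_one_char(uint32_t vtermno, char c)
{
	char bounce[16] = { 0 };

	bounce[0] = c;
	hvc_put_chars(vtermno, bounce, 1);
}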

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
  2023-12-14  9:17   ` Christophe Leroy
@ 2024-01-10  3:54     ` Nicholas Miehlbradt
  0 siblings, 0 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2024-01-10  3:54 UTC (permalink / raw)
  To: Christophe Leroy, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 8:17 pm, Christophe Leroy wrote:
> 
> 
> On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
>> Splits the vmalloc region into four. The first quarter is the new
>> vmalloc region, the second is used to store shadow metadata and the
>> third is used to store origin metadata. The fourth quarter is unused.
>>
>> Do the same for the ioremap region.
>>
>> Module data is stored in the vmalloc region so alias the modules
>> metadata addresses to the respective vmalloc metadata addresses. Define
>> MODULES_VADDR and MODULES_END to the start and end of the vmalloc
>> region.
>>
>> Since MODULES_VADDR was previously only defined on ppc32 targets, checks
>> for whether this macro is defined need to be updated to include
>> defined(CONFIG_PPC32).
> 
> Why ?
> 
> In your case MODULES_VADDR is above PAGE_OFFSET so there should be no
> difference.
> 
> Christophe
> 
On 64-bit builds the BUILD_BUG always triggers, since MODULES_VADDR 
expands to __vmalloc_start, which is defined in a different translation 
unit. I can restrict the #ifdef CONFIG_PPC32 to just around the 
BUILD_BUG since, as you pointed out, there is no difference otherwise.
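
Something along these lines is what I mean (a sketch only, assuming the 
compile-time check in question is a BUILD_BUG_ON() comparing TASK_SIZE 
against MODULES_VADDR):

	/* in module_alloc(): keep the check where MODULES_VADDR is constant */
#ifdef CONFIG_PPC32
	BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
#endif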
>>
>> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
>> ---
>>    arch/powerpc/include/asm/book3s/64/pgtable.h | 42 ++++++++++++++++++++
>>    arch/powerpc/kernel/module.c                 |  2 +-
>>    2 files changed, 43 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index cb77eddca54b..b3a02b8d96e3 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -249,7 +249,38 @@ enum pgtable_index {
>>    extern unsigned long __vmalloc_start;
>>    extern unsigned long __vmalloc_end;
>>    #define VMALLOC_START	__vmalloc_start
>> +
>> +#ifndef CONFIG_KMSAN
>>    #define VMALLOC_END	__vmalloc_end
>> +#else
>> +/*
>> + * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
>> + * are used to keep the metadata for virtual pages. The memory formerly
>> + * belonging to vmalloc area is now laid out as follows:
>> + *
>> + * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
>> + * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
>> + *              KMSAN_VMALLOC_SHADOW_START+VMALLOC_LEN - vmalloc area shadow
>> + * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
>> + *              KMSAN_VMALLOC_ORIGIN_START+VMALLOC_LEN - vmalloc area origins
>> + * 4th quarter: unused
>> + */
>> +#define VMALLOC_LEN ((__vmalloc_end - __vmalloc_start) >> 2)
>> +#define VMALLOC_END (VMALLOC_START + VMALLOC_LEN)
>> +
>> +#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
>> +#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_END + VMALLOC_LEN)
>> +
>> +/*
>> + * Module metadata is stored in the corresponding vmalloc metadata regions
>> + */
>> +#define KMSAN_MODULES_SHADOW_START	KMSAN_VMALLOC_SHADOW_START
>> +#define KMSAN_MODULES_ORIGIN_START	KMSAN_VMALLOC_ORIGIN_START
>> +#endif /* CONFIG_KMSAN */
>> +
>> +#define MODULES_VADDR VMALLOC_START
>> +#define MODULES_END VMALLOC_END
>> +#define MODULES_LEN		(MODULES_END - MODULES_VADDR)
>>    
>>    static inline unsigned int ioremap_max_order(void)
>>    {
>> @@ -264,7 +295,18 @@ extern unsigned long __kernel_io_start;
>>    extern unsigned long __kernel_io_end;
>>    #define KERN_VIRT_START __kernel_virt_start
>>    #define KERN_IO_START  __kernel_io_start
>> +#ifndef CONFIG_KMSAN
>>    #define KERN_IO_END __kernel_io_end
>> +#else
>> +/*
>> + * In KMSAN builds IO space is 4 times smaller, the remaining space is used to
>> + * store metadata. See comment for vmalloc regions above.
>> + */
>> +#define KERN_IO_LEN             ((__kernel_io_end - __kernel_io_start) >> 2)
>> +#define KERN_IO_END             (KERN_IO_START + KERN_IO_LEN)
>> +#define KERN_IO_SHADOW_START    KERN_IO_END
>> +#define KERN_IO_ORIGIN_START    (KERN_IO_SHADOW_START + KERN_IO_LEN)
>> +#endif /* !CONFIG_KMSAN */
>>    
>>    extern struct page *vmemmap;
>>    extern unsigned long pci_io_base;
>> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
>> index f6d6ae0a1692..5043b959ad4d 100644
>> --- a/arch/powerpc/kernel/module.c
>> +++ b/arch/powerpc/kernel/module.c
>> @@ -107,7 +107,7 @@ __module_alloc(unsigned long size, unsigned long start, unsigned long end, bool
>>    
>>    void *module_alloc(unsigned long size)
>>    {
>> -#ifdef MODULES_VADDR
>> +#if defined(MODULES_VADDR) && defined(CONFIG_PPC32)
>>    	unsigned long limit = (unsigned long)_etext - SZ_32M;
>>    	void *ptr = NULL;
>>    

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 12/13] powerpc/string: Add KMSAN support
  2023-12-14  9:25   ` Christophe Leroy
@ 2024-01-10  4:09     ` Nicholas Miehlbradt
  0 siblings, 0 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2024-01-10  4:09 UTC (permalink / raw)
  To: Christophe Leroy, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 8:25 pm, Christophe Leroy wrote:
> 
> 
> On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
>> KMSAN expects functions __mem{set,cpy,move} so add aliases pointing to
>> the respective functions.
>>
>> Disable use of the architecture-specific memset{16,32,64}, to ensure that
>> metadata is correctly updated, as well as strn{cpy,cmp} and mem{chr,cmp},
>> which are implemented in assembly and therefore cannot be instrumented to
>> propagate/check metadata.
>>
>> Alias calls to mem{set,cpy,move} to __msan_mem{set,cpy,move} in
>> instrumented code to correctly propagate metadata.
>>
>> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
>> ---
>>    arch/powerpc/include/asm/kmsan.h               |  7 +++++++
>>    arch/powerpc/include/asm/string.h              | 18 ++++++++++++++++--
>>    arch/powerpc/lib/Makefile                      |  2 ++
>>    arch/powerpc/lib/mem_64.S                      |  5 ++++-
>>    arch/powerpc/lib/memcpy_64.S                   |  2 ++
>>    .../selftests/powerpc/copyloops/asm/kmsan.h    |  0
>>    .../selftests/powerpc/copyloops/linux/export.h |  1 +
>>    7 files changed, 32 insertions(+), 3 deletions(-)
>>    create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
>>
>> diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
>> index bc84f6ff2ee9..fc59dc24e170 100644
>> --- a/arch/powerpc/include/asm/kmsan.h
>> +++ b/arch/powerpc/include/asm/kmsan.h
>> @@ -7,6 +7,13 @@
>>    #ifndef _ASM_POWERPC_KMSAN_H
>>    #define _ASM_POWERPC_KMSAN_H
>>    
>> +#ifdef CONFIG_KMSAN
>> +#define EXPORT_SYMBOL_KMSAN(fn) SYM_FUNC_ALIAS(__##fn, fn) \
>> +				EXPORT_SYMBOL(__##fn)
>> +#else
>> +#define EXPORT_SYMBOL_KMSAN(fn)
>> +#endif
>> +
>>    #ifndef __ASSEMBLY__
>>    #ifndef MODULE
>>    
>> diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
>> index 60ba22770f51..412626ce619b 100644
>> --- a/arch/powerpc/include/asm/string.h
>> +++ b/arch/powerpc/include/asm/string.h
>> @@ -4,7 +4,7 @@
>>    
>>    #ifdef __KERNEL__
>>    
>> -#ifndef CONFIG_KASAN
>> +#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
>>    #define __HAVE_ARCH_STRNCPY
>>    #define __HAVE_ARCH_STRNCMP
>>    #define __HAVE_ARCH_MEMCHR
>> @@ -56,8 +56,22 @@ void *__memmove(void *to, const void *from, __kernel_size_t n);
>>    #endif /* CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX */
>>    #endif /* CONFIG_KASAN */
>>    
>> +#ifdef CONFIG_KMSAN
>> +
>> +void *__memset(void *s, int c, __kernel_size_t count);
>> +void *__memcpy(void *to, const void *from, __kernel_size_t n);
>> +void *__memmove(void *to, const void *from, __kernel_size_t n);
>> +
> 
> The same is done for KASAN, can't you reuse it ?
> 
I tried this, but I believe it makes the file more disorganised and 
difficult to edit, since there ends up being a set of definitions for 
each intersection of features, e.g. the definitions needed for both 
KASAN and KMSAN, just KASAN, just KMSAN, etc.

This way it's clearer what each sanitizer needs, and changing the 
definitions for one sanitizer won't require refactors affecting the others.
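
For reference, the shared arrangement would look roughly like this (a 
rough sketch of the alternative, not what this patch does):

/* one declaration block shared by both sanitizers ... */
#if defined(CONFIG_KASAN) || defined(CONFIG_KMSAN)
void *__memset(void *s, int c, __kernel_size_t count);
void *__memcpy(void *to, const void *from, __kernel_size_t n);
void *__memmove(void *to, const void *from, __kernel_size_t n);
#endif

/* ... with the per-sanitizer aliasing still needing its own guards */
#if defined(CONFIG_KMSAN) && defined(__SANITIZE_MEMORY__)
#include <linux/kmsan_string.h>
#define memset __msan_memset
#define memcpy __msan_memcpy
#define memmove __msan_memmove
#endif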

>> +#ifdef __SANITIZE_MEMORY__
>> +#include <linux/kmsan_string.h>
>> +#define memset __msan_memset
>> +#define memcpy __msan_memcpy
>> +#define memmove __msan_memmove
>> +#endif
> 
> Will that work as you wish ?
> What about the calls to memset() or memcpy() emitted directly by GCC ?
> 
These are handled by the compiler instrumentation, which replaces them 
with calls to the instrumented equivalents.
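
Concretely (an illustration only, with a hypothetical helper):

/*
 * In a KMSAN-instrumented translation unit __SANITIZE_MEMORY__ is set,
 * so this memset() call compiles to __msan_memset(), which marks the
 * shadow of buf initialized along with the data. mem* calls that clang
 * emits on its own (e.g. for struct assignments) are rewritten to the
 * __msan_* versions by the instrumentation pass, so they need no extra
 * handling here.
 */
static void clear_buf(char *buf, size_t len)
{
	memset(buf, 0, len);	/* becomes __msan_memset(buf, 0, len) */
}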

>> +#endif /* CONFIG_KMSAN */
>> +
>>    #ifdef CONFIG_PPC64
>> -#ifndef CONFIG_KASAN
>> +#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
>>    #define __HAVE_ARCH_MEMSET32
>>    #define __HAVE_ARCH_MEMSET64
>>    
>> diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
>> index 51ad0397c17a..fc3ea3eebbd6 100644
>> --- a/arch/powerpc/lib/Makefile
>> +++ b/arch/powerpc/lib/Makefile
>> @@ -32,9 +32,11 @@ obj-y += code-patching.o feature-fixups.o pmem.o
>>    obj-$(CONFIG_CODE_PATCHING_SELFTEST) += test-code-patching.o
>>    
>>    ifndef CONFIG_KASAN
>> +ifndef CONFIG_KMSAN
>>    obj-y	+=	string.o memcmp_$(BITS).o
>>    obj-$(CONFIG_PPC32)	+= strlen_32.o
>>    endif
>> +endif
>>    
>>    obj-$(CONFIG_PPC32)	+= div64.o copy_32.o crtsavres.o
>>    
>> diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
>> index 6fd06cd20faa..a55f2fac49b3 100644
>> --- a/arch/powerpc/lib/mem_64.S
>> +++ b/arch/powerpc/lib/mem_64.S
>> @@ -9,8 +9,9 @@
>>    #include <asm/errno.h>
>>    #include <asm/ppc_asm.h>
>>    #include <asm/kasan.h>
>> +#include <asm/kmsan.h>
>>    
>> -#ifndef CONFIG_KASAN
>> +#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
>>    _GLOBAL(__memset16)
>>    	rlwimi	r4,r4,16,0,15
>>    	/* fall through */
>> @@ -96,6 +97,7 @@ _GLOBAL_KASAN(memset)
>>    	blr
>>    EXPORT_SYMBOL(memset)
>>    EXPORT_SYMBOL_KASAN(memset)
>> +EXPORT_SYMBOL_KMSAN(memset)
>>    
>>    _GLOBAL_TOC_KASAN(memmove)
>>    	cmplw	0,r3,r4
>> @@ -140,3 +142,4 @@ _GLOBAL(backwards_memcpy)
>>    	b	1b
>>    EXPORT_SYMBOL(memmove)
>>    EXPORT_SYMBOL_KASAN(memmove)
>> +EXPORT_SYMBOL_KMSAN(memmove)
>> diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
>> index b5a67e20143f..1657861618cc 100644
>> --- a/arch/powerpc/lib/memcpy_64.S
>> +++ b/arch/powerpc/lib/memcpy_64.S
>> @@ -8,6 +8,7 @@
>>    #include <asm/asm-compat.h>
>>    #include <asm/feature-fixups.h>
>>    #include <asm/kasan.h>
>> +#include <asm/kmsan.h>
>>    
>>    #ifndef SELFTEST_CASE
>>    /* For big-endian, 0 == most CPUs, 1 == POWER6, 2 == Cell */
>> @@ -228,3 +229,4 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
>>    #endif
>>    EXPORT_SYMBOL(memcpy)
>>    EXPORT_SYMBOL_KASAN(memcpy)
>> +EXPORT_SYMBOL_KMSAN(memcpy)
>> diff --git a/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h b/tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
>> new file mode 100644
>> index 000000000000..e69de29bb2d1
>> diff --git a/tools/testing/selftests/powerpc/copyloops/linux/export.h b/tools/testing/selftests/powerpc/copyloops/linux/export.h
>> index e6b80d5fbd14..6379624bbf9b 100644
>> --- a/tools/testing/selftests/powerpc/copyloops/linux/export.h
>> +++ b/tools/testing/selftests/powerpc/copyloops/linux/export.h
>> @@ -2,3 +2,4 @@
>>    #define EXPORT_SYMBOL(x)
>>    #define EXPORT_SYMBOL_GPL(x)
>>    #define EXPORT_SYMBOL_KASAN(x)
>> +#define EXPORT_SYMBOL_KMSAN(x)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack
  2023-12-14  9:00   ` Christophe Leroy
@ 2024-01-10  4:16     ` Nicholas Miehlbradt
  0 siblings, 0 replies; 30+ messages in thread
From: Nicholas Miehlbradt @ 2024-01-10  4:16 UTC (permalink / raw)
  To: Christophe Leroy, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 8:00 pm, Christophe Leroy wrote:
> 
> 
> On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
>> Functions which walk the stack read parts of the stack which cannot be
>> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
>> these functions to prevent false positives.
> 
> Do other architectures have to do it as well ?
> 
> I don't see it for show_stack(), is that a specific need for powerpc ?

Other archs have the annotation on functions called by show_stack(). For 
x86 it's on show_trace_log_lvl() and for s390 it's on __unwind_start() 
and unwind_next_frame().
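
For powerpc the annotation goes on the walkers themselves; a minimal 
sketch of the pattern (hypothetical function, not the in-tree code):

/*
 * The back chain is loaded from stack memory KMSAN has no metadata for,
 * so checking is disabled for the whole function instead of unpoisoning
 * every load.
 */
static __no_kmsan_checks void print_backchain(unsigned long sp)
{
	unsigned long *frame = (unsigned long *)sp;

	while (frame && frame[0]) {
		pr_info("frame at %px\n", frame);
		frame = (unsigned long *)frame[0];	/* follow the back chain */
	}
}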


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 01/13] kmsan: Export kmsan_handle_dma
  2023-12-14  5:55 ` [PATCH 01/13] kmsan: Export kmsan_handle_dma Nicholas Miehlbradt
@ 2024-02-19 19:37   ` Christophe Leroy
  0 siblings, 0 replies; 30+ messages in thread
From: Christophe Leroy @ 2024-02-19 19:37 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
> kmsan_handle_dma is required by virtio drivers. Export kmsan_handle_dma
> so that the drivers can be compiled as modules.
> 
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
> ---
>   mm/kmsan/hooks.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
> index 7a30274b893c..3532d9275ca5 100644
> --- a/mm/kmsan/hooks.c
> +++ b/mm/kmsan/hooks.c
> @@ -358,6 +358,7 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
>   		size -= to_go;
>   	}
>   }
> +EXPORT_SYMBOL(kmsan_handle_dma);

virtio is GPL and all exports inside virtio are EXPORT_SYMBOL_GPL().
Should this one be _GPL as well ?

>   
>   void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
>   			 enum dma_data_direction dir)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 00/13] kmsan: Enable on powerpc
  2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
                   ` (12 preceding siblings ...)
  2023-12-14  5:55 ` [PATCH 13/13] powerpc: Enable KMSAN on powerpc Nicholas Miehlbradt
@ 2024-02-20  6:39 ` Christophe Leroy
  13 siblings, 0 replies; 30+ messages in thread
From: Christophe Leroy @ 2024-02-20  6:39 UTC (permalink / raw)
  To: Nicholas Miehlbradt, glider, elver, dvyukov, akpm, mpe, npiggin
  Cc: linux-mm, kasan-dev, iii, linuxppc-dev, linux-kernel



On 14/12/2023 06:55, Nicholas Miehlbradt wrote:
> This series provides the minimal support for Kernel Memory Sanitizer on
> powerpc pseries le guests. Kernel Memory Sanitizer is a tool which detects
> uses of uninitialized memory. Currently KMSAN is clang only.
> 
> The clang support for powerpc has not yet been merged, the pull request can
> be found here [1].

As clang doesn't support it yet, it is probably premature to merge 
this into the kernel.

I have opened https://github.com/linuxppc/issues/issues/475 to follow up.

In the meantime I am flagging this series as "changes requested", to be 
revisited when the time comes.

Christophe

> 
> In addition to this series, there are a number of changes required in
> generic kmsan code. These changes are already on mailing lists as part of
> the series implementing KMSAN for s390 [2]. This series is intended to be
> rebased on top of the s390 series.
> 
> In addition, I found a bug in the rtc driver used on powerpc. I have sent
> a fix to this in a separate series [3].
> 
> With this series and the two series mentioned above, I can successfully
> boot pseries le defconfig without KMSAN warnings. I have not tested other
> powerpc platforms.
> 
> [1] https://github.com/llvm/llvm-project/pull/73611
> [2] https://lore.kernel.org/linux-mm/20231121220155.1217090-1-iii@linux.ibm.com/
> [3] https://lore.kernel.org/linux-rtc/20231129073647.2624497-1-nicholas@linux.ibm.com/
> 
> Nicholas Miehlbradt (13):
>    kmsan: Export kmsan_handle_dma
>    hvc: Fix use of uninitialized array in udbg_hvc_putc
>    powerpc: Disable KMSAN santitization for prom_init, vdso and purgatory
>    powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled
>    powerpc: Unpoison buffers populated by hcalls
>    powerpc/pseries/nvram: Unpoison buffer populated by rtas_call
>    powerpc/kprobes: Unpoison instruction in kprobe struct
>    powerpc: Unpoison pt_regs
>    powerpc: Disable KMSAN checks on functions which walk the stack
>    powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
>    powerpc: Implement architecture specific KMSAN interface
>    powerpc/string: Add KMSAN support
>    powerpc: Enable KMSAN on powerpc
> 
>   arch/powerpc/Kconfig                          |  3 +-
>   arch/powerpc/include/asm/book3s/64/pgtable.h  | 42 +++++++++++++++
>   arch/powerpc/include/asm/interrupt.h          |  2 +
>   arch/powerpc/include/asm/kmsan.h              | 51 +++++++++++++++++++
>   arch/powerpc/include/asm/string.h             | 18 ++++++-
>   arch/powerpc/kernel/Makefile                  |  2 +
>   arch/powerpc/kernel/irq_64.c                  |  2 +
>   arch/powerpc/kernel/kprobes.c                 |  2 +
>   arch/powerpc/kernel/module.c                  |  2 +-
>   arch/powerpc/kernel/process.c                 |  6 +--
>   arch/powerpc/kernel/stacktrace.c              | 10 ++--
>   arch/powerpc/kernel/vdso/Makefile             |  1 +
>   arch/powerpc/lib/Makefile                     |  2 +
>   arch/powerpc/lib/mem_64.S                     |  5 +-
>   arch/powerpc/lib/memcpy_64.S                  |  2 +
>   arch/powerpc/perf/callchain.c                 |  2 +-
>   arch/powerpc/platforms/pseries/hvconsole.c    |  2 +
>   arch/powerpc/platforms/pseries/nvram.c        |  4 ++
>   arch/powerpc/purgatory/Makefile               |  1 +
>   arch/powerpc/sysdev/xive/spapr.c              |  3 ++
>   drivers/tty/hvc/hvc_vio.c                     |  2 +-
>   mm/kmsan/hooks.c                              |  1 +
>   .../selftests/powerpc/copyloops/asm/kmsan.h   |  0
>   .../powerpc/copyloops/linux/export.h          |  1 +
>   24 files changed, 152 insertions(+), 14 deletions(-)
>   create mode 100644 arch/powerpc/include/asm/kmsan.h
>   create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2024-02-20  6:39 UTC | newest]

Thread overview: 30+ messages
2023-12-14  5:55 [PATCH 00/13] kmsan: Enable on powerpc Nicholas Miehlbradt
2023-12-14  5:55 ` [PATCH 01/13] kmsan: Export kmsan_handle_dma Nicholas Miehlbradt
2024-02-19 19:37   ` Christophe Leroy
2023-12-14  5:55 ` [PATCH 02/13] hvc: Fix use of uninitialized array in udbg_hvc_putc Nicholas Miehlbradt
2023-12-14  8:36   ` Christophe Leroy
2023-12-21 12:09     ` Michael Ellerman
2023-12-14  5:55 ` [PATCH 03/13] powerpc: Disable KMSAN santitization for prom_init, vdso and purgatory Nicholas Miehlbradt
2023-12-14  5:55 ` [PATCH 04/13] powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled Nicholas Miehlbradt
2023-12-14  8:42   ` Christophe Leroy
2023-12-14  5:55 ` [PATCH 05/13] powerpc: Unpoison buffers populated by hcalls Nicholas Miehlbradt
2023-12-14  5:55 ` [PATCH 06/13] powerpc/pseries/nvram: Unpoison buffer populated by rtas_call Nicholas Miehlbradt
2023-12-14  5:55 ` [PATCH 07/13] powerpc/kprobes: Unpoison instruction in kprobe struct Nicholas Miehlbradt
2023-12-15  7:51   ` Naveen N Rao
2023-12-14  5:55 ` [PATCH 08/13] powerpc: Unpoison pt_regs Nicholas Miehlbradt
2023-12-14  5:55 ` [PATCH 09/13] powerpc: Disable KMSAN checks on functions which walk the stack Nicholas Miehlbradt
2023-12-14  9:00   ` Christophe Leroy
2024-01-10  4:16     ` Nicholas Miehlbradt
2023-12-15  9:02   ` Aneesh Kumar K.V
2023-12-14  5:55 ` [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap Nicholas Miehlbradt
2023-12-14  9:17   ` Christophe Leroy
2024-01-10  3:54     ` Nicholas Miehlbradt
2023-12-15  9:27   ` Aneesh Kumar K.V
2023-12-14  5:55 ` [PATCH 11/13] powerpc: Implement architecture specific KMSAN interface Nicholas Miehlbradt
2023-12-14  9:20   ` Christophe Leroy
2023-12-14  5:55 ` [PATCH 12/13] powerpc/string: Add KMSAN support Nicholas Miehlbradt
2023-12-14  9:25   ` Christophe Leroy
2024-01-10  4:09     ` Nicholas Miehlbradt
2023-12-14  5:55 ` [PATCH 13/13] powerpc: Enable KMSAN on powerpc Nicholas Miehlbradt
2023-12-14  9:27   ` Christophe Leroy
2024-02-20  6:39 ` [PATCH 00/13] kmsan: Enable " Christophe Leroy
