* [PATCH v4 0/2] powerpc/mce: Fix mce handler and add selftest
@ 2020-10-09  6:40 Ganesh Goudar
  2020-10-09  6:40 ` [PATCH v4 1/2] powerpc/mce: remove nmi_enter/exit from real mode handler Ganesh Goudar
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Ganesh Goudar @ 2020-10-09  6:40 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: msuchanek, Ganesh Goudar, keescook, npiggin, mahesh

This patch series fixes MCE handling for pseries, adds an LKDTM test
for SLB multihit recovery, and enables a selftest for the same,
basically to test MCE handling on pseries/powernv machines running
in hash MMU mode.

v4:
* Use radix_enabled() to check whether we are in Hash or Radix mode.
* Use FW_FEATURE_LPAR instead of machine_is_pseries().

v3:
* Merge selftest changes into patch 2/2 instead of having a separate
  patch.
* Minor improvements: add comments, adjust the Makefile, include a
  header file, and add some prints.

v2:
* Remove the in_nmi() check before calling nmi_enter/exit,
  as nesting is supported.
* Fix build errors and remove unused variables.
* Integrate error injection code into LKDTM.
* Add support to inject multihit in paca.


Ganesh Goudar (2):
  powerpc/mce: remove nmi_enter/exit from real mode handler
  lkdtm/powerpc: Add SLB multihit test

 arch/powerpc/kernel/mce.c               |   7 +-
 drivers/misc/lkdtm/Makefile             |   1 +
 drivers/misc/lkdtm/core.c               |   3 +
 drivers/misc/lkdtm/lkdtm.h              |   3 +
 drivers/misc/lkdtm/powerpc.c            | 156 ++++++++++++++++++++++++
 tools/testing/selftests/lkdtm/tests.txt |   1 +
 6 files changed, 167 insertions(+), 4 deletions(-)
 create mode 100644 drivers/misc/lkdtm/powerpc.c

-- 
2.26.2



* [PATCH v4 1/2] powerpc/mce: remove nmi_enter/exit from real mode handler
  2020-10-09  6:40 [PATCH v4 0/2] powerpc/mce: Fix mce handler and add selftest Ganesh Goudar
@ 2020-10-09  6:40 ` Ganesh Goudar
  2020-10-09  6:40 ` [PATCH v4 2/2] lkdtm/powerpc: Add SLB multihit test Ganesh Goudar
  2020-10-16 11:32 ` [PATCH v4 0/2] powerpc/mce: Fix mce handler and add selftest Michael Ellerman
  2 siblings, 0 replies; 8+ messages in thread
From: Ganesh Goudar @ 2020-10-09  6:40 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: msuchanek, Ganesh Goudar, keescook, npiggin, mahesh

Use of nmi_enter/exit in the real mode handler causes the kernel to
panic and reboot when injecting an SLB multihit on a pseries machine
running in hash MMU mode, as these calls try to access memory outside
the RMO region in the real mode handler, where translation is disabled.

Add a check to avoid these calls on pseries machines running in hash
MMU mode.

Fixes: 116ac378bb3f ("powerpc/64s: machine check interrupt update NMI accounting")
Signed-off-by: Ganesh Goudar <ganeshgr@linux.ibm.com>
---
 arch/powerpc/kernel/mce.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
index ada59f6c4298..63702c0badb9 100644
--- a/arch/powerpc/kernel/mce.c
+++ b/arch/powerpc/kernel/mce.c
@@ -591,12 +591,11 @@ EXPORT_SYMBOL_GPL(machine_check_print_event_info);
 long notrace machine_check_early(struct pt_regs *regs)
 {
 	long handled = 0;
-	bool nested = in_nmi();
 	u8 ftrace_enabled = this_cpu_get_ftrace_enabled();
 
 	this_cpu_set_ftrace_enabled(0);
-
-	if (!nested)
+	/* Do not use nmi_enter/exit for pseries hpte guest */
+	if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR))
 		nmi_enter();
 
 	hv_nmi_check_nonrecoverable(regs);
@@ -607,7 +606,7 @@ long notrace machine_check_early(struct pt_regs *regs)
 	if (ppc_md.machine_check_early)
 		handled = ppc_md.machine_check_early(regs);
 
-	if (!nested)
+	if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR))
 		nmi_exit();
 
 	this_cpu_set_ftrace_enabled(ftrace_enabled);
-- 
2.26.2



* [PATCH v4 2/2] lkdtm/powerpc: Add SLB multihit test
  2020-10-09  6:40 [PATCH v4 0/2] powerpc/mce: Fix mce handler and add selftest Ganesh Goudar
  2020-10-09  6:40 ` [PATCH v4 1/2] powerpc/mce: remove nmi_enter/exit from real mode handler Ganesh Goudar
@ 2020-10-09  6:40 ` Ganesh Goudar
  2020-10-19 10:59   ` Michael Ellerman
  2020-10-16 11:32 ` [PATCH v4 0/2] powerpc/mce: Fix mce handler and add selftest Michael Ellerman
  2 siblings, 1 reply; 8+ messages in thread
From: Ganesh Goudar @ 2020-10-09  6:40 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: msuchanek, Ganesh Goudar, keescook, npiggin, mahesh

To check machine check handling, add support to inject slb
multihit errors.

Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Michal Suchánek <msuchanek@suse.de>
Co-developed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
Signed-off-by: Ganesh Goudar <ganeshgr@linux.ibm.com>
---
 drivers/misc/lkdtm/Makefile             |   1 +
 drivers/misc/lkdtm/core.c               |   3 +
 drivers/misc/lkdtm/lkdtm.h              |   3 +
 drivers/misc/lkdtm/powerpc.c            | 156 ++++++++++++++++++++++++
 tools/testing/selftests/lkdtm/tests.txt |   1 +
 5 files changed, 164 insertions(+)
 create mode 100644 drivers/misc/lkdtm/powerpc.c

diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
index c70b3822013f..f37ecfb0a707 100644
--- a/drivers/misc/lkdtm/Makefile
+++ b/drivers/misc/lkdtm/Makefile
@@ -10,6 +10,7 @@ lkdtm-$(CONFIG_LKDTM)		+= rodata_objcopy.o
 lkdtm-$(CONFIG_LKDTM)		+= usercopy.o
 lkdtm-$(CONFIG_LKDTM)		+= stackleak.o
 lkdtm-$(CONFIG_LKDTM)		+= cfi.o
+lkdtm-$(CONFIG_PPC64)		+= powerpc.o
 
 KASAN_SANITIZE_stackleak.o	:= n
 KCOV_INSTRUMENT_rodata.o	:= n
diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index a5e344df9166..8d5db42baa90 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -178,6 +178,9 @@ static const struct crashtype crashtypes[] = {
 #ifdef CONFIG_X86_32
 	CRASHTYPE(DOUBLE_FAULT),
 #endif
+#ifdef CONFIG_PPC64
+	CRASHTYPE(PPC_SLB_MULTIHIT),
+#endif
 };
 
 
diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index 8878538b2c13..b305bd511ee5 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -104,4 +104,7 @@ void lkdtm_STACKLEAK_ERASING(void);
 /* cfi.c */
 void lkdtm_CFI_FORWARD_PROTO(void);
 
+/* powerpc.c */
+void lkdtm_PPC_SLB_MULTIHIT(void);
+
 #endif
diff --git a/drivers/misc/lkdtm/powerpc.c b/drivers/misc/lkdtm/powerpc.c
new file mode 100644
index 000000000000..f388b53dccba
--- /dev/null
+++ b/drivers/misc/lkdtm/powerpc.c
@@ -0,0 +1,156 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "lkdtm.h"
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+/* Gets index for new slb entry */
+static inline unsigned long get_slb_index(void)
+{
+	unsigned long index;
+
+	index = get_paca()->stab_rr;
+
+	/*
+	 * simple round-robin replacement of slb starting at SLB_NUM_BOLTED.
+	 */
+	if (index < (mmu_slb_size - 1))
+		index++;
+	else
+		index = SLB_NUM_BOLTED;
+	get_paca()->stab_rr = index;
+	return index;
+}
+
+#define slb_esid_mask(ssize)	\
+	(((ssize) == MMU_SEGSIZE_256M) ? ESID_MASK : ESID_MASK_1T)
+
+/* Form the operand for slbmte */
+static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
+					 unsigned long slot)
+{
+	return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | slot;
+}
+
+#define slb_vsid_shift(ssize)	\
+	((ssize) == MMU_SEGSIZE_256M ? SLB_VSID_SHIFT : SLB_VSID_SHIFT_1T)
+
+/* Form the operand for slbmte */
+static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
+					 unsigned long flags)
+{
+	return (get_kernel_vsid(ea, ssize) << slb_vsid_shift(ssize)) | flags |
+		((unsigned long)ssize << SLB_VSID_SSIZE_SHIFT);
+}
+
+/* Inserts new slb entry */
+static void insert_slb_entry(char *p, int ssize)
+{
+	unsigned long flags, entry;
+
+	flags = SLB_VSID_KERNEL | mmu_psize_defs[MMU_PAGE_64K].sllp;
+	preempt_disable();
+
+	entry = get_slb_index();
+	asm volatile("slbmte %0,%1" :
+			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
+			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
+			: "memory");
+
+	entry = get_slb_index();
+	asm volatile("slbmte %0,%1" :
+			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
+			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
+			: "memory");
+	preempt_enable();
+	/*
+	 * This triggers exception, If handled correctly we must recover
+	 * from this error.
+	 */
+	p[0] = '!';
+}
+
+/* Inject slb multihit on vmalloc-ed address i.e 0xD00... */
+static void inject_vmalloc_slb_multihit(void)
+{
+	char *p;
+
+	p = vmalloc(2048);
+	if (!p)
+		return;
+
+	insert_slb_entry(p, MMU_SEGSIZE_1T);
+	vfree(p);
+}
+
+/* Inject slb multihit on kmalloc-ed address i.e 0xC00... */
+static void inject_kmalloc_slb_multihit(void)
+{
+	char *p;
+
+	p = kmalloc(2048, GFP_KERNEL);
+	if (!p)
+		return;
+
+	insert_slb_entry(p, MMU_SEGSIZE_1T);
+	kfree(p);
+}
+
+/*
+ * Few initial SLB entries are bolted. Add a test to inject
+ * multihit in bolted entry 0.
+ */
+static void insert_dup_slb_entry_0(void)
+{
+	unsigned long test_address = 0xC000000000000000;
+	volatile unsigned long *test_ptr;
+	unsigned long entry, i = 0;
+	unsigned long esid, vsid;
+
+	test_ptr = (unsigned long *)test_address;
+	preempt_disable();
+
+	asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
+	asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
+	entry = get_slb_index();
+
+	/* for i !=0 we would need to mask out the old entry number */
+	asm volatile("slbmte %0,%1" :
+			: "r" (vsid),
+			  "r" (esid | entry)
+			: "memory");
+
+	asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
+	asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
+	entry = get_slb_index();
+
+	/* for i !=0 we would need to mask out the old entry number */
+	asm volatile("slbmte %0,%1" :
+			: "r" (vsid),
+			  "r" (esid | entry)
+			: "memory");
+
+	pr_info("%s accessing test address 0x%lx: 0x%lx\n",
+		__func__, test_address, *test_ptr);
+
+	preempt_enable();
+}
+
+void lkdtm_PPC_SLB_MULTIHIT(void)
+{
+	if (mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
+		pr_info("Injecting SLB multihit errors\n");
+		/*
+		 * These need not be separate tests, And they do pretty
+		 * much same thing. In any case we must recover from the
+		 * errors introduced by these functions, machine would not
+		 * survive these tests in case of failure to handle.
+		 */
+		inject_vmalloc_slb_multihit();
+		inject_kmalloc_slb_multihit();
+		insert_dup_slb_entry_0();
+		pr_info("Recovered from SLB multihit errors\n");
+	} else {
+		pr_err("XFAIL: This test is for ppc64 and with hash mode MMU only\n");
+	}
+}
diff --git a/tools/testing/selftests/lkdtm/tests.txt b/tools/testing/selftests/lkdtm/tests.txt
index 9d266e79c6a2..7eb3cf91c89e 100644
--- a/tools/testing/selftests/lkdtm/tests.txt
+++ b/tools/testing/selftests/lkdtm/tests.txt
@@ -70,3 +70,4 @@ USERCOPY_KERNEL
 USERCOPY_KERNEL_DS
 STACKLEAK_ERASING OK: the rest of the thread stack is properly erased
 CFI_FORWARD_PROTO
+PPC_SLB_MULTIHIT Recovered
-- 
2.26.2



* Re: [PATCH v4 0/2] powerpc/mce: Fix mce handler and add selftest
  2020-10-09  6:40 [PATCH v4 0/2] powerpc/mce: Fix mce handler and add selftest Ganesh Goudar
  2020-10-09  6:40 ` [PATCH v4 1/2] powerpc/mce: remove nmi_enter/exit from real mode handler Ganesh Goudar
  2020-10-09  6:40 ` [PATCH v4 2/2] lkdtm/powerpc: Add SLB multihit test Ganesh Goudar
@ 2020-10-16 11:32 ` Michael Ellerman
  2020-10-19  5:26   ` Ganesh
  2 siblings, 1 reply; 8+ messages in thread
From: Michael Ellerman @ 2020-10-16 11:32 UTC (permalink / raw)
  To: mpe, Ganesh Goudar, linuxppc-dev; +Cc: msuchanek, mahesh, npiggin, keescook

On Fri, 9 Oct 2020 12:10:03 +0530, Ganesh Goudar wrote:
> This patch series fixes MCE handling for pseries, adds an LKDTM test
> for SLB multihit recovery, and enables a selftest for the same,
> basically to test MCE handling on pseries/powernv machines running
> in hash MMU mode.
> 
> v4:
> * Use radix_enabled() to check whether we are in Hash or Radix mode.
> * Use FW_FEATURE_LPAR instead of machine_is_pseries().
> 
> [...]

Patch 1 applied to powerpc/fixes.

[1/2] powerpc/mce: Avoid nmi_enter/exit in real mode on pseries hash
      https://git.kernel.org/powerpc/c/8d0e2101274358d9b6b1f27232b40253ca48bab5

cheers


* Re: [PATCH v4 0/2] powerpc/mce: Fix mce handler and add selftest
  2020-10-16 11:32 ` [PATCH v4 0/2] powerpc/mce: Fix mce handler and add selftest Michael Ellerman
@ 2020-10-19  5:26   ` Ganesh
  0 siblings, 0 replies; 8+ messages in thread
From: Ganesh @ 2020-10-19  5:26 UTC (permalink / raw)
  To: Michael Ellerman, mpe, linuxppc-dev; +Cc: msuchanek, mahesh, npiggin, keescook

On 10/16/20 5:02 PM, Michael Ellerman wrote:

> On Fri, 9 Oct 2020 12:10:03 +0530, Ganesh Goudar wrote:
>> This patch series fixes MCE handling for pseries, adds an LKDTM test
>> for SLB multihit recovery, and enables a selftest for the same,
>> basically to test MCE handling on pseries/powernv machines running
>> in hash MMU mode.
>>
>> v4:
>> * Use radix_enabled() to check whether we are in Hash or Radix mode.
>> * Use FW_FEATURE_LPAR instead of machine_is_pseries().
>>
>> [...]
> Patch 1 applied to powerpc/fixes.
>
> [1/2] powerpc/mce: Avoid nmi_enter/exit in real mode on pseries hash
>        https://git.kernel.org/powerpc/c/8d0e2101274358d9b6b1f27232b40253ca48bab5
>
> cheers
Thank you. Any comments on patch 2?


* Re: [PATCH v4 2/2] lkdtm/powerpc: Add SLB multihit test
  2020-10-09  6:40 ` [PATCH v4 2/2] lkdtm/powerpc: Add SLB multihit test Ganesh Goudar
@ 2020-10-19 10:59   ` Michael Ellerman
  2020-10-19 13:15     ` Michal Suchánek
  0 siblings, 1 reply; 8+ messages in thread
From: Michael Ellerman @ 2020-10-19 10:59 UTC (permalink / raw)
  To: Ganesh Goudar, linuxppc-dev
  Cc: msuchanek, Ganesh Goudar, keescook, npiggin, mahesh

Hi Ganesh,

Some comments below ...

Ganesh Goudar <ganeshgr@linux.ibm.com> writes:
> To check machine check handling, add support to inject slb
> multihit errors.
>
> Cc: Kees Cook <keescook@chromium.org>
> Reviewed-by: Michal Suchánek <msuchanek@suse.de>
> Co-developed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
> Signed-off-by: Ganesh Goudar <ganeshgr@linux.ibm.com>
> ---
>  drivers/misc/lkdtm/Makefile             |   1 +
>  drivers/misc/lkdtm/core.c               |   3 +
>  drivers/misc/lkdtm/lkdtm.h              |   3 +
>  drivers/misc/lkdtm/powerpc.c            | 156 ++++++++++++++++++++++++
>  tools/testing/selftests/lkdtm/tests.txt |   1 +
>  5 files changed, 164 insertions(+)
>  create mode 100644 drivers/misc/lkdtm/powerpc.c
>
..
> diff --git a/drivers/misc/lkdtm/powerpc.c b/drivers/misc/lkdtm/powerpc.c
> new file mode 100644
> index 000000000000..f388b53dccba
> --- /dev/null
> +++ b/drivers/misc/lkdtm/powerpc.c
> @@ -0,0 +1,156 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include "lkdtm.h"
> +#include <linux/slab.h>
> +#include <linux/vmalloc.h>

Usual style is to include the linux headers first and then the local header.

> +
> +/* Gets index for new slb entry */
> +static inline unsigned long get_slb_index(void)
> +{
> +	unsigned long index;
> +
> +	index = get_paca()->stab_rr;
> +
> +	/*
> +	 * simple round-robin replacement of slb starting at SLB_NUM_BOLTED.
> +	 */
> +	if (index < (mmu_slb_size - 1))
> +		index++;
> +	else
> +		index = SLB_NUM_BOLTED;
> +	get_paca()->stab_rr = index;
> +	return index;
> +}

I'm not sure we need that really?

We can just always insert at SLB_NUM_BOLTED and SLB_NUM_BOLTED + 1.

Or we could allocate from the top down using mmu_slb_size - 1, and
mmu_slb_size - 2.
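
ie. something like this completely untested sketch, with the flags
computation left out of the picture for now:

	/* Untested: two fixed non-bolted slots, no stab_rr tracking */
	static void insert_slb_entry(char *p, int ssize, unsigned long flags)
	{
		preempt_disable();
		asm volatile("slbmte %0,%1" :
				: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
				  "r" (mk_esid_data((unsigned long)p, ssize, SLB_NUM_BOLTED))
				: "memory");
		asm volatile("slbmte %0,%1" :
				: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
				  "r" (mk_esid_data((unsigned long)p, ssize, SLB_NUM_BOLTED + 1))
				: "memory");
		preempt_enable();
	}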


> +#define slb_esid_mask(ssize)	\
> +	(((ssize) == MMU_SEGSIZE_256M) ? ESID_MASK : ESID_MASK_1T)
> +
> +/* Form the operand for slbmte */
> +static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
> +					 unsigned long slot)
> +{
> +	return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | slot;
> +}
> +
> +#define slb_vsid_shift(ssize)	\
> +	((ssize) == MMU_SEGSIZE_256M ? SLB_VSID_SHIFT : SLB_VSID_SHIFT_1T)
> +
> +/* Form the operand for slbmte */
> +static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
> +					 unsigned long flags)
> +{
> +	return (get_kernel_vsid(ea, ssize) << slb_vsid_shift(ssize)) | flags |
> +		((unsigned long)ssize << SLB_VSID_SSIZE_SHIFT);
> +}

I realise it's not much code, but I'd rather those were in a header,
rather than copied from slb.c. That way they can never skew vs the
versions in slb.c

Best place I think would be arch/powerpc/include/asm/book3s/64/mmu-hash.h


> +
> +/* Inserts new slb entry */

It inserts two.

> +static void insert_slb_entry(char *p, int ssize)
> +{
> +	unsigned long flags, entry;
> +
> +	flags = SLB_VSID_KERNEL | mmu_psize_defs[MMU_PAGE_64K].sllp;

That won't work if the kernel is built for 4K pages. Or at least it
won't work the way we want it to.

You should use mmu_linear_psize.

But for vmalloc you should use mmu_vmalloc_psize, so it will need to be
a parameter.
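
ie. pass the page size in and compute the flags from it, something
like this (untested sketch):

	static void insert_slb_entry(unsigned long p, int ssize, int psize)
	{
		unsigned long flags = SLB_VSID_KERNEL | mmu_psize_defs[psize].sllp;

		/* two slbmte inserts as above */
	}

	/* kmalloc test: linear mapping uses mmu_linear_psize */
	insert_slb_entry((unsigned long)p, MMU_SEGSIZE_1T, mmu_linear_psize);

	/* vmalloc test: vmalloc region uses mmu_vmalloc_psize */
	insert_slb_entry((unsigned long)p, MMU_SEGSIZE_1T, mmu_vmalloc_psize);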

> +	preempt_disable();
> +
> +	entry = get_slb_index();
> +	asm volatile("slbmte %0,%1" :
> +			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
> +			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
> +			: "memory");
> +
> +	entry = get_slb_index();
> +	asm volatile("slbmte %0,%1" :
> +			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
> +			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
> +			: "memory");
> +	preempt_enable();
> +	/*
> +	 * This triggers exception, If handled correctly we must recover
> +	 * from this error.
> +	 */
> +	p[0] = '!';

That doesn't belong in here, it should be done by the caller.

That would also mean p could be unsigned long in here, so you wouldn't
have to cast it four times.

> +}
> +
> +/* Inject slb multihit on vmalloc-ed address i.e 0xD00... */
> +static void inject_vmalloc_slb_multihit(void)
> +{
> +	char *p;
> +
> +	p = vmalloc(2048);

vmalloc() allocates whole pages, so it may as well be vmalloc(PAGE_SIZE).

> +	if (!p)
> +		return;

That's unlikely, but it should be an error that's propagated up to the caller.
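
Pulling those together, roughly (untested, assuming insert_slb_entry()
takes the page size as a parameter as suggested above):

	static int inject_vmalloc_slb_multihit(void)
	{
		char *p;

		p = vmalloc(PAGE_SIZE);
		if (!p)
			return -ENOMEM;

		insert_slb_entry((unsigned long)p, MMU_SEGSIZE_1T, mmu_vmalloc_psize);
		*p = '!';	/* the access that should trigger the MCE */
		vfree(p);

		return 0;
	}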

> +
> +	insert_slb_entry(p, MMU_SEGSIZE_1T);
> +	vfree(p);
> +}
> +
> +/* Inject slb multihit on kmalloc-ed address i.e 0xC00... */
> +static void inject_kmalloc_slb_multihit(void)
> +{
> +	char *p;
> +
> +	p = kmalloc(2048, GFP_KERNEL);
> +	if (!p)
> +		return;
> +
> +	insert_slb_entry(p, MMU_SEGSIZE_1T);
> +	kfree(p);
> +}
> +
> +/*
> + * Few initial SLB entries are bolted. Add a test to inject
> + * multihit in bolted entry 0.
> + */
> +static void insert_dup_slb_entry_0(void)
> +{
> +	unsigned long test_address = 0xC000000000000000;

Should use PAGE_OFFSET;

> +	volatile unsigned long *test_ptr;

Does it need to be a volatile?
The slbmte should act as a compiler barrier (it has a memory clobber)
and a CPU barrier as well?

> +	unsigned long entry, i = 0;
> +	unsigned long esid, vsid;

Please group your variables:

  unsigned long esid, vsid, entry, test_address, i;
  volatile unsigned long *test_ptr;

And then initialise them as appropriate.

> +	test_ptr = (unsigned long *)test_address;
> +	preempt_disable();
> +
> +	asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
> +	asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));

Why do we need to read them out of the SLB rather than just computing
the values?

> +	entry = get_slb_index();
> +
> +	/* for i !=0 we would need to mask out the old entry number */

Or you could just compute esid and then it wouldn't be an issue.

> +	asm volatile("slbmte %0,%1" :
> +			: "r" (vsid),
> +			  "r" (esid | entry)
> +			: "memory");

At this point we've just inserted a duplicate of entry 0. So you don't
need to insert a third entry do you?

> +	asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
> +	asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
> +	entry = get_slb_index();
> +
> +	/* for i !=0 we would need to mask out the old entry number */
> +	asm volatile("slbmte %0,%1" :
> +			: "r" (vsid),
> +			  "r" (esid | entry)
> +			: "memory");
> +
> +	pr_info("%s accessing test address 0x%lx: 0x%lx\n",
> +		__func__, test_address, *test_ptr);

This prints the first two instructions of the kernel. I happen to know
what values they should have, but most people won't understand what
they're seeing. A better test would be to read the value at the top of
the function and then load it again here and check we got the right
thing.

eg.
  unsigned long val;

  val = *test_ptr;
  ...
  if (val == *test_ptr)
    pr_info("Success ...")
  else
    pr_info("Fail ...")


> +
> +	preempt_enable();
> +}
> +
> +void lkdtm_PPC_SLB_MULTIHIT(void)
> +{
> +	if (mmu_has_feature(MMU_FTR_HPTE_TABLE)) {

That will be true even if radix is enabled. You need to use radix_enabled().

> +		pr_info("Injecting SLB multihit errors\n");
> +		/*
> +		 * These need not be separate tests, And they do pretty
> +		 * much same thing. In any case we must recover from the
> +		 * errors introduced by these functions, machine would not
> +		 * survive these tests in case of failure to handle.
> +		 */
> +		inject_vmalloc_slb_multihit();
> +		inject_kmalloc_slb_multihit();
> +		insert_dup_slb_entry_0();
> +		pr_info("Recovered from SLB multihit errors\n");
> +	} else {
> +		pr_err("XFAIL: This test is for ppc64 and with hash mode MMU only\n");

The whole file is only built if CONFIG_PPC64, so you don't need to
mention ppc64 in the message.

It should say something more like:

  XFAIL: can't test SLB multi-hit when using Radix MMU
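
ie. something like (untested):

	void lkdtm_PPC_SLB_MULTIHIT(void)
	{
		if (radix_enabled()) {
			pr_err("XFAIL: can't test SLB multi-hit when using Radix MMU\n");
			return;
		}

		pr_info("Injecting SLB multihit errors\n");
		inject_vmalloc_slb_multihit();
		inject_kmalloc_slb_multihit();
		insert_dup_slb_entry_0();
		pr_info("Recovered from SLB multihit errors\n");
	}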



cheers


* Re: [PATCH v4 2/2] lkdtm/powerpc: Add SLB multihit test
  2020-10-19 10:59   ` Michael Ellerman
@ 2020-10-19 13:15     ` Michal Suchánek
  2020-11-26 13:27       ` Ganesh
  0 siblings, 1 reply; 8+ messages in thread
From: Michal Suchánek @ 2020-10-19 13:15 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: linuxppc-dev, Ganesh Goudar, keescook, npiggin, mahesh

On Mon, Oct 19, 2020 at 09:59:57PM +1100, Michael Ellerman wrote:
> Hi Ganesh,
> 
> Some comments below ...
> 
> Ganesh Goudar <ganeshgr@linux.ibm.com> writes:
> > To check machine check handling, add support to inject slb
> > multihit errors.
> >
> > Cc: Kees Cook <keescook@chromium.org>
> > Reviewed-by: Michal Suchánek <msuchanek@suse.de>
> > Co-developed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
> > Signed-off-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
> > Signed-off-by: Ganesh Goudar <ganeshgr@linux.ibm.com>
> > ---
> >  drivers/misc/lkdtm/Makefile             |   1 +
> >  drivers/misc/lkdtm/core.c               |   3 +
> >  drivers/misc/lkdtm/lkdtm.h              |   3 +
> >  drivers/misc/lkdtm/powerpc.c            | 156 ++++++++++++++++++++++++
> >  tools/testing/selftests/lkdtm/tests.txt |   1 +
> >  5 files changed, 164 insertions(+)
> >  create mode 100644 drivers/misc/lkdtm/powerpc.c
> >
> ..
> > diff --git a/drivers/misc/lkdtm/powerpc.c b/drivers/misc/lkdtm/powerpc.c
> > new file mode 100644
> > index 000000000000..f388b53dccba
> > --- /dev/null
> > +++ b/drivers/misc/lkdtm/powerpc.c
> > @@ -0,0 +1,156 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +#include "lkdtm.h"
> > +#include <linux/slab.h>
> > +#include <linux/vmalloc.h>
> 
> Usual style is to include the linux headers first and then the local header.
> 
> > +
> > +/* Gets index for new slb entry */
> > +static inline unsigned long get_slb_index(void)
> > +{
> > +	unsigned long index;
> > +
> > +	index = get_paca()->stab_rr;
> > +
> > +	/*
> > +	 * simple round-robin replacement of slb starting at SLB_NUM_BOLTED.
> > +	 */
> > +	if (index < (mmu_slb_size - 1))
> > +		index++;
> > +	else
> > +		index = SLB_NUM_BOLTED;
> > +	get_paca()->stab_rr = index;
> > +	return index;
> > +}
> 
> I'm not sure we need that really?
> 
> We can just always insert at SLB_NUM_BOLTED and SLB_NUM_BOLTED + 1.
> 
> Or we could allocate from the top down using mmu_slb_size - 1, and
> mmu_slb_size - 2.
> 
> 
> > +#define slb_esid_mask(ssize)	\
> > +	(((ssize) == MMU_SEGSIZE_256M) ? ESID_MASK : ESID_MASK_1T)
> > +
> > +/* Form the operand for slbmte */
> > +static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
> > +					 unsigned long slot)
> > +{
> > +	return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | slot;
> > +}
> > +
> > +#define slb_vsid_shift(ssize)	\
> > +	((ssize) == MMU_SEGSIZE_256M ? SLB_VSID_SHIFT : SLB_VSID_SHIFT_1T)
> > +
> > +/* Form the operand for slbmte */
> > +static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
> > +					 unsigned long flags)
> > +{
> > +	return (get_kernel_vsid(ea, ssize) << slb_vsid_shift(ssize)) | flags |
> > +		((unsigned long)ssize << SLB_VSID_SSIZE_SHIFT);
> > +}
> 
> I realise it's not much code, but I'd rather those were in a header,
> rather than copied from slb.c. That way they can never skew vs the
> versions in slb.c
> 
> Best place I think would be arch/powerpc/include/asm/book3s/64/mmu-hash.h
> 
> 
> > +
> > +/* Inserts new slb entry */
> 
> It inserts two.
> 
> > +static void insert_slb_entry(char *p, int ssize)
> > +{
> > +	unsigned long flags, entry;
> > +
> > +	flags = SLB_VSID_KERNEL | mmu_psize_defs[MMU_PAGE_64K].sllp;
> 
> That won't work if the kernel is built for 4K pages. Or at least it
> won't work the way we want it to.
> 
> You should use mmu_linear_psize.
> 
> But for vmalloc you should use mmu_vmalloc_psize, so it will need to be
> a parameter.
> 
> > +	preempt_disable();
> > +
> > +	entry = get_slb_index();
> > +	asm volatile("slbmte %0,%1" :
> > +			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
> > +			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
> > +			: "memory");
> > +
> > +	entry = get_slb_index();
> > +	asm volatile("slbmte %0,%1" :
> > +			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
> > +			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
> > +			: "memory");
> > +	preempt_enable();
> > +	/*
> > +	 * This triggers exception, If handled correctly we must recover
> > +	 * from this error.
> > +	 */
> > +	p[0] = '!';
> 
> That doesn't belong in here, it should be done by the caller.
> 
> That would also mean p could be unsigned long in here, so you wouldn't
> have to cast it four times.
> 
> > +}
> > +
> > +/* Inject slb multihit on vmalloc-ed address i.e 0xD00... */
> > +static void inject_vmalloc_slb_multihit(void)
> > +{
> > +	char *p;
> > +
> > +	p = vmalloc(2048);
> 
> vmalloc() allocates whole pages, so it may as well be vmalloc(PAGE_SIZE).
> 
> > +	if (!p)
> > +		return;
> 
> That's unlikely, but it should be an error that's propagated up to the caller.
> 
> > +
> > +	insert_slb_entry(p, MMU_SEGSIZE_1T);
> > +	vfree(p);
> > +}
> > +
> > +/* Inject slb multihit on kmalloc-ed address i.e 0xC00... */
> > +static void inject_kmalloc_slb_multihit(void)
> > +{
> > +	char *p;
> > +
> > +	p = kmalloc(2048, GFP_KERNEL);
> > +	if (!p)
> > +		return;
> > +
> > +	insert_slb_entry(p, MMU_SEGSIZE_1T);
> > +	kfree(p);
> > +}
> > +
> > +/*
> > + * Few initial SLB entries are bolted. Add a test to inject
> > + * multihit in bolted entry 0.
> > + */
> > +static void insert_dup_slb_entry_0(void)
> > +{
> > +	unsigned long test_address = 0xC000000000000000;
> 
> Should use PAGE_OFFSET;
> 
> > +	volatile unsigned long *test_ptr;
> 
> Does it need to be a volatile?
> The slbmte should act as a compiler barrier (it has a memory clobber)
> and a CPU barrier as well?
> 
> > +	unsigned long entry, i = 0;
> > +	unsigned long esid, vsid;
> 
> Please group your variables:
> 
>   unsigned long esid, vsid, entry, test_address, i;
>   volatile unsigned long *test_ptr;
> 
> And then initialise them as appropriate.
> 
> > +	test_ptr = (unsigned long *)test_address;
> > +	preempt_disable();
> > +
> > +	asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
> > +	asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
> 
> Why do we need to read them out of the SLB rather than just computing
> the values?
It ensures that the entry is a perfect duplicate without copying even
more code from other parts of the kernel, doesn't it?

Especially when inserting only one duplicate, as suggested later, it
ensures that the test really does what it should.
> 
> > +	entry = get_slb_index();
> > +
> > +	/* for i !=0 we would need to mask out the old entry number */
> 
> Or you could just compute esid and then it wouldn't be an issue.
> 
> > +	asm volatile("slbmte %0,%1" :
> > +			: "r" (vsid),
> > +			  "r" (esid | entry)
> > +			: "memory");
> 
> At this point we've just inserted a duplicate of entry 0. So you don't
> need to insert a third entry do you?
This code was obviously adapted from the previous version, which needed
two entries in case there was none for the memory region to start with.

Adding only one duplicate should suffice, and it can easily be tested
that it still generates an MCE.

> 
> > +	asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
> > +	asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
> > +	entry = get_slb_index();
> > +
> > +	/* for i !=0 we would need to mask out the old entry number */
> > +	asm volatile("slbmte %0,%1" :
> > +			: "r" (vsid),
> > +			  "r" (esid | entry)
> > +			: "memory");
> > +
> > +	pr_info("%s accessing test address 0x%lx: 0x%lx\n",
> > +		__func__, test_address, *test_ptr);
> 
> This prints the first two instructions of the kernel. I happen to know
> what values they should have, but most people won't understand what
> they're seeing. A better test would be to read the value at the top of
> the function and then load it again here and check we got the right
> thing.
It does not really matter what we read back, so long as the compiler
does not optimize out the read. The point here is to access an address
in the range covered by SLB entry 0. The failure case is that the
system crashes and the test never finishes.
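
If there is a worry about the compiler eliding it, a READ_ONCE() would
make the intent explicit without marking the pointer volatile, e.g.:

	/* force the load; the value read does not matter */
	val = READ_ONCE(*test_ptr);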

Thanks

Michal


* Re: [PATCH v4 2/2] lkdtm/powerpc: Add SLB multihit test
  2020-10-19 13:15     ` Michal Suchánek
@ 2020-11-26 13:27       ` Ganesh
  0 siblings, 0 replies; 8+ messages in thread
From: Ganesh @ 2020-11-26 13:27 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Michal Suchánek, linuxppc-dev, keescook, npiggin, mahesh


On 10/19/20 6:45 PM, Michal Suchánek wrote:

> On Mon, Oct 19, 2020 at 09:59:57PM +1100, Michael Ellerman wrote:
>> Hi Ganesh,
>>
>> Some comments below ...
>>
>> Ganesh Goudar <ganeshgr@linux.ibm.com> writes:
>>> To check machine check handling, add support to inject slb
>>> multihit errors.
>>>
>>> Cc: Kees Cook <keescook@chromium.org>
>>> Reviewed-by: Michal Suchánek <msuchanek@suse.de>
>>> Co-developed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
>>> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
>>> Signed-off-by: Ganesh Goudar <ganeshgr@linux.ibm.com>
>>> ---
>>>   drivers/misc/lkdtm/Makefile             |   1 +
>>>   drivers/misc/lkdtm/core.c               |   3 +
>>>   drivers/misc/lkdtm/lkdtm.h              |   3 +
>>>   drivers/misc/lkdtm/powerpc.c            | 156 ++++++++++++++++++++++++
>>>   tools/testing/selftests/lkdtm/tests.txt |   1 +
>>>   5 files changed, 164 insertions(+)
>>>   create mode 100644 drivers/misc/lkdtm/powerpc.c
>>>
>> ..
>>> diff --git a/drivers/misc/lkdtm/powerpc.c b/drivers/misc/lkdtm/powerpc.c
>>> new file mode 100644
>>> index 000000000000..f388b53dccba
>>> --- /dev/null
>>> +++ b/drivers/misc/lkdtm/powerpc.c
>>> @@ -0,0 +1,156 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +
>>> +#include "lkdtm.h"
>>> +#include <linux/slab.h>
>>> +#include <linux/vmalloc.h>
>> Usual style is to include the linux headers first and then the local header.

ok

>>> +
>>> +/* Gets index for new slb entry */
>>> +static inline unsigned long get_slb_index(void)
>>> +{
>>> +	unsigned long index;
>>> +
>>> +	index = get_paca()->stab_rr;
>>> +
>>> +	/*
>>> +	 * simple round-robin replacement of slb starting at SLB_NUM_BOLTED.
>>> +	 */
>>> +	if (index < (mmu_slb_size - 1))
>>> +		index++;
>>> +	else
>>> +		index = SLB_NUM_BOLTED;
>>> +	get_paca()->stab_rr = index;
>>> +	return index;
>>> +}
>> I'm not sure we need that really?
>>
>> We can just always insert at SLB_NUM_BOLTED and SLB_NUM_BOLTED + 1.
>>
>> Or we could allocate from the top down using mmu_slb_size - 1, and
>> mmu_slb_size - 2.

Ok, we can do that.

>>> +#define slb_esid_mask(ssize)	\
>>> +	(((ssize) == MMU_SEGSIZE_256M) ? ESID_MASK : ESID_MASK_1T)
>>> +
>>> +/* Form the operand for slbmte */
>>> +static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
>>> +					 unsigned long slot)
>>> +{
>>> +	return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | slot;
>>> +}
>>> +
>>> +#define slb_vsid_shift(ssize)	\
>>> +	((ssize) == MMU_SEGSIZE_256M ? SLB_VSID_SHIFT : SLB_VSID_SHIFT_1T)
>>> +
>>> +/* Form the operand for slbmte */
>>> +static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
>>> +					 unsigned long flags)
>>> +{
>>> +	return (get_kernel_vsid(ea, ssize) << slb_vsid_shift(ssize)) | flags |
>>> +		((unsigned long)ssize << SLB_VSID_SSIZE_SHIFT);
>>> +}
>> I realise it's not much code, but I'd rather those were in a header,
>> rather than copied from slb.c. That way they can never skew vs the
>> versions in slb.c
>>
>> Best place I think would be arch/powerpc/include/asm/book3s/64/mmu-hash.h

Ok, I'll move them.

>>> +
>>> +/* Inserts new slb entry */
>> It inserts two.

Right.

>>> +static void insert_slb_entry(char *p, int ssize)
>>> +{
>>> +	unsigned long flags, entry;
>>> +
>>> +	flags = SLB_VSID_KERNEL | mmu_psize_defs[MMU_PAGE_64K].sllp;
>> That won't work if the kernel is built for 4K pages. Or at least it
>> won't work the way we want it to.
>>
>> You should use mmu_linear_psize.
>>
>> But for vmalloc you should use mmu_vmalloc_psize, so it will need to be
>> a parameter.

Sure, thanks.

>>> +	preempt_disable();
>>> +
>>> +	entry = get_slb_index();
>>> +	asm volatile("slbmte %0,%1" :
>>> +			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
>>> +			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
>>> +			: "memory");
>>> +
>>> +	entry = get_slb_index();
>>> +	asm volatile("slbmte %0,%1" :
>>> +			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
>>> +			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
>>> +			: "memory");
>>> +	preempt_enable();
>>> +	/*
>>> +	 * This triggers exception, If handled correctly we must recover
>>> +	 * from this error.
>>> +	 */
>>> +	p[0] = '!';
>> That doesn't belong in here, it should be done by the caller.
>>
>> That would also mean p could be unsigned long in here, so you wouldn't
>> have to cast it four times.

Sure, I'll change it.

>>> +}
>>> +
>>> +/* Inject slb multihit on vmalloc-ed address i.e 0xD00... */
>>> +static void inject_vmalloc_slb_multihit(void)
>>> +{
>>> +	char *p;
>>> +
>>> +	p = vmalloc(2048);
>> vmalloc() allocates whole pages, so it may as well be vmalloc(PAGE_SIZE).

ok

>>> +	if (!p)
>>> +		return;
>> That's unlikely, but it should be an error that's propagated up to the caller.

ok

>>> +
>>> +	insert_slb_entry(p, MMU_SEGSIZE_1T);
>>> +	vfree(p);
>>> +}
>>> +
>>> +/* Inject slb multihit on kmalloc-ed address i.e 0xC00... */
>>> +static void inject_kmalloc_slb_multihit(void)
>>> +{
>>> +	char *p;
>>> +
>>> +	p = kmalloc(2048, GFP_KERNEL);
>>> +	if (!p)
>>> +		return;
>>> +
>>> +	insert_slb_entry(p, MMU_SEGSIZE_1T);
>>> +	kfree(p);
>>> +}
>>> +
>>> +/*
>>> + * Few initial SLB entries are bolted. Add a test to inject
>>> + * multihit in bolted entry 0.
>>> + */
>>> +static void insert_dup_slb_entry_0(void)
>>> +{
>>> +	unsigned long test_address = 0xC000000000000000;
>> Should use PAGE_OFFSET;

ok

>>> +	volatile unsigned long *test_ptr;
>> Does it need to be a volatile?
>> The slbmte should act as a compiler barrier (it has a memory clobber)
>> and a CPU barrier as well?

Yes, volatile is not required, I'll remove it.

>>> +	unsigned long entry, i = 0;
>>> +	unsigned long esid, vsid;
>> Please group your variables:
>>
>>    unsigned long esid, vsid, entry, test_address, i;
>>    volatile unsigned long *test_ptr;
>>
>> And then initialise them as appropriate.

ok

>>> +	test_ptr = (unsigned long *)test_address;
>>> +	preempt_disable();
>>> +
>>> +	asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
>>> +	asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
>> Why do we need to read them out of the SLB rather than just computing
>> the values?
> It ensures that the entry is a perfect duplicate without copying even
> more code from other parts of the kernel, doesn't it?
>
> Especially when inserting only one duplicate, as suggested later, it
> ensures that the test really does what it should.
>>> +	entry = get_slb_index();
>>> +
>>> +	/* for i !=0 we would need to mask out the old entry number */
>> Or you could just compute esid and then it wouldn't be an issue.
>>
>>> +	asm volatile("slbmte %0,%1" :
>>> +			: "r" (vsid),
>>> +			  "r" (esid | entry)
>>> +			: "memory");
>> At this point we've just inserted a duplicate of entry 0. So you don't
>> need to insert a third entry do you?
> This code was obviously adapted from the previous version, which needed
> two entries in case there was none for the memory region to start with.
>
> Adding only one duplicate should suffice, and it can easily be tested
> that it still generates an MCE.
>
>>> +	asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
>>> +	asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
>>> +	entry = get_slb_index();
>>> +
>>> +	/* for i !=0 we would need to mask out the old entry number */
>>> +	asm volatile("slbmte %0,%1" :
>>> +			: "r" (vsid),
>>> +			  "r" (esid | entry)
>>> +			: "memory");
>>> +
>>> +	pr_info("%s accessing test address 0x%lx: 0x%lx\n",
>>> +		__func__, test_address, *test_ptr);
>> This prints the first two instructions of the kernel. I happen to know
>> what values they should have, but most people won't understand what
>> they're seeing. A better test would be to read the value at the top of
>> the function and then load it again here and check we got the right
>> thing.
> It does not really matter what we read back, so long as the compiler
> does not optimize out the read. The point here is to access an address
> in the range covered by SLB entry 0. The failure case is that the
> system crashes and the test never finishes.
>
> Thanks
>
> Michal

