* [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE
@ 2016-11-02  8:53 Ravi Bangoria
  2016-11-02  8:53 ` [PATCH 1/3] " Ravi Bangoria
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Ravi Bangoria @ 2016-11-02  8:53 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel, mpe
  Cc: benh, paulus, lsorense, oohall, naveen.n.rao, ast, chris,
	aneesh.kumar, bsingharora, anton, paul.gortmaker, bauerman, viro,
	christophe.leroy, duwe, oss, Ravi Bangoria

emulate_step() is the basic infrastructure used by a number of other
kernel facilities such as kprobes and hw-breakpoints (data breakpoints).
For kprobes, enabling emulation of load/store instructions speeds up
execution of the probed instruction. For kernel-space breakpoints, the
causative instruction is first emulated before the user-registered
handler is executed; if emulation fails, the hw-breakpoint is disabled
with an error. As emulate_step() does not support load/store
instructions on LE, the kernel-space hw-breakpoint infrastructure is
broken on LE.
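
For context, the kernel-space breakpoint path boils down to something
like the following sketch (simplified, not the literal
arch/powerpc/kernel/hw_breakpoint.c code; bp_addr and the disable step
are placeholders for the real perf_event machinery):

	/* sketch: emulate the causative instruction before the handler runs */
	int stepped = emulate_step(regs, instr);

	if (!stepped) {
		/*
		 * Could not emulate (currently: any load/store on LE),
		 * so warn and disable the breakpoint instead of calling
		 * the user-registered handler.
		 */
		WARN(1, "Unable to handle hardware breakpoint. "
			"Breakpoint at 0x%lx will be disabled.", bp_addr);
		/* disable the perf event backing this breakpoint */
	}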

emulate_step() uses a number of underlying kernel functions that were
initially not enabled for LE. This has been rectified since. So, fix
emulate_step() for LE for the corresponding instructions.

Also add a selftest which runs at boot if CONFIG_KPROBES_SANITY_TEST
and CONFIG_PPC64 are set.

Changes w.r.t. RFC:
  - Enable emulation support for all types of (Normal, Floating Point,
    Vector and Vector Scalar) load/store instructions.
  - Introduce selftest to test emulate_step for load/store instructions.

Ravi Bangoria (3):
  powerpc: Emulation support for load/store instructions on LE
  powerpc: Add encoding for couple of load/store instructions
  powerpc: emulate_step test for load/store instructions

 arch/powerpc/include/asm/ppc-opcode.h |   7 +
 arch/powerpc/include/asm/sstep.h      |   8 +
 arch/powerpc/kernel/kprobes.c         |   2 +
 arch/powerpc/lib/Makefile             |   4 +
 arch/powerpc/lib/sstep.c              |  20 --
 arch/powerpc/lib/test_emulate_step.c  | 439 ++++++++++++++++++++++++++++++++++
 6 files changed, 460 insertions(+), 20 deletions(-)
 create mode 100644 arch/powerpc/lib/test_emulate_step.c

-- 
1.8.3.1


* [PATCH 1/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-02  8:53 [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE Ravi Bangoria
@ 2016-11-02  8:53 ` Ravi Bangoria
  2016-11-02 21:04   ` Anton Blanchard
  2016-11-02  8:53 ` [PATCH 2/3] powerpc: Add encoding for couple of load/store instructions Ravi Bangoria
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 14+ messages in thread
From: Ravi Bangoria @ 2016-11-02  8:53 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel, mpe
  Cc: benh, paulus, lsorense, oohall, naveen.n.rao, ast, chris,
	aneesh.kumar, bsingharora, anton, paul.gortmaker, bauerman, viro,
	christophe.leroy, duwe, oss, Ravi Bangoria

emulate_step() uses a number of underlying kernel functions that were
initially not enabled for LE. This has been rectified since. So, fix
emulate_step() for LE for the corresponding instructions.

Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
---
 arch/powerpc/lib/sstep.c | 20 --------------------
 1 file changed, 20 deletions(-)

diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 3362299..6ca3b90 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -1807,8 +1807,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 		goto instr_done;
 
 	case LARX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if (op.ea & (size - 1))
 			break;		/* can't handle misaligned */
 		err = -EFAULT;
@@ -1832,8 +1830,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 		goto ldst_done;
 
 	case STCX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if (op.ea & (size - 1))
 			break;		/* can't handle misaligned */
 		err = -EFAULT;
@@ -1859,8 +1855,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 		goto ldst_done;
 
 	case LOAD:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = read_mem(&regs->gpr[op.reg], op.ea, size, regs);
 		if (!err) {
 			if (op.type & SIGNEXT)
@@ -1872,8 +1866,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 
 #ifdef CONFIG_PPC_FPU
 	case LOAD_FP:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if (size == 4)
 			err = do_fp_load(op.reg, do_lfs, op.ea, size, regs);
 		else
@@ -1882,15 +1874,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 #endif
 #ifdef CONFIG_ALTIVEC
 	case LOAD_VMX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = do_vec_load(op.reg, do_lvx, op.ea & ~0xfUL, regs);
 		goto ldst_done;
 #endif
 #ifdef CONFIG_VSX
 	case LOAD_VSX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = do_vsx_load(op.reg, do_lxvd2x, op.ea, regs);
 		goto ldst_done;
 #endif
@@ -1913,8 +1901,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 		goto instr_done;
 
 	case STORE:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if ((op.type & UPDATE) && size == sizeof(long) &&
 		    op.reg == 1 && op.update_reg == 1 &&
 		    !(regs->msr & MSR_PR) &&
@@ -1927,8 +1913,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 
 #ifdef CONFIG_PPC_FPU
 	case STORE_FP:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if (size == 4)
 			err = do_fp_store(op.reg, do_stfs, op.ea, size, regs);
 		else
@@ -1937,15 +1921,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 #endif
 #ifdef CONFIG_ALTIVEC
 	case STORE_VMX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = do_vec_store(op.reg, do_stvx, op.ea & ~0xfUL, regs);
 		goto ldst_done;
 #endif
 #ifdef CONFIG_VSX
 	case STORE_VSX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = do_vsx_store(op.reg, do_stxvd2x, op.ea, regs);
 		goto ldst_done;
 #endif
-- 
1.8.3.1


* [PATCH 2/3] powerpc: Add encoding for couple of load/store instructions
  2016-11-02  8:53 [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE Ravi Bangoria
  2016-11-02  8:53 ` [PATCH 1/3] " Ravi Bangoria
@ 2016-11-02  8:53 ` Ravi Bangoria
  2016-11-02  8:53 ` [PATCH 3/3] powerpc: emulate_step test for " Ravi Bangoria
  2016-11-03  7:03 ` [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE Naveen N. Rao
  3 siblings, 0 replies; 14+ messages in thread
From: Ravi Bangoria @ 2016-11-02  8:53 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel, mpe
  Cc: benh, paulus, lsorense, oohall, naveen.n.rao, ast, chris,
	aneesh.kumar, bsingharora, anton, paul.gortmaker, bauerman, viro,
	christophe.leroy, duwe, oss, Ravi Bangoria

These encodings will be used in a subsequent patch that tests
emulate_step() for load/store instructions.
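
For illustration only (not part of the patch), this is how one of these
encodings combines with the existing register-field macros, assuming the
usual field positions (RT at bit 21, RA at bit 16, RB at bit 11):

	/* lwzx r5, r3, r4 */
	unsigned int instr = PPC_INST_LWZX | ___PPC_RT(5) |
			     ___PPC_RA(3) | ___PPC_RB(4);
	/* 0x7c00002e | (5 << 21) | (3 << 16) | (4 << 11) == 0x7ca3202e */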

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/ppc-opcode.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 0132831..a17a09a 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -284,6 +284,13 @@
 #define PPC_INST_BRANCH_COND		0x40800000
 #define PPC_INST_LBZCIX			0x7c0006aa
 #define PPC_INST_STBCIX			0x7c0007aa
+#define PPC_INST_LWZX			0x7c00002e
+#define PPC_INST_LFSX			0x7c00042e
+#define PPC_INST_STFSX			0x7c00052e
+#define PPC_INST_LFDX			0x7c0004ae
+#define PPC_INST_STFDX			0x7c0005ae
+#define PPC_INST_LVX			0x7c0000ce
+#define PPC_INST_STVX			0x7c0001ce
 
 /* macros to insert fields into opcodes */
 #define ___PPC_RA(a)	(((a) & 0x1f) << 16)
-- 
1.8.3.1


* [PATCH 3/3] powerpc: emulate_step test for load/store instructions
  2016-11-02  8:53 [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE Ravi Bangoria
  2016-11-02  8:53 ` [PATCH 1/3] " Ravi Bangoria
  2016-11-02  8:53 ` [PATCH 2/3] powerpc: Add encoding for couple of load/store instructions Ravi Bangoria
@ 2016-11-02  8:53 ` Ravi Bangoria
  2016-11-02 10:30   ` Naveen N. Rao
  2016-11-03  7:03 ` [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE Naveen N. Rao
  3 siblings, 1 reply; 14+ messages in thread
From: Ravi Bangoria @ 2016-11-02  8:53 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel, mpe
  Cc: benh, paulus, lsorense, oohall, naveen.n.rao, ast, chris,
	aneesh.kumar, bsingharora, anton, paul.gortmaker, bauerman, viro,
	christophe.leroy, duwe, oss, Ravi Bangoria

Add a new selftest that tests emulate_step() for Normal, Floating Point,
Vector and Vector Scalar load/store instructions. The test runs at boot
time if CONFIG_KPROBES_SANITY_TEST and CONFIG_PPC64 are set.

Sample log:

  [    0.762063] emulate_step smoke test: start.
  [    0.762219] emulate_step smoke test: ld             : PASS
  [    0.762434] emulate_step smoke test: lwz            : PASS
  [    0.762653] emulate_step smoke test: lwzx           : PASS
  [    0.762867] emulate_step smoke test: std            : PASS
  [    0.763082] emulate_step smoke test: ldarx / stdcx. : PASS
  [    0.763302] emulate_step smoke test: lfsx           : PASS
  [    0.763514] emulate_step smoke test: stfsx          : PASS
  [    0.763727] emulate_step smoke test: lfdx           : PASS
  [    0.763942] emulate_step smoke test: stfdx          : PASS
  [    0.764134] emulate_step smoke test: lvx            : PASS
  [    0.764349] emulate_step smoke test: stvx           : PASS
  [    0.764575] emulate_step smoke test: lxvd2x         : PASS
  [    0.764788] emulate_step smoke test: stxvd2x        : PASS
  [    0.764997] emulate_step smoke test: complete.
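
(For reference, the test is built and run at boot only with a config
along these lines; CONFIG_KPROBES_SANITY_TEST itself sits behind
CONFIG_KPROBES and CONFIG_DEBUG_KERNEL:)

	CONFIG_PPC64=y
	CONFIG_DEBUG_KERNEL=y
	CONFIG_KPROBES=y
	CONFIG_KPROBES_SANITY_TEST=y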

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/sstep.h     |   8 +
 arch/powerpc/kernel/kprobes.c        |   2 +
 arch/powerpc/lib/Makefile            |   4 +
 arch/powerpc/lib/test_emulate_step.c | 439 +++++++++++++++++++++++++++++++++++
 4 files changed, 453 insertions(+)
 create mode 100644 arch/powerpc/lib/test_emulate_step.c

diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
index d3a42cc..d6d3630 100644
--- a/arch/powerpc/include/asm/sstep.h
+++ b/arch/powerpc/include/asm/sstep.h
@@ -87,3 +87,11 @@ struct instruction_op {
 
 extern int analyse_instr(struct instruction_op *op, struct pt_regs *regs,
 			 unsigned int instr);
+
+#if defined(CONFIG_KPROBES_SANITY_TEST) && defined(CONFIG_PPC64)
+void test_emulate_step(void);
+#else
+static inline void test_emulate_step(void)
+{
+}
+#endif
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index e785cc9..01d8002 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -544,6 +544,8 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 
 int __init arch_init_kprobes(void)
 {
+	test_emulate_step();
+
 	return register_kprobe(&trampoline_p);
 }
 
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 309361e8..7d046ca 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -35,3 +35,7 @@ obj-$(CONFIG_ALTIVEC)	+= xor_vmx.o
 CFLAGS_xor_vmx.o += -maltivec $(call cc-option,-mabi=altivec)
 
 obj-$(CONFIG_PPC64) += $(obj64-y)
+
+ifeq ($(CONFIG_PPC64), y)
+obj-$(CONFIG_KPROBES_SANITY_TEST) += test_emulate_step.o
+endif
diff --git a/arch/powerpc/lib/test_emulate_step.c b/arch/powerpc/lib/test_emulate_step.c
new file mode 100644
index 0000000..887d1db
--- /dev/null
+++ b/arch/powerpc/lib/test_emulate_step.c
@@ -0,0 +1,439 @@
+/*
+ * test_emulate_step.c - simple sanity test for emulate_step load/store
+ *			 instructions
+ *
+ * Copyright IBM Corp. 2016
+ *
+ * This program is free software;  you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
+ * the GNU General Public License for more details.
+ */
+
+#define pr_fmt(fmt) "emulate_step smoke test: " fmt
+
+#include <linux/ptrace.h>
+#include <asm/sstep.h>
+#include <asm/ppc-opcode.h>
+
+#define IMM_L(i)		((uintptr_t)(i) & 0xffff)
+
+/*
+ * Defined with TEST_ prefix so it does not conflict with other
+ * definitions.
+ */
+#define TEST_LD(r, base, i)	(PPC_INST_LD | ___PPC_RT(r) |		\
+					___PPC_RA(base) | IMM_L(i))
+#define TEST_LWZ(r, base, i)	(PPC_INST_LWZ | ___PPC_RT(r) |		\
+					___PPC_RA(base) | IMM_L(i))
+#define TEST_LWZX(t, a, b)	(PPC_INST_LWZX | ___PPC_RT(t) |		\
+					___PPC_RA(a) | ___PPC_RB(b))
+#define TEST_STD(r, base, i)	(PPC_INST_STD | ___PPC_RS(r) |		\
+					___PPC_RA(base) | ((i) & 0xfffc))
+#define TEST_LDARX(t, a, b, eh)	(PPC_INST_LDARX | ___PPC_RT(t) |	\
+					___PPC_RA(a) | ___PPC_RB(b) |	\
+					__PPC_EH(eh))
+#define TEST_STDCX(s, a, b)	(PPC_INST_STDCX | ___PPC_RS(s) |	\
+					___PPC_RA(a) | ___PPC_RB(b))
+#define TEST_LFSX(t, a, b)	(PPC_INST_LFSX | ___PPC_RT(t) |		\
+					___PPC_RA(a) | ___PPC_RB(b))
+#define TEST_STFSX(s, a, b)	(PPC_INST_STFSX | ___PPC_RS(s) |	\
+					___PPC_RA(a) | ___PPC_RB(b))
+#define TEST_LFDX(t, a, b)	(PPC_INST_LFDX | ___PPC_RT(t) |		\
+					___PPC_RA(a) | ___PPC_RB(b))
+#define TEST_STFDX(s, a, b)	(PPC_INST_STFDX | ___PPC_RS(s) |	\
+					___PPC_RA(a) | ___PPC_RB(b))
+#define TEST_LVX(t, a, b)	(PPC_INST_LVX | ___PPC_RT(t) |		\
+					___PPC_RA(a) | ___PPC_RB(b))
+#define TEST_STVX(s, a, b)	(PPC_INST_STVX | ___PPC_RS(s) |		\
+					___PPC_RA(a) | ___PPC_RB(b))
+#define TEST_LXVD2X(s, a, b)	(PPC_INST_LXVD2X | VSX_XX1((s), R##a, R##b))
+#define TEST_STXVD2X(s, a, b)	(PPC_INST_STXVD2X | VSX_XX1((s), R##a, R##b))
+
+
+static void init_pt_regs(struct pt_regs *regs)
+{
+	static unsigned long msr;
+	static bool msr_cached;
+
+	memset(regs, 0, sizeof(struct pt_regs));
+
+	if (likely(msr_cached)) {
+		regs->msr = msr;
+		return;
+	}
+
+	asm volatile("mfmsr %0" : "=r"(regs->msr));
+
+	regs->msr |= MSR_FP;
+	regs->msr |= MSR_VEC;
+	regs->msr |= MSR_VSX;
+
+	msr = regs->msr;
+	msr_cached = true;
+}
+
+static void show_result(char *ins, char *result)
+{
+	pr_info("%-14s : %s\n", ins, result);
+}
+
+static void test_ld(void)
+{
+	struct pt_regs regs;
+	unsigned long a = 0x23;
+	int stepped = -1;
+
+	init_pt_regs(&regs);
+	regs.gpr[3] = (unsigned long) &a;
+
+	/* ld r5, 0(r3) */
+	stepped = emulate_step(&regs, TEST_LD(5, 3, 0));
+
+	if (stepped == 1 && regs.gpr[5] == a)
+		show_result("ld", "PASS");
+	else
+		show_result("ld", "FAIL");
+}
+
+static void test_lwz(void)
+{
+	struct pt_regs regs;
+	unsigned int a = 0x4545;
+	int stepped = -1;
+
+	init_pt_regs(&regs);
+	regs.gpr[3] = (unsigned long) &a;
+
+	/* lwz r5, 0(r3) */
+	stepped = emulate_step(&regs, TEST_LWZ(5, 3, 0));
+
+	if (stepped == 1 && regs.gpr[5] == a)
+		show_result("lwz", "PASS");
+	else
+		show_result("lwz", "FAIL");
+}
+
+static void test_lwzx(void)
+{
+	struct pt_regs regs;
+	unsigned int a[3] = {0x0, 0x0, 0x1234};
+	int stepped = -1;
+
+	init_pt_regs(&regs);
+	regs.gpr[3] = (unsigned long) a;
+	regs.gpr[4] = 8;
+	regs.gpr[5] = 0x8765;
+
+	/* lwzx r5, r3, r4 */
+	stepped = emulate_step(&regs, TEST_LWZX(5, 3, 4));
+	if (stepped == 1 && regs.gpr[5] == a[2])
+		show_result("lwzx", "PASS");
+	else
+		show_result("lwzx", "FAIL");
+}
+
+static void test_std(void)
+{
+	struct pt_regs regs;
+	unsigned long a = 0x1234;
+	int stepped = -1;
+
+	init_pt_regs(&regs);
+	regs.gpr[3] = (unsigned long) &a;
+	regs.gpr[5] = 0x5678;
+
+	/* std r5, 0(r3) */
+	stepped = emulate_step(&regs, TEST_STD(5, 3, 0));
+	if (stepped == 1 && regs.gpr[5] == a)
+		show_result("std", "PASS");
+	else
+		show_result("std", "FAIL");
+}
+
+static void test_ldarx_stdcx(void)
+{
+	struct pt_regs regs;
+	unsigned long a = 0x1234;
+	int stepped = -1;
+	unsigned long cr0_eq = 0x1 << 29; /* eq bit of CR0 */
+
+	init_pt_regs(&regs);
+	asm volatile("mfcr %0" : "=r"(regs.ccr));
+
+
+	/*** ldarx ***/
+
+	regs.gpr[3] = (unsigned long) &a;
+	regs.gpr[4] = 0;
+	regs.gpr[5] = 0x5678;
+
+	/* ldarx r5, r3, r4, 0 */
+	stepped = emulate_step(&regs, TEST_LDARX(5, 3, 4, 0));
+
+	/*
+	 * Don't touch 'a' here. Touching 'a' can do Load/store
+	 * of 'a' which result in failure of subsequent stdcx.
+	 * Instead, use hardcoded value for comparison.
+	 */
+	if (stepped <= 0 || regs.gpr[5] != 0x1234) {
+		show_result("ldarx / stdcx.", "FAIL (ldarx)");
+		return;
+	}
+
+
+	/*** stdcx. ***/
+
+	regs.gpr[5] = 0x9ABC;
+
+	/* stdcx. r5, r3, r4 */
+	stepped = emulate_step(&regs, TEST_STDCX(5, 3, 4));
+
+	/*
+	 * Two possible scenarios that indicates successful emulation
+	 * of stdcx. :
+	 *  1. Reservation is active and store is performed. In this
+	 *     case cr0.eq bit will be set to 1.
+	 *  2. Reservation is not active and store is not performed.
+	 *     In this case cr0.eq bit will be set to 0.
+	 */
+	if (stepped == 1 && ((regs.gpr[5] == a && (regs.ccr & cr0_eq))
+			|| (regs.gpr[5] != a && !(regs.ccr & cr0_eq))))
+		show_result("ldarx / stdcx.", "PASS");
+	else
+		show_result("ldarx / stdcx.", "FAIL (stdcx.)");
+}
+
+#ifdef CONFIG_PPC_FPU
+static void test_lfsx_stfsx(void)
+{
+	struct pt_regs regs;
+	union {
+		float a;
+		int b;
+	} c;
+	int cached_b;
+	int stepped = -1;
+
+	init_pt_regs(&regs);
+
+
+	/*** lfsx ***/
+
+	c.a = 123.45;
+	cached_b = c.b;
+
+	regs.gpr[3] = (unsigned long) &c.a;
+	regs.gpr[4] = 0;
+
+	/* lfsx frt10, r3, r4 */
+	stepped = emulate_step(&regs, TEST_LFSX(10, 3, 4));
+
+	if (stepped == 1)
+		show_result("lfsx", "PASS");
+	else
+		show_result("lfsx", "FAIL");
+
+
+	/*** stfsx ***/
+
+	c.a = 678.91;
+
+	/* stfsx frs10, r3, r4 */
+	stepped = emulate_step(&regs, TEST_STFSX(10, 3, 4));
+
+	if (stepped == 1 && c.b == cached_b)
+		show_result("stfsx", "PASS");
+	else
+		show_result("stfsx", "FAIL");
+}
+
+static void test_lfdx_stfdx(void)
+{
+	struct pt_regs regs;
+	union {
+		double a;
+		long b;
+	} c;
+	long cached_b;
+	int stepped = -1;
+
+	init_pt_regs(&regs);
+
+
+	/*** lfdx ***/
+
+	c.a = 123456.78;
+	cached_b = c.b;
+
+	regs.gpr[3] = (unsigned long) &c.a;
+	regs.gpr[4] = 0;
+
+	/* lfdx frt10, r3, r4 */
+	stepped = emulate_step(&regs, TEST_LFDX(10, 3, 4));
+
+	if (stepped == 1)
+		show_result("lfdx", "PASS");
+	else
+		show_result("lfdx", "FAIL");
+
+
+	/*** stfdx ***/
+
+	c.a = 987654.32;
+
+	/* stfdx frs10, r3, r4 */
+	stepped = emulate_step(&regs, TEST_STFDX(10, 3, 4));
+
+	if (stepped == 1 && c.b == cached_b)
+		show_result("stfdx", "PASS");
+	else
+		show_result("stfdx", "FAIL");
+}
+#else
+static void test_lfsx_stfsx(void)
+{
+	show_result("lfsx", "SKIP (CONFIG_PPC_FPU is not set)");
+	show_result("stfsx", "SKIP (CONFIG_PPC_FPU is not set)");
+}
+
+static void test_lfdx_stfdx(void)
+{
+	show_result("lfdx", "SKIP (CONFIG_PPC_FPU is not set)");
+	show_result("stfdx", "SKIP (CONFIG_PPC_FPU is not set)");
+}
+#endif /* CONFIG_PPC_FPU */
+
+#ifdef CONFIG_ALTIVEC
+static void test_lvx_stvx(void)
+{
+	struct pt_regs regs;
+	union {
+		vector128 a;
+		u32 b[4];
+	} c;
+	u32 cached_b[4];
+	int stepped = -1;
+
+	init_pt_regs(&regs);
+
+
+	/*** lvx ***/
+
+	cached_b[0] = c.b[0] = 923745;
+	cached_b[1] = c.b[1] = 2139478;
+	cached_b[2] = c.b[2] = 9012;
+	cached_b[3] = c.b[3] = 982134;
+
+	regs.gpr[3] = (unsigned long) &c.a;
+	regs.gpr[4] = 0;
+
+	/* lvx vrt10, r3, r4 */
+	stepped = emulate_step(&regs, TEST_LVX(10, 3, 4));
+
+	if (stepped == 1)
+		show_result("lvx", "PASS");
+	else
+		show_result("lvx", "FAIL");
+
+
+	/*** stvx ***/
+
+	c.b[0] = 4987513;
+	c.b[1] = 84313948;
+	c.b[2] = 71;
+	c.b[3] = 498532;
+
+	/* stvx vrs10, r3, r4 */
+	stepped = emulate_step(&regs, TEST_STVX(10, 3, 4));
+
+	if (stepped == 1 && cached_b[0] == c.b[0] && cached_b[1] == c.b[1] &&
+	    cached_b[2] == c.b[2] && cached_b[3] == c.b[3])
+		show_result("stvx", "PASS");
+	else
+		show_result("stvx", "FAIL");
+}
+#else
+static void test_lvx_stvx(void)
+{
+	show_result("lvx", "SKIP (CONFIG_ALTIVEC is not set)");
+	show_result("stvx", "SKIP (CONFIG_ALTIVEC is not set)");
+}
+#endif /* CONFIG_ALTIVEC */
+
+#ifdef CONFIG_VSX
+static void test_lxvd2x_stxvd2x(void)
+{
+	struct pt_regs regs;
+	union {
+		vector128 a;
+		u32 b[4];
+	} c;
+	u32 cached_b[4];
+	int stepped = -1;
+
+	init_pt_regs(&regs);
+
+
+	/*** lxvd2x ***/
+
+	cached_b[0] = c.b[0] = 18233;
+	cached_b[1] = c.b[1] = 34863571;
+	cached_b[2] = c.b[2] = 834;
+	cached_b[3] = c.b[3] = 6138911;
+
+	regs.gpr[3] = (unsigned long) &c.a;
+	regs.gpr[4] = 0;
+
+	/* lxvd2x vsr39, r3, r4 */
+	stepped = emulate_step(&regs, TEST_LXVD2X(39, 3, 4));
+
+	if (stepped == 1)
+		show_result("lxvd2x", "PASS");
+	else
+		show_result("lxvd2x", "FAIL");
+
+
+	/*** stxvd2x ***/
+
+	c.b[0] = 21379463;
+	c.b[1] = 87;
+	c.b[2] = 374234;
+	c.b[3] = 4;
+
+	/* stxvd2x vsr39, r3, r4 */
+	stepped = emulate_step(&regs, TEST_STXVD2X(39, 3, 4));
+
+	if (stepped == 1 && cached_b[0] == c.b[0] && cached_b[1] == c.b[1] &&
+	    cached_b[2] == c.b[2] && cached_b[3] == c.b[3])
+		show_result("stxvd2x", "PASS");
+	else
+		show_result("stxvd2x", "FAIL");
+}
+#else
+static void test_lxvd2x_stxvd2x(void)
+{
+	show_result("lxvd2x", "SKIP (CONFIG_VSX is not set)");
+	show_result("stxvd2x", "SKIP (CONFIG_VSX is not set)");
+}
+#endif /* CONFIG_VSX */
+
+void test_emulate_step(void)
+{
+	pr_info("start.\n");
+	test_ld();
+	test_lwz();
+	test_lwzx();
+	test_std();
+	test_ldarx_stdcx();
+	test_lfsx_stfsx();
+	test_lfdx_stfdx();
+	test_lvx_stvx();
+	test_lxvd2x_stxvd2x();
+	pr_info("complete.\n");
+}
-- 
1.8.3.1


* Re: [PATCH 3/3] powerpc: emulate_step test for load/store instructions
  2016-11-02  8:53 ` [PATCH 3/3] powerpc: emulate_step test for " Ravi Bangoria
@ 2016-11-02 10:30   ` Naveen N. Rao
  0 siblings, 0 replies; 14+ messages in thread
From: Naveen N. Rao @ 2016-11-02 10:30 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: linuxppc-dev, linux-kernel, mpe, benh, paulus, lsorense, oohall,
	ast, chris, aneesh.kumar, bsingharora, anton, paul.gortmaker,
	bauerman, viro, christophe.leroy, duwe, oss

On 2016/11/02 02:23PM, Ravi Bangoria wrote:
> Add a new selftest that tests emulate_step() for Normal, Floating Point,
> Vector and Vector Scalar load/store instructions. The test runs at boot
> time if CONFIG_KPROBES_SANITY_TEST and CONFIG_PPC64 are set.
> 
> Sample log:
> 
>   [    0.762063] emulate_step smoke test: start.
>   [    0.762219] emulate_step smoke test: ld             : PASS
>   [    0.762434] emulate_step smoke test: lwz            : PASS
>   [    0.762653] emulate_step smoke test: lwzx           : PASS
>   [    0.762867] emulate_step smoke test: std            : PASS
>   [    0.763082] emulate_step smoke test: ldarx / stdcx. : PASS
>   [    0.763302] emulate_step smoke test: lfsx           : PASS
>   [    0.763514] emulate_step smoke test: stfsx          : PASS
>   [    0.763727] emulate_step smoke test: lfdx           : PASS
>   [    0.763942] emulate_step smoke test: stfdx          : PASS
>   [    0.764134] emulate_step smoke test: lvx            : PASS
>   [    0.764349] emulate_step smoke test: stvx           : PASS
>   [    0.764575] emulate_step smoke test: lxvd2x         : PASS
>   [    0.764788] emulate_step smoke test: stxvd2x        : PASS
>   [    0.764997] emulate_step smoke test: complete.
> 
> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/sstep.h     |   8 +
>  arch/powerpc/kernel/kprobes.c        |   2 +
>  arch/powerpc/lib/Makefile            |   4 +
>  arch/powerpc/lib/test_emulate_step.c | 439 +++++++++++++++++++++++++++++++++++
>  4 files changed, 453 insertions(+)
>  create mode 100644 arch/powerpc/lib/test_emulate_step.c
> 
> diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
> index d3a42cc..d6d3630 100644
> --- a/arch/powerpc/include/asm/sstep.h
> +++ b/arch/powerpc/include/asm/sstep.h
> @@ -87,3 +87,11 @@ struct instruction_op {
> 
>  extern int analyse_instr(struct instruction_op *op, struct pt_regs *regs,
>  			 unsigned int instr);
> +
> +#if defined(CONFIG_KPROBES_SANITY_TEST) && defined(CONFIG_PPC64)
> +void test_emulate_step(void);
> +#else
> +static inline void test_emulate_step(void)
> +{
> +}
> +#endif
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index e785cc9..01d8002 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -544,6 +544,8 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
> 
>  int __init arch_init_kprobes(void)
>  {
> +	test_emulate_step();
> +
>  	return register_kprobe(&trampoline_p);
>  }
> 
> diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
> index 309361e8..7d046ca 100644
> --- a/arch/powerpc/lib/Makefile
> +++ b/arch/powerpc/lib/Makefile
> @@ -35,3 +35,7 @@ obj-$(CONFIG_ALTIVEC)	+= xor_vmx.o
>  CFLAGS_xor_vmx.o += -maltivec $(call cc-option,-mabi=altivec)
> 
>  obj-$(CONFIG_PPC64) += $(obj64-y)
> +
> +ifeq ($(CONFIG_PPC64), y)
> +obj-$(CONFIG_KPROBES_SANITY_TEST) += test_emulate_step.o
> +endif
> diff --git a/arch/powerpc/lib/test_emulate_step.c b/arch/powerpc/lib/test_emulate_step.c
> new file mode 100644
> index 0000000..887d1db
> --- /dev/null
> +++ b/arch/powerpc/lib/test_emulate_step.c
> @@ -0,0 +1,439 @@
> +/*
> + * test_emulate_step.c - simple sanity test for emulate_step load/store
> + *			 instructions
> + *
> + * Copyright IBM Corp. 2016
> + *
> + * This program is free software;  you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it would be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
> + * the GNU General Public License for more details.
> + */
> +
> +#define pr_fmt(fmt) "emulate_step smoke test: " fmt
> +
> +#include <linux/ptrace.h>
> +#include <asm/sstep.h>
> +#include <asm/ppc-opcode.h>
> +
> +#define IMM_L(i)		((uintptr_t)(i) & 0xffff)
> +
> +/*
> + * Defined with TEST_ prefix so it does not conflict with other
> + * definitions.
> + */
> +#define TEST_LD(r, base, i)	(PPC_INST_LD | ___PPC_RT(r) |		\
> +					___PPC_RA(base) | IMM_L(i))
> +#define TEST_LWZ(r, base, i)	(PPC_INST_LWZ | ___PPC_RT(r) |		\
> +					___PPC_RA(base) | IMM_L(i))
> +#define TEST_LWZX(t, a, b)	(PPC_INST_LWZX | ___PPC_RT(t) |		\
> +					___PPC_RA(a) | ___PPC_RB(b))
> +#define TEST_STD(r, base, i)	(PPC_INST_STD | ___PPC_RS(r) |		\
> +					___PPC_RA(base) | ((i) & 0xfffc))
> +#define TEST_LDARX(t, a, b, eh)	(PPC_INST_LDARX | ___PPC_RT(t) |	\
> +					___PPC_RA(a) | ___PPC_RB(b) |	\
> +					__PPC_EH(eh))
> +#define TEST_STDCX(s, a, b)	(PPC_INST_STDCX | ___PPC_RS(s) |	\
> +					___PPC_RA(a) | ___PPC_RB(b))
> +#define TEST_LFSX(t, a, b)	(PPC_INST_LFSX | ___PPC_RT(t) |		\
> +					___PPC_RA(a) | ___PPC_RB(b))
> +#define TEST_STFSX(s, a, b)	(PPC_INST_STFSX | ___PPC_RS(s) |	\
> +					___PPC_RA(a) | ___PPC_RB(b))
> +#define TEST_LFDX(t, a, b)	(PPC_INST_LFDX | ___PPC_RT(t) |		\
> +					___PPC_RA(a) | ___PPC_RB(b))
> +#define TEST_STFDX(s, a, b)	(PPC_INST_STFDX | ___PPC_RS(s) |	\
> +					___PPC_RA(a) | ___PPC_RB(b))
> +#define TEST_LVX(t, a, b)	(PPC_INST_LVX | ___PPC_RT(t) |		\
> +					___PPC_RA(a) | ___PPC_RB(b))
> +#define TEST_STVX(s, a, b)	(PPC_INST_STVX | ___PPC_RS(s) |		\
> +					___PPC_RA(a) | ___PPC_RB(b))
> +#define TEST_LXVD2X(s, a, b)	(PPC_INST_LXVD2X | VSX_XX1((s), R##a, R##b))
> +#define TEST_STXVD2X(s, a, b)	(PPC_INST_STXVD2X | VSX_XX1((s), R##a, R##b))

It will be good to get some of these macros into some generic headers at 
some point...

> +
> +
> +static void init_pt_regs(struct pt_regs *regs)
> +{
> +	static unsigned long msr;
> +	static bool msr_cached;
> +
> +	memset(regs, 0, sizeof(struct pt_regs));
> +
> +	if (likely(msr_cached)) {

Why not just check 'msr' itself here?
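
(For what it's worth, a minimal sketch of that simplification, assuming
the MSR value with MSR_FP/MSR_VEC/MSR_VSX or'ed in can never be zero and
can therefore double as the "cached" flag:)

	static void init_pt_regs(struct pt_regs *regs)
	{
		static unsigned long msr;

		memset(regs, 0, sizeof(struct pt_regs));

		if (unlikely(!msr)) {
			asm volatile("mfmsr %0" : "=r"(msr));
			msr |= MSR_FP | MSR_VEC | MSR_VSX;
		}
		regs->msr = msr;
	}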

- Naveen


* Re: [PATCH 1/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-02  8:53 ` [PATCH 1/3] " Ravi Bangoria
@ 2016-11-02 21:04   ` Anton Blanchard
  2016-11-03  5:41     ` Ravi Bangoria
  0 siblings, 1 reply; 14+ messages in thread
From: Anton Blanchard @ 2016-11-02 21:04 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: linuxppc-dev, linux-kernel, mpe, duwe, ast, viro, chris, oss,
	paul.gortmaker, oohall, aneesh.kumar, lsorense, bauerman, paulus,
	naveen.n.rao

Hi Ravi,

> emulate_step() uses a number of underlying kernel functions that were
> initially not enabled for LE. This has been rectified since. So, fix
> emulate_step() for LE for the corresponding instructions.

Thanks. Should this be queued up for stable?

Anton

> Reported-by: Anton Blanchard <anton@samba.org>
> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
> ---
>  arch/powerpc/lib/sstep.c | 20 --------------------
>  1 file changed, 20 deletions(-)
> 
> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
> index 3362299..6ca3b90 100644
> --- a/arch/powerpc/lib/sstep.c
> +++ b/arch/powerpc/lib/sstep.c
> @@ -1807,8 +1807,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>  		goto instr_done;
>  
>  	case LARX:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		if (op.ea & (size - 1))
>  			break;		/* can't handle misaligned */
>  		err = -EFAULT;
> @@ -1832,8 +1830,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>  		goto ldst_done;
>  
>  	case STCX:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		if (op.ea & (size - 1))
>  			break;		/* can't handle misaligned */
>  		err = -EFAULT;
> @@ -1859,8 +1855,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>  		goto ldst_done;
>  
>  	case LOAD:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		err = read_mem(&regs->gpr[op.reg], op.ea, size, regs);
>  		if (!err) {
>  			if (op.type & SIGNEXT)
> @@ -1872,8 +1866,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>  
>  #ifdef CONFIG_PPC_FPU
>  	case LOAD_FP:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		if (size == 4)
>  			err = do_fp_load(op.reg, do_lfs, op.ea, size, regs);
>  		else
> @@ -1882,15 +1874,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>  #endif
>  #ifdef CONFIG_ALTIVEC
>  	case LOAD_VMX:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		err = do_vec_load(op.reg, do_lvx, op.ea & ~0xfUL, regs);
>  		goto ldst_done;
>  #endif
>  #ifdef CONFIG_VSX
>  	case LOAD_VSX:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		err = do_vsx_load(op.reg, do_lxvd2x, op.ea, regs);
>  		goto ldst_done;
>  #endif
> @@ -1913,8 +1901,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>  		goto instr_done;
>  
>  	case STORE:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		if ((op.type & UPDATE) && size == sizeof(long) &&
>  		    op.reg == 1 && op.update_reg == 1 &&
>  		    !(regs->msr & MSR_PR) &&
> @@ -1927,8 +1913,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>  
>  #ifdef CONFIG_PPC_FPU
>  	case STORE_FP:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		if (size == 4)
>  			err = do_fp_store(op.reg, do_stfs, op.ea, size, regs);
>  		else
> @@ -1937,15 +1921,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>  #endif
>  #ifdef CONFIG_ALTIVEC
>  	case STORE_VMX:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		err = do_vec_store(op.reg, do_stvx, op.ea & ~0xfUL, regs);
>  		goto ldst_done;
>  #endif
>  #ifdef CONFIG_VSX
>  	case STORE_VSX:
> -		if (regs->msr & MSR_LE)
> -			return 0;
>  		err = do_vsx_store(op.reg, do_stxvd2x, op.ea, regs);
>  		goto ldst_done;
>  #endif


* Re: [PATCH 1/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-02 21:04   ` Anton Blanchard
@ 2016-11-03  5:41     ` Ravi Bangoria
  2016-11-03  9:48       ` Michael Ellerman
  0 siblings, 1 reply; 14+ messages in thread
From: Ravi Bangoria @ 2016-11-03  5:41 UTC (permalink / raw)
  To: Anton Blanchard
  Cc: linuxppc-dev, linux-kernel, mpe, duwe, ast, viro, chris, oss,
	paul.gortmaker, oohall, aneesh.kumar, lsorense, bauerman, paulus,
	naveen.n.rao, Ravi Bangoria



On Thursday 03 November 2016 02:34 AM, Anton Blanchard wrote:
> Hi Ravi,
>
>> emulate_step() uses a number of underlying kernel functions that were
>> initially not enabled for LE. This has been rectified since. So, fix
>> emulate_step() for LE for the corresponding instructions.
> Thanks. Should this be queued up for stable?

Thanks Anton. Yes, this should go in stable.

-Ravi


* Re: [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-02  8:53 [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE Ravi Bangoria
                   ` (2 preceding siblings ...)
  2016-11-02  8:53 ` [PATCH 3/3] powerpc: emulate_step test for " Ravi Bangoria
@ 2016-11-03  7:03 ` Naveen N. Rao
  3 siblings, 0 replies; 14+ messages in thread
From: Naveen N. Rao @ 2016-11-03  7:03 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: linuxppc-dev, linux-kernel, mpe, benh, paulus, lsorense, oohall,
	ast, chris, aneesh.kumar, bsingharora, anton, paul.gortmaker,
	bauerman, viro, christophe.leroy, duwe, oss

On 2016/11/02 02:23PM, Ravi Bangoria wrote:
> emulate_step() is the basic infrastructure used by a number of other
> kernel facilities such as kprobes and hw-breakpoints (data breakpoints).
> For kprobes, enabling emulation of load/store instructions speeds up
> execution of the probed instruction. For kernel-space breakpoints, the
> causative instruction is first emulated before the user-registered
> handler is executed; if emulation fails, the hw-breakpoint is disabled
> with an error. As emulate_step() does not support load/store
> instructions on LE, the kernel-space hw-breakpoint infrastructure is
> broken on LE.
> 
> emulate_step() uses a number of underlying kernel functions that were
> initially not enabled for LE. This has been rectified since. So, fix
> emulate_step() for LE for the corresponding instructions.
> 
> Also add a selftest which runs at boot if CONFIG_KPROBES_SANITY_TEST
> and CONFIG_PPC64 are set.
> 
> Changes w.r.t. RFC:
>   - Enable emulation support for all types of (Normal, Floating Point,
>     Vector and Vector Scalar) load/store instructions.
>   - Introduce selftest to test emulate_step for load/store instructions.
> 
> Ravi Bangoria (3):
>   powerpc: Emulation support for load/store instructions on LE
>   powerpc: Add encoding for couple of load/store instructions
>   powerpc: emulate_step test for load/store instructions

> 
>  arch/powerpc/include/asm/ppc-opcode.h |   7 +
>  arch/powerpc/include/asm/sstep.h      |   8 +
>  arch/powerpc/kernel/kprobes.c         |   2 +
>  arch/powerpc/lib/Makefile             |   4 +
>  arch/powerpc/lib/sstep.c              |  20 --
>  arch/powerpc/lib/test_emulate_step.c  | 439 ++++++++++++++++++++++++++++++++++
>  6 files changed, 460 insertions(+), 20 deletions(-)
>  create mode 100644 arch/powerpc/lib/test_emulate_step.c

Patch 2 can be folded into the third patch. Apart from that, and the 
minor nit with patch 3, for this series:
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Thanks,
Naveen


* Re: [PATCH 1/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-03  5:41     ` Ravi Bangoria
@ 2016-11-03  9:48       ` Michael Ellerman
  2016-11-03 10:27         ` Ravi Bangoria
  0 siblings, 1 reply; 14+ messages in thread
From: Michael Ellerman @ 2016-11-03  9:48 UTC (permalink / raw)
  To: Ravi Bangoria, Anton Blanchard
  Cc: linuxppc-dev, linux-kernel, duwe, ast, viro, chris, oss,
	paul.gortmaker, oohall, aneesh.kumar, lsorense, bauerman, paulus,
	naveen.n.rao, Ravi Bangoria

Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com> writes:

> On Thursday 03 November 2016 02:34 AM, Anton Blanchard wrote:
>> Hi Ravi,
>>
>>> emulate_step() uses a number of underlying kernel functions that were
>>> initially not enabled for LE. This has been rectified since. So, fix
>>> emulate_step() for LE for the corresponding instructions.
>> Thanks. Should this be queued up for stable?
>
> Thanks Anton. Yes, this should go in stable.

It's fairly big for stable. Does it fix an actual bug? If so what, and
how bad is it, what's the user impact.

Can you also pinpoint which commit it "fixes"?

cheers


* Re: [PATCH 1/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-03  9:48       ` Michael Ellerman
@ 2016-11-03 10:27         ` Ravi Bangoria
  2016-11-04  2:07           ` Andrew Donnellan
  0 siblings, 1 reply; 14+ messages in thread
From: Ravi Bangoria @ 2016-11-03 10:27 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Anton Blanchard, linuxppc-dev, linux-kernel, duwe, ast, viro,
	chris, oss, paul.gortmaker, oohall, aneesh.kumar, lsorense,
	bauerman, paulus, naveen.n.rao, Ravi Bangoria



On Thursday 03 November 2016 03:18 PM, Michael Ellerman wrote:
> Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com> writes:
>
>> On Thursday 03 November 2016 02:34 AM, Anton Blanchard wrote:
>>> Hi Ravi,
>>>
>>>> emulate_step() uses a number of underlying kernel functions that were
>>>> initially not enabled for LE. This has been rectified since. So, fix
>>>> emulate_step() for LE for the corresponding instructions.
>>> Thanks. Should this be queued up for stable?
>> Thanks Anton. Yes, this should go in stable.
> It's fairly big for stable. Does it fix an actual bug? If so what, and
> how bad is it, what's the user impact.

Hi Michael,

Yes, kernel-space hw-breakpoint feature is broken on LE without this.

Actually, there is no specific commit that introduced this. Back
in 2010, Paul Mackerras added emulation support for load/store
instructions for BE. hw-breakpoint was also developed by K.Prasad
in the same timeframe.

Kernel-space hw-breakpoint emulates the causative instruction before
notifying the user. As emulate_step() was never enabled for LE, kernel-
space hw-breakpoint has always been broken on LE.

-Ravi

> Can you also pinpoint which commit it "fixes"?
>
> cheers
>


* Re: [PATCH 1/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-03 10:27         ` Ravi Bangoria
@ 2016-11-04  2:07           ` Andrew Donnellan
  2016-11-04  5:31             ` Ravi Bangoria
  0 siblings, 1 reply; 14+ messages in thread
From: Andrew Donnellan @ 2016-11-04  2:07 UTC (permalink / raw)
  To: Ravi Bangoria, Michael Ellerman
  Cc: chris, aneesh.kumar, ast, linux-kernel, oss, paul.gortmaker,
	duwe, Anton Blanchard, lsorense, bauerman, oohall, naveen.n.rao,
	linuxppc-dev, paulus, viro

On 03/11/16 21:27, Ravi Bangoria wrote:
> Yes, kernel-space hw-breakpoint feature is broken on LE without this.

Is there any actual user-visible feature that depends on this, or is 
this solely for debugging and development purposes?

It would of course be *nice* to have it in stable trees (particularly so 
we pick it up in distros) but I'm not convinced that enabling HW 
breakpoints on a platform where it has *never* worked qualifies as an 
"actual bug".

(BTW many thanks for fixing this - I had a shot at it late last year but 
never quite got there!)

-- 
Andrew Donnellan              OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com  IBM Australia Limited


* Re: [PATCH 1/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-04  2:07           ` Andrew Donnellan
@ 2016-11-04  5:31             ` Ravi Bangoria
  2016-11-05 19:31               ` Anton Blanchard
  0 siblings, 1 reply; 14+ messages in thread
From: Ravi Bangoria @ 2016-11-04  5:31 UTC (permalink / raw)
  To: Andrew Donnellan, Michael Ellerman
  Cc: chris, aneesh.kumar, ast, linux-kernel, oss, paul.gortmaker,
	duwe, Anton Blanchard, lsorense, bauerman, oohall, naveen.n.rao,
	linuxppc-dev, paulus, viro, Ravi Bangoria



On Friday 04 November 2016 07:37 AM, Andrew Donnellan wrote:
> On 03/11/16 21:27, Ravi Bangoria wrote:
>> Yes, kernel-space hw-breakpoint feature is broken on LE without this.
>
> Is there any actual user-visible feature that depends on this, or is this solely for debugging and development purposes?
>
> It would of course be *nice* to have it in stable trees (particularly so we pick it up in distros) but I'm not convinced that enabling HW breakpoints on a platform where it has *never* worked qualifies as an "actual bug".
>
> (BTW many thanks for fixing this - I had a shot at it late last year but never quite got there!)

Thanks Andrew,

kprobe, uprobe, hw-breakpoint and xmon are the only users of emulate_step().

Kprobes / uprobes single-step the instruction if they can't emulate it, so
there is no problem with them. As I mentioned, hw-breakpoint is broken.
However, I'm not sure about xmon; I need to check that.
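
(Roughly, the difference is in the fallback when emulate_step() bails
out; a simplified sketch, not the literal kprobes/hw_breakpoint code:)

	stepped = emulate_step(regs, instr);
	if (stepped <= 0) {
		/* kprobes/uprobes: fall back to hardware single-stepping
		 * the instruction, so nothing user-visible breaks */
		/* hw-breakpoint: no such fallback; the breakpoint gets
		 * disabled with an error, hence the breakage on LE */
	}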

So yes, there is no user-visible feature that depends on this.

-Ravi


* Re: [PATCH 1/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-04  5:31             ` Ravi Bangoria
@ 2016-11-05 19:31               ` Anton Blanchard
  2016-11-06 18:59                 ` Ravi Bangoria
  0 siblings, 1 reply; 14+ messages in thread
From: Anton Blanchard @ 2016-11-05 19:31 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: Andrew Donnellan, Michael Ellerman, chris, aneesh.kumar, ast,
	linux-kernel, oss, paul.gortmaker, duwe, lsorense, bauerman,
	oohall, naveen.n.rao, linuxppc-dev, paulus, viro

Hi,

> kprobe, uprobe, hw-breakpoint and xmon are the only users of
> emulate_step().
> 
> Kprobes / uprobes single-step the instruction if they can't emulate it,
> so there is no problem with them. As I mentioned, hw-breakpoint is
> broken. However, I'm not sure about xmon; I need to check that.

I was mostly concerned that it would impact kprobes. Sounds like we are
ok there.

> So yes, there is no user-visible feature that depends on this.

Aren't hardware breakpoints exposed via perf? I'd call perf
user-visible.

Anton


* Re: [PATCH 1/3] powerpc: Emulation support for load/store instructions on LE
  2016-11-05 19:31               ` Anton Blanchard
@ 2016-11-06 18:59                 ` Ravi Bangoria
  0 siblings, 0 replies; 14+ messages in thread
From: Ravi Bangoria @ 2016-11-06 18:59 UTC (permalink / raw)
  To: Anton Blanchard, Michael Ellerman
  Cc: Andrew Donnellan, chris, aneesh.kumar, ast, linux-kernel, oss,
	paul.gortmaker, duwe, lsorense, bauerman, oohall, naveen.n.rao,
	linuxppc-dev, paulus, viro, Ravi Bangoria



On Sunday 06 November 2016 01:01 AM, Anton Blanchard wrote:
> Hi,
>
>> kprobe, uprobe, hw-breakpoint and xmon are the only users of
>> emulate_step().
>>
>> Kprobes / uprobes single-step the instruction if they can't emulate it,
>> so there is no problem with them. As I mentioned, hw-breakpoint is
>> broken. However, I'm not sure about xmon; I need to check that.
> I was mostly concerned that it would impact kprobes. Sounds like we are
> ok there.
>
>> So yes, there is no user-visible feature that depends on this.
> Aren't hardware breakpoints exposed via perf? I'd call perf
> user-visible.


Thanks Anton, that's a good catch. I tried this on ppc64le:

  $ sudo cat /proc/kallsyms  | grep pid_max
    c00000000116998c D pid_max

  $ sudo ./perf record -a --event=mem:0xc00000000116998c sleep 10


Before patch:
  It does not record any data and throws below warning.

  $ dmesg
    [  817.895573] Unable to handle hardware breakpoint. Breakpoint at 0xc00000000116998c will be disabled.
    [  817.895581] ------------[ cut here ]------------
    [  817.895588] WARNING: CPU: 24 PID: 2032 at arch/powerpc/kernel/hw_breakpoint.c:277 hw_breakpoint_handler+0x124/0x230
    ...

After patch:
  It records data properly.

  $ sudo ./perf report --stdio
    ...
    # Samples: 36  of event 'mem:0xc00000000116998c'
    # Event count (approx.): 36
    #
    # Overhead  Command        Shared Object     Symbol      
    # ........  .............  ................  .............
    #
        63.89%  kdumpctl       [kernel.vmlinux]  [k] alloc_pid
        27.78%  opal_errd      [kernel.vmlinux]  [k] alloc_pid
         5.56%  kworker/u97:4  [kernel.vmlinux]  [k] alloc_pid
         2.78%  systemd        [kernel.vmlinux]  [k] alloc_pid


-Ravi


Thread overview: 14+ messages
2016-11-02  8:53 [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE Ravi Bangoria
2016-11-02  8:53 ` [PATCH 1/3] " Ravi Bangoria
2016-11-02 21:04   ` Anton Blanchard
2016-11-03  5:41     ` Ravi Bangoria
2016-11-03  9:48       ` Michael Ellerman
2016-11-03 10:27         ` Ravi Bangoria
2016-11-04  2:07           ` Andrew Donnellan
2016-11-04  5:31             ` Ravi Bangoria
2016-11-05 19:31               ` Anton Blanchard
2016-11-06 18:59                 ` Ravi Bangoria
2016-11-02  8:53 ` [PATCH 2/3] powerpc: Add encoding for couple of load/store instructions Ravi Bangoria
2016-11-02  8:53 ` [PATCH 3/3] powerpc: emulate_step test for " Ravi Bangoria
2016-11-02 10:30   ` Naveen N. Rao
2016-11-03  7:03 ` [PATCH 0/3] powerpc: Emulation support for load/store instructions on LE Naveen N. Rao
