* [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring
@ 2017-04-12 10:58 Naveen N. Rao
  2017-04-12 10:58 ` [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function Naveen N. Rao
                   ` (5 more replies)
  0 siblings, 6 replies; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-12 10:58 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Ingo Molnar,
	linuxppc-dev, linux-kernel

v1:
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1334843.html

For v2, this series has been re-ordered and rebased on top of
powerpc/next so as to make it easier to resolve conflicts with -tip. No
other changes.

- Naveen


Naveen N. Rao (5):
  kprobes: convert kprobe_lookup_name() to a function
  powerpc: kprobes: fix handling of function offsets on ABIv2
  powerpc: introduce a new helper to obtain function entry points
  powerpc: kprobes: factor out code to emulate instruction into a helper
  powerpc: kprobes: emulate instructions on kprobe handler re-entry

 arch/powerpc/include/asm/code-patching.h |  37 ++++++++++
 arch/powerpc/include/asm/kprobes.h       |  53 --------------
 arch/powerpc/kernel/kprobes.c            | 119 +++++++++++++++++++++++++------
 arch/powerpc/kernel/optprobes.c          |   6 +-
 include/linux/kprobes.h                  |   1 +
 kernel/kprobes.c                         |  21 +++---
 6 files changed, 147 insertions(+), 90 deletions(-)

-- 
2.12.1


* [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function
  2017-04-12 10:58 [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Naveen N. Rao
@ 2017-04-12 10:58 ` Naveen N. Rao
  2017-04-13  3:09   ` Masami Hiramatsu
  2017-04-18 12:52     ` David Laight
  2017-04-12 10:58 ` [PATCH v2 2/5] powerpc: kprobes: fix handling of function offsets on ABIv2 Naveen N. Rao
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-12 10:58 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Ingo Molnar,
	linuxppc-dev, linux-kernel

The macro is now pretty long and ugly on powerpc. In the light of
further changes needed here, convert it to a __weak variant to be
over-ridden with a nicer looking function.
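
For illustration, a minimal two-file sketch of the __weak mechanism this
relies on (a user-space analogue with made-up file and function names, not
actual kernel code): the linker prefers a strong definition over a weak
one, so an architecture can transparently override the generic lookup.

	/* generic.c: weak default, used only if nothing overrides it */
	__attribute__((weak)) const char *lookup_name(const char *name)
	{
		return "generic kallsyms-style lookup";
	}

	/* arch.c: strong definition; the linker picks this over the weak one */
	const char *lookup_name(const char *name)
	{
		return "arch-specific lookup (e.g. via function descriptors)";
	}

	/* main.c: build with "gcc generic.c arch.c main.c"; prints the arch variant */
	#include <stdio.h>
	const char *lookup_name(const char *name);
	int main(void) { printf("%s\n", lookup_name("_do_fork")); return 0; }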

Suggested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/kprobes.h | 53 ----------------------------------
 arch/powerpc/kernel/kprobes.c      | 58 ++++++++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/optprobes.c    |  4 +--
 include/linux/kprobes.h            |  1 +
 kernel/kprobes.c                   | 20 ++++++-------
 5 files changed, 69 insertions(+), 67 deletions(-)

diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
index 0503c98b2117..a843884aafaf 100644
--- a/arch/powerpc/include/asm/kprobes.h
+++ b/arch/powerpc/include/asm/kprobes.h
@@ -61,59 +61,6 @@ extern kprobe_opcode_t optprobe_template_end[];
 #define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
 #define RELATIVEJUMP_SIZE	sizeof(kprobe_opcode_t)	/* 4 bytes */
 
-#ifdef PPC64_ELF_ABI_v2
-/* PPC64 ABIv2 needs local entry point */
-#define kprobe_lookup_name(name, addr)					\
-{									\
-	addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);		\
-	if (addr)							\
-		addr = (kprobe_opcode_t *)ppc_function_entry(addr);	\
-}
-#elif defined(PPC64_ELF_ABI_v1)
-/*
- * 64bit powerpc ABIv1 uses function descriptors:
- * - Check for the dot variant of the symbol first.
- * - If that fails, try looking up the symbol provided.
- *
- * This ensures we always get to the actual symbol and not the descriptor.
- * Also handle <module:symbol> format.
- */
-#define kprobe_lookup_name(name, addr)					\
-{									\
-	char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];		\
-	const char *modsym;							\
-	bool dot_appended = false;					\
-	if ((modsym = strchr(name, ':')) != NULL) {			\
-		modsym++;						\
-		if (*modsym != '\0' && *modsym != '.') {		\
-			/* Convert to <module:.symbol> */		\
-			strncpy(dot_name, name, modsym - name);		\
-			dot_name[modsym - name] = '.';			\
-			dot_name[modsym - name + 1] = '\0';		\
-			strncat(dot_name, modsym,			\
-				sizeof(dot_name) - (modsym - name) - 2);\
-			dot_appended = true;				\
-		} else {						\
-			dot_name[0] = '\0';				\
-			strncat(dot_name, name, sizeof(dot_name) - 1);	\
-		}							\
-	} else if (name[0] != '.') {					\
-		dot_name[0] = '.';					\
-		dot_name[1] = '\0';					\
-		strncat(dot_name, name, KSYM_NAME_LEN - 2);		\
-		dot_appended = true;					\
-	} else {							\
-		dot_name[0] = '\0';					\
-		strncat(dot_name, name, KSYM_NAME_LEN - 1);		\
-	}								\
-	addr = (kprobe_opcode_t *)kallsyms_lookup_name(dot_name);	\
-	if (!addr && dot_appended) {					\
-		/* Let's try the original non-dot symbol lookup	*/	\
-		addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);	\
-	}								\
-}
-#endif
-
 #define flush_insn_slot(p)	do { } while (0)
 #define kretprobe_blacklist_size 0
 
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 331751701fed..a7aa7394954d 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -42,6 +42,64 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
 
 struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
 
+kprobe_opcode_t *kprobe_lookup_name(const char *name)
+{
+	kprobe_opcode_t *addr;
+
+#ifdef PPC64_ELF_ABI_v2
+	/* PPC64 ABIv2 needs local entry point */
+	addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);
+	if (addr)
+		addr = (kprobe_opcode_t *)ppc_function_entry(addr);
+#elif defined(PPC64_ELF_ABI_v1)
+	/*
+	 * 64bit powerpc ABIv1 uses function descriptors:
+	 * - Check for the dot variant of the symbol first.
+	 * - If that fails, try looking up the symbol provided.
+	 *
+	 * This ensures we always get to the actual symbol and not
+	 * the descriptor.
+	 *
+	 * Also handle <module:symbol> format.
+	 */
+	char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];
+	const char *modsym;
+	bool dot_appended = false;
+	if ((modsym = strchr(name, ':')) != NULL) {
+		modsym++;
+		if (*modsym != '\0' && *modsym != '.') {
+			/* Convert to <module:.symbol> */
+			strncpy(dot_name, name, modsym - name);
+			dot_name[modsym - name] = '.';
+			dot_name[modsym - name + 1] = '\0';
+			strncat(dot_name, modsym,
+				sizeof(dot_name) - (modsym - name) - 2);
+			dot_appended = true;
+		} else {
+			dot_name[0] = '\0';
+			strncat(dot_name, name, sizeof(dot_name) - 1);
+		}
+	} else if (name[0] != '.') {
+		dot_name[0] = '.';
+		dot_name[1] = '\0';
+		strncat(dot_name, name, KSYM_NAME_LEN - 2);
+		dot_appended = true;
+	} else {
+		dot_name[0] = '\0';
+		strncat(dot_name, name, KSYM_NAME_LEN - 1);
+	}
+	addr = (kprobe_opcode_t *)kallsyms_lookup_name(dot_name);
+	if (!addr && dot_appended) {
+		/* Let's try the original non-dot symbol lookup	*/
+		addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);
+	}
+#else
+	addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);
+#endif
+
+	return addr;
+}
+
 int __kprobes arch_prepare_kprobe(struct kprobe *p)
 {
 	int ret = 0;
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
index 2282bf4e63cd..aefe076d00e0 100644
--- a/arch/powerpc/kernel/optprobes.c
+++ b/arch/powerpc/kernel/optprobes.c
@@ -243,8 +243,8 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
 	/*
 	 * 2. branch to optimized_callback() and emulate_step()
 	 */
-	kprobe_lookup_name("optimized_callback", op_callback_addr);
-	kprobe_lookup_name("emulate_step", emulate_step_addr);
+	op_callback_addr = kprobe_lookup_name("optimized_callback");
+	emulate_step_addr = kprobe_lookup_name("emulate_step");
 	if (!op_callback_addr || !emulate_step_addr) {
 		WARN(1, "kprobe_lookup_name() failed\n");
 		goto error;
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index c328e4f7dcad..16f153c84646 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -379,6 +379,7 @@ static inline struct kprobe_ctlblk *get_kprobe_ctlblk(void)
 	return this_cpu_ptr(&kprobe_ctlblk);
 }
 
+kprobe_opcode_t *kprobe_lookup_name(const char *name);
 int register_kprobe(struct kprobe *p);
 void unregister_kprobe(struct kprobe *p);
 int register_kprobes(struct kprobe **kps, int num);
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 699c5bc51a92..f3421b6b47a3 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -58,15 +58,6 @@
 #define KPROBE_TABLE_SIZE (1 << KPROBE_HASH_BITS)
 
 
-/*
- * Some oddball architectures like 64bit powerpc have function descriptors
- * so this must be overridable.
- */
-#ifndef kprobe_lookup_name
-#define kprobe_lookup_name(name, addr) \
-	addr = ((kprobe_opcode_t *)(kallsyms_lookup_name(name)))
-#endif
-
 static int kprobes_initialized;
 static struct hlist_head kprobe_table[KPROBE_TABLE_SIZE];
 static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
@@ -81,6 +72,11 @@ static struct {
 	raw_spinlock_t lock ____cacheline_aligned_in_smp;
 } kretprobe_table_locks[KPROBE_TABLE_SIZE];
 
+kprobe_opcode_t * __weak kprobe_lookup_name(const char *name)
+{
+	return ((kprobe_opcode_t *)(kallsyms_lookup_name(name)));
+}
+
 static raw_spinlock_t *kretprobe_table_lock_ptr(unsigned long hash)
 {
 	return &(kretprobe_table_locks[hash].lock);
@@ -1400,7 +1396,7 @@ static kprobe_opcode_t *kprobe_addr(struct kprobe *p)
 		goto invalid;
 
 	if (p->symbol_name) {
-		kprobe_lookup_name(p->symbol_name, addr);
+		addr = kprobe_lookup_name(p->symbol_name);
 		if (!addr)
 			return ERR_PTR(-ENOENT);
 	}
@@ -2192,8 +2188,8 @@ static int __init init_kprobes(void)
 	if (kretprobe_blacklist_size) {
 		/* lookup the function address from its name */
 		for (i = 0; kretprobe_blacklist[i].name != NULL; i++) {
-			kprobe_lookup_name(kretprobe_blacklist[i].name,
-					   kretprobe_blacklist[i].addr);
+			kretprobe_blacklist[i].addr =
+				kprobe_lookup_name(kretprobe_blacklist[i].name);
 			if (!kretprobe_blacklist[i].addr)
 				printk("kretprobe: lookup failed: %s\n",
 				       kretprobe_blacklist[i].name);
-- 
2.12.1


* [PATCH v2 2/5] powerpc: kprobes: fix handling of function offsets on ABIv2
  2017-04-12 10:58 [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Naveen N. Rao
  2017-04-12 10:58 ` [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function Naveen N. Rao
@ 2017-04-12 10:58 ` Naveen N. Rao
  2017-04-13  4:28   ` Masami Hiramatsu
  2017-04-12 10:58 ` [PATCH v2 3/5] powerpc: introduce a new helper to obtain function entry points Naveen N. Rao
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-12 10:58 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Ingo Molnar,
	linuxppc-dev, linux-kernel

commit 239aeba76409 ("perf powerpc: Fix kprobe and kretprobe handling
with kallsyms on ppc64le") changed how we use the offset field in struct
kprobe on ABIv2. perf now offsets from the GEP (Global entry point) if an
offset is specified and otherwise chooses the LEP (Local entry point).

Fix the same in kernel for kprobe API users. We do this by extending
kprobe_lookup_name() to accept an additional parameter to indicate the
offset specified with the kprobe registration. If offset is 0, we return
the local function entry and return the global entry point otherwise.

With:
	# cd /sys/kernel/debug/tracing/
	# echo "p _do_fork" >> kprobe_events
	# echo "p _do_fork+0x10" >> kprobe_events

before this patch:
	# cat ../kprobes/list
	c0000000000d0748  k  _do_fork+0x8    [DISABLED]
	c0000000000d0758  k  _do_fork+0x18    [DISABLED]
	c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]

and after:
	# cat ../kprobes/list
	c0000000000d04c8  k  _do_fork+0x8    [DISABLED]
	c0000000000d04d0  k  _do_fork+0x10    [DISABLED]
	c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]
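
For context, on ELF ABIv2 a function that needs a TOC pointer starts with a
two-instruction r2 setup at its global entry point (GEP), and the local
entry point (LEP) sits 8 bytes past it; that is why the un-offset probe
above lands at +0x8. A simplified sketch of that relationship (illustrative
only; the kernel's actual helper for this is ppc_function_entry()):

	/* Return the LEP for a given GEP by checking for the TOC setup. */
	static unsigned long local_entry_point(unsigned long gep)
	{
		unsigned int *insn = (unsigned int *)gep;

		if ((insn[0] & 0xffff0000) == 0x3c4c0000 &&	/* addis r2,r12,N */
		    (insn[1] & 0xffff0000) == 0x38420000)	/* addi  r2,r2,M  */
			return gep + 8;		/* LEP follows the TOC setup */
		return gep;			/* no TOC setup: LEP == GEP   */
	}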

Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/kprobes.c   | 4 ++--
 arch/powerpc/kernel/optprobes.c | 4 ++--
 include/linux/kprobes.h         | 2 +-
 kernel/kprobes.c                | 7 ++++---
 4 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index a7aa7394954d..0732a0291ace 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -42,14 +42,14 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
 
 struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
 
-kprobe_opcode_t *kprobe_lookup_name(const char *name)
+kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
 {
 	kprobe_opcode_t *addr;
 
 #ifdef PPC64_ELF_ABI_v2
 	/* PPC64 ABIv2 needs local entry point */
 	addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);
-	if (addr)
+	if (addr && !offset)
 		addr = (kprobe_opcode_t *)ppc_function_entry(addr);
 #elif defined(PPC64_ELF_ABI_v1)
 	/*
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
index aefe076d00e0..ce81a322251c 100644
--- a/arch/powerpc/kernel/optprobes.c
+++ b/arch/powerpc/kernel/optprobes.c
@@ -243,8 +243,8 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
 	/*
 	 * 2. branch to optimized_callback() and emulate_step()
 	 */
-	op_callback_addr = kprobe_lookup_name("optimized_callback");
-	emulate_step_addr = kprobe_lookup_name("emulate_step");
+	op_callback_addr = kprobe_lookup_name("optimized_callback", 0);
+	emulate_step_addr = kprobe_lookup_name("emulate_step", 0);
 	if (!op_callback_addr || !emulate_step_addr) {
 		WARN(1, "kprobe_lookup_name() failed\n");
 		goto error;
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 16f153c84646..1f82a3db00b1 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -379,7 +379,7 @@ static inline struct kprobe_ctlblk *get_kprobe_ctlblk(void)
 	return this_cpu_ptr(&kprobe_ctlblk);
 }
 
-kprobe_opcode_t *kprobe_lookup_name(const char *name);
+kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset);
 int register_kprobe(struct kprobe *p);
 void unregister_kprobe(struct kprobe *p);
 int register_kprobes(struct kprobe **kps, int num);
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index f3421b6b47a3..6a128f3a7ed1 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -72,7 +72,8 @@ static struct {
 	raw_spinlock_t lock ____cacheline_aligned_in_smp;
 } kretprobe_table_locks[KPROBE_TABLE_SIZE];
 
-kprobe_opcode_t * __weak kprobe_lookup_name(const char *name)
+kprobe_opcode_t * __weak kprobe_lookup_name(const char *name,
+					unsigned int __unused)
 {
 	return ((kprobe_opcode_t *)(kallsyms_lookup_name(name)));
 }
@@ -1396,7 +1397,7 @@ static kprobe_opcode_t *kprobe_addr(struct kprobe *p)
 		goto invalid;
 
 	if (p->symbol_name) {
-		addr = kprobe_lookup_name(p->symbol_name);
+		addr = kprobe_lookup_name(p->symbol_name, p->offset);
 		if (!addr)
 			return ERR_PTR(-ENOENT);
 	}
@@ -2189,7 +2190,7 @@ static int __init init_kprobes(void)
 		/* lookup the function address from its name */
 		for (i = 0; kretprobe_blacklist[i].name != NULL; i++) {
 			kretprobe_blacklist[i].addr =
-				kprobe_lookup_name(kretprobe_blacklist[i].name);
+				kprobe_lookup_name(kretprobe_blacklist[i].name, 0);
 			if (!kretprobe_blacklist[i].addr)
 				printk("kretprobe: lookup failed: %s\n",
 				       kretprobe_blacklist[i].name);
-- 
2.12.1


* [PATCH v2 3/5] powerpc: introduce a new helper to obtain function entry points
  2017-04-12 10:58 [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Naveen N. Rao
  2017-04-12 10:58 ` [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function Naveen N. Rao
  2017-04-12 10:58 ` [PATCH v2 2/5] powerpc: kprobes: fix handling of function offsets on ABIv2 Naveen N. Rao
@ 2017-04-12 10:58 ` Naveen N. Rao
  2017-04-13  4:32   ` Masami Hiramatsu
  2017-04-12 10:58 ` [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper Naveen N. Rao
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-12 10:58 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Ingo Molnar,
	linuxppc-dev, linux-kernel

kprobe_lookup_name() is specific to the kprobe subsystem and may not
always return the function entry point (in a subsequent patch for
KPROBES_ON_FTRACE). For looking up function entry points, introduce a
separate helper and use it in optprobes.c.
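
For background, on ELF ABIv1 the plain symbol "foo" resolves to a function
descriptor while ".foo" points at the actual code, which is why the new
helper tries the dot-prefixed variant first there. A rough sketch of the
descriptor layout (illustrative struct, not a definition from this patch):

	/* ppc64 ELF ABIv1 function ("procedure") descriptor, roughly: */
	struct func_descr_sketch {
		unsigned long entry;	/* address of the code (".foo")      */
		unsigned long toc;	/* TOC pointer to load into r2       */
		unsigned long env;	/* environment pointer (unused by C) */
	};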

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/code-patching.h | 37 ++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/optprobes.c          |  6 +++---
 2 files changed, 40 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 8ab937771068..3e994f404434 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -12,6 +12,8 @@
 
 #include <asm/types.h>
 #include <asm/ppc-opcode.h>
+#include <linux/string.h>
+#include <linux/kallsyms.h>
 
 /* Flags for create_branch:
  * "b"   == create_branch(addr, target, 0);
@@ -99,6 +101,41 @@ static inline unsigned long ppc_global_function_entry(void *func)
 #endif
 }
 
+/*
+ * Wrapper around kallsyms_lookup() to return function entry address:
+ * - For ABIv1, we lookup the dot variant.
+ * - For ABIv2, we return the local entry point.
+ */
+static inline unsigned long ppc_kallsyms_lookup_name(const char *name)
+{
+	unsigned long addr;
+#ifdef PPC64_ELF_ABI_v1
+	/* check for dot variant */
+	char dot_name[1 + KSYM_NAME_LEN];
+	bool dot_appended = false;
+	if (name[0] != '.') {
+		dot_name[0] = '.';
+		dot_name[1] = '\0';
+		strncat(dot_name, name, KSYM_NAME_LEN - 2);
+		dot_appended = true;
+	} else {
+		dot_name[0] = '\0';
+		strncat(dot_name, name, KSYM_NAME_LEN - 1);
+	}
+	addr = kallsyms_lookup_name(dot_name);
+	if (!addr && dot_appended)
+		/* Let's try the original non-dot symbol lookup	*/
+		addr = kallsyms_lookup_name(name);
+#elif defined(PPC64_ELF_ABI_v2)
+	addr = kallsyms_lookup_name(name);
+	if (addr)
+		addr = ppc_function_entry((void *)addr);
+#else
+	addr = kallsyms_lookup_name(name);
+#endif
+	return addr;
+}
+
 #ifdef CONFIG_PPC64
 /*
  * Some instruction encodings commonly used in dynamic ftracing
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
index ce81a322251c..ec60ed0d4aad 100644
--- a/arch/powerpc/kernel/optprobes.c
+++ b/arch/powerpc/kernel/optprobes.c
@@ -243,10 +243,10 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
 	/*
 	 * 2. branch to optimized_callback() and emulate_step()
 	 */
-	op_callback_addr = kprobe_lookup_name("optimized_callback", 0);
-	emulate_step_addr = kprobe_lookup_name("emulate_step", 0);
+	op_callback_addr = (kprobe_opcode_t *)ppc_kallsyms_lookup_name("optimized_callback");
+	emulate_step_addr = (kprobe_opcode_t *)ppc_kallsyms_lookup_name("emulate_step");
 	if (!op_callback_addr || !emulate_step_addr) {
-		WARN(1, "kprobe_lookup_name() failed\n");
+		WARN(1, "Unable to lookup optimized_callback()/emulate_step()\n");
 		goto error;
 	}
 
-- 
2.12.1


* [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper
  2017-04-12 10:58 [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Naveen N. Rao
                   ` (2 preceding siblings ...)
  2017-04-12 10:58 ` [PATCH v2 3/5] powerpc: introduce a new helper to obtain function entry points Naveen N. Rao
@ 2017-04-12 10:58 ` Naveen N. Rao
  2017-04-13  4:34   ` Masami Hiramatsu
  2017-04-12 10:58 ` [PATCH v2 5/5] powerpc: kprobes: emulate instructions on kprobe handler re-entry Naveen N. Rao
  2017-04-13  3:02 ` [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Masami Hiramatsu
  5 siblings, 1 reply; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-12 10:58 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Ingo Molnar,
	linuxppc-dev, linux-kernel

This helper will be used in a subsequent patch to emulate instructions
on re-entering the kprobe handler. No functional change.

Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/kprobes.c | 52 ++++++++++++++++++++++++++-----------------
 1 file changed, 31 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 0732a0291ace..8b48f7d046bd 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -207,6 +207,35 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
 	regs->link = (unsigned long)kretprobe_trampoline;
 }
 
+int __kprobes try_to_emulate(struct kprobe *p, struct pt_regs *regs)
+{
+	int ret;
+	unsigned int insn = *p->ainsn.insn;
+
+	/* regs->nip is also adjusted if emulate_step returns 1 */
+	ret = emulate_step(regs, insn);
+	if (ret > 0) {
+		/*
+		 * Once this instruction has been boosted
+		 * successfully, set the boostable flag
+		 */
+		if (unlikely(p->ainsn.boostable == 0))
+			p->ainsn.boostable = 1;
+	} else if (ret < 0) {
+		/*
+		 * We don't allow kprobes on mtmsr(d)/rfi(d), etc.
+		 * So, we should never get here... but, its still
+		 * good to catch them, just in case...
+		 */
+		printk("Can't step on instruction %x\n", insn);
+		BUG();
+	} else if (ret == 0)
+		/* This instruction can't be boosted */
+		p->ainsn.boostable = -1;
+
+	return ret;
+}
+
 int __kprobes kprobe_handler(struct pt_regs *regs)
 {
 	struct kprobe *p;
@@ -302,18 +331,9 @@ int __kprobes kprobe_handler(struct pt_regs *regs)
 
 ss_probe:
 	if (p->ainsn.boostable >= 0) {
-		unsigned int insn = *p->ainsn.insn;
+		ret = try_to_emulate(p, regs);
 
-		/* regs->nip is also adjusted if emulate_step returns 1 */
-		ret = emulate_step(regs, insn);
 		if (ret > 0) {
-			/*
-			 * Once this instruction has been boosted
-			 * successfully, set the boostable flag
-			 */
-			if (unlikely(p->ainsn.boostable == 0))
-				p->ainsn.boostable = 1;
-
 			if (p->post_handler)
 				p->post_handler(p, regs, 0);
 
@@ -321,17 +341,7 @@ int __kprobes kprobe_handler(struct pt_regs *regs)
 			reset_current_kprobe();
 			preempt_enable_no_resched();
 			return 1;
-		} else if (ret < 0) {
-			/*
-			 * We don't allow kprobes on mtmsr(d)/rfi(d), etc.
-			 * So, we should never get here... but, its still
-			 * good to catch them, just in case...
-			 */
-			printk("Can't step on instruction %x\n", insn);
-			BUG();
-		} else if (ret == 0)
-			/* This instruction can't be boosted */
-			p->ainsn.boostable = -1;
+		}
 	}
 	prepare_singlestep(p, regs);
 	kcb->kprobe_status = KPROBE_HIT_SS;
-- 
2.12.1


* [PATCH v2 5/5] powerpc: kprobes: emulate instructions on kprobe handler re-entry
  2017-04-12 10:58 [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Naveen N. Rao
                   ` (3 preceding siblings ...)
  2017-04-12 10:58 ` [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper Naveen N. Rao
@ 2017-04-12 10:58 ` Naveen N. Rao
  2017-04-13  4:37   ` Masami Hiramatsu
  2017-04-13  3:02 ` [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Masami Hiramatsu
  5 siblings, 1 reply; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-12 10:58 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Ananth N Mavinakayanahalli, Masami Hiramatsu, Ingo Molnar,
	linuxppc-dev, linux-kernel

On kprobe handler re-entry, try to emulate the instruction rather than
always single-stepping.

As a related change, remove the duplicate saving of msr, as that is
already done in set_current_kprobe().

Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/kprobes.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 8b48f7d046bd..005bd4a75902 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -273,10 +273,17 @@ int __kprobes kprobe_handler(struct pt_regs *regs)
 			 */
 			save_previous_kprobe(kcb);
 			set_current_kprobe(p, regs, kcb);
-			kcb->kprobe_saved_msr = regs->msr;
 			kprobes_inc_nmissed_count(p);
 			prepare_singlestep(p, regs);
 			kcb->kprobe_status = KPROBE_REENTER;
+			if (p->ainsn.boostable >= 0) {
+				ret = try_to_emulate(p, regs);
+
+				if (ret > 0) {
+					restore_previous_kprobe(kcb);
+					return 1;
+				}
+			}
 			return 1;
 		} else {
 			if (*addr != BREAKPOINT_INSTRUCTION) {
-- 
2.12.1


* Re: [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring
  2017-04-12 10:58 [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Naveen N. Rao
                   ` (4 preceding siblings ...)
  2017-04-12 10:58 ` [PATCH v2 5/5] powerpc: kprobes: emulate instructions on kprobe handler re-entry Naveen N. Rao
@ 2017-04-13  3:02 ` Masami Hiramatsu
  2017-04-13  5:50   ` Naveen N. Rao
  5 siblings, 1 reply; 25+ messages in thread
From: Masami Hiramatsu @ 2017-04-13  3:02 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Ingo Molnar, linuxppc-dev, linux-kernel

Hi Naveen,

BTW, I saw you sent 3 different series. Do they conflict with each
other, or can we pick them up independently?

Thanks,

On Wed, 12 Apr 2017 16:28:23 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> v1:
> https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1334843.html
> 
> For v2, this series has been re-ordered and rebased on top of
> powerpc/next so as to make it easier to resolve conflicts with -tip. No
> other changes.
> 
> - Naveen
> 
> 
> Naveen N. Rao (5):
>   kprobes: convert kprobe_lookup_name() to a function
>   powerpc: kprobes: fix handling of function offsets on ABIv2
>   powerpc: introduce a new helper to obtain function entry points
>   powerpc: kprobes: factor out code to emulate instruction into a helper
>   powerpc: kprobes: emulate instructions on kprobe handler re-entry
> 
>  arch/powerpc/include/asm/code-patching.h |  37 ++++++++++
>  arch/powerpc/include/asm/kprobes.h       |  53 --------------
>  arch/powerpc/kernel/kprobes.c            | 119 +++++++++++++++++++++++++------
>  arch/powerpc/kernel/optprobes.c          |   6 +-
>  include/linux/kprobes.h                  |   1 +
>  kernel/kprobes.c                         |  21 +++---
>  6 files changed, 147 insertions(+), 90 deletions(-)
> 
> -- 
> 2.12.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function
  2017-04-12 10:58 ` [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function Naveen N. Rao
@ 2017-04-13  3:09   ` Masami Hiramatsu
  2017-04-18 12:52     ` David Laight
  1 sibling, 0 replies; 25+ messages in thread
From: Masami Hiramatsu @ 2017-04-13  3:09 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Ingo Molnar, linuxppc-dev, linux-kernel

On Wed, 12 Apr 2017 16:28:24 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> The macro is now pretty long and ugly on powerpc. In the light of
> further changes needed here, convert it to a __weak variant to be
> over-ridden with a nicer looking function.

Looks good to me.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

> 
> Suggested-by: Masami Hiramatsu <mhiramat@kernel.org>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/kprobes.h | 53 ----------------------------------
>  arch/powerpc/kernel/kprobes.c      | 58 ++++++++++++++++++++++++++++++++++++++
>  arch/powerpc/kernel/optprobes.c    |  4 +--
>  include/linux/kprobes.h            |  1 +
>  kernel/kprobes.c                   | 20 ++++++-------
>  5 files changed, 69 insertions(+), 67 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
> index 0503c98b2117..a843884aafaf 100644
> --- a/arch/powerpc/include/asm/kprobes.h
> +++ b/arch/powerpc/include/asm/kprobes.h
> @@ -61,59 +61,6 @@ extern kprobe_opcode_t optprobe_template_end[];
>  #define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
>  #define RELATIVEJUMP_SIZE	sizeof(kprobe_opcode_t)	/* 4 bytes */
>  
> -#ifdef PPC64_ELF_ABI_v2
> -/* PPC64 ABIv2 needs local entry point */
> -#define kprobe_lookup_name(name, addr)					\
> -{									\
> -	addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);		\
> -	if (addr)							\
> -		addr = (kprobe_opcode_t *)ppc_function_entry(addr);	\
> -}
> -#elif defined(PPC64_ELF_ABI_v1)
> -/*
> - * 64bit powerpc ABIv1 uses function descriptors:
> - * - Check for the dot variant of the symbol first.
> - * - If that fails, try looking up the symbol provided.
> - *
> - * This ensures we always get to the actual symbol and not the descriptor.
> - * Also handle <module:symbol> format.
> - */
> -#define kprobe_lookup_name(name, addr)					\
> -{									\
> -	char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];		\
> -	const char *modsym;							\
> -	bool dot_appended = false;					\
> -	if ((modsym = strchr(name, ':')) != NULL) {			\
> -		modsym++;						\
> -		if (*modsym != '\0' && *modsym != '.') {		\
> -			/* Convert to <module:.symbol> */		\
> -			strncpy(dot_name, name, modsym - name);		\
> -			dot_name[modsym - name] = '.';			\
> -			dot_name[modsym - name + 1] = '\0';		\
> -			strncat(dot_name, modsym,			\
> -				sizeof(dot_name) - (modsym - name) - 2);\
> -			dot_appended = true;				\
> -		} else {						\
> -			dot_name[0] = '\0';				\
> -			strncat(dot_name, name, sizeof(dot_name) - 1);	\
> -		}							\
> -	} else if (name[0] != '.') {					\
> -		dot_name[0] = '.';					\
> -		dot_name[1] = '\0';					\
> -		strncat(dot_name, name, KSYM_NAME_LEN - 2);		\
> -		dot_appended = true;					\
> -	} else {							\
> -		dot_name[0] = '\0';					\
> -		strncat(dot_name, name, KSYM_NAME_LEN - 1);		\
> -	}								\
> -	addr = (kprobe_opcode_t *)kallsyms_lookup_name(dot_name);	\
> -	if (!addr && dot_appended) {					\
> -		/* Let's try the original non-dot symbol lookup	*/	\
> -		addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);	\
> -	}								\
> -}
> -#endif
> -
>  #define flush_insn_slot(p)	do { } while (0)
>  #define kretprobe_blacklist_size 0
>  
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 331751701fed..a7aa7394954d 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -42,6 +42,64 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>  
>  struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
>  
> +kprobe_opcode_t *kprobe_lookup_name(const char *name)
> +{
> +	kprobe_opcode_t *addr;
> +
> +#ifdef PPC64_ELF_ABI_v2
> +	/* PPC64 ABIv2 needs local entry point */
> +	addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);
> +	if (addr)
> +		addr = (kprobe_opcode_t *)ppc_function_entry(addr);
> +#elif defined(PPC64_ELF_ABI_v1)
> +	/*
> +	 * 64bit powerpc ABIv1 uses function descriptors:
> +	 * - Check for the dot variant of the symbol first.
> +	 * - If that fails, try looking up the symbol provided.
> +	 *
> +	 * This ensures we always get to the actual symbol and not
> +	 * the descriptor.
> +	 *
> +	 * Also handle <module:symbol> format.
> +	 */
> +	char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];
> +	const char *modsym;
> +	bool dot_appended = false;
> +	if ((modsym = strchr(name, ':')) != NULL) {
> +		modsym++;
> +		if (*modsym != '\0' && *modsym != '.') {
> +			/* Convert to <module:.symbol> */
> +			strncpy(dot_name, name, modsym - name);
> +			dot_name[modsym - name] = '.';
> +			dot_name[modsym - name + 1] = '\0';
> +			strncat(dot_name, modsym,
> +				sizeof(dot_name) - (modsym - name) - 2);
> +			dot_appended = true;
> +		} else {
> +			dot_name[0] = '\0';
> +			strncat(dot_name, name, sizeof(dot_name) - 1);
> +		}
> +	} else if (name[0] != '.') {
> +		dot_name[0] = '.';
> +		dot_name[1] = '\0';
> +		strncat(dot_name, name, KSYM_NAME_LEN - 2);
> +		dot_appended = true;
> +	} else {
> +		dot_name[0] = '\0';
> +		strncat(dot_name, name, KSYM_NAME_LEN - 1);
> +	}
> +	addr = (kprobe_opcode_t *)kallsyms_lookup_name(dot_name);
> +	if (!addr && dot_appended) {
> +		/* Let's try the original non-dot symbol lookup	*/
> +		addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);
> +	}
> +#else
> +	addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);
> +#endif
> +
> +	return addr;
> +}
> +
>  int __kprobes arch_prepare_kprobe(struct kprobe *p)
>  {
>  	int ret = 0;
> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
> index 2282bf4e63cd..aefe076d00e0 100644
> --- a/arch/powerpc/kernel/optprobes.c
> +++ b/arch/powerpc/kernel/optprobes.c
> @@ -243,8 +243,8 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
>  	/*
>  	 * 2. branch to optimized_callback() and emulate_step()
>  	 */
> -	kprobe_lookup_name("optimized_callback", op_callback_addr);
> -	kprobe_lookup_name("emulate_step", emulate_step_addr);
> +	op_callback_addr = kprobe_lookup_name("optimized_callback");
> +	emulate_step_addr = kprobe_lookup_name("emulate_step");
>  	if (!op_callback_addr || !emulate_step_addr) {
>  		WARN(1, "kprobe_lookup_name() failed\n");
>  		goto error;
> diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> index c328e4f7dcad..16f153c84646 100644
> --- a/include/linux/kprobes.h
> +++ b/include/linux/kprobes.h
> @@ -379,6 +379,7 @@ static inline struct kprobe_ctlblk *get_kprobe_ctlblk(void)
>  	return this_cpu_ptr(&kprobe_ctlblk);
>  }
>  
> +kprobe_opcode_t *kprobe_lookup_name(const char *name);
>  int register_kprobe(struct kprobe *p);
>  void unregister_kprobe(struct kprobe *p);
>  int register_kprobes(struct kprobe **kps, int num);
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 699c5bc51a92..f3421b6b47a3 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -58,15 +58,6 @@
>  #define KPROBE_TABLE_SIZE (1 << KPROBE_HASH_BITS)
>  
>  
> -/*
> - * Some oddball architectures like 64bit powerpc have function descriptors
> - * so this must be overridable.
> - */
> -#ifndef kprobe_lookup_name
> -#define kprobe_lookup_name(name, addr) \
> -	addr = ((kprobe_opcode_t *)(kallsyms_lookup_name(name)))
> -#endif
> -
>  static int kprobes_initialized;
>  static struct hlist_head kprobe_table[KPROBE_TABLE_SIZE];
>  static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
> @@ -81,6 +72,11 @@ static struct {
>  	raw_spinlock_t lock ____cacheline_aligned_in_smp;
>  } kretprobe_table_locks[KPROBE_TABLE_SIZE];
>  
> +kprobe_opcode_t * __weak kprobe_lookup_name(const char *name)
> +{
> +	return ((kprobe_opcode_t *)(kallsyms_lookup_name(name)));
> +}
> +
>  static raw_spinlock_t *kretprobe_table_lock_ptr(unsigned long hash)
>  {
>  	return &(kretprobe_table_locks[hash].lock);
> @@ -1400,7 +1396,7 @@ static kprobe_opcode_t *kprobe_addr(struct kprobe *p)
>  		goto invalid;
>  
>  	if (p->symbol_name) {
> -		kprobe_lookup_name(p->symbol_name, addr);
> +		addr = kprobe_lookup_name(p->symbol_name);
>  		if (!addr)
>  			return ERR_PTR(-ENOENT);
>  	}
> @@ -2192,8 +2188,8 @@ static int __init init_kprobes(void)
>  	if (kretprobe_blacklist_size) {
>  		/* lookup the function address from its name */
>  		for (i = 0; kretprobe_blacklist[i].name != NULL; i++) {
> -			kprobe_lookup_name(kretprobe_blacklist[i].name,
> -					   kretprobe_blacklist[i].addr);
> +			kretprobe_blacklist[i].addr =
> +				kprobe_lookup_name(kretprobe_blacklist[i].name);
>  			if (!kretprobe_blacklist[i].addr)
>  				printk("kretprobe: lookup failed: %s\n",
>  				       kretprobe_blacklist[i].name);
> -- 
> 2.12.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v2 2/5] powerpc: kprobes: fix handling of function offsets on ABIv2
  2017-04-12 10:58 ` [PATCH v2 2/5] powerpc: kprobes: fix handling of function offsets on ABIv2 Naveen N. Rao
@ 2017-04-13  4:28   ` Masami Hiramatsu
  0 siblings, 0 replies; 25+ messages in thread
From: Masami Hiramatsu @ 2017-04-13  4:28 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Ingo Molnar, linuxppc-dev, linux-kernel

On Wed, 12 Apr 2017 16:28:25 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> commit 239aeba76409 ("perf powerpc: Fix kprobe and kretprobe handling
> with kallsyms on ppc64le") changed how we use the offset field in struct
> kprobe on ABIv2. perf now offsets from the GEP (Global entry point) if an
> offset is specified and otherwise chooses the LEP (Local entry point).
> 
> Fix the same in kernel for kprobe API users. We do this by extending
> kprobe_lookup_name() to accept an additional parameter to indicate the
> offset specified with the kprobe registration. If offset is 0, we return
> the local function entry and return the global entry point otherwise.
> 
> With:
> 	# cd /sys/kernel/debug/tracing/
> 	# echo "p _do_fork" >> kprobe_events
> 	# echo "p _do_fork+0x10" >> kprobe_events
> 
> before this patch:
> 	# cat ../kprobes/list
> 	c0000000000d0748  k  _do_fork+0x8    [DISABLED]
> 	c0000000000d0758  k  _do_fork+0x18    [DISABLED]
> 	c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]
> 
> and after:
> 	# cat ../kprobes/list
> 	c0000000000d04c8  k  _do_fork+0x8    [DISABLED]
> 	c0000000000d04d0  k  _do_fork+0x10    [DISABLED]
> 	c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]
> 
> Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  arch/powerpc/kernel/kprobes.c   | 4 ++--
>  arch/powerpc/kernel/optprobes.c | 4 ++--
>  include/linux/kprobes.h         | 2 +-
>  kernel/kprobes.c                | 7 ++++---
>  4 files changed, 9 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index a7aa7394954d..0732a0291ace 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -42,14 +42,14 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>  
>  struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
>  
> -kprobe_opcode_t *kprobe_lookup_name(const char *name)
> +kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)

Hmm, if we make this change, it would be natural for kprobe_lookup_name()
to return the address + offset.

Thank you,



-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v2 3/5] powerpc: introduce a new helper to obtain function entry points
  2017-04-12 10:58 ` [PATCH v2 3/5] powerpc: introduce a new helper to obtain function entry points Naveen N. Rao
@ 2017-04-13  4:32   ` Masami Hiramatsu
  2017-04-13  5:52     ` Naveen N. Rao
  0 siblings, 1 reply; 25+ messages in thread
From: Masami Hiramatsu @ 2017-04-13  4:32 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Ingo Molnar, linuxppc-dev, linux-kernel

On Wed, 12 Apr 2017 16:28:26 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> kprobe_lookup_name() is specific to the kprobe subsystem and may not
> always return the function entry point (in a subsequent patch for
> KPROBES_ON_FTRACE).

If so, please move this patch into that series. It is hard to review
patches that depend on another series.

Thank you,

> For looking up function entry points, introduce a
> separate helper and use the same in optprobes.c
> 
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/code-patching.h | 37 ++++++++++++++++++++++++++++++++
>  arch/powerpc/kernel/optprobes.c          |  6 +++---
>  2 files changed, 40 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
> index 8ab937771068..3e994f404434 100644
> --- a/arch/powerpc/include/asm/code-patching.h
> +++ b/arch/powerpc/include/asm/code-patching.h
> @@ -12,6 +12,8 @@
>  
>  #include <asm/types.h>
>  #include <asm/ppc-opcode.h>
> +#include <linux/string.h>
> +#include <linux/kallsyms.h>
>  
>  /* Flags for create_branch:
>   * "b"   == create_branch(addr, target, 0);
> @@ -99,6 +101,41 @@ static inline unsigned long ppc_global_function_entry(void *func)
>  #endif
>  }
>  
> +/*
> + * Wrapper around kallsyms_lookup() to return function entry address:
> + * - For ABIv1, we lookup the dot variant.
> + * - For ABIv2, we return the local entry point.
> + */
> +static inline unsigned long ppc_kallsyms_lookup_name(const char *name)
> +{
> +	unsigned long addr;
> +#ifdef PPC64_ELF_ABI_v1
> +	/* check for dot variant */
> +	char dot_name[1 + KSYM_NAME_LEN];
> +	bool dot_appended = false;
> +	if (name[0] != '.') {
> +		dot_name[0] = '.';
> +		dot_name[1] = '\0';
> +		strncat(dot_name, name, KSYM_NAME_LEN - 2);
> +		dot_appended = true;
> +	} else {
> +		dot_name[0] = '\0';
> +		strncat(dot_name, name, KSYM_NAME_LEN - 1);
> +	}
> +	addr = kallsyms_lookup_name(dot_name);
> +	if (!addr && dot_appended)
> +		/* Let's try the original non-dot symbol lookup	*/
> +		addr = kallsyms_lookup_name(name);
> +#elif defined(PPC64_ELF_ABI_v2)
> +	addr = kallsyms_lookup_name(name);
> +	if (addr)
> +		addr = ppc_function_entry((void *)addr);
> +#else
> +	addr = kallsyms_lookup_name(name);
> +#endif
> +	return addr;
> +}
> +
>  #ifdef CONFIG_PPC64
>  /*
>   * Some instruction encodings commonly used in dynamic ftracing
> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
> index ce81a322251c..ec60ed0d4aad 100644
> --- a/arch/powerpc/kernel/optprobes.c
> +++ b/arch/powerpc/kernel/optprobes.c
> @@ -243,10 +243,10 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
>  	/*
>  	 * 2. branch to optimized_callback() and emulate_step()
>  	 */
> -	op_callback_addr = kprobe_lookup_name("optimized_callback", 0);
> -	emulate_step_addr = kprobe_lookup_name("emulate_step", 0);
> +	op_callback_addr = (kprobe_opcode_t *)ppc_kallsyms_lookup_name("optimized_callback");
> +	emulate_step_addr = (kprobe_opcode_t *)ppc_kallsyms_lookup_name("emulate_step");
>  	if (!op_callback_addr || !emulate_step_addr) {
> -		WARN(1, "kprobe_lookup_name() failed\n");
> +		WARN(1, "Unable to lookup optimized_callback()/emulate_step()\n");
>  		goto error;
>  	}
>  
> -- 
> 2.12.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper
  2017-04-12 10:58 ` [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper Naveen N. Rao
@ 2017-04-13  4:34   ` Masami Hiramatsu
  2017-04-13  5:53     ` Naveen N. Rao
  2017-04-13  8:50       ` Naveen N. Rao
  0 siblings, 2 replies; 25+ messages in thread
From: Masami Hiramatsu @ 2017-04-13  4:34 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Ingo Molnar, linuxppc-dev, linux-kernel

On Wed, 12 Apr 2017 16:28:27 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> This helper will be used in a subsequent patch to emulate instructions
> on re-entering the kprobe handler. No functional change.

In this case, please merge this patch into the next patch which
actually uses the factored out function unless that changes
too much.

Thank you,

> 
> Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  arch/powerpc/kernel/kprobes.c | 52 ++++++++++++++++++++++++++-----------------
>  1 file changed, 31 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 0732a0291ace..8b48f7d046bd 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -207,6 +207,35 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
>  	regs->link = (unsigned long)kretprobe_trampoline;
>  }
>  
> +int __kprobes try_to_emulate(struct kprobe *p, struct pt_regs *regs)
> +{
> +	int ret;
> +	unsigned int insn = *p->ainsn.insn;
> +
> +	/* regs->nip is also adjusted if emulate_step returns 1 */
> +	ret = emulate_step(regs, insn);
> +	if (ret > 0) {
> +		/*
> +		 * Once this instruction has been boosted
> +		 * successfully, set the boostable flag
> +		 */
> +		if (unlikely(p->ainsn.boostable == 0))
> +			p->ainsn.boostable = 1;
> +	} else if (ret < 0) {
> +		/*
> +		 * We don't allow kprobes on mtmsr(d)/rfi(d), etc.
> +		 * So, we should never get here... but, its still
> +		 * good to catch them, just in case...
> +		 */
> +		printk("Can't step on instruction %x\n", insn);
> +		BUG();
> +	} else if (ret == 0)
> +		/* This instruction can't be boosted */
> +		p->ainsn.boostable = -1;
> +
> +	return ret;
> +}
> +
>  int __kprobes kprobe_handler(struct pt_regs *regs)
>  {
>  	struct kprobe *p;
> @@ -302,18 +331,9 @@ int __kprobes kprobe_handler(struct pt_regs *regs)
>  
>  ss_probe:
>  	if (p->ainsn.boostable >= 0) {
> -		unsigned int insn = *p->ainsn.insn;
> +		ret = try_to_emulate(p, regs);
>  
> -		/* regs->nip is also adjusted if emulate_step returns 1 */
> -		ret = emulate_step(regs, insn);
>  		if (ret > 0) {
> -			/*
> -			 * Once this instruction has been boosted
> -			 * successfully, set the boostable flag
> -			 */
> -			if (unlikely(p->ainsn.boostable == 0))
> -				p->ainsn.boostable = 1;
> -
>  			if (p->post_handler)
>  				p->post_handler(p, regs, 0);
>  
> @@ -321,17 +341,7 @@ int __kprobes kprobe_handler(struct pt_regs *regs)
>  			reset_current_kprobe();
>  			preempt_enable_no_resched();
>  			return 1;
> -		} else if (ret < 0) {
> -			/*
> -			 * We don't allow kprobes on mtmsr(d)/rfi(d), etc.
> -			 * So, we should never get here... but, its still
> -			 * good to catch them, just in case...
> -			 */
> -			printk("Can't step on instruction %x\n", insn);
> -			BUG();
> -		} else if (ret == 0)
> -			/* This instruction can't be boosted */
> -			p->ainsn.boostable = -1;
> +		}
>  	}
>  	prepare_singlestep(p, regs);
>  	kcb->kprobe_status = KPROBE_HIT_SS;
> -- 
> 2.12.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v2 5/5] powerpc: kprobes: emulate instructions on kprobe handler re-entry
  2017-04-12 10:58 ` [PATCH v2 5/5] powerpc: kprobes: emulate instructions on kprobe handler re-entry Naveen N. Rao
@ 2017-04-13  4:37   ` Masami Hiramatsu
  2017-04-13  5:53     ` Naveen N. Rao
  0 siblings, 1 reply; 25+ messages in thread
From: Masami Hiramatsu @ 2017-04-13  4:37 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Ingo Molnar, linuxppc-dev, linux-kernel

On Wed, 12 Apr 2017 16:28:28 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> On kprobe handler re-entry, try to emulate the instruction rather than
> always single-stepping.
> 

> As a related change, remove the duplicate saving of msr, as that is
> already done in set_current_kprobe().

If so, this part might be separated as a cleanup patch...

Thanks,

> 
> Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  arch/powerpc/kernel/kprobes.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 8b48f7d046bd..005bd4a75902 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -273,10 +273,17 @@ int __kprobes kprobe_handler(struct pt_regs *regs)
>  			 */
>  			save_previous_kprobe(kcb);
>  			set_current_kprobe(p, regs, kcb);
> -			kcb->kprobe_saved_msr = regs->msr;
>  			kprobes_inc_nmissed_count(p);
>  			prepare_singlestep(p, regs);
>  			kcb->kprobe_status = KPROBE_REENTER;
> +			if (p->ainsn.boostable >= 0) {
> +				ret = try_to_emulate(p, regs);
> +
> +				if (ret > 0) {
> +					restore_previous_kprobe(kcb);
> +					return 1;
> +				}
> +			}
>  			return 1;
>  		} else {
>  			if (*addr != BREAKPOINT_INSTRUCTION) {
> -- 
> 2.12.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring
  2017-04-13  3:02 ` [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Masami Hiramatsu
@ 2017-04-13  5:50   ` Naveen N. Rao
  0 siblings, 0 replies; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-13  5:50 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Ingo Molnar,
	linuxppc-dev, linux-kernel

On 2017/04/13 12:02PM, Masami Hiramatsu wrote:
> Hi Naveen,

Hi Masami,

> 
> BTW, I saw you sent 3 different series. Do they conflict with each
> other, or can we pick them up independently?

Yes, all these three patch series are based off powerpc/next and they do 
depend on each other, as they are all about powerpc kprobes.

Patches 1 and 2 in this series touch generic kprobes bits and Michael 
was planning on putting those in a topic branch so that -tip can pull 
them too.

Apart from those two, your optprobes patch 3/5 
(https://patchwork.ozlabs.org/patch/749934/) also touches generic code, 
but it is needed for KPROBES_ON_FTRACE on powerpc. So, I've posted that 
as part of my series. We could probably also put that in the topic 
branch.


Thanks,
Naveen


* Re: [PATCH v2 3/5] powerpc: introduce a new helper to obtain function entry points
  2017-04-13  4:32   ` Masami Hiramatsu
@ 2017-04-13  5:52     ` Naveen N. Rao
  0 siblings, 0 replies; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-13  5:52 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Ingo Molnar,
	linuxppc-dev, linux-kernel

On 2017/04/13 01:32PM, Masami Hiramatsu wrote:
> On Wed, 12 Apr 2017 16:28:26 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > kprobe_lookup_name() is specific to the kprobe subsystem and may not
> > always return the function entry point (in a subsequent patch for
> > KPROBES_ON_FTRACE).
> 
> If so, please move this patch into that series. It is hard to review
> patches that depend on another series.

:-)
This patch was originally the first in this series, so as to avoid having
to convert the kprobe_lookup_name() call in optprobes.c. But with the
re-shuffle, it is more suitable in the other series. I will move it.

Thanks,
Naveen


* Re: [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper
  2017-04-13  4:34   ` Masami Hiramatsu
@ 2017-04-13  5:53     ` Naveen N. Rao
  2017-04-13  8:50       ` Naveen N. Rao
  1 sibling, 0 replies; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-13  5:53 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Ingo Molnar,
	linuxppc-dev, linux-kernel

On 2017/04/13 01:34PM, Masami Hiramatsu wrote:
> On Wed, 12 Apr 2017 16:28:27 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > This helper will be used in a subsequent patch to emulate instructions
> > on re-entering the kprobe handler. No functional change.
> 
> In this case, please merge this patch into the next patch which
> actually uses the factored out function unless that changes
> too much.

Ok, will do.

Thanks,
Naveen


* Re: [PATCH v2 5/5] powerpc: kprobes: emulate instructions on kprobe handler re-entry
  2017-04-13  4:37   ` Masami Hiramatsu
@ 2017-04-13  5:53     ` Naveen N. Rao
  0 siblings, 0 replies; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-13  5:53 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, Ananth N Mavinakayanahalli, Ingo Molnar,
	linuxppc-dev, linux-kernel

On 2017/04/13 01:37PM, Masami Hiramatsu wrote:
> On Wed, 12 Apr 2017 16:28:28 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > On kprobe handler re-entry, try to emulate the instruction rather than
> > single stepping always.
> > 
> 
> > As a related change, remove the duplicate saving of msr as that is
> > already done in set_current_kprobe()
> 
> If so, this part might be separated as a cleanup patch...

Sure, thanks for the review!

- Naveen


* Re: [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper
  2017-04-13  4:34   ` Masami Hiramatsu
@ 2017-04-13  8:50       ` Naveen N. Rao
  2017-04-13  8:50       ` Naveen N. Rao
  1 sibling, 0 replies; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-13  8:50 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ananth N Mavinakayanahalli, linux-kernel, linuxppc-dev,
	Ingo Molnar, Michael Ellerman

Excerpts from Masami Hiramatsu's message of April 13, 2017 10:04:
> On Wed, 12 Apr 2017 16:28:27 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
>> This helper will be used in a subsequent patch to emulate instructions
>> on re-entering the kprobe handler. No functional change.
> 
> In this case, please merge this patch into the next patch which
> actually uses the factored out function unless that changes
> too much.

In hindsight, this patch actually just refactors the code so that the 
helper can be re-used subsequently. Using the helper constitutes a 
separate unrelated change, so I'm keeping this patch as is. I am 
updating the description to convey this better.

- Naveen

> 
> Thank you,
> 
>> 
>> Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/kernel/kprobes.c | 52 ++++++++++++++++++++++++++-----------------
>>  1 file changed, 31 insertions(+), 21 deletions(-)
>> 
>> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
>> index 0732a0291ace..8b48f7d046bd 100644
>> --- a/arch/powerpc/kernel/kprobes.c
>> +++ b/arch/powerpc/kernel/kprobes.c
>> @@ -207,6 +207,35 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
>>  	regs->link = (unsigned long)kretprobe_trampoline;
>>  }
>>  
>> +int __kprobes try_to_emulate(struct kprobe *p, struct pt_regs *regs)
>> +{
>> +	int ret;
>> +	unsigned int insn = *p->ainsn.insn;
>> +
>> +	/* regs->nip is also adjusted if emulate_step returns 1 */
>> +	ret = emulate_step(regs, insn);
>> +	if (ret > 0) {
>> +		/*
>> +		 * Once this instruction has been boosted
>> +		 * successfully, set the boostable flag
>> +		 */
>> +		if (unlikely(p->ainsn.boostable == 0))
>> +			p->ainsn.boostable = 1;
>> +	} else if (ret < 0) {
>> +		/*
>> +		 * We don't allow kprobes on mtmsr(d)/rfi(d), etc.
>> +		 * So, we should never get here... but, its still
>> +		 * good to catch them, just in case...
>> +		 */
>> +		printk("Can't step on instruction %x\n", insn);
>> +		BUG();
>> +	} else if (ret == 0)
>> +		/* This instruction can't be boosted */
>> +		p->ainsn.boostable = -1;
>> +
>> +	return ret;
>> +}
>> +
>>  int __kprobes kprobe_handler(struct pt_regs *regs)
>>  {
>>  	struct kprobe *p;
>> @@ -302,18 +331,9 @@ int __kprobes kprobe_handler(struct pt_regs *regs)
>>  
>>  ss_probe:
>>  	if (p->ainsn.boostable >= 0) {
>> -		unsigned int insn = *p->ainsn.insn;
>> +		ret = try_to_emulate(p, regs);
>>  
>> -		/* regs->nip is also adjusted if emulate_step returns 1 */
>> -		ret = emulate_step(regs, insn);
>>  		if (ret > 0) {
>> -			/*
>> -			 * Once this instruction has been boosted
>> -			 * successfully, set the boostable flag
>> -			 */
>> -			if (unlikely(p->ainsn.boostable == 0))
>> -				p->ainsn.boostable = 1;
>> -
>>  			if (p->post_handler)
>>  				p->post_handler(p, regs, 0);
>>  
>> @@ -321,17 +341,7 @@ int __kprobes kprobe_handler(struct pt_regs *regs)
>>  			reset_current_kprobe();
>>  			preempt_enable_no_resched();
>>  			return 1;
>> -		} else if (ret < 0) {
>> -			/*
>> -			 * We don't allow kprobes on mtmsr(d)/rfi(d), etc.
>> -			 * So, we should never get here... but, its still
>> -			 * good to catch them, just in case...
>> -			 */
>> -			printk("Can't step on instruction %x\n", insn);
>> -			BUG();
>> -		} else if (ret == 0)
>> -			/* This instruction can't be boosted */
>> -			p->ainsn.boostable = -1;
>> +		}
>>  	}
>>  	prepare_singlestep(p, regs);
>>  	kcb->kprobe_status = KPROBE_HIT_SS;
>> -- 
>> 2.12.1
>> 
> 
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>
> 
> 

^ permalink raw reply	[flat|nested] 25+ messages in thread

* RE: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function
  2017-04-12 10:58 ` [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function Naveen N. Rao
@ 2017-04-18 12:52     ` David Laight
  2017-04-18 12:52     ` David Laight
  1 sibling, 0 replies; 25+ messages in thread
From: David Laight @ 2017-04-18 12:52 UTC (permalink / raw)
  To: 'Naveen N. Rao', Michael Ellerman
  Cc: linux-kernel, linuxppc-dev, Masami Hiramatsu, Ingo Molnar

From: Naveen N. Rao
> Sent: 12 April 2017 11:58
...
> +kprobe_opcode_t *kprobe_lookup_name(const char *name)
> +{
...
> +	char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];
> +	const char *modsym;
> +	bool dot_appended = false;
> +	if ((modsym = strchr(name, ':')) != NULL) {
> +		modsym++;
> +		if (*modsym != '\0' && *modsym != '.') {
> +			/* Convert to <module:.symbol> */
> +			strncpy(dot_name, name, modsym - name);
> +			dot_name[modsym - name] = '.';
> +			dot_name[modsym - name + 1] = '\0';
> +			strncat(dot_name, modsym,
> +				sizeof(dot_name) - (modsym - name) - 2);
> +			dot_appended = true;

If the ':' is a long way down name[], then although the strncpy() won't
overrun dot_name[], the rest of the code can.

The strncat() call is particularly borked.
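
To make the failure window concrete, a minimal sketch with a
hypothetical 8-byte buffer (the real buffer is much larger, but the
arithmetic is the same):

	char dot_name[8];
	const char *name = "module:symbol";		/* ':' at offset 6 */
	const char *modsym = strchr(name, ':') + 1;	/* modsym - name == 7 */

	strncpy(dot_name, name, modsym - name);		/* 7 bytes: still in bounds */
	dot_name[modsym - name] = '.';			/* index 7: the last valid byte */
	dot_name[modsym - name + 1] = '\0';		/* index 8: one past the end */
	strncat(dot_name, modsym,
		sizeof(dot_name) - (modsym - name) - 2);	/* 8 - 7 - 2 wraps to SIZE_MAX */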

	David

^ permalink raw reply	[flat|nested] 25+ messages in thread

* RE: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function
  2017-04-18 12:52     ` David Laight
@ 2017-04-19  8:08       ` Naveen N. Rao
  -1 siblings, 0 replies; 25+ messages in thread
From: Naveen N. Rao @ 2017-04-19  8:08 UTC (permalink / raw)
  To: David Laight, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev, Masami Hiramatsu, Ingo Molnar

Excerpts from David Laight's message of April 18, 2017 18:22:
> From: Naveen N. Rao
>> Sent: 12 April 2017 11:58
> ...
>> +kprobe_opcode_t *kprobe_lookup_name(const char *name)
>> +{
> ...
>> +	char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];
>> +	const char *modsym;
>> +	bool dot_appended = false;
>> +	if ((modsym = strchr(name, ':')) != NULL) {
>> +		modsym++;
>> +		if (*modsym != '\0' && *modsym != '.') {
>> +			/* Convert to <module:.symbol> */
>> +			strncpy(dot_name, name, modsym - name);
>> +			dot_name[modsym - name] = '.';
>> +			dot_name[modsym - name + 1] = '\0';
>> +			strncat(dot_name, modsym,
>> +				sizeof(dot_name) - (modsym - name) - 2);
>> +			dot_appended = true;
> 
> If the ':' is 'a way down' name[] then although the strncpy() won't
> overrun dot_name[] the rest of the code can.

Nice catch, thanks David!
We need to be validating the length of 'name'. I'll put out a patch for 
that.

As an aside, I'm not sure I follow what you mean when you say that the 
strncpy() won't overrun dot_name[]. If we have a name[] longer than 
sizeof(dot_name) with the ':' after that, the strncpy() can also overrun 
dot_name[].
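
For instance, with a hypothetical tiny buffer (just to illustrate the
aside):

	char dot_name[8];
	const char *name = "a_very_long_module:sym";
	const char *modsym = strchr(name, ':') + 1;	/* modsym - name == 19 */

	strncpy(dot_name, name, modsym - name);	/* copies 19 bytes into an 8-byte buffer */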


- Naveen

> 
> The strncat() call is particularly borked.
> 
> 	David
> 
> 

^ permalink raw reply	[flat|nested] 25+ messages in thread

* RE: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function
  2017-04-19  8:08       ` Naveen N. Rao
@ 2017-04-19  8:48         ` David Laight
  -1 siblings, 0 replies; 25+ messages in thread
From: David Laight @ 2017-04-19  8:48 UTC (permalink / raw)
  To: 'Naveen N. Rao', Michael Ellerman
  Cc: linux-kernel, linuxppc-dev, Masami Hiramatsu, Ingo Molnar

From: Naveen N. Rao
> Sent: 19 April 2017 09:09
> To: David Laight; Michael Ellerman
> Cc: linux-kernel@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; Masami Hiramatsu; Ingo Molnar
> Subject: RE: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function
> 
> Excerpts from David Laight's message of April 18, 2017 18:22:
> > From: Naveen N. Rao
> >> Sent: 12 April 2017 11:58
> > ...
> >> +kprobe_opcode_t *kprobe_lookup_name(const char *name)
> >> +{
> > ...
> >> +	char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];
> >> +	const char *modsym;
> >> +	bool dot_appended = false;
> >> +	if ((modsym = strchr(name, ':')) != NULL) {
> >> +		modsym++;
> >> +		if (*modsym != '\0' && *modsym != '.') {
> >> +			/* Convert to <module:.symbol> */
> >> +			strncpy(dot_name, name, modsym - name);
> >> +			dot_name[modsym - name] = '.';
> >> +			dot_name[modsym - name + 1] = '\0';
> >> +			strncat(dot_name, modsym,
> >> +				sizeof(dot_name) - (modsym - name) - 2);
> >> +			dot_appended = true;
> >
> > If the ':' is 'a way down' name[] then although the strncpy() won't
> > overrun dot_name[] the rest of the code can.
> 
> Nice catch, thanks David!
> We need to be validating the length of 'name'. I'll put out a patch for
> that.

Silent truncation is almost certainly wrong here.

> As an aside, I'm not sure I follow what you mean when you say that the
> strncpy() won't overrun dot_name[]. If we have a name[] longer than
> sizeof(dot_name) with the ':' after that, the strncpy() can also overrun
> dot_name[].

Yes, that should just be a memcpy(), as should the strncat().

Using strncpy() where the length is other than the size of the target buffer
should be banned. Not that it ever does what people expect.
strncat() is even worse.
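
Something like this, perhaps (a sketch only, assuming the caller has
already checked that strlen(name) + 2 fits in dot_name):

	size_t prefix = modsym - name;		/* module name plus the ':' */

	memcpy(dot_name, name, prefix);
	dot_name[prefix] = '.';
	memcpy(dot_name + prefix + 1, modsym, strlen(modsym) + 1);	/* includes the NUL */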

	David

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function
  2017-04-19  8:48         ` David Laight
  (?)
@ 2017-04-19 11:07         ` 'Naveen N. Rao'
  -1 siblings, 0 replies; 25+ messages in thread
From: 'Naveen N. Rao' @ 2017-04-19 11:07 UTC (permalink / raw)
  To: David Laight
  Cc: Michael Ellerman, Ingo Molnar, linuxppc-dev, linux-kernel,
	Masami Hiramatsu

On 2017/04/19 08:48AM, David Laight wrote:
> From: Naveen N. Rao
> > Sent: 19 April 2017 09:09
> > To: David Laight; Michael Ellerman
> > Cc: linux-kernel@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; Masami Hiramatsu; Ingo Molnar
> > Subject: RE: [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function
> > 
> > Excerpts from David Laight's message of April 18, 2017 18:22:
> > > From: Naveen N. Rao
> > >> Sent: 12 April 2017 11:58
> > > ...
> > >> +kprobe_opcode_t *kprobe_lookup_name(const char *name)
> > >> +{
> > > ...
> > >> +	char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];
> > >> +	const char *modsym;
> > >> +	bool dot_appended = false;
> > >> +	if ((modsym = strchr(name, ':')) != NULL) {
> > >> +		modsym++;
> > >> +		if (*modsym != '\0' && *modsym != '.') {
> > >> +			/* Convert to <module:.symbol> */
> > >> +			strncpy(dot_name, name, modsym - name);
> > >> +			dot_name[modsym - name] = '.';
> > >> +			dot_name[modsym - name + 1] = '\0';
> > >> +			strncat(dot_name, modsym,
> > >> +				sizeof(dot_name) - (modsym - name) - 2);
> > >> +			dot_appended = true;
> > >
> > > If the ':' is 'a way down' name[] then although the strncpy() won't
> > > overrun dot_name[] the rest of the code can.
> > 
> > Nice catch, thanks David!
> > We need to be validating the length of 'name'. I'll put out a patch for
> > that.
> 
> Silent truncation is almost certainly wrong here.

Indeed. This will be handled by the earlier validation to ensure that 
the module name as well as the symbol name are within the expected 
lengths.

> 
> > As an aside, I'm not sure I follow what you mean when you say that the
> > strncpy() won't overrun dot_name[]. If we have a name[] longer than
> > sizeof(dot_name) with the ':' after that, the strncpy() can also overrun
> > dot_name[].
> 
> Yes, that should just be a memcpy(), as should the strncat().
> 
> Using strncpy() where the length is other than the size of the target buffer
> should be banned. Not that it ever does what people expect.
> strncat() is even worse.

Sure, but with proper validation in place, I still think the string 
functions are convenient and useful here. I agree with your view on 
strncat(), though, so I'll switch to strlcat() instead.
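
Roughly along these lines (a sketch only; the exact checks may differ
in the actual patch):

	/* reject names that cannot fit once the '.' is inserted */
	if (strlen(name) >= sizeof(dot_name) - 1)
		return NULL;

	strncpy(dot_name, name, modsym - name);
	dot_name[modsym - name] = '.';
	dot_name[modsym - name + 1] = '\0';
	strlcat(dot_name, modsym, sizeof(dot_name));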

Thanks,
Naveen

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2017-04-19 11:07 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-12 10:58 [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Naveen N. Rao
2017-04-12 10:58 ` [PATCH v2 1/5] kprobes: convert kprobe_lookup_name() to a function Naveen N. Rao
2017-04-13  3:09   ` Masami Hiramatsu
2017-04-18 12:52   ` David Laight
2017-04-18 12:52     ` David Laight
2017-04-19  8:08     ` Naveen N. Rao
2017-04-19  8:08       ` Naveen N. Rao
2017-04-19  8:48       ` David Laight
2017-04-19  8:48         ` David Laight
2017-04-19 11:07         ` 'Naveen N. Rao'
2017-04-12 10:58 ` [PATCH v2 2/5] powerpc: kprobes: fix handling of function offsets on ABIv2 Naveen N. Rao
2017-04-13  4:28   ` Masami Hiramatsu
2017-04-12 10:58 ` [PATCH v2 3/5] powerpc: introduce a new helper to obtain function entry points Naveen N. Rao
2017-04-13  4:32   ` Masami Hiramatsu
2017-04-13  5:52     ` Naveen N. Rao
2017-04-12 10:58 ` [PATCH v2 4/5] powerpc: kprobes: factor out code to emulate instruction into a helper Naveen N. Rao
2017-04-13  4:34   ` Masami Hiramatsu
2017-04-13  5:53     ` Naveen N. Rao
2017-04-13  8:50     ` Naveen N. Rao
2017-04-13  8:50       ` Naveen N. Rao
2017-04-12 10:58 ` [PATCH v2 5/5] powerpc: kprobes: emulate instructions on kprobe handler re-entry Naveen N. Rao
2017-04-13  4:37   ` Masami Hiramatsu
2017-04-13  5:53     ` Naveen N. Rao
2017-04-13  3:02 ` [PATCH v2 0/5] powerpc: a few kprobe fixes and refactoring Masami Hiramatsu
2017-04-13  5:50   ` Naveen N. Rao
