linux-kernel.vger.kernel.org archive mirror
* [PATCH V2 0/4] OPTPROBES for powerpc
@ 2016-12-14 16:18 Anju T Sudhakar
  2016-12-14 16:18 ` [PATCH V2 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Anju T Sudhakar @ 2016-12-14 16:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju


This is the V2 patchset of the kprobes jump optimization
(a.k.a. OPTPROBES) for powerpc. Kprobes is an indispensable tool
for kernel developers, so improving its performance is of real
importance.

Currently kprobes inserts a trap instruction to probe a running kernel.
Jump optimization allows kprobes to replace the trap with a branch,
reducing the probe overhead drastically.

In this series, conditional branch instructions are not considered for
optimization as they have to be assessed carefully in SMP systems.

The kprobe placed on the kretprobe_trampoline during boot time is also
optimized in this series; patch 4/4 implements this.

The first two patches can go independently of the series. The helper 
functions in these patches are invoked in patch 3/4.

Performance:
============
An optimized kprobe in powerpc is 1.05 to 4.7 times faster than a kprobe.
 
Example:
 
Placed a probe at offset 0x50 in _do_fork().
*Time Diff here is the difference between the timestamp taken just before
hitting the probe and the one taken just after the probed instruction is
executed. mftb() is employed in kernel/fork.c for this purpose.
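For reference, a rough sketch of one way such a measurement can be taken with
the timebase register is shown below. The exact instrumentation added to
kernel/fork.c for this test is not part of the series, so the helper names and
their placement here are illustrative only:

#include <linux/types.h>
#include <linux/printk.h>
#include <asm/reg.h>		/* mftb() */

static u64 tb_before;

/* Illustrative only: called just before the probed instruction */
static inline void probe_timing_start(void)
{
	tb_before = mftb();
}

/* Illustrative only: called just after the probed instruction */
static inline void probe_timing_end(void)
{
	pr_info("Time Diff = 0x%llx\n",
		(unsigned long long)(mftb() - tb_before));
}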
 
# echo 0 > /proc/sys/debug/kprobes-optimization
Kprobes globally unoptimized
 [  233.607120] Time Diff = 0x1f0
 [  233.608273] Time Diff = 0x1ee
 [  233.609228] Time Diff = 0x203
 [  233.610400] Time Diff = 0x1ec
 [  233.611335] Time Diff = 0x200
 [  233.612552] Time Diff = 0x1f0
 [  233.613386] Time Diff = 0x1ee
 [  233.614547] Time Diff = 0x212
 [  233.615570] Time Diff = 0x206
 [  233.616819] Time Diff = 0x1f3
 [  233.617773] Time Diff = 0x1ec
 [  233.618944] Time Diff = 0x1fb
 [  233.619879] Time Diff = 0x1f0
 [  233.621066] Time Diff = 0x1f9
 [  233.621999] Time Diff = 0x283
 [  233.623281] Time Diff = 0x24d
 [  233.624172] Time Diff = 0x1ea
 [  233.625381] Time Diff = 0x1f0
 [  233.626358] Time Diff = 0x200
 [  233.627572] Time Diff = 0x1ed
 
# echo 1 > /proc/sys/debug/kprobes-optimization
Kprobes globally optimized
 [   70.797075] Time Diff = 0x103
 [   70.799102] Time Diff = 0x181
 [   70.801861] Time Diff = 0x15e
 [   70.803466] Time Diff = 0xf0
 [   70.804348] Time Diff = 0xd0
 [   70.805653] Time Diff = 0xad
 [   70.806477] Time Diff = 0xe0
 [   70.807725] Time Diff = 0xbe
 [   70.808541] Time Diff = 0xc3
 [   70.810191] Time Diff = 0xc7
 [   70.811007] Time Diff = 0xc0
 [   70.812629] Time Diff = 0xc0
 [   70.813640] Time Diff = 0xda
 [   70.814915] Time Diff = 0xbb
 [   70.815726] Time Diff = 0xc4
 [   70.816955] Time Diff = 0xc0
 [   70.817778] Time Diff = 0xcd
 [   70.818999] Time Diff = 0xcd
 [   70.820099] Time Diff = 0xcb
 [   70.821333] Time Diff = 0xf0

Implementation:
===================
 
The trap instruction is replaced by a branch to a detour buffer. To work
within the range limitation of the powerpc branch instruction, the detour
buffer slot is allocated from a reserved area. This ensures that the branch
is within the ±32MB range. The existing kprobes insn caches allocate the
memory area for insn slots with module_alloc(), which would always be beyond
the ±32MB range.
 
The detour buffer contains a call to optimized_callback(), which in turn
calls the pre_handler(). Once the pre-handler has run, the original
instruction is emulated from the detour buffer itself. The detour buffer
also ends with a branch back to the normal work flow after the probed
instruction is emulated. Before preparing the optimization, kprobes inserts
the original (breakpoint instruction) kprobe at the specified address, so
even if the kprobe cannot be optimized, it simply falls back to a normal
kprobe.
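No user-visible API change is involved; registering a regular kprobe is enough
to benefit, since the core kprobes optimizer calls the arch_*_optimized_kprobe()
hooks added here when the probe qualifies. A minimal sketch of such a test
module (hypothetical, for illustration only; the symbol and offset match the
measurement above):

#include <linux/module.h>
#include <linux/kprobes.h>

static int demo_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	return 0;	/* nothing special is needed for optimization */
}

static struct kprobe demo_kp = {
	.symbol_name	= "_do_fork",
	.offset		= 0x50,
	.pre_handler	= demo_pre_handler,
};

static int __init optprobe_demo_init(void)
{
	/* Optimized later by the kprobe optimizer if can_optimize() allows */
	return register_kprobe(&demo_kp);
}

static void __exit optprobe_demo_exit(void)
{
	unregister_kprobe(&demo_kp);
}

module_init(optprobe_demo_init);
module_exit(optprobe_demo_exit);
MODULE_LICENSE("GPL");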
 
Limitations:
==============
- Number of probes which can be optimized is limited by the size of the
  area reserved.
- Currently instructions which can be emulated using analyse_instr() are 
  the only candidates for optimization.
- Conditional branch instructions are not optimized.
- Probes in the kernel module region are not considered for optimization for now.
 
Link for the V1 patchset: 
			https://lkml.org/lkml/2016/9/7/171
			https://lkml.org/lkml/2016/9/7/174
			https://lkml.org/lkml/2016/9/7/172
			https://lkml.org/lkml/2016/9/7/173

Changes from v1:

- Merged the three patches in V1 into a single patch.
- Comments by Masami are addressed.
- Some helper functions are implemented in separate patches.
- Optimization for kprobe placed on the kretprobe_trampoline during
  boot time is implemented.


Kindly let me know your suggestions and comments.

Thanks,
-Anju


Anju T Sudhakar (2):
  arch/powerpc: Implement Optprobes
  arch/powerpc: Optimize kprobe in kretprobe_trampoline

Naveen N. Rao (2):
  powerpc: asm/ppc-opcode.h: introduce __PPC_SH64()
  powerpc: add helper to check if offset is within rel branch range

 .../features/debug/optprobes/arch-support.txt      |   2 +-
 arch/powerpc/Kconfig                               |   1 +
 arch/powerpc/include/asm/code-patching.h           |   1 +
 arch/powerpc/include/asm/kprobes.h                 |  23 +-
 arch/powerpc/include/asm/ppc-opcode.h              |   1 +
 arch/powerpc/include/asm/sstep.h                   |   1 +
 arch/powerpc/kernel/Makefile                       |   1 +
 arch/powerpc/kernel/kprobes.c                      |   8 +
 arch/powerpc/kernel/optprobes.c                    | 332 +++++++++++++++++++++
 arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
 arch/powerpc/lib/code-patching.c                   |  24 +-
 arch/powerpc/lib/sstep.c                           |  22 ++
 arch/powerpc/net/bpf_jit.h                         |  11 +-
 13 files changed, 553 insertions(+), 9 deletions(-)
 create mode 100644 arch/powerpc/kernel/optprobes.c
 create mode 100644 arch/powerpc/kernel/optprobes_head.S

-- 
2.7.4


* [PATCH V2 3/4] arch/powerpc: Implement Optprobes
  2016-12-14 16:18 [PATCH V2 0/4] OPTPROBES for powerpc Anju T Sudhakar
@ 2016-12-14 16:18 ` Anju T Sudhakar
  2016-12-16 14:02   ` Masami Hiramatsu
  2016-12-14 16:18 ` [PATCH V2 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Anju T Sudhakar @ 2016-12-14 16:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju


The detour buffer contains instructions to create an in-memory pt_regs.
After the execution of the pre-handler, a call is made for instruction emulation.
The NIP is determined in advance through dummy instruction emulation, and a branch
instruction to that NIP is created at the end of the trampoline.

The instruction slot for the detour buffer is allocated from the reserved area.
For the time being, 64KB is reserved in memory for this purpose.

Instructions which can be emulated using analyse_instr() are the only candidates
for optimization. Before optimization, ensure that the address range
between the allocated detour buffer and the instruction being probed
is within ±32MB.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 .../features/debug/optprobes/arch-support.txt      |   2 +-
 arch/powerpc/Kconfig                               |   1 +
 arch/powerpc/include/asm/kprobes.h                 |  23 +-
 arch/powerpc/include/asm/sstep.h                   |   1 +
 arch/powerpc/kernel/Makefile                       |   1 +
 arch/powerpc/kernel/optprobes.c                    | 333 +++++++++++++++++++++
 arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
 arch/powerpc/lib/sstep.c                           |  22 ++
 8 files changed, 516 insertions(+), 2 deletions(-)
 create mode 100644 arch/powerpc/kernel/optprobes.c
 create mode 100644 arch/powerpc/kernel/optprobes_head.S

diff --git a/Documentation/features/debug/optprobes/arch-support.txt b/Documentation/features/debug/optprobes/arch-support.txt
index b8999d8..45bc99d 100644
--- a/Documentation/features/debug/optprobes/arch-support.txt
+++ b/Documentation/features/debug/optprobes/arch-support.txt
@@ -27,7 +27,7 @@
     |       nios2: | TODO |
     |    openrisc: | TODO |
     |      parisc: | TODO |
-    |     powerpc: | TODO |
+    |     powerpc: |  ok  |
     |        s390: | TODO |
     |       score: | TODO |
     |          sh: | TODO |
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c7f120a..d563f0a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -98,6 +98,7 @@ config PPC
 	select HAVE_IOREMAP_PROT
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
 	select HAVE_KPROBES
+	select HAVE_OPTPROBES if PPC64
 	select HAVE_ARCH_KGDB
 	select HAVE_KRETPROBES
 	select HAVE_ARCH_TRACEHOOK
diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
index 2c9759bd..739ddc5 100644
--- a/arch/powerpc/include/asm/kprobes.h
+++ b/arch/powerpc/include/asm/kprobes.h
@@ -38,7 +38,22 @@ struct pt_regs;
 struct kprobe;
 
 typedef ppc_opcode_t kprobe_opcode_t;
-#define MAX_INSN_SIZE 1
+
+extern kprobe_opcode_t optinsn_slot;
+
+/* Optinsn template address */
+extern kprobe_opcode_t optprobe_template_entry[];
+extern kprobe_opcode_t optprobe_template_op_address[];
+extern kprobe_opcode_t optprobe_template_call_handler[];
+extern kprobe_opcode_t optprobe_template_insn[];
+extern kprobe_opcode_t optprobe_template_call_emulate[];
+extern kprobe_opcode_t optprobe_template_ret[];
+extern kprobe_opcode_t optprobe_template_end[];
+
+#define MAX_INSN_SIZE		1
+#define MAX_OPTIMIZED_LENGTH	4
+#define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
+#define RELATIVEJUMP_SIZE	4
 
 #ifdef PPC64_ELF_ABI_v2
 /* PPC64 ABIv2 needs local entry point */
@@ -124,6 +139,12 @@ struct kprobe_ctlblk {
 	struct prev_kprobe prev_kprobe;
 };
 
+struct arch_optimized_insn {
+	kprobe_opcode_t copied_insn[1];
+	/* detour buffer */
+	kprobe_opcode_t *insn;
+};
+
 extern int kprobe_exceptions_notify(struct notifier_block *self,
 					unsigned long val, void *data);
 extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
index d3a42cc..f7ad425 100644
--- a/arch/powerpc/include/asm/sstep.h
+++ b/arch/powerpc/include/asm/sstep.h
@@ -87,3 +87,4 @@ struct instruction_op {
 
 extern int analyse_instr(struct instruction_op *op, struct pt_regs *regs,
 			 unsigned int instr);
+extern bool is_conditional_branch(unsigned int instr);
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 1925341..54f0f47 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -98,6 +98,7 @@ obj-$(CONFIG_KGDB)		+= kgdb.o
 obj-$(CONFIG_BOOTX_TEXT)	+= btext.o
 obj-$(CONFIG_SMP)		+= smp.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
+obj-$(CONFIG_OPTPROBES)		+= optprobes.o optprobes_head.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o
 obj-$(CONFIG_PPC_UDBG_16550)	+= legacy_serial.o udbg_16550.o
 obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
new file mode 100644
index 0000000..ecba221
--- /dev/null
+++ b/arch/powerpc/kernel/optprobes.c
@@ -0,0 +1,333 @@
+/*
+ * Code for Kernel probes Jump optimization.
+ *
+ * Copyright 2016, Anju T, IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/kprobes.h>
+#include <linux/jump_label.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <asm/kprobes.h>
+#include <asm/ptrace.h>
+#include <asm/cacheflush.h>
+#include <asm/code-patching.h>
+#include <asm/sstep.h>
+#include <asm/ppc-opcode.h>
+
+#define TMPL_CALL_HDLR_IDX	\
+	(optprobe_template_call_handler - optprobe_template_entry)
+#define TMPL_EMULATE_IDX	\
+	(optprobe_template_call_emulate - optprobe_template_entry)
+#define TMPL_RET_IDX		\
+	(optprobe_template_ret - optprobe_template_entry)
+#define TMPL_OP_IDX		\
+	(optprobe_template_op_address - optprobe_template_entry)
+#define TMPL_INSN_IDX		\
+	(optprobe_template_insn - optprobe_template_entry)
+#define TMPL_END_IDX		\
+	(optprobe_template_end - optprobe_template_entry)
+
+DEFINE_INSN_CACHE_OPS(ppc_optinsn);
+
+static bool insn_page_in_use;
+
+static void *__ppc_alloc_insn_page(void)
+{
+	if (insn_page_in_use)
+		return NULL;
+	insn_page_in_use = true;
+	return &optinsn_slot;
+}
+
+static void __ppc_free_insn_page(void *page __maybe_unused)
+{
+	insn_page_in_use = false;
+}
+
+struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
+	.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
+	.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
+	/* insn_size initialized later */
+	.alloc = __ppc_alloc_insn_page,
+	.free = __ppc_free_insn_page,
+	.nr_garbage = 0,
+};
+
+/*
+ * Check if we can optimize this probe. Returns NIP post-emulation if this can
+ * be optimized and 0 otherwise.
+ */
+static unsigned long can_optimize(struct kprobe *p)
+{
+	struct pt_regs regs;
+	struct instruction_op op;
+	unsigned long nip = 0;
+
+	/*
+	 * kprobe placed for kretprobe during boot time
+	 * is not optimizing now.
+	 *
+	 * TODO: Optimize kprobe in kretprobe_trampoline
+	 */
+	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
+		return 0;
+
+	/*
+	 * We only support optimizing kernel addresses, but not
+	 * module addresses.
+	 */
+	if (!is_kernel_addr((unsigned long)p->addr))
+		return 0;
+
+	regs.nip = (unsigned long)p->addr;
+	regs.trap = 0x0;
+	regs.msr = MSR_KERNEL;
+
+	/*
+	 * Ensure that the instruction is not a conditional branch,
+	 * and that can be emulated.
+	 */
+	if (!is_conditional_branch(*p->ainsn.insn) &&
+			analyse_instr(&op, &regs, *p->ainsn.insn))
+		nip = regs.nip;
+
+	return nip;
+}
+
+static void optimized_callback(struct optimized_kprobe *op,
+			       struct pt_regs *regs)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
+
+	/* This is possible if op is under delayed unoptimizing */
+	if (kprobe_disabled(&op->kp))
+		return;
+
+	local_irq_save(flags);
+
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		__this_cpu_write(current_kprobe, &op->kp);
+		regs->nip = (unsigned long)op->kp.addr;
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__this_cpu_write(current_kprobe, NULL);
+	}
+	local_irq_restore(flags);
+}
+NOKPROBE_SYMBOL(optimized_callback);
+
+void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	if (op->optinsn.insn) {
+		free_ppc_optinsn_slot(op->optinsn.insn, 1);
+		op->optinsn.insn = NULL;
+	}
+}
+
+/*
+ * emulate_step() requires insn to be emulated as
+ * second parameter. Load register 'r4' with the
+ * instruction.
+ */
+void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
+{
+	/* addis r4,0,(insn)@h */
+	*addr++ = PPC_INST_ADDIS | ___PPC_RT(4) |
+		  ((val >> 16) & 0xffff);
+
+	/* ori r4,r4,(insn)@l */
+	*addr = PPC_INST_ORI | ___PPC_RA(4) | ___PPC_RS(4) |
+		(val & 0xffff);
+}
+
+/*
+ * Generate instructions to load provided immediate 64-bit value
+ * to register 'r3' and patch these instructions at 'addr'.
+ */
+void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
+{
+	/* lis r3,(op)@highest */
+	*addr++ = PPC_INST_ADDIS | ___PPC_RT(3) |
+		  ((val >> 48) & 0xffff);
+
+	/* ori r3,r3,(op)@higher */
+	*addr++ = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
+		  ((val >> 32) & 0xffff);
+
+	/* rldicr r3,r3,32,31 */
+	*addr++ = PPC_INST_RLDICR | ___PPC_RA(3) | ___PPC_RS(3) |
+		  __PPC_SH64(32) | __PPC_ME64(31);
+
+	/* oris r3,r3,(op)@h */
+	*addr++ = PPC_INST_ORIS | ___PPC_RA(3) | ___PPC_RS(3) |
+		  ((val >> 16) & 0xffff);
+
+	/* ori r3,r3,(op)@l */
+	*addr = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
+		(val & 0xffff);
+}
+
+int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
+{
+	kprobe_opcode_t *buff, branch_op_callback, branch_emulate_step;
+	kprobe_opcode_t *op_callback_addr, *emulate_step_addr;
+	long b_offset;
+	unsigned long nip;
+
+	kprobe_ppc_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
+
+	nip = can_optimize(p);
+	if (!nip)
+		return -EILSEQ;
+
+	/* Allocate instruction slot for detour buffer */
+	buff = get_ppc_optinsn_slot();
+	if (!buff)
+		return -ENOMEM;
+
+	/*
+	 * OPTPROBE uses 'b' instruction to branch to optinsn.insn.
+	 *
+	 * The target address has to be relatively nearby, to permit use
+	 * of branch instruction in powerpc, because the address is specified
+	 * in an immediate field in the instruction opcode itself, ie 24 bits
+	 * in the opcode specify the address. Therefore the address should
+	 * be within 32MB on either side of the current instruction.
+	 */
+	b_offset = (unsigned long)buff - (unsigned long)p->addr;
+	if (!is_offset_in_branch_range(b_offset)) {
+		free_ppc_optinsn_slot(buff, 0);
+		return -ERANGE;
+	}
+
+	/* Check if the return address is also within 32MB range */
+	b_offset = (unsigned long)(buff + TMPL_RET_IDX) -
+			(unsigned long)nip;
+	if (!is_offset_in_branch_range(b_offset)) {
+		free_ppc_optinsn_slot(buff, 0);
+		return -ERANGE;
+	}
+
+	/* Setup template */
+	memcpy(buff, optprobe_template_entry,
+			TMPL_END_IDX * sizeof(kprobe_opcode_t));
+
+	/*
+	 * Fixup the template with instructions to:
+	 * 1. load the address of the actual probepoint
+	 */
+	patch_imm64_load_insns((unsigned long)op, buff + TMPL_OP_IDX);
+
+	/*
+	 * 2. branch to optimized_callback() and emulate_step()
+	 */
+	kprobe_lookup_name("optimized_callback", op_callback_addr);
+	kprobe_lookup_name("emulate_step", emulate_step_addr);
+	if (!op_callback_addr || !emulate_step_addr) {
+		free_ppc_optinsn_slot(buff, 0);
+		WARN(1, "kprobe_lookup_name() failed\n");
+		return -ENOENT;
+	}
+
+	branch_op_callback = create_branch((unsigned int *)buff + TMPL_CALL_HDLR_IDX,
+				(unsigned long)op_callback_addr,
+				BRANCH_SET_LINK);
+
+	branch_emulate_step = create_branch((unsigned int *)buff + TMPL_EMULATE_IDX,
+				(unsigned long)emulate_step_addr,
+				BRANCH_SET_LINK);
+
+	if (!branch_op_callback || !branch_emulate_step) {
+		free_ppc_optinsn_slot(buff, 0);
+		return -ERANGE;
+	}
+
+	buff[TMPL_CALL_HDLR_IDX] = branch_op_callback;
+	buff[TMPL_EMULATE_IDX] = branch_emulate_step;
+
+	/*
+	 * 3. load instruction to be emulated into relevant register, and
+	 */
+	patch_imm32_load_insns(*p->ainsn.insn, buff + TMPL_INSN_IDX);
+
+	/*
+	 * 4. branch back from trampoline
+	 */
+	buff[TMPL_RET_IDX] = create_branch((unsigned int *)buff + TMPL_RET_IDX,
+				(unsigned long)nip, 0);
+
+	flush_icache_range((unsigned long)buff,
+			   (unsigned long)(&buff[TMPL_END_IDX]));
+
+	op->optinsn.insn = buff;
+
+	return 0;
+}
+
+int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->insn != NULL;
+}
+
+/*
+ * On powerpc, Optprobes always replaces one instruction (4 bytes
+ * aligned and 4 bytes long). It is impossible to encounter another
+ * kprobe in this address range. So always return 0.
+ */
+int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	return 0;
+}
+
+void arch_optimize_kprobes(struct list_head *oplist)
+{
+	struct optimized_kprobe *op;
+	struct optimized_kprobe *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		/*
+		 * Backup instructions which will be replaced
+		 * by jump address
+		 */
+		memcpy(op->optinsn.copied_insn, op->kp.addr,
+					       RELATIVEJUMP_SIZE);
+		patch_instruction(op->kp.addr,
+			create_branch((unsigned int *)op->kp.addr,
+				      (unsigned long)op->optinsn.insn, 0));
+		list_del_init(&op->list);
+	}
+}
+
+void arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	arch_arm_kprobe(&op->kp);
+}
+
+void arch_unoptimize_kprobes(struct list_head *oplist,
+			     struct list_head *done_list)
+{
+	struct optimized_kprobe *op;
+	struct optimized_kprobe *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		arch_unoptimize_kprobe(op);
+		list_move(&op->list, done_list);
+	}
+}
+
+int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+				 unsigned long addr)
+{
+	return ((unsigned long)op->kp.addr <= addr &&
+		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
+}
diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
new file mode 100644
index 0000000..c86976b
--- /dev/null
+++ b/arch/powerpc/kernel/optprobes_head.S
@@ -0,0 +1,135 @@
+/*
+ * Code to prepare detour buffer for optprobes in Kernel.
+ *
+ * Copyright 2016, Anju T, IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <asm/ppc_asm.h>
+#include <asm/ptrace.h>
+#include <asm/asm-offsets.h>
+
+#define	OPT_SLOT_SIZE	65536
+
+	.balign	4
+
+	/*
+	 * Reserve an area to allocate slots for detour buffer.
+	 * This is part of .text section (rather than vmalloc area)
+	 * as this needs to be within 32MB of the probed address.
+	 */
+	.global optinsn_slot
+optinsn_slot:
+	.space	OPT_SLOT_SIZE
+
+	/*
+	 * Optprobe template:
+	 * This template gets copied into one of the slots in optinsn_slot
+	 * and gets fixed up with real optprobe structures et al.
+	 */
+	.global optprobe_template_entry
+optprobe_template_entry:
+	/* Create an in-memory pt_regs */
+	stdu	r1,-INT_FRAME_SIZE(r1)
+	SAVE_GPR(0,r1)
+	/* Save the previous SP into stack */
+	addi	r0,r1,INT_FRAME_SIZE
+	std	r0,GPR1(r1)
+	SAVE_10GPRS(2,r1)
+	SAVE_10GPRS(12,r1)
+	SAVE_10GPRS(22,r1)
+	/* Save SPRS */
+	mfmsr	r5
+	std	r5,_MSR(r1)
+	li	r5,0x700
+	std	r5,_TRAP(r1)
+	li	r5,0
+	std	r5,ORIG_GPR3(r1)
+	std	r5,RESULT(r1)
+	mfctr	r5
+	std	r5,_CTR(r1)
+	mflr	r5
+	std	r5,_LINK(r1)
+	mfspr	r5,SPRN_XER
+	std	r5,_XER(r1)
+	mfcr	r5
+	std	r5,_CCR(r1)
+	lbz     r5,PACASOFTIRQEN(r13)
+	std     r5,SOFTE(r1)
+	mfdar	r5
+	std	r5,_DAR(r1)
+	mfdsisr	r5
+	std	r5,_DSISR(r1)
+
+	.global optprobe_template_op_address
+optprobe_template_op_address:
+	/*
+	 * Parameters to optimized_callback():
+	 * 1. optimized_kprobe structure in r3
+	 */
+	nop
+	nop
+	nop
+	nop
+	nop
+	/* 2. pt_regs pointer in r4 */
+	addi	r4,r1,STACK_FRAME_OVERHEAD
+
+	.global optprobe_template_call_handler
+optprobe_template_call_handler:
+	/* Branch to optimized_callback() */
+	nop
+
+	/*
+	 * Parameters for instruction emulation:
+	 * 1. Pass SP in register r3.
+	 */
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+
+	.global optprobe_template_insn
+optprobe_template_insn:
+	/* 2, Pass instruction to be emulated in r4 */
+	nop
+	nop
+
+	.global optprobe_template_call_emulate
+optprobe_template_call_emulate:
+	/* Branch to emulate_step()  */
+	nop
+
+	/*
+	 * All done.
+	 * Now, restore the registers...
+	 */
+	ld	r5,_MSR(r1)
+	mtmsr	r5
+	ld	r5,_CTR(r1)
+	mtctr	r5
+	ld	r5,_LINK(r1)
+	mtlr	r5
+	ld	r5,_XER(r1)
+	mtxer	r5
+	ld	r5,_CCR(r1)
+	mtcr	r5
+	ld	r5,_DAR(r1)
+	mtdar	r5
+	ld	r5,_DSISR(r1)
+	mtdsisr	r5
+	REST_GPR(0,r1)
+	REST_10GPRS(2,r1)
+	REST_10GPRS(12,r1)
+	REST_10GPRS(22,r1)
+	/* Restore the previous SP */
+	addi	r1,r1,INT_FRAME_SIZE
+
+	.global optprobe_template_ret
+optprobe_template_ret:
+	/* ... and jump back from trampoline */
+	nop
+
+	.global optprobe_template_end
+optprobe_template_end:
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 3362299..0c6c2b0 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -618,6 +618,27 @@ static int __kprobes trap_compare(long v1, long v2)
 }
 
 /*
+ * Helper to check if a given instruction is a conditional branch
+ * Derived from the conditional checks in analyse_instr()
+ */
+bool __kprobes is_conditional_branch(unsigned int instr)
+{
+	unsigned int opcode = instr >> 26;
+
+	if (opcode == 16)	/* bc, bca, bcl, bcla */
+		return true;
+	if (opcode == 19) {
+		switch ((instr >> 1) & 0x3ff) {
+		case 16:	/* bclr, bclrl */
+		case 528:	/* bcctr, bcctrl */
+		case 560:	/* bctar, bctarl */
+			return true;
+		}
+	}
+	return false;
+}
+
+/*
  * Elements of 32-bit rotate and mask instructions.
  */
 #define MASK32(mb, me)	((0xffffffffUL >> (mb)) + \
@@ -2018,3 +2039,4 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 	regs->nip = truncate_if_32bit(regs->msr, regs->nip + 4);
 	return 1;
 }
+
-- 
2.7.4


* [PATCH V2 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline
  2016-12-14 16:18 [PATCH V2 0/4] OPTPROBES for powerpc Anju T Sudhakar
  2016-12-14 16:18 ` [PATCH V2 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
@ 2016-12-14 16:18 ` Anju T Sudhakar
  2016-12-16 14:09   ` Masami Hiramatsu
  2016-12-14 16:18 ` [PATCH V2 1/4] powerpc: asm/ppc-opcode.h: introduce __PPC_SH64() Anju T Sudhakar
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Anju T Sudhakar @ 2016-12-14 16:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju

A kprobe placed on the kretprobe_trampoline during boot time can be
optimized, since the instruction at the probe point is a 'nop'.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/kprobes.c   | 8 ++++++++
 arch/powerpc/kernel/optprobes.c | 7 +++----
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index e785cc9..5b0fd07 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -282,6 +282,7 @@ asm(".global kretprobe_trampoline\n"
 	".type kretprobe_trampoline, @function\n"
 	"kretprobe_trampoline:\n"
 	"nop\n"
+	"blr\n"
 	".size kretprobe_trampoline, .-kretprobe_trampoline\n");
 
 /*
@@ -334,6 +335,13 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
 
 	kretprobe_assert(ri, orig_ret_address, trampoline_address);
 	regs->nip = orig_ret_address;
+	/*
+	 * Make LR point to the orig_ret_address.
+	 * When the 'nop' inside the kretprobe_trampoline
+	 * is optimized, we can do a 'blr' after executing the
+	 * detour buffer code.
+	 */
+	regs->link = orig_ret_address;
 
 	reset_current_kprobe();
 	kretprobe_hash_unlock(current, &flags);
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
index ecba221..5e4c254 100644
--- a/arch/powerpc/kernel/optprobes.c
+++ b/arch/powerpc/kernel/optprobes.c
@@ -72,12 +72,11 @@ static unsigned long can_optimize(struct kprobe *p)
 
 	/*
 	 * kprobe placed for kretprobe during boot time
-	 * is not optimizing now.
-	 *
-	 * TODO: Optimize kprobe in kretprobe_trampoline
+	 * has a 'nop' instruction, which can be emulated.
+	 * So further checks can be skipped.
 	 */
 	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
-		return 0;
+		return (unsigned long)p->addr + sizeof(kprobe_opcode_t);
 
 	/*
 	 * We only support optimizing kernel addresses, but not
-- 
2.7.4


* [PATCH V2 1/4] powerpc: asm/ppc-opcode.h: introduce __PPC_SH64()
  2016-12-14 16:18 [PATCH V2 0/4] OPTPROBES for powerpc Anju T Sudhakar
  2016-12-14 16:18 ` [PATCH V2 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
  2016-12-14 16:18 ` [PATCH V2 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
@ 2016-12-14 16:18 ` Anju T Sudhakar
  2016-12-14 16:18 ` [PATCH V2 2/4] powerpc: add helper to check if offset is within rel branch range Anju T Sudhakar
  2016-12-16 14:46 ` [PATCH V2 0/4] OPTPROBES for powerpc Balbir Singh
  4 siblings, 0 replies; 12+ messages in thread
From: Anju T Sudhakar @ 2016-12-14 16:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju

From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>

Introduce __PPC_SH64() as a 64-bit variant to encode the shift field in some
of the shift and rotate instructions operating on double-words. Convert
some of the BPF instruction macros to use the same.
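For illustration (not part of the patch), the 64-bit shift amount is split
between the regular 5-bit sh field and a separate high bit; a quick user-space
check of the encoding could look like this (macros copied here only for the
demonstration):

#include <stdio.h>

#define __PPC_WS(w)	(((w) & 0x1f) << 11)
#define __PPC_SH(s)	__PPC_WS(s)
#define __PPC_SH64(s)	(__PPC_SH(s) | (((s) & 0x20) >> 4))

int main(void)
{
	/* sh = 32: the low 5 bits are 0, bit 5 (0x20) moves into bit 1 */
	printf("__PPC_SH64(32) = 0x%x\n", __PPC_SH64(32));	/* 0x2 */
	/* sh = 63: 0x1f goes into the sh field, plus the high bit */
	printf("__PPC_SH64(63) = 0x%x\n", __PPC_SH64(63));	/* 0xf802 */
	return 0;
}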

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/ppc-opcode.h |  1 +
 arch/powerpc/net/bpf_jit.h            | 11 +++++------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 0132831..630127b 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -306,6 +306,7 @@
 #define __PPC_WC(w)	(((w) & 0x3) << 21)
 #define __PPC_WS(w)	(((w) & 0x1f) << 11)
 #define __PPC_SH(s)	__PPC_WS(s)
+#define __PPC_SH64(s)	(__PPC_SH(s) | (((s) & 0x20) >> 4))
 #define __PPC_MB(s)	(((s) & 0x1f) << 6)
 #define __PPC_ME(s)	(((s) & 0x1f) << 1)
 #define __PPC_MB64(s)	(__PPC_MB(s) | ((s) & 0x20))
diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 89f7007..30cf03f 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -157,8 +157,7 @@
 #define PPC_SRAD(d, a, s)	EMIT(PPC_INST_SRAD | ___PPC_RA(d) |	      \
 				     ___PPC_RS(a) | ___PPC_RB(s))
 #define PPC_SRADI(d, a, i)	EMIT(PPC_INST_SRADI | ___PPC_RA(d) |	      \
-				     ___PPC_RS(a) | __PPC_SH(i) |             \
-				     (((i) & 0x20) >> 4))
+				     ___PPC_RS(a) | __PPC_SH64(i))
 #define PPC_RLWINM(d, a, i, mb, me)	EMIT(PPC_INST_RLWINM | ___PPC_RA(d) | \
 					___PPC_RS(a) | __PPC_SH(i) |	      \
 					__PPC_MB(mb) | __PPC_ME(me))
@@ -166,11 +165,11 @@
 					___PPC_RS(a) | __PPC_SH(i) |	      \
 					__PPC_MB(mb) | __PPC_ME(me))
 #define PPC_RLDICL(d, a, i, mb)		EMIT(PPC_INST_RLDICL | ___PPC_RA(d) | \
-					___PPC_RS(a) | __PPC_SH(i) |	      \
-					__PPC_MB64(mb) | (((i) & 0x20) >> 4))
+					___PPC_RS(a) | __PPC_SH64(i) |	      \
+					__PPC_MB64(mb))
 #define PPC_RLDICR(d, a, i, me)		EMIT(PPC_INST_RLDICR | ___PPC_RA(d) | \
-					___PPC_RS(a) | __PPC_SH(i) |	      \
-					__PPC_ME64(me) | (((i) & 0x20) >> 4))
+					___PPC_RS(a) | __PPC_SH64(i) |	      \
+					__PPC_ME64(me))
 
 /* slwi = rlwinm Rx, Ry, n, 0, 31-n */
 #define PPC_SLWI(d, a, i)	PPC_RLWINM(d, a, i, 0, 31-(i))
-- 
2.7.4


* [PATCH V2 2/4] powerpc: add helper to check if offset is within rel branch range
  2016-12-14 16:18 [PATCH V2 0/4] OPTPROBES for powerpc Anju T Sudhakar
                   ` (2 preceding siblings ...)
  2016-12-14 16:18 ` [PATCH V2 1/4] powerpc: asm/ppc-opcode.h: introduce __PPC_SH64() Anju T Sudhakar
@ 2016-12-14 16:18 ` Anju T Sudhakar
  2016-12-16 11:52   ` Masami Hiramatsu
  2016-12-16 14:46 ` [PATCH V2 0/4] OPTPROBES for powerpc Balbir Singh
  4 siblings, 1 reply; 12+ messages in thread
From: Anju T Sudhakar @ 2016-12-14 16:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju

From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/code-patching.h |  1 +
 arch/powerpc/lib/code-patching.c         | 24 +++++++++++++++++++++++-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 2015b07..75ee4f4 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -22,6 +22,7 @@
 #define BRANCH_SET_LINK	0x1
 #define BRANCH_ABSOLUTE	0x2
 
+bool is_offset_in_branch_range(long offset);
 unsigned int create_branch(const unsigned int *addr,
 			   unsigned long target, int flags);
 unsigned int create_cond_branch(const unsigned int *addr,
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index d5edbeb..f643451 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -32,6 +32,28 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
 	return patch_instruction(addr, create_branch(addr, target, flags));
 }
 
+bool is_offset_in_branch_range(long offset)
+{
+	/*
+	 * Powerpc branch instruction is :
+	 *
+	 *  0         6                 30   31
+	 *  +---------+----------------+---+---+
+	 *  | opcode  |     LI         |AA |LK |
+	 *  +---------+----------------+---+---+
+	 *  Where AA = 0 and LK = 0
+	 *
+	 * LI is a signed 24 bits integer. The real branch offset is computed
+	 * by: imm32 = SignExtend(LI:'0b00', 32);
+	 *
+	 * So the maximum forward branch should be:
+	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
+	 * The maximum backward branch should be:
+	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
+	 */
+	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
+}
+
 unsigned int create_branch(const unsigned int *addr,
 			   unsigned long target, int flags)
 {
@@ -43,7 +65,7 @@ unsigned int create_branch(const unsigned int *addr,
 		offset = offset - (unsigned long)addr;
 
 	/* Check we can represent the target in the instruction format */
-	if (offset < -0x2000000 || offset > 0x1fffffc || offset & 0x3)
+	if (!is_offset_in_branch_range(offset))
 		return 0;
 
 	/* Mask out the flags and target, so they don't step on each other. */
-- 
2.7.4


* Re: [PATCH V2 2/4] powerpc: add helper to check if offset is within rel branch range
  2016-12-14 16:18 ` [PATCH V2 2/4] powerpc: add helper to check if offset is within rel branch range Anju T Sudhakar
@ 2016-12-16 11:52   ` Masami Hiramatsu
  2016-12-16 16:31     ` Anju T Sudhakar
  0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2016-12-16 11:52 UTC (permalink / raw)
  To: Anju T Sudhakar
  Cc: linux-kernel, linuxppc-dev, ananth, naveen.n.rao, paulus, srikar,
	benh, mpe, mahesh, mhiramat

On Wed, 14 Dec 2016 21:48:30 +0530
Anju T Sudhakar <anju@linux.vnet.ibm.com> wrote:

> From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
> 

The coding is OK to me. Please add a description for this patch
here, e.g. what is done by this patch, what kind of branch
instructions will be covered, and why these checks are needed, etc.

Thank you,

> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/code-patching.h |  1 +
>  arch/powerpc/lib/code-patching.c         | 24 +++++++++++++++++++++++-
>  2 files changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
> index 2015b07..75ee4f4 100644
> --- a/arch/powerpc/include/asm/code-patching.h
> +++ b/arch/powerpc/include/asm/code-patching.h
> @@ -22,6 +22,7 @@
>  #define BRANCH_SET_LINK	0x1
>  #define BRANCH_ABSOLUTE	0x2
>  
> +bool is_offset_in_branch_range(long offset);
>  unsigned int create_branch(const unsigned int *addr,
>  			   unsigned long target, int flags);
>  unsigned int create_cond_branch(const unsigned int *addr,
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index d5edbeb..f643451 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -32,6 +32,28 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
>  	return patch_instruction(addr, create_branch(addr, target, flags));
>  }
>  
> +bool is_offset_in_branch_range(long offset)
> +{
> +	/*
> +	 * Powerpc branch instruction is :
> +	 *
> +	 *  0         6                 30   31
> +	 *  +---------+----------------+---+---+
> +	 *  | opcode  |     LI         |AA |LK |
> +	 *  +---------+----------------+---+---+
> +	 *  Where AA = 0 and LK = 0
> +	 *
> +	 * LI is a signed 24 bits integer. The real branch offset is computed
> +	 * by: imm32 = SignExtend(LI:'0b00', 32);
> +	 *
> +	 * So the maximum forward branch should be:
> +	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
> +	 * The maximum backward branch should be:
> +	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
> +	 */
> +	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
> +}
> +
>  unsigned int create_branch(const unsigned int *addr,
>  			   unsigned long target, int flags)
>  {
> @@ -43,7 +65,7 @@ unsigned int create_branch(const unsigned int *addr,
>  		offset = offset - (unsigned long)addr;
>  
>  	/* Check we can represent the target in the instruction format */
> -	if (offset < -0x2000000 || offset > 0x1fffffc || offset & 0x3)
> +	if (!is_offset_in_branch_range(offset))
>  		return 0;
>  
>  	/* Mask out the flags and target, so they don't step on each other. */
> -- 
> 2.7.4
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH V2 3/4] arch/powerpc: Implement Optprobes
  2016-12-14 16:18 ` [PATCH V2 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
@ 2016-12-16 14:02   ` Masami Hiramatsu
  2016-12-16 17:54     ` Anju T Sudhakar
  0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2016-12-16 14:02 UTC (permalink / raw)
  To: Anju T Sudhakar
  Cc: linux-kernel, linuxppc-dev, ananth, naveen.n.rao, paulus, srikar,
	benh, mpe, mahesh, mhiramat

On Wed, 14 Dec 2016 21:48:27 +0530
Anju T Sudhakar <anju@linux.vnet.ibm.com> wrote:

> Detour buffer contains instructions to create an in memory pt_regs.
> After the execution of the pre-handler, a call is made for instruction emulation.
> The NIP is determined in advanced through dummy instruction emulation and a branch
> instruction is created to the NIP at the end of the trampoline.
> 
> Instruction slot for detour buffer is allocated from the reserved area.
> For the time being, 64KB is reserved in memory for this purpose.
> 
> Instructions which can be emulated using analyse_instr() are suppliants
> for optimization. Before optimization ensure that the address range
> between the detour buffer allocated and the instruction being probed
> is within ± 32MB.

Thank you for updating!
I think this has no critical issue, but I have some comments on it
(just cleanups and hardening).
Please see below.

> 
> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  .../features/debug/optprobes/arch-support.txt      |   2 +-
>  arch/powerpc/Kconfig                               |   1 +
>  arch/powerpc/include/asm/kprobes.h                 |  23 +-
>  arch/powerpc/include/asm/sstep.h                   |   1 +
>  arch/powerpc/kernel/Makefile                       |   1 +
>  arch/powerpc/kernel/optprobes.c                    | 333 +++++++++++++++++++++
>  arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
>  arch/powerpc/lib/sstep.c                           |  22 ++
>  8 files changed, 516 insertions(+), 2 deletions(-)
>  create mode 100644 arch/powerpc/kernel/optprobes.c
>  create mode 100644 arch/powerpc/kernel/optprobes_head.S
> 
> diff --git a/Documentation/features/debug/optprobes/arch-support.txt b/Documentation/features/debug/optprobes/arch-support.txt
> index b8999d8..45bc99d 100644
> --- a/Documentation/features/debug/optprobes/arch-support.txt
> +++ b/Documentation/features/debug/optprobes/arch-support.txt
> @@ -27,7 +27,7 @@
>      |       nios2: | TODO |
>      |    openrisc: | TODO |
>      |      parisc: | TODO |
> -    |     powerpc: | TODO |
> +    |     powerpc: |  ok  |
>      |        s390: | TODO |
>      |       score: | TODO |
>      |          sh: | TODO |
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index c7f120a..d563f0a 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -98,6 +98,7 @@ config PPC
>  	select HAVE_IOREMAP_PROT
>  	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
>  	select HAVE_KPROBES
> +	select HAVE_OPTPROBES if PPC64
>  	select HAVE_ARCH_KGDB
>  	select HAVE_KRETPROBES
>  	select HAVE_ARCH_TRACEHOOK
> diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
> index 2c9759bd..739ddc5 100644
> --- a/arch/powerpc/include/asm/kprobes.h
> +++ b/arch/powerpc/include/asm/kprobes.h
> @@ -38,7 +38,22 @@ struct pt_regs;
>  struct kprobe;
>  
>  typedef ppc_opcode_t kprobe_opcode_t;
> -#define MAX_INSN_SIZE 1
> +
> +extern kprobe_opcode_t optinsn_slot;
> +
> +/* Optinsn template address */
> +extern kprobe_opcode_t optprobe_template_entry[];
> +extern kprobe_opcode_t optprobe_template_op_address[];
> +extern kprobe_opcode_t optprobe_template_call_handler[];
> +extern kprobe_opcode_t optprobe_template_insn[];
> +extern kprobe_opcode_t optprobe_template_call_emulate[];
> +extern kprobe_opcode_t optprobe_template_ret[];
> +extern kprobe_opcode_t optprobe_template_end[];
> +
> +#define MAX_INSN_SIZE		1
> +#define MAX_OPTIMIZED_LENGTH	4
> +#define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
> +#define RELATIVEJUMP_SIZE	4

These size/length macros seem a bit odd. I guess MAX_INSN_SIZE
is in units of kprobe_opcode_t, but the others are in bytes.
Could you fix that? For example, define them with
sizeof(kprobe_opcode_t), and add comments on them, etc.
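A minimal sketch of what I mean (illustrative only, not tested; the exact
comments and units are up to you):

/* MAX_INSN_SIZE and MAX_OPTINSN_SIZE are in instructions (kprobe_opcode_t) */
#define MAX_INSN_SIZE		1
#define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
/* These two are in bytes */
#define MAX_OPTIMIZED_LENGTH	sizeof(kprobe_opcode_t)
#define RELATIVEJUMP_SIZE	sizeof(kprobe_opcode_t)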

>  
>  #ifdef PPC64_ELF_ABI_v2
>  /* PPC64 ABIv2 needs local entry point */
> @@ -124,6 +139,12 @@ struct kprobe_ctlblk {
>  	struct prev_kprobe prev_kprobe;
>  };
>  
> +struct arch_optimized_insn {
> +	kprobe_opcode_t copied_insn[1];
> +	/* detour buffer */
> +	kprobe_opcode_t *insn;
> +};
> +
>  extern int kprobe_exceptions_notify(struct notifier_block *self,
>  					unsigned long val, void *data);
>  extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
> diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
> index d3a42cc..f7ad425 100644
> --- a/arch/powerpc/include/asm/sstep.h
> +++ b/arch/powerpc/include/asm/sstep.h
> @@ -87,3 +87,4 @@ struct instruction_op {
>  
>  extern int analyse_instr(struct instruction_op *op, struct pt_regs *regs,
>  			 unsigned int instr);
> +extern bool is_conditional_branch(unsigned int instr);
> diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
> index 1925341..54f0f47 100644
> --- a/arch/powerpc/kernel/Makefile
> +++ b/arch/powerpc/kernel/Makefile
> @@ -98,6 +98,7 @@ obj-$(CONFIG_KGDB)		+= kgdb.o
>  obj-$(CONFIG_BOOTX_TEXT)	+= btext.o
>  obj-$(CONFIG_SMP)		+= smp.o
>  obj-$(CONFIG_KPROBES)		+= kprobes.o
> +obj-$(CONFIG_OPTPROBES)		+= optprobes.o optprobes_head.o
>  obj-$(CONFIG_UPROBES)		+= uprobes.o
>  obj-$(CONFIG_PPC_UDBG_16550)	+= legacy_serial.o udbg_16550.o
>  obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
> new file mode 100644
> index 0000000..ecba221
> --- /dev/null
> +++ b/arch/powerpc/kernel/optprobes.c
> @@ -0,0 +1,333 @@
> +/*
> + * Code for Kernel probes Jump optimization.
> + *
> + * Copyright 2016, Anju T, IBM Corp.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version
> + * 2 of the License, or (at your option) any later version.
> + */
> +
> +#include <linux/kprobes.h>
> +#include <linux/jump_label.h>
> +#include <linux/types.h>
> +#include <linux/slab.h>
> +#include <linux/list.h>
> +#include <asm/kprobes.h>
> +#include <asm/ptrace.h>
> +#include <asm/cacheflush.h>
> +#include <asm/code-patching.h>
> +#include <asm/sstep.h>
> +#include <asm/ppc-opcode.h>
> +
> +#define TMPL_CALL_HDLR_IDX	\
> +	(optprobe_template_call_handler - optprobe_template_entry)
> +#define TMPL_EMULATE_IDX	\
> +	(optprobe_template_call_emulate - optprobe_template_entry)
> +#define TMPL_RET_IDX		\
> +	(optprobe_template_ret - optprobe_template_entry)
> +#define TMPL_OP_IDX		\
> +	(optprobe_template_op_address - optprobe_template_entry)
> +#define TMPL_INSN_IDX		\
> +	(optprobe_template_insn - optprobe_template_entry)
> +#define TMPL_END_IDX		\
> +	(optprobe_template_end - optprobe_template_entry)
> +
> +DEFINE_INSN_CACHE_OPS(ppc_optinsn);
> +
> +static bool insn_page_in_use;
> +
> +static void *__ppc_alloc_insn_page(void)
> +{
> +	if (insn_page_in_use)
> +		return NULL;
> +	insn_page_in_use = true;
> +	return &optinsn_slot;
> +}
> +
> +static void __ppc_free_insn_page(void *page __maybe_unused)
> +{
> +	insn_page_in_use = false;
> +}
> +
> +struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
> +	.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
> +	.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
> +	/* insn_size initialized later */
> +	.alloc = __ppc_alloc_insn_page,
> +	.free = __ppc_free_insn_page,
> +	.nr_garbage = 0,
> +};
> +
> +/*
> + * Check if we can optimize this probe. Returns NIP post-emulation if this can
> + * be optimized and 0 otherwise.
> + */
> +static unsigned long can_optimize(struct kprobe *p)
> +{
> +	struct pt_regs regs;
> +	struct instruction_op op;
> +	unsigned long nip = 0;
> +
> +	/*
> +	 * kprobe placed for kretprobe during boot time
> +	 * is not optimizing now.
> +	 *
> +	 * TODO: Optimize kprobe in kretprobe_trampoline
> +	 */
> +	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
> +		return 0;
> +
> +	/*
> +	 * We only support optimizing kernel addresses, but not
> +	 * module addresses.
> +	 */
> +	if (!is_kernel_addr((unsigned long)p->addr))
> +		return 0;
> +
> +	regs.nip = (unsigned long)p->addr;
> +	regs.trap = 0x0;
> +	regs.msr = MSR_KERNEL;
> +
> +	/*
> +	 * Ensure that the instruction is not a conditional branch,
> +	 * and that can be emulated.
> +	 */
> +	if (!is_conditional_branch(*p->ainsn.insn) &&
> +			analyse_instr(&op, &regs, *p->ainsn.insn))
> +		nip = regs.nip;
> +
> +	return nip;
> +}
> +
> +static void optimized_callback(struct optimized_kprobe *op,
> +			       struct pt_regs *regs)
> +{
> +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +	unsigned long flags;
> +
> +	/* This is possible if op is under delayed unoptimizing */
> +	if (kprobe_disabled(&op->kp))
> +		return;
> +
> +	local_irq_save(flags);
> +
> +	if (kprobe_running()) {
> +		kprobes_inc_nmissed_count(&op->kp);
> +	} else {
> +		__this_cpu_write(current_kprobe, &op->kp);
> +		regs->nip = (unsigned long)op->kp.addr;
> +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +		opt_pre_handler(&op->kp, regs);
> +		__this_cpu_write(current_kprobe, NULL);
> +	}
> +	local_irq_restore(flags);
> +}
> +NOKPROBE_SYMBOL(optimized_callback);
> +
> +void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	if (op->optinsn.insn) {
> +		free_ppc_optinsn_slot(op->optinsn.insn, 1);
> +		op->optinsn.insn = NULL;
> +	}
> +}
> +
> +/*
> + * emulate_step() requires insn to be emulated as
> + * second parameter. Load register 'r4' with the
> + * instruction.
> + */
> +void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
> +{
> +	/* addis r4,0,(insn)@h */
> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(4) |
> +		  ((val >> 16) & 0xffff);
> +
> +	/* ori r4,r4,(insn)@l */
> +	*addr = PPC_INST_ORI | ___PPC_RA(4) | ___PPC_RS(4) |
> +		(val & 0xffff);
> +}
> +
> +/*
> + * Generate instructions to load provided immediate 64-bit value
> + * to register 'r3' and patch these instructions at 'addr'.
> + */
> +void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
> +{
> +	/* lis r3,(op)@highest */
> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(3) |
> +		  ((val >> 48) & 0xffff);
> +
> +	/* ori r3,r3,(op)@higher */
> +	*addr++ = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
> +		  ((val >> 32) & 0xffff);
> +
> +	/* rldicr r3,r3,32,31 */
> +	*addr++ = PPC_INST_RLDICR | ___PPC_RA(3) | ___PPC_RS(3) |
> +		  __PPC_SH64(32) | __PPC_ME64(31);
> +
> +	/* oris r3,r3,(op)@h */
> +	*addr++ = PPC_INST_ORIS | ___PPC_RA(3) | ___PPC_RS(3) |
> +		  ((val >> 16) & 0xffff);
> +
> +	/* ori r3,r3,(op)@l */
> +	*addr = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
> +		(val & 0xffff);
> +}

I need another PPC maintainer review for these.

> +
> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
> +{
> +	kprobe_opcode_t *buff, branch_op_callback, branch_emulate_step;
> +	kprobe_opcode_t *op_callback_addr, *emulate_step_addr;
> +	long b_offset;
> +	unsigned long nip;
> +
> +	kprobe_ppc_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
> +
> +	nip = can_optimize(p);
> +	if (!nip)
> +		return -EILSEQ;
> +
> +	/* Allocate instruction slot for detour buffer */
> +	buff = get_ppc_optinsn_slot();
> +	if (!buff)
> +		return -ENOMEM;
> +
> +	/*
> +	 * OPTPROBE uses 'b' instruction to branch to optinsn.insn.
> +	 *
> +	 * The target address has to be relatively nearby, to permit use
> +	 * of branch instruction in powerpc, because the address is specified
> +	 * in an immediate field in the instruction opcode itself, ie 24 bits
> +	 * in the opcode specify the address. Therefore the address should
> +	 * be within 32MB on either side of the current instruction.
> +	 */
> +	b_offset = (unsigned long)buff - (unsigned long)p->addr;
> +	if (!is_offset_in_branch_range(b_offset)) {
> +		free_ppc_optinsn_slot(buff, 0);
> +		return -ERANGE;
> +	}
> +
> +	/* Check if the return address is also within 32MB range */
> +	b_offset = (unsigned long)(buff + TMPL_RET_IDX) -
> +			(unsigned long)nip;
> +	if (!is_offset_in_branch_range(b_offset)) {
> +		free_ppc_optinsn_slot(buff, 0);
> +		return -ERANGE;
> +	}
> +
> +	/* Setup template */
> +	memcpy(buff, optprobe_template_entry,
> +			TMPL_END_IDX * sizeof(kprobe_opcode_t));
> +
> +	/*
> +	 * Fixup the template with instructions to:
> +	 * 1. load the address of the actual probepoint
> +	 */
> +	patch_imm64_load_insns((unsigned long)op, buff + TMPL_OP_IDX);
> +
> +	/*
> +	 * 2. branch to optimized_callback() and emulate_step()
> +	 */
> +	kprobe_lookup_name("optimized_callback", op_callback_addr);
> +	kprobe_lookup_name("emulate_step", emulate_step_addr);
> +	if (!op_callback_addr || !emulate_step_addr) {
> +		free_ppc_optinsn_slot(buff, 0);
> +		WARN(1, "kprobe_lookup_name() failed\n");
> +		return -ENOENT;
> +	}
> +
> +	branch_op_callback = create_branch((unsigned int *)buff + TMPL_CALL_HDLR_IDX,
> +				(unsigned long)op_callback_addr,
> +				BRANCH_SET_LINK);
> +
> +	branch_emulate_step = create_branch((unsigned int *)buff + TMPL_EMULATE_IDX,
> +				(unsigned long)emulate_step_addr,
> +				BRANCH_SET_LINK);
> +
> +	if (!branch_op_callback || !branch_emulate_step) {
> +		free_ppc_optinsn_slot(buff, 0);
> +		return -ERANGE;

Since there are 4 "free_ppc_optinsn_slot(buff, 0) and return error" patterns,
I recommend consolidating them by using a "goto error".
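Something like this pattern, perhaps (just a sketch, untested; the helper name
is made up to keep the example short):

/* Collapse the four "free the slot and return" paths into one exit label */
static int prepare_with_single_error_path(struct optimized_kprobe *op,
					  struct kprobe *p)
{
	kprobe_opcode_t *buff;
	int err = -ERANGE;

	buff = get_ppc_optinsn_slot();
	if (!buff)
		return -ENOMEM;

	if (!is_offset_in_branch_range((long)buff - (long)p->addr))
		goto error;

	/* ... remaining fixups, each doing "goto error" on failure ... */

	op->optinsn.insn = buff;
	return 0;

error:
	free_ppc_optinsn_slot(buff, 0);
	return err;
}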

> +	}
> +
> +	buff[TMPL_CALL_HDLR_IDX] = branch_op_callback;
> +	buff[TMPL_EMULATE_IDX] = branch_emulate_step;
> +
> +	/*
> +	 * 3. load instruction to be emulated into relevant register, and
> +	 */
> +	patch_imm32_load_insns(*p->ainsn.insn, buff + TMPL_INSN_IDX);
> +
> +	/*
> +	 * 4. branch back from trampoline
> +	 */
> +	buff[TMPL_RET_IDX] = create_branch((unsigned int *)buff + TMPL_RET_IDX,
> +				(unsigned long)nip, 0);
> +
> +	flush_icache_range((unsigned long)buff,
> +			   (unsigned long)(&buff[TMPL_END_IDX]));
> +
> +	op->optinsn.insn = buff;
> +
> +	return 0;
> +}
> +
> +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
> +{
> +	return optinsn->insn != NULL;
> +}
> +
> +/*
> + * On powerpc, Optprobes always replaces one instruction (4 bytes
> + * aligned and 4 bytes long). It is impossible to encounter another
> + * kprobe in this address range. So always return 0.
> + */
> +int arch_check_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	return 0;
> +}
> +
> +void arch_optimize_kprobes(struct list_head *oplist)
> +{
> +	struct optimized_kprobe *op;
> +	struct optimized_kprobe *tmp;
> +
> +	list_for_each_entry_safe(op, tmp, oplist, list) {
> +		/*
> +		 * Backup instructions which will be replaced
> +		 * by jump address
> +		 */
> +		memcpy(op->optinsn.copied_insn, op->kp.addr,
> +					       RELATIVEJUMP_SIZE);
> +		patch_instruction(op->kp.addr,
> +			create_branch((unsigned int *)op->kp.addr,
> +				      (unsigned long)op->optinsn.insn, 0));

Can you be sure that create_branch() doesn't fail here?
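E.g. inside the list_for_each_entry_safe() loop, something like this (just a
sketch):

		unsigned int instr;

		instr = create_branch((unsigned int *)op->kp.addr,
				      (unsigned long)op->optinsn.insn, 0);
		if (!instr)
			continue;	/* out of range: leave it unoptimized */

		memcpy(op->optinsn.copied_insn, op->kp.addr, RELATIVEJUMP_SIZE);
		patch_instruction(op->kp.addr, instr);
		list_del_init(&op->list);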

> +		list_del_init(&op->list);
> +	}
> +}
> +
> +void arch_unoptimize_kprobe(struct optimized_kprobe *op)
> +{
> +	arch_arm_kprobe(&op->kp);
> +}
> +
> +void arch_unoptimize_kprobes(struct list_head *oplist,
> +			     struct list_head *done_list)
> +{
> +	struct optimized_kprobe *op;
> +	struct optimized_kprobe *tmp;
> +
> +	list_for_each_entry_safe(op, tmp, oplist, list) {
> +		arch_unoptimize_kprobe(op);
> +		list_move(&op->list, done_list);
> +	}
> +}
> +
> +int arch_within_optimized_kprobe(struct optimized_kprobe *op,
> +				 unsigned long addr)
> +{
> +	return ((unsigned long)op->kp.addr <= addr &&
> +		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
> +}
> diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
> new file mode 100644
> index 0000000..c86976b
> --- /dev/null
> +++ b/arch/powerpc/kernel/optprobes_head.S
> @@ -0,0 +1,135 @@
> +/*
> + * Code to prepare detour buffer for optprobes in Kernel.
> + *
> + * Copyright 2016, Anju T, IBM Corp.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version
> + * 2 of the License, or (at your option) any later version.
> + */
> +
> +#include <asm/ppc_asm.h>
> +#include <asm/ptrace.h>
> +#include <asm/asm-offsets.h>
> +
> +#define	OPT_SLOT_SIZE	65536
> +
> +	.balign	4
> +
> +	/*
> +	 * Reserve an area to allocate slots for detour buffer.
> +	 * This is part of .text section (rather than vmalloc area)
> +	 * as this needs to be within 32MB of the probed address.
> +	 */
> +	.global optinsn_slot
> +optinsn_slot:
> +	.space	OPT_SLOT_SIZE
> +
> +	/*
> +	 * Optprobe template:
> +	 * This template gets copied into one of the slots in optinsn_slot
> +	 * and gets fixed up with real optprobe structures et al.
> +	 */
> +	.global optprobe_template_entry
> +optprobe_template_entry:
> +	/* Create an in-memory pt_regs */
> +	stdu	r1,-INT_FRAME_SIZE(r1)
> +	SAVE_GPR(0,r1)
> +	/* Save the previous SP into stack */
> +	addi	r0,r1,INT_FRAME_SIZE
> +	std	r0,GPR1(r1)
> +	SAVE_10GPRS(2,r1)
> +	SAVE_10GPRS(12,r1)
> +	SAVE_10GPRS(22,r1)
> +	/* Save SPRS */
> +	mfmsr	r5
> +	std	r5,_MSR(r1)
> +	li	r5,0x700
> +	std	r5,_TRAP(r1)
> +	li	r5,0
> +	std	r5,ORIG_GPR3(r1)
> +	std	r5,RESULT(r1)
> +	mfctr	r5
> +	std	r5,_CTR(r1)
> +	mflr	r5
> +	std	r5,_LINK(r1)
> +	mfspr	r5,SPRN_XER
> +	std	r5,_XER(r1)
> +	mfcr	r5
> +	std	r5,_CCR(r1)
> +	lbz     r5,PACASOFTIRQEN(r13)
> +	std     r5,SOFTE(r1)
> +	mfdar	r5
> +	std	r5,_DAR(r1)
> +	mfdsisr	r5
> +	std	r5,_DSISR(r1)
> +
> +	.global optprobe_template_op_address
> +optprobe_template_op_address:
> +	/*
> +	 * Parameters to optimized_callback():
> +	 * 1. optimized_kprobe structure in r3
> +	 */
> +	nop
> +	nop
> +	nop
> +	nop
> +	nop
> +	/* 2. pt_regs pointer in r4 */
> +	addi	r4,r1,STACK_FRAME_OVERHEAD
> +
> +	.global optprobe_template_call_handler
> +optprobe_template_call_handler:
> +	/* Branch to optimized_callback() */
> +	nop
> +
> +	/*
> +	 * Parameters for instruction emulation:
> +	 * 1. Pass SP in register r3.
> +	 */
> +	addi	r3,r1,STACK_FRAME_OVERHEAD
> +
> +	.global optprobe_template_insn
> +optprobe_template_insn:
> +	/* 2, Pass instruction to be emulated in r4 */
> +	nop
> +	nop
> +
> +	.global optprobe_template_call_emulate
> +optprobe_template_call_emulate:
> +	/* Branch to emulate_step()  */
> +	nop
> +
> +	/*
> +	 * All done.
> +	 * Now, restore the registers...
> +	 */
> +	ld	r5,_MSR(r1)
> +	mtmsr	r5
> +	ld	r5,_CTR(r1)
> +	mtctr	r5
> +	ld	r5,_LINK(r1)
> +	mtlr	r5
> +	ld	r5,_XER(r1)
> +	mtxer	r5
> +	ld	r5,_CCR(r1)
> +	mtcr	r5
> +	ld	r5,_DAR(r1)
> +	mtdar	r5
> +	ld	r5,_DSISR(r1)
> +	mtdsisr	r5
> +	REST_GPR(0,r1)
> +	REST_10GPRS(2,r1)
> +	REST_10GPRS(12,r1)
> +	REST_10GPRS(22,r1)
> +	/* Restore the previous SP */
> +	addi	r1,r1,INT_FRAME_SIZE
> +
> +	.global optprobe_template_ret
> +optprobe_template_ret:
> +	/* ... and jump back from trampoline */
> +	nop
> +
> +	.global optprobe_template_end
> +optprobe_template_end:
> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
> index 3362299..0c6c2b0 100644
> --- a/arch/powerpc/lib/sstep.c
> +++ b/arch/powerpc/lib/sstep.c
> @@ -618,6 +618,27 @@ static int __kprobes trap_compare(long v1, long v2)
>  }
>  
>  /*
> + * Helper to check if a given instruction is a conditional branch
> + * Derived from the conditional checks in analyse_instr()
> + */
> +bool __kprobes is_conditional_branch(unsigned int instr)
> +{
> +	unsigned int opcode = instr >> 26;
> +
> +	if (opcode == 16)	/* bc, bca, bcl, bcla */
> +		return true;
> +	if (opcode == 19) {
> +		switch ((instr >> 1) & 0x3ff) {
> +		case 16:	/* bclr, bclrl */
> +		case 528:	/* bcctr, bcctrl */
> +		case 560:	/* bctar, bctarl */
> +			return true;
> +		}
> +	}
> +	return false;
> +}
> +
> +/*
>   * Elements of 32-bit rotate and mask instructions.
>   */
>  #define MASK32(mb, me)	((0xffffffffUL >> (mb)) + \
> @@ -2018,3 +2039,4 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>  	regs->nip = truncate_if_32bit(regs->msr, regs->nip + 4);
>  	return 1;
>  }
> +

This seems to be an unneeded linefeed :)

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V2 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline
  2016-12-14 16:18 ` [PATCH V2 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
@ 2016-12-16 14:09   ` Masami Hiramatsu
  0 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu @ 2016-12-16 14:09 UTC (permalink / raw)
  To: Anju T Sudhakar
  Cc: linux-kernel, linuxppc-dev, ananth, naveen.n.rao, paulus, srikar,
	benh, mpe, mahesh, mhiramat

On Wed, 14 Dec 2016 21:48:28 +0530
Anju T Sudhakar <anju@linux.vnet.ibm.com> wrote:

> Kprobe placed on the kretprobe_trampoline during boot time can be
> optimized, since the instruction at the probe point is a 'nop'.
> 

What a simple & clever way! ;)

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you!

> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> ---
>  arch/powerpc/kernel/kprobes.c   | 8 ++++++++
>  arch/powerpc/kernel/optprobes.c | 7 +++----
>  2 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index e785cc9..5b0fd07 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -282,6 +282,7 @@ asm(".global kretprobe_trampoline\n"
>  	".type kretprobe_trampoline, @function\n"
>  	"kretprobe_trampoline:\n"
>  	"nop\n"
> +	"blr\n"
>  	".size kretprobe_trampoline, .-kretprobe_trampoline\n");
>  
>  /*
> @@ -334,6 +335,13 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
>  
>  	kretprobe_assert(ri, orig_ret_address, trampoline_address);
>  	regs->nip = orig_ret_address;
> +	/*
> +	 * Make LR point to the orig_ret_address.
> +	 * When the 'nop' inside the kretprobe_trampoline
> +	 * is optimized, we can do a 'blr' after executing the
> +	 * detour buffer code.
> +	 */
> +	regs->link = orig_ret_address;
>  
>  	reset_current_kprobe();
>  	kretprobe_hash_unlock(current, &flags);
> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
> index ecba221..5e4c254 100644
> --- a/arch/powerpc/kernel/optprobes.c
> +++ b/arch/powerpc/kernel/optprobes.c
> @@ -72,12 +72,11 @@ static unsigned long can_optimize(struct kprobe *p)
>  
>  	/*
>  	 * kprobe placed for kretprobe during boot time
> -	 * is not optimizing now.
> -	 *
> -	 * TODO: Optimize kprobe in kretprobe_trampoline
> +	 * has a 'nop' instruction, which can be emulated.
> +	 * So further checks can be skipped.
>  	 */
>  	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
> -		return 0;
> +		return (unsigned long)p->addr + sizeof(kprobe_opcode_t);
>  
>  	/*
>  	 * We only support optimizing kernel addresses, but not
> -- 
> 2.7.4
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V2 0/4] OPTPROBES for powerpc
  2016-12-14 16:18 [PATCH V2 0/4] OPTPROBES for powerpc Anju T Sudhakar
                   ` (3 preceding siblings ...)
  2016-12-14 16:18 ` [PATCH V2 2/4] powerpc: add helper to check if offset is within rel branch range Anju T Sudhakar
@ 2016-12-16 14:46 ` Balbir Singh
  2016-12-16 17:19   ` Naveen N. Rao
  4 siblings, 1 reply; 12+ messages in thread
From: Balbir Singh @ 2016-12-16 14:46 UTC (permalink / raw)
  To: Anju T Sudhakar, linux-kernel, linuxppc-dev
  Cc: ananth, mahesh, paulus, mhiramat, naveen.n.rao, srikar



On 15/12/16 03:18, Anju T Sudhakar wrote:
> This is the V2 patchset of the kprobes jump optimization
> (a.k.a OPTPROBES)for powerpc. Kprobe being an inevitable tool
> for kernel developers, enhancing the performance of kprobe has
> got much importance.
> 
> Currently kprobes inserts a trap instruction to probe a running kernel.
> Jump optimization allows kprobes to replace the trap with a branch,
> reducing the probe overhead drastically.
> 
> In this series, conditional branch instructions are not considered for
> optimization as they have to be assessed carefully in SMP systems.
> 
> The kprobe placed on the kretprobe_trampoline during boot time, is also
> optimized in this series. Patch 4/4 furnishes this.
> 
> The first two patches can go independently of the series. The helper 
> functions in these patches are invoked in patch 3/4.
> 
> Performance:
> ============
> An optimized kprobe in powerpc is 1.05 to 4.7 times faster than a kprobe.
>  
> Example:
>  
> Placed a probe at an offset 0x50 in _do_fork().
> *Time Diff here is, difference in time before hitting the probe and
> after the probed instruction. mftb() is employed in kernel/fork.c for
> this purpose.
>  
> # echo 0 > /proc/sys/debug/kprobes-optimization
> Kprobes globally unoptimized
>  [  233.607120] Time Diff = 0x1f0
>  [  233.608273] Time Diff = 0x1ee
>  [  233.609228] Time Diff = 0x203
>  [  233.610400] Time Diff = 0x1ec
>  [  233.611335] Time Diff = 0x200
>  [  233.612552] Time Diff = 0x1f0
>  [  233.613386] Time Diff = 0x1ee
>  [  233.614547] Time Diff = 0x212
>  [  233.615570] Time Diff = 0x206
>  [  233.616819] Time Diff = 0x1f3
>  [  233.617773] Time Diff = 0x1ec
>  [  233.618944] Time Diff = 0x1fb
>  [  233.619879] Time Diff = 0x1f0
>  [  233.621066] Time Diff = 0x1f9
>  [  233.621999] Time Diff = 0x283
>  [  233.623281] Time Diff = 0x24d
>  [  233.624172] Time Diff = 0x1ea
>  [  233.625381] Time Diff = 0x1f0
>  [  233.626358] Time Diff = 0x200
>  [  233.627572] Time Diff = 0x1ed
>  
> # echo 1 > /proc/sys/debug/kprobes-optimization
> Kprobes globally optimized
>  [   70.797075] Time Diff = 0x103
>  [   70.799102] Time Diff = 0x181
>  [   70.801861] Time Diff = 0x15e
>  [   70.803466] Time Diff = 0xf0
>  [   70.804348] Time Diff = 0xd0
>  [   70.805653] Time Diff = 0xad
>  [   70.806477] Time Diff = 0xe0
>  [   70.807725] Time Diff = 0xbe
>  [   70.808541] Time Diff = 0xc3
>  [   70.810191] Time Diff = 0xc7
>  [   70.811007] Time Diff = 0xc0
>  [   70.812629] Time Diff = 0xc0
>  [   70.813640] Time Diff = 0xda
>  [   70.814915] Time Diff = 0xbb
>  [   70.815726] Time Diff = 0xc4
>  [   70.816955] Time Diff = 0xc0
>  [   70.817778] Time Diff = 0xcd
>  [   70.818999] Time Diff = 0xcd
>  [   70.820099] Time Diff = 0xcb
>  [   70.821333] Time Diff = 0xf0
> 
> Implementation:
> ===================
>  
> The trap instruction is replaced by a branch to a detour buffer. To address
> the limitation of branch instruction in power architecture, detour buffer
> slot is allocated from a reserved area . This will ensure that the branch
> is within ± 32 MB range. The current kprobes insn caches allocate memory 
> area for insn slots with module_alloc(). This will always be beyond 
> ± 32MB range.
>  

The paragraph is a little confusing. We need the detour buffer to be within
+-32 MB, but then you say we always get memory from module_alloc() beyond
32MB.

> The detour buffer contains a call to optimized_callback() which in turn
> call the pre_handler(). Once the pre-handler is run, the original
> instruction is emulated from the detour buffer itself. Also the detour
> buffer is equipped with a branch back to the normal work flow after the
> probed instruction is emulated.

Does the branch itself use registers that need to be saved? I presume
we are going to rely on the +-32MB range; what are the guarantees of
success of such a mechanism?

Balbir Singh.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V2 2/4] powerpc: add helper to check if offset is within rel branch range
  2016-12-16 11:52   ` Masami Hiramatsu
@ 2016-12-16 16:31     ` Anju T Sudhakar
  0 siblings, 0 replies; 12+ messages in thread
From: Anju T Sudhakar @ 2016-12-16 16:31 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: linux-kernel, linuxppc-dev, ananth, naveen.n.rao, paulus, srikar,
	benh, mpe, mahesh

Hi Masami,


Thank you for reviewing the patch set.


On Friday 16 December 2016 05:22 PM, Masami Hiramatsu wrote:
> On Wed, 14 Dec 2016 21:48:30 +0530
> Anju T Sudhakar <anju@linux.vnet.ibm.com> wrote:
>
>> From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
>>
> The coding looks OK to me. Please add a description for this patch
> here, e.g. what is done by this patch, what kind of branch
> instruction will be covered, and why these checks are needed, etc.


Sure. I will give a description for this patch.


Thanks and Regards,

-Anju

>
> Thank you,
>
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/include/asm/code-patching.h |  1 +
>>   arch/powerpc/lib/code-patching.c         | 24 +++++++++++++++++++++++-
>>   2 files changed, 24 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
>> index 2015b07..75ee4f4 100644
>> --- a/arch/powerpc/include/asm/code-patching.h
>> +++ b/arch/powerpc/include/asm/code-patching.h
>> @@ -22,6 +22,7 @@
>>   #define BRANCH_SET_LINK	0x1
>>   #define BRANCH_ABSOLUTE	0x2
>>   
>> +bool is_offset_in_branch_range(long offset);
>>   unsigned int create_branch(const unsigned int *addr,
>>   			   unsigned long target, int flags);
>>   unsigned int create_cond_branch(const unsigned int *addr,
>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>> index d5edbeb..f643451 100644
>> --- a/arch/powerpc/lib/code-patching.c
>> +++ b/arch/powerpc/lib/code-patching.c
>> @@ -32,6 +32,28 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
>>   	return patch_instruction(addr, create_branch(addr, target, flags));
>>   }
>>   
>> +bool is_offset_in_branch_range(long offset)
>> +{
>> +	/*
>> +	 * Powerpc branch instruction is :
>> +	 *
>> +	 *  0         6                 30   31
>> +	 *  +---------+----------------+---+---+
>> +	 *  | opcode  |     LI         |AA |LK |
>> +	 *  +---------+----------------+---+---+
>> +	 *  Where AA = 0 and LK = 0
>> +	 *
>> +	 * LI is a signed 24 bits integer. The real branch offset is computed
>> +	 * by: imm32 = SignExtend(LI:'0b00', 32);
>> +	 *
>> +	 * So the maximum forward branch should be:
>> +	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
>> +	 * The maximum backward branch should be:
>> +	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
>> +	 */
>> +	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
>> +}
>> +
>>   unsigned int create_branch(const unsigned int *addr,
>>   			   unsigned long target, int flags)
>>   {
>> @@ -43,7 +65,7 @@ unsigned int create_branch(const unsigned int *addr,
>>   		offset = offset - (unsigned long)addr;
>>   
>>   	/* Check we can represent the target in the instruction format */
>> -	if (offset < -0x2000000 || offset > 0x1fffffc || offset & 0x3)
>> +	if (!is_offset_in_branch_range(offset))
>>   		return 0;
>>   
>>   	/* Mask out the flags and target, so they don't step on each other. */
>> -- 
>> 2.7.4
>>
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V2 0/4] OPTPROBES for powerpc
  2016-12-16 14:46 ` [PATCH V2 0/4] OPTPROBES for powerpc Balbir Singh
@ 2016-12-16 17:19   ` Naveen N. Rao
  0 siblings, 0 replies; 12+ messages in thread
From: Naveen N. Rao @ 2016-12-16 17:19 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Anju T Sudhakar, linux-kernel, linuxppc-dev, srikar, mahesh,
	paulus, mhiramat, ananth

On 2016/12/17 01:46AM, Balbir Singh wrote:
> 
> 
> On 15/12/16 03:18, Anju T Sudhakar wrote:
> > This is the V2 patchset of the kprobes jump optimization
> > (a.k.a OPTPROBES)for powerpc. Kprobe being an inevitable tool
> > for kernel developers, enhancing the performance of kprobe has
> > got much importance.
> > 
> > Currently kprobes inserts a trap instruction to probe a running kernel.
> > Jump optimization allows kprobes to replace the trap with a branch,
> > reducing the probe overhead drastically.
> > 
> > In this series, conditional branch instructions are not considered for
> > optimization as they have to be assessed carefully in SMP systems.
> > 
> > The kprobe placed on the kretprobe_trampoline during boot time, is also
> > optimized in this series. Patch 4/4 furnishes this.
> > 
> > The first two patches can go independently of the series. The helper 
> > functions in these patches are invoked in patch 3/4.
> > 
> > Performance:
> > ============
> > An optimized kprobe in powerpc is 1.05 to 4.7 times faster than a kprobe.
> >  
> > Example:
> >  
> > Placed a probe at an offset 0x50 in _do_fork().
> > *Time Diff here is, difference in time before hitting the probe and
> > after the probed instruction. mftb() is employed in kernel/fork.c for
> > this purpose.
> >  
> > # echo 0 > /proc/sys/debug/kprobes-optimization
> > Kprobes globally unoptimized
> >  [  233.607120] Time Diff = 0x1f0
> >  [  233.608273] Time Diff = 0x1ee
> >  [  233.609228] Time Diff = 0x203
> >  [  233.610400] Time Diff = 0x1ec
> >  [  233.611335] Time Diff = 0x200
> >  [  233.612552] Time Diff = 0x1f0
> >  [  233.613386] Time Diff = 0x1ee
> >  [  233.614547] Time Diff = 0x212
> >  [  233.615570] Time Diff = 0x206
> >  [  233.616819] Time Diff = 0x1f3
> >  [  233.617773] Time Diff = 0x1ec
> >  [  233.618944] Time Diff = 0x1fb
> >  [  233.619879] Time Diff = 0x1f0
> >  [  233.621066] Time Diff = 0x1f9
> >  [  233.621999] Time Diff = 0x283
> >  [  233.623281] Time Diff = 0x24d
> >  [  233.624172] Time Diff = 0x1ea
> >  [  233.625381] Time Diff = 0x1f0
> >  [  233.626358] Time Diff = 0x200
> >  [  233.627572] Time Diff = 0x1ed
> >  
> > # echo 1 > /proc/sys/debug/kprobes-optimization
> > Kprobes globally optimized
> >  [   70.797075] Time Diff = 0x103
> >  [   70.799102] Time Diff = 0x181
> >  [   70.801861] Time Diff = 0x15e
> >  [   70.803466] Time Diff = 0xf0
> >  [   70.804348] Time Diff = 0xd0
> >  [   70.805653] Time Diff = 0xad
> >  [   70.806477] Time Diff = 0xe0
> >  [   70.807725] Time Diff = 0xbe
> >  [   70.808541] Time Diff = 0xc3
> >  [   70.810191] Time Diff = 0xc7
> >  [   70.811007] Time Diff = 0xc0
> >  [   70.812629] Time Diff = 0xc0
> >  [   70.813640] Time Diff = 0xda
> >  [   70.814915] Time Diff = 0xbb
> >  [   70.815726] Time Diff = 0xc4
> >  [   70.816955] Time Diff = 0xc0
> >  [   70.817778] Time Diff = 0xcd
> >  [   70.818999] Time Diff = 0xcd
> >  [   70.820099] Time Diff = 0xcb
> >  [   70.821333] Time Diff = 0xf0
> > 
> > Implementation:
> > ===================
> >  
> > The trap instruction is replaced by a branch to a detour buffer. To address
> > the limitation of branch instruction in power architecture, detour buffer
> > slot is allocated from a reserved area . This will ensure that the branch
> > is within ± 32 MB range. The current kprobes insn caches allocate memory 
> > area for insn slots with module_alloc(). This will always be beyond 
> > ± 32MB range.
> >  
> 
> The paragraph is a little confusing. We need the detour buffer to be within
> +-32 MB, but then you say we always get memory from module_alloc() beyond
> 32MB.

Yes, I think it can be described better. What Anju is describing is that 
the existing generic approach for the kprobes insn cache uses module_alloc(),
which is not suitable for us due to the 32MB range limit with relative 
branches on powerpc.

Instead, we reserve a 64k block within .text and allocate the detour 
buffer from that area. This puts the detour buffer in range for most of 
the symbols and should be a good start.

> 
> > The detour buffer contains a call to optimized_callback() which in turn
> > call the pre_handler(). Once the pre-handler is run, the original
> > instruction is emulated from the detour buffer itself. Also the detour
> > buffer is equipped with a branch back to the normal work flow after the
> > probed instruction is emulated.
> 
> Does the branch itself use registers that need to be saved? I presume

No, we use immediate values to encode the relative address.

> we are going to rely on the +-32MB range; what are the guarantees of
> success of such a mechanism?

We explicitly ensure that the return branch is within range as well 
during registration. In fact, this is one of the reasons why we can't 
optimize conditional branches - we can't know in advance where we need 
to jump back.
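
For illustration, here is a minimal sketch of how such a branch is formed
(the helper name encode_rel_branch is made up for this example; the patch
itself uses create_branch() from arch/powerpc/lib/code-patching.c). The
offset lives entirely in the instruction's immediate (LI) field, which is
also where the +/- 32MB limit comes from:

	/*
	 * Sketch only: form a 'b' instruction (opcode 18, AA = 0, LK = 0)
	 * whose LI field holds the offset to the target, so the jump
	 * itself clobbers no general-purpose register.
	 */
	static unsigned int encode_rel_branch(unsigned long from, unsigned long to)
	{
		long offset = to - from;

		/* Must be word-aligned and within +/- 32MB of the branch site */
		if (offset < -0x2000000 || offset > 0x1fffffc || (offset & 0x3))
			return 0;

		return 0x48000000 | (offset & 0x03fffffc);
	}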

> 
> Balbir Singh.
> 

Thanks,
- Naveen

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V2 3/4] arch/powerpc: Implement Optprobes
  2016-12-16 14:02   ` Masami Hiramatsu
@ 2016-12-16 17:54     ` Anju T Sudhakar
  0 siblings, 0 replies; 12+ messages in thread
From: Anju T Sudhakar @ 2016-12-16 17:54 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: linux-kernel, linuxppc-dev, ananth, naveen.n.rao, paulus, srikar,
	benh, mpe, mahesh

Hi Masami,



On Friday 16 December 2016 07:32 PM, Masami Hiramatsu wrote:
> On Wed, 14 Dec 2016 21:48:27 +0530
> Anju T Sudhakar <anju@linux.vnet.ibm.com> wrote:
>
>> The detour buffer contains instructions to create an in-memory pt_regs.
>> After the execution of the pre-handler, a call is made for instruction emulation.
>> The NIP is determined in advance through dummy instruction emulation, and a branch
>> instruction to that NIP is created at the end of the trampoline.
>>
>> The instruction slot for the detour buffer is allocated from a reserved area.
>> For the time being, 64KB is reserved in memory for this purpose.
>>
>> Instructions which can be emulated using analyse_instr() are candidates
>> for optimization. Before optimization, ensure that the address range
>> between the allocated detour buffer and the instruction being probed
>> is within ± 32MB.
> Thank you for updating!
> I think this has no critical issue, but I have some comments on it.
> (just cleanup and hardening)
> Please see below.
>
>> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> ---
>>   .../features/debug/optprobes/arch-support.txt      |   2 +-
>>   arch/powerpc/Kconfig                               |   1 +
>>   arch/powerpc/include/asm/kprobes.h                 |  23 +-
>>   arch/powerpc/include/asm/sstep.h                   |   1 +
>>   arch/powerpc/kernel/Makefile                       |   1 +
>>   arch/powerpc/kernel/optprobes.c                    | 333 +++++++++++++++++++++
>>   arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
>>   arch/powerpc/lib/sstep.c                           |  22 ++
>>   8 files changed, 516 insertions(+), 2 deletions(-)
>>   create mode 100644 arch/powerpc/kernel/optprobes.c
>>   create mode 100644 arch/powerpc/kernel/optprobes_head.S
>>
>> diff --git a/Documentation/features/debug/optprobes/arch-support.txt b/Documentation/features/debug/optprobes/arch-support.txt
>> index b8999d8..45bc99d 100644
>> --- a/Documentation/features/debug/optprobes/arch-support.txt
>> +++ b/Documentation/features/debug/optprobes/arch-support.txt
>> @@ -27,7 +27,7 @@
>>       |       nios2: | TODO |
>>       |    openrisc: | TODO |
>>       |      parisc: | TODO |
>> -    |     powerpc: | TODO |
>> +    |     powerpc: |  ok  |
>>       |        s390: | TODO |
>>       |       score: | TODO |
>>       |          sh: | TODO |
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index c7f120a..d563f0a 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -98,6 +98,7 @@ config PPC
>>   	select HAVE_IOREMAP_PROT
>>   	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
>>   	select HAVE_KPROBES
>> +	select HAVE_OPTPROBES if PPC64
>>   	select HAVE_ARCH_KGDB
>>   	select HAVE_KRETPROBES
>>   	select HAVE_ARCH_TRACEHOOK
>> diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
>> index 2c9759bd..739ddc5 100644
>> --- a/arch/powerpc/include/asm/kprobes.h
>> +++ b/arch/powerpc/include/asm/kprobes.h
>> @@ -38,7 +38,22 @@ struct pt_regs;
>>   struct kprobe;
>>   
>>   typedef ppc_opcode_t kprobe_opcode_t;
>> -#define MAX_INSN_SIZE 1
>> +
>> +extern kprobe_opcode_t optinsn_slot;
>> +
>> +/* Optinsn template address */
>> +extern kprobe_opcode_t optprobe_template_entry[];
>> +extern kprobe_opcode_t optprobe_template_op_address[];
>> +extern kprobe_opcode_t optprobe_template_call_handler[];
>> +extern kprobe_opcode_t optprobe_template_insn[];
>> +extern kprobe_opcode_t optprobe_template_call_emulate[];
>> +extern kprobe_opcode_t optprobe_template_ret[];
>> +extern kprobe_opcode_t optprobe_template_end[];
>> +
>> +#define MAX_INSN_SIZE		1
>> +#define MAX_OPTIMIZED_LENGTH	4
>> +#define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
>> +#define RELATIVEJUMP_SIZE	4
> These size/length macros seem a bit odd. I guess MAX_INSN_SIZE
> is in units of sizeof(kprobe_opcode_t), but the others are in bytes.
> Could you fix that? For example, define them in terms of
> sizeof(kprobe_opcode_t), and add comments on them, etc.

Sure. I will look into this and define them in a consistent way.
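
For illustration, one consistent layout could look roughly like this (a
sketch of the idea only, not the actual respin):

	/*
	 * Sketch: derive the byte-sized constants from sizeof(kprobe_opcode_t)
	 * and spell out the units of the ones counted in instruction words.
	 */
	#define MAX_INSN_SIZE		1			/* in kprobe_opcode_t words */
	#define MAX_OPTIMIZED_LENGTH	sizeof(kprobe_opcode_t)	/* 4 bytes */
	#define RELATIVEJUMP_SIZE	sizeof(kprobe_opcode_t)	/* 4 bytes */
	#define MAX_OPTINSN_SIZE	\
		(optprobe_template_end - optprobe_template_entry)	/* in words */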

>>   
>>   #ifdef PPC64_ELF_ABI_v2
>>   /* PPC64 ABIv2 needs local entry point */
>> @@ -124,6 +139,12 @@ struct kprobe_ctlblk {
>>   	struct prev_kprobe prev_kprobe;
>>   };
>>   
>> +struct arch_optimized_insn {
>> +	kprobe_opcode_t copied_insn[1];
>> +	/* detour buffer */
>> +	kprobe_opcode_t *insn;
>> +};
>> +
>>   extern int kprobe_exceptions_notify(struct notifier_block *self,
>>   					unsigned long val, void *data);
>>   extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
>> diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
>> index d3a42cc..f7ad425 100644
>> --- a/arch/powerpc/include/asm/sstep.h
>> +++ b/arch/powerpc/include/asm/sstep.h
>> @@ -87,3 +87,4 @@ struct instruction_op {
>>   
>>   extern int analyse_instr(struct instruction_op *op, struct pt_regs *regs,
>>   			 unsigned int instr);
>> +extern bool is_conditional_branch(unsigned int instr);
>> diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
>> index 1925341..54f0f47 100644
>> --- a/arch/powerpc/kernel/Makefile
>> +++ b/arch/powerpc/kernel/Makefile
>> @@ -98,6 +98,7 @@ obj-$(CONFIG_KGDB)		+= kgdb.o
>>   obj-$(CONFIG_BOOTX_TEXT)	+= btext.o
>>   obj-$(CONFIG_SMP)		+= smp.o
>>   obj-$(CONFIG_KPROBES)		+= kprobes.o
>> +obj-$(CONFIG_OPTPROBES)		+= optprobes.o optprobes_head.o
>>   obj-$(CONFIG_UPROBES)		+= uprobes.o
>>   obj-$(CONFIG_PPC_UDBG_16550)	+= legacy_serial.o udbg_16550.o
>>   obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
>> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
>> new file mode 100644
>> index 0000000..ecba221
>> --- /dev/null
>> +++ b/arch/powerpc/kernel/optprobes.c
>> @@ -0,0 +1,333 @@
>> +/*
>> + * Code for Kernel probes Jump optimization.
>> + *
>> + * Copyright 2016, Anju T, IBM Corp.
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public License
>> + * as published by the Free Software Foundation; either version
>> + * 2 of the License, or (at your option) any later version.
>> + */
>> +
>> +#include <linux/kprobes.h>
>> +#include <linux/jump_label.h>
>> +#include <linux/types.h>
>> +#include <linux/slab.h>
>> +#include <linux/list.h>
>> +#include <asm/kprobes.h>
>> +#include <asm/ptrace.h>
>> +#include <asm/cacheflush.h>
>> +#include <asm/code-patching.h>
>> +#include <asm/sstep.h>
>> +#include <asm/ppc-opcode.h>
>> +
>> +#define TMPL_CALL_HDLR_IDX	\
>> +	(optprobe_template_call_handler - optprobe_template_entry)
>> +#define TMPL_EMULATE_IDX	\
>> +	(optprobe_template_call_emulate - optprobe_template_entry)
>> +#define TMPL_RET_IDX		\
>> +	(optprobe_template_ret - optprobe_template_entry)
>> +#define TMPL_OP_IDX		\
>> +	(optprobe_template_op_address - optprobe_template_entry)
>> +#define TMPL_INSN_IDX		\
>> +	(optprobe_template_insn - optprobe_template_entry)
>> +#define TMPL_END_IDX		\
>> +	(optprobe_template_end - optprobe_template_entry)
>> +
>> +DEFINE_INSN_CACHE_OPS(ppc_optinsn);
>> +
>> +static bool insn_page_in_use;
>> +
>> +static void *__ppc_alloc_insn_page(void)
>> +{
>> +	if (insn_page_in_use)
>> +		return NULL;
>> +	insn_page_in_use = true;
>> +	return &optinsn_slot;
>> +}
>> +
>> +static void __ppc_free_insn_page(void *page __maybe_unused)
>> +{
>> +	insn_page_in_use = false;
>> +}
>> +
>> +struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
>> +	.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
>> +	.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
>> +	/* insn_size initialized later */
>> +	.alloc = __ppc_alloc_insn_page,
>> +	.free = __ppc_free_insn_page,
>> +	.nr_garbage = 0,
>> +};
>> +
>> +/*
>> + * Check if we can optimize this probe. Returns NIP post-emulation if this can
>> + * be optimized and 0 otherwise.
>> + */
>> +static unsigned long can_optimize(struct kprobe *p)
>> +{
>> +	struct pt_regs regs;
>> +	struct instruction_op op;
>> +	unsigned long nip = 0;
>> +
>> +	/*
>> +	 * kprobe placed for kretprobe during boot time
>> +	 * is not optimizing now.
>> +	 *
>> +	 * TODO: Optimize kprobe in kretprobe_trampoline
>> +	 */
>> +	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
>> +		return 0;
>> +
>> +	/*
>> +	 * We only support optimizing kernel addresses, but not
>> +	 * module addresses.
>> +	 */
>> +	if (!is_kernel_addr((unsigned long)p->addr))
>> +		return 0;
>> +
>> +	regs.nip = (unsigned long)p->addr;
>> +	regs.trap = 0x0;
>> +	regs.msr = MSR_KERNEL;
>> +
>> +	/*
>> +	 * Ensure that the instruction is not a conditional branch,
>> +	 * and that can be emulated.
>> +	 */
>> +	if (!is_conditional_branch(*p->ainsn.insn) &&
>> +			analyse_instr(&op, &regs, *p->ainsn.insn))
>> +		nip = regs.nip;
>> +
>> +	return nip;
>> +}
>> +
>> +static void optimized_callback(struct optimized_kprobe *op,
>> +			       struct pt_regs *regs)
>> +{
>> +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>> +	unsigned long flags;
>> +
>> +	/* This is possible if op is under delayed unoptimizing */
>> +	if (kprobe_disabled(&op->kp))
>> +		return;
>> +
>> +	local_irq_save(flags);
>> +
>> +	if (kprobe_running()) {
>> +		kprobes_inc_nmissed_count(&op->kp);
>> +	} else {
>> +		__this_cpu_write(current_kprobe, &op->kp);
>> +		regs->nip = (unsigned long)op->kp.addr;
>> +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>> +		opt_pre_handler(&op->kp, regs);
>> +		__this_cpu_write(current_kprobe, NULL);
>> +	}
>> +	local_irq_restore(flags);
>> +}
>> +NOKPROBE_SYMBOL(optimized_callback);
>> +
>> +void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
>> +{
>> +	if (op->optinsn.insn) {
>> +		free_ppc_optinsn_slot(op->optinsn.insn, 1);
>> +		op->optinsn.insn = NULL;
>> +	}
>> +}
>> +
>> +/*
>> + * emulate_step() requires insn to be emulated as
>> + * second parameter. Load register 'r4' with the
>> + * instruction.
>> + */
>> +void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
>> +{
>> +	/* addis r4,0,(insn)@h */
>> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(4) |
>> +		  ((val >> 16) & 0xffff);
>> +
>> +	/* ori r4,r4,(insn)@l */
>> +	*addr = PPC_INST_ORI | ___PPC_RA(4) | ___PPC_RS(4) |
>> +		(val & 0xffff);
>> +}
>> +
>> +/*
>> + * Generate instructions to load provided immediate 64-bit value
>> + * to register 'r3' and patch these instructions at 'addr'.
>> + */
>> +void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
>> +{
>> +	/* lis r3,(op)@highest */
>> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(3) |
>> +		  ((val >> 48) & 0xffff);
>> +
>> +	/* ori r3,r3,(op)@higher */
>> +	*addr++ = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
>> +		  ((val >> 32) & 0xffff);
>> +
>> +	/* rldicr r3,r3,32,31 */
>> +	*addr++ = PPC_INST_RLDICR | ___PPC_RA(3) | ___PPC_RS(3) |
>> +		  __PPC_SH64(32) | __PPC_ME64(31);
>> +
>> +	/* oris r3,r3,(op)@h */
>> +	*addr++ = PPC_INST_ORIS | ___PPC_RA(3) | ___PPC_RS(3) |
>> +		  ((val >> 16) & 0xffff);
>> +
>> +	/* ori r3,r3,(op)@l */
>> +	*addr = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
>> +		(val & 0xffff);
>> +}
> I need another PPC maintainer's review for these.

ok.
>> +
>> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
>> +{
>> +	kprobe_opcode_t *buff, branch_op_callback, branch_emulate_step;
>> +	kprobe_opcode_t *op_callback_addr, *emulate_step_addr;
>> +	long b_offset;
>> +	unsigned long nip;
>> +
>> +	kprobe_ppc_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
>> +
>> +	nip = can_optimize(p);
>> +	if (!nip)
>> +		return -EILSEQ;
>> +
>> +	/* Allocate instruction slot for detour buffer */
>> +	buff = get_ppc_optinsn_slot();
>> +	if (!buff)
>> +		return -ENOMEM;
>> +
>> +	/*
>> +	 * OPTPROBE uses 'b' instruction to branch to optinsn.insn.
>> +	 *
>> +	 * The target address has to be relatively nearby, to permit use
>> +	 * of branch instruction in powerpc, because the address is specified
>> +	 * in an immediate field in the instruction opcode itself, ie 24 bits
>> +	 * in the opcode specify the address. Therefore the address should
>> +	 * be within 32MB on either side of the current instruction.
>> +	 */
>> +	b_offset = (unsigned long)buff - (unsigned long)p->addr;
>> +	if (!is_offset_in_branch_range(b_offset)) {
>> +		free_ppc_optinsn_slot(buff, 0);
>> +		return -ERANGE;
>> +	}
>> +
>> +	/* Check if the return address is also within 32MB range */
>> +	b_offset = (unsigned long)(buff + TMPL_RET_IDX) -
>> +			(unsigned long)nip;
>> +	if (!is_offset_in_branch_range(b_offset)) {
>> +		free_ppc_optinsn_slot(buff, 0);
>> +		return -ERANGE;
>> +	}
>> +
>> +	/* Setup template */
>> +	memcpy(buff, optprobe_template_entry,
>> +			TMPL_END_IDX * sizeof(kprobe_opcode_t));
>> +
>> +	/*
>> +	 * Fixup the template with instructions to:
>> +	 * 1. load the address of the actual probepoint
>> +	 */
>> +	patch_imm64_load_insns((unsigned long)op, buff + TMPL_OP_IDX);
>> +
>> +	/*
>> +	 * 2. branch to optimized_callback() and emulate_step()
>> +	 */
>> +	kprobe_lookup_name("optimized_callback", op_callback_addr);
>> +	kprobe_lookup_name("emulate_step", emulate_step_addr);
>> +	if (!op_callback_addr || !emulate_step_addr) {
>> +		free_ppc_optinsn_slot(buff, 0);
>> +		WARN(1, "kprobe_lookup_name() failed\n");
>> +		return -ENOENT;
>> +	}
>> +
>> +	branch_op_callback = create_branch((unsigned int *)buff + TMPL_CALL_HDLR_IDX,
>> +				(unsigned long)op_callback_addr,
>> +				BRANCH_SET_LINK);
>> +
>> +	branch_emulate_step = create_branch((unsigned int *)buff + TMPL_EMULATE_IDX,
>> +				(unsigned long)emulate_step_addr,
>> +				BRANCH_SET_LINK);
>> +
>> +	if (!branch_op_callback || !branch_emulate_step) {
>> +		free_ppc_optinsn_slot(buff, 0);
>> +		return -ERANGE;
> Since there are 4 free_ppc_optinsn_slot(buff, 0)-and-return-error patterns,
> I recommend consolidating them by using "goto error".

Ah, nice suggestion. Will change this. Thanks!
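
For illustration, the consolidated error path could look roughly like this
(a sketch of the control flow against the code quoted above, not the actual
respin; "rc" is a new local introduced for the example):

	int rc;		/* new local for this sketch */

	rc = -ERANGE;
	b_offset = (unsigned long)buff - (unsigned long)p->addr;
	if (!is_offset_in_branch_range(b_offset))
		goto error;

	/* Check if the return address is also within 32MB range */
	b_offset = (unsigned long)(buff + TMPL_RET_IDX) - (unsigned long)nip;
	if (!is_offset_in_branch_range(b_offset))
		goto error;

	/* ... template copy and patch_imm64_load_insns() as in the patch ... */

	rc = -ENOENT;
	kprobe_lookup_name("optimized_callback", op_callback_addr);
	kprobe_lookup_name("emulate_step", emulate_step_addr);
	if (!op_callback_addr || !emulate_step_addr) {
		WARN(1, "kprobe_lookup_name() failed\n");
		goto error;
	}

	rc = -ERANGE;
	branch_op_callback = create_branch((unsigned int *)buff + TMPL_CALL_HDLR_IDX,
				(unsigned long)op_callback_addr, BRANCH_SET_LINK);
	branch_emulate_step = create_branch((unsigned int *)buff + TMPL_EMULATE_IDX,
				(unsigned long)emulate_step_addr, BRANCH_SET_LINK);
	if (!branch_op_callback || !branch_emulate_step)
		goto error;

	/* ... remaining fixups and icache flush as in the patch ... */

	op->optinsn.insn = buff;
	return 0;

error:
	free_ppc_optinsn_slot(buff, 0);
	return rc;
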
>> +	}
>> +
>> +	buff[TMPL_CALL_HDLR_IDX] = branch_op_callback;
>> +	buff[TMPL_EMULATE_IDX] = branch_emulate_step;
>> +
>> +	/*
>> +	 * 3. load instruction to be emulated into relevant register, and
>> +	 */
>> +	patch_imm32_load_insns(*p->ainsn.insn, buff + TMPL_INSN_IDX);
>> +
>> +	/*
>> +	 * 4. branch back from trampoline
>> +	 */
>> +	buff[TMPL_RET_IDX] = create_branch((unsigned int *)buff + TMPL_RET_IDX,
>> +				(unsigned long)nip, 0);
>> +
>> +	flush_icache_range((unsigned long)buff,
>> +			   (unsigned long)(&buff[TMPL_END_IDX]));
>> +
>> +	op->optinsn.insn = buff;
>> +
>> +	return 0;
>> +}
>> +
>> +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
>> +{
>> +	return optinsn->insn != NULL;
>> +}
>> +
>> +/*
>> + * On powerpc, Optprobes always replaces one instruction (4 bytes
>> + * aligned and 4 bytes long). It is impossible to encounter another
>> + * kprobe in this address range. So always return 0.
>> + */
>> +int arch_check_optimized_kprobe(struct optimized_kprobe *op)
>> +{
>> +	return 0;
>> +}
>> +
>> +void arch_optimize_kprobes(struct list_head *oplist)
>> +{
>> +	struct optimized_kprobe *op;
>> +	struct optimized_kprobe *tmp;
>> +
>> +	list_for_each_entry_safe(op, tmp, oplist, list) {
>> +		/*
>> +		 * Backup instructions which will be replaced
>> +		 * by jump address
>> +		 */
>> +		memcpy(op->optinsn.copied_insn, op->kp.addr,
>> +					       RELATIVEJUMP_SIZE);
>> +		patch_instruction(op->kp.addr,
>> +			create_branch((unsigned int *)op->kp.addr,
>> +				      (unsigned long)op->optinsn.insn, 0));
> Can you be sure that create_branch() doesn't fail?



The b_offset check we did before optimizing the kprobe guarantees that
the branch will be within the +/- 32MB range, so create_branch() will not fail.
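
For illustration, a belt-and-braces variant of that loop (a sketch only, not
part of the posted patch) could still check create_branch()'s result, which
is 0 when the target is out of range:

	list_for_each_entry_safe(op, tmp, oplist, list) {
		unsigned int instr;

		/* Backup the instruction which will be replaced by the jump */
		memcpy(op->optinsn.copied_insn, op->kp.addr, RELATIVEJUMP_SIZE);

		instr = create_branch((unsigned int *)op->kp.addr,
				      (unsigned long)op->optinsn.insn, 0);
		if (WARN_ON(!instr))	/* 0 => detour buffer out of branch range */
			continue;

		patch_instruction(op->kp.addr, instr);
		list_del_init(&op->list);
	}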


>
>> +		list_del_init(&op->list);
>> +	}
>> +}
>> +
>> +void arch_unoptimize_kprobe(struct optimized_kprobe *op)
>> +{
>> +	arch_arm_kprobe(&op->kp);
>> +}
>> +
>> +void arch_unoptimize_kprobes(struct list_head *oplist,
>> +			     struct list_head *done_list)
>> +{
>> +	struct optimized_kprobe *op;
>> +	struct optimized_kprobe *tmp;
>> +
>> +	list_for_each_entry_safe(op, tmp, oplist, list) {
>> +		arch_unoptimize_kprobe(op);
>> +		list_move(&op->list, done_list);
>> +	}
>> +}
>> +
>> +int arch_within_optimized_kprobe(struct optimized_kprobe *op,
>> +				 unsigned long addr)
>> +{
>> +	return ((unsigned long)op->kp.addr <= addr &&
>> +		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
>> +}
>> diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
>> new file mode 100644
>> index 0000000..c86976b
>> --- /dev/null
>> +++ b/arch/powerpc/kernel/optprobes_head.S
>> @@ -0,0 +1,135 @@
>> +/*
>> + * Code to prepare detour buffer for optprobes in Kernel.
>> + *
>> + * Copyright 2016, Anju T, IBM Corp.
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public License
>> + * as published by the Free Software Foundation; either version
>> + * 2 of the License, or (at your option) any later version.
>> + */
>> +
>> +#include <asm/ppc_asm.h>
>> +#include <asm/ptrace.h>
>> +#include <asm/asm-offsets.h>
>> +
>> +#define	OPT_SLOT_SIZE	65536
>> +
>> +	.balign	4
>> +
>> +	/*
>> +	 * Reserve an area to allocate slots for detour buffer.
>> +	 * This is part of .text section (rather than vmalloc area)
>> +	 * as this needs to be within 32MB of the probed address.
>> +	 */
>> +	.global optinsn_slot
>> +optinsn_slot:
>> +	.space	OPT_SLOT_SIZE
>> +
>> +	/*
>> +	 * Optprobe template:
>> +	 * This template gets copied into one of the slots in optinsn_slot
>> +	 * and gets fixed up with real optprobe structures et al.
>> +	 */
>> +	.global optprobe_template_entry
>> +optprobe_template_entry:
>> +	/* Create an in-memory pt_regs */
>> +	stdu	r1,-INT_FRAME_SIZE(r1)
>> +	SAVE_GPR(0,r1)
>> +	/* Save the previous SP into stack */
>> +	addi	r0,r1,INT_FRAME_SIZE
>> +	std	r0,GPR1(r1)
>> +	SAVE_10GPRS(2,r1)
>> +	SAVE_10GPRS(12,r1)
>> +	SAVE_10GPRS(22,r1)
>> +	/* Save SPRS */
>> +	mfmsr	r5
>> +	std	r5,_MSR(r1)
>> +	li	r5,0x700
>> +	std	r5,_TRAP(r1)
>> +	li	r5,0
>> +	std	r5,ORIG_GPR3(r1)
>> +	std	r5,RESULT(r1)
>> +	mfctr	r5
>> +	std	r5,_CTR(r1)
>> +	mflr	r5
>> +	std	r5,_LINK(r1)
>> +	mfspr	r5,SPRN_XER
>> +	std	r5,_XER(r1)
>> +	mfcr	r5
>> +	std	r5,_CCR(r1)
>> +	lbz     r5,PACASOFTIRQEN(r13)
>> +	std     r5,SOFTE(r1)
>> +	mfdar	r5
>> +	std	r5,_DAR(r1)
>> +	mfdsisr	r5
>> +	std	r5,_DSISR(r1)
>> +
>> +	.global optprobe_template_op_address
>> +optprobe_template_op_address:
>> +	/*
>> +	 * Parameters to optimized_callback():
>> +	 * 1. optimized_kprobe structure in r3
>> +	 */
>> +	nop
>> +	nop
>> +	nop
>> +	nop
>> +	nop
>> +	/* 2. pt_regs pointer in r4 */
>> +	addi	r4,r1,STACK_FRAME_OVERHEAD
>> +
>> +	.global optprobe_template_call_handler
>> +optprobe_template_call_handler:
>> +	/* Branch to optimized_callback() */
>> +	nop
>> +
>> +	/*
>> +	 * Parameters for instruction emulation:
>> +	 * 1. Pass SP in register r3.
>> +	 */
>> +	addi	r3,r1,STACK_FRAME_OVERHEAD
>> +
>> +	.global optprobe_template_insn
>> +optprobe_template_insn:
>> +	/* 2, Pass instruction to be emulated in r4 */
>> +	nop
>> +	nop
>> +
>> +	.global optprobe_template_call_emulate
>> +optprobe_template_call_emulate:
>> +	/* Branch to emulate_step()  */
>> +	nop
>> +
>> +	/*
>> +	 * All done.
>> +	 * Now, restore the registers...
>> +	 */
>> +	ld	r5,_MSR(r1)
>> +	mtmsr	r5
>> +	ld	r5,_CTR(r1)
>> +	mtctr	r5
>> +	ld	r5,_LINK(r1)
>> +	mtlr	r5
>> +	ld	r5,_XER(r1)
>> +	mtxer	r5
>> +	ld	r5,_CCR(r1)
>> +	mtcr	r5
>> +	ld	r5,_DAR(r1)
>> +	mtdar	r5
>> +	ld	r5,_DSISR(r1)
>> +	mtdsisr	r5
>> +	REST_GPR(0,r1)
>> +	REST_10GPRS(2,r1)
>> +	REST_10GPRS(12,r1)
>> +	REST_10GPRS(22,r1)
>> +	/* Restore the previous SP */
>> +	addi	r1,r1,INT_FRAME_SIZE
>> +
>> +	.global optprobe_template_ret
>> +optprobe_template_ret:
>> +	/* ... and jump back from trampoline */
>> +	nop
>> +
>> +	.global optprobe_template_end
>> +optprobe_template_end:
>> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
>> index 3362299..0c6c2b0 100644
>> --- a/arch/powerpc/lib/sstep.c
>> +++ b/arch/powerpc/lib/sstep.c
>> @@ -618,6 +618,27 @@ static int __kprobes trap_compare(long v1, long v2)
>>   }
>>   
>>   /*
>> + * Helper to check if a given instruction is a conditional branch
>> + * Derived from the conditional checks in analyse_instr()
>> + */
>> +bool __kprobes is_conditional_branch(unsigned int instr)
>> +{
>> +	unsigned int opcode = instr >> 26;
>> +
>> +	if (opcode == 16)	/* bc, bca, bcl, bcla */
>> +		return true;
>> +	if (opcode == 19) {
>> +		switch ((instr >> 1) & 0x3ff) {
>> +		case 16:	/* bclr, bclrl */
>> +		case 528:	/* bcctr, bcctrl */
>> +		case 560:	/* bctar, bctarl */
>> +			return true;
>> +		}
>> +	}
>> +	return false;
>> +}
>> +
>> +/*
>>    * Elements of 32-bit rotate and mask instructions.
>>    */
>>   #define MASK32(mb, me)	((0xffffffffUL >> (mb)) + \
>> @@ -2018,3 +2039,4 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
>>   	regs->nip = truncate_if_32bit(regs->msr, regs->nip + 4);
>>   	return 1;
>>   }
>> +
> This seems to be an unneeded linefeed :)


  :-) I will remove the extra line.



Thanks and Regards,
Anju
> Thank you,
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2016-12-16 17:55 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-14 16:18 [PATCH V2 0/4] OPTPROBES for powerpc Anju T Sudhakar
2016-12-14 16:18 ` [PATCH V2 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
2016-12-16 14:02   ` Masami Hiramatsu
2016-12-16 17:54     ` Anju T Sudhakar
2016-12-14 16:18 ` [PATCH V2 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
2016-12-16 14:09   ` Masami Hiramatsu
2016-12-14 16:18 ` [PATCH V2 1/4] powerpc: asm/ppc-opcode.h: introduce __PPC_SH64() Anju T Sudhakar
2016-12-14 16:18 ` [PATCH V2 2/4] powerpc: add helper to check if offset is within rel branch range Anju T Sudhakar
2016-12-16 11:52   ` Masami Hiramatsu
2016-12-16 16:31     ` Anju T Sudhakar
2016-12-16 14:46 ` [PATCH V2 0/4] OPTPROBES for powerpc Balbir Singh
2016-12-16 17:19   ` Naveen N. Rao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).