* [PATCH v4 0/2] OPTPROBES for powerpc
@ 2017-02-08  9:50 Anju T Sudhakar
  2017-02-08  9:50 ` [PATCH v4 1/2] arch/powerpc: Implement Optprobes Anju T Sudhakar
  2017-02-08  9:50 ` [PATCH v4 2/2] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
  0 siblings, 2 replies; 5+ messages in thread
From: Anju T Sudhakar @ 2017-02-08  9:50 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, hemant, mahesh,
	mhiramat

This is the patchset for the kprobes jump optimization
(a.k.a. OPTPROBES) for powerpc. Since kprobes is an indispensable
tool for kernel developers, improving its performance is of
considerable importance.

Currently kprobes inserts a trap instruction to probe a running kernel.
Jump optimization allows kprobes to replace the trap with a branch,
reducing the probe overhead drastically.

In this series, conditional branch instructions are not considered for
optimization, as there is no foolproof mechanism to ensure that a
conditional branch target lies within the range addressable by the
24-bit offset field of a branch instruction (+/- 32MB), which is a
necessary condition for optimization.

The kprobe placed on the kretprobe_trampoline during boot is also
optimized in this series; patch 2/2 implements this.


Note:   This patch set depends on the patch series I sent earlier.
        Links are here:
		https://patchwork.ozlabs.org/patch/725563/
		https://patchwork.ozlabs.org/patch/725562/
		https://patchwork.ozlabs.org/patch/725564/

The helper functions in these patches are invoked in patch 1/2.


Performance:
============
An optimized kprobe in powerpc is up to 4 times faster than a trap
based kprobe.

Example:

A probe was placed at an instruction in _do_fork().
*Time diff here is the difference between the timebase values read
before hitting the probe and after the probed instruction; mftb() is
used in kernel/fork.c for this purpose.

# echo 0 > /proc/sys/debug/kprobes-optimization
Kprobes globally unoptimized

[  172.252347] Time diff = 0xc4c
[  172.257389] Time diff = 0x86e
[  172.262035] Time diff = 0xb8f
[  172.266479] Time diff = 0x5ec
[  172.270641] Time diff = 0xd4f
[  172.273224] Time diff = 0x52b
[  172.277328] Time diff = 0x793
[  172.280520] Time diff = 0x286
[  172.284125] Time diff = 0x592
[  172.287319] Time diff = 0x593
[  172.292319] Time diff = 0xa30
[  172.294909] Time diff = 0x2e1
[  172.297806] Time diff = 0x6a3
[  172.300718] Time diff = 0x5aa
[  172.304675] Time diff = 0xa6e
[  172.307668] Time diff = 0x322
[  172.310875] Time diff = 0x844
[  172.313710] Time diff = 0x2db
[  172.317361] Time diff = 0x831
[  172.320066] Time diff = 0x327

# echo 1 > /proc/sys/debug/kprobes-optimization
Kprobes globally optimized

[  207.070301] Time diff = 0x1dd
[  207.073401] Time diff = 0x118
[  207.075724] Time diff = 0x100
[  207.078643] Time diff = 0x242
[  207.080938] Time diff = 0x129
[  207.084103] Time diff = 0x32f
[  207.087022] Time diff = 0x194
[  207.090139] Time diff = 0x13d
[  207.092436] Time diff = 0x195
[  207.095031] Time diff = 0x103
[  207.097481] Time diff = 0x15a
[  207.100414] Time diff = 0x11f
[  207.102831] Time diff = 0x161
[  207.105713] Time diff = 0x242
[  207.108271] Time diff = 0x2d7
[  207.111741] Time diff = 0x104
[  207.114389] Time diff = 0xf1
[  207.118002] Time diff = 0x2f1
[  207.120930] Time diff = 0x179
[  207.124259] Time diff = 0x10f

Implementation:
===================

The trap instruction is replaced by a branch to a detour buffer. To
work around the limited range of the branch instruction in the POWER
architecture, the detour buffer slot is allocated from a reserved
area; this ensures that the branch is within the +/- 32MB range. The
existing generic approach for the kprobes instruction cache uses
module_alloc() to allocate the memory area for instruction slots,
which will always be beyond the +/- 32MB range.

The detour buffer contains a call to optimized_callback(), which in
turn calls the pre_handler(). Once the pre-handler has run, the
original instruction is emulated from the detour buffer itself. The
detour buffer is also equipped with a branch back to the normal work
flow after the probed instruction has been emulated; this branch
instruction is set up while the detour buffer is being prepared. The
address of the instruction to which we need to jump back is determined
through analyse_instr(), which is invoked during the sanity checks for
optprobes. A dummy pt_regs along with the probed instruction is used
for this purpose.

Kprobes placed on conditional branch instructions are not optimized,
as we cannot predict the resulting NIP in advance with a dummy
pt_regs, and therefore cannot ensure that the return branch from the
detour buffer falls within the addressable range (i.e. +/- 32MB).

Before preparing for optimization, kprobes inserts the original
(trap-based) kprobe at the specified address. So even if the kprobe
cannot be optimized, it simply falls back to a normal kprobe.


Limitations:
==============
- The number of probes which can be optimized is limited by the size
  of the reserved area.
- Currently, only instructions which can be emulated using
  analyse_instr() are candidates for optimization.
- Conditional branch instructions are not optimized.
- Probes in the kernel module region are not currently considered for
  optimization.


Changes from v3:

- The optprobe specific patches are moved to a separate series.
- Comments by Michael Ellerman are addressed.
- Performance results in the cover letter are updated with the
  latest patch set.

Changes from v2:

- Comments by Masami are addressed.
- Description in the cover letter is modified a bit.

Changes from v1:

- Merged the three patches in V1 into a single patch.
- Comments by Masami are addressed.
- Some helper functions are implemented in separate patches.
- Optimization for kprobe placed on the kretprobe_trampoline during
  boot time is implemented.


Kindly let me know your suggestions and comments.

Thanks,
-Anju


Anju T Sudhakar (2):
  arch/powerpc: Implement Optprobes
  arch/powerpc: Optimize kprobe in kretprobe_trampoline

 arch/powerpc/Kconfig                     |   1 +
 arch/powerpc/include/asm/code-patching.h |   1 +
 arch/powerpc/include/asm/kprobes.h       |  24 ++-
 arch/powerpc/kernel/Makefile             |   1 +
 arch/powerpc/kernel/kprobes.c            |   8 +
 arch/powerpc/kernel/optprobes.c          | 347 +++++++++++++++++++++++++++++++
 arch/powerpc/kernel/optprobes_head.S     | 135 ++++++++++++
 arch/powerpc/lib/code-patching.c         |  21 ++
 8 files changed, 537 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/kernel/optprobes.c
 create mode 100644 arch/powerpc/kernel/optprobes_head.S

-- 
2.7.4

^ permalink raw reply	[flat|nested] 5+ messages in thread

* [PATCH v4 1/2] arch/powerpc: Implement Optprobes
  2017-02-08  9:50 [PATCH v4 0/2] OPTPROBES for powerpc Anju T Sudhakar
@ 2017-02-08  9:50 ` Anju T Sudhakar
  2017-02-14 12:40   ` [v4,1/2] " Michael Ellerman
  2017-02-08  9:50 ` [PATCH v4 2/2] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
  1 sibling, 1 reply; 5+ messages in thread
From: Anju T Sudhakar @ 2017-02-08  9:50 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, hemant, mahesh,
	mhiramat

The current kprobes infrastructure uses an unconditional trap
instruction to probe a running kernel. Optprobes allow kprobes to
replace the trap with a branch instruction to a detour buffer. The
detour buffer contains instructions to create an in-memory pt_regs,
as well as a call to optimized_callback(), which in turn calls the
pre_handler(). After the pre-handler has executed, a call is made for
instruction emulation. The NIP is determined in advance through dummy
instruction emulation, and a branch instruction to that NIP is created
at the end of the trampoline.

To work around the limited range of the branch instruction in the
POWER architecture, the detour buffer slot is allocated from a
reserved area. For the time being, 64KB is reserved in memory for
this purpose.

Instructions which can be emulated using analyse_instr() are the
candidates for optimization. Before optimizing, ensure that the
address range between the allocated detour buffer and the instruction
being probed is within +/- 32MB.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/powerpc/Kconfig                     |   1 +
 arch/powerpc/include/asm/code-patching.h |   1 +
 arch/powerpc/include/asm/kprobes.h       |  24 ++-
 arch/powerpc/kernel/Makefile             |   1 +
 arch/powerpc/kernel/optprobes.c          | 348 +++++++++++++++++++++++++++++++
 arch/powerpc/kernel/optprobes_head.S     | 135 ++++++++++++
 arch/powerpc/lib/code-patching.c         |  21 ++
 7 files changed, 530 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/kernel/optprobes.c
 create mode 100644 arch/powerpc/kernel/optprobes_head.S

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a8ee573..e4d352e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -99,6 +99,7 @@ config PPC
 	select HAVE_IOREMAP_PROT
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
 	select HAVE_KPROBES
+	select HAVE_OPTPROBES if PPC64
 	select HAVE_ARCH_KGDB
 	select HAVE_KRETPROBES
 	select HAVE_ARCH_TRACEHOOK
diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 75ee4f4..8ab9377 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -35,6 +35,7 @@ int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
 unsigned long branch_target(const unsigned int *instr);
 unsigned int translate_branch(const unsigned int *dest,
 			      const unsigned int *src);
+extern bool is_conditional_branch(unsigned int instr);
 #ifdef CONFIG_PPC_BOOK3E_64
 void __patch_exception(int exc, unsigned long addr);
 #define patch_exception(exc, name) do { \
diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
index 77885d8..d821835 100644
--- a/arch/powerpc/include/asm/kprobes.h
+++ b/arch/powerpc/include/asm/kprobes.h
@@ -40,7 +40,23 @@ struct pt_regs;
 struct kprobe;
 
 typedef ppc_opcode_t kprobe_opcode_t;
-#define MAX_INSN_SIZE 1
+
+extern kprobe_opcode_t optinsn_slot;
+
+/* Optinsn template address */
+extern kprobe_opcode_t optprobe_template_entry[];
+extern kprobe_opcode_t optprobe_template_op_address[];
+extern kprobe_opcode_t optprobe_template_call_handler[];
+extern kprobe_opcode_t optprobe_template_insn[];
+extern kprobe_opcode_t optprobe_template_call_emulate[];
+extern kprobe_opcode_t optprobe_template_ret[];
+extern kprobe_opcode_t optprobe_template_end[];
+
+/* Fixed instruction size for powerpc */
+#define MAX_INSN_SIZE		1
+#define MAX_OPTIMIZED_LENGTH	sizeof(kprobe_opcode_t)	/* 4 bytes */
+#define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
+#define RELATIVEJUMP_SIZE	sizeof(kprobe_opcode_t)	/* 4 bytes */
 
 #ifdef PPC64_ELF_ABI_v2
 /* PPC64 ABIv2 needs local entry point */
@@ -126,6 +142,12 @@ struct kprobe_ctlblk {
 	struct prev_kprobe prev_kprobe;
 };
 
+struct arch_optimized_insn {
+	kprobe_opcode_t copied_insn[1];
+	/* detour buffer */
+	kprobe_opcode_t *insn;
+};
+
 extern int kprobe_exceptions_notify(struct notifier_block *self,
 					unsigned long val, void *data);
 extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 23f8082..ff0eef5 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -100,6 +100,7 @@ obj-$(CONFIG_KGDB)		+= kgdb.o
 obj-$(CONFIG_BOOTX_TEXT)	+= btext.o
 obj-$(CONFIG_SMP)		+= smp.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
+obj-$(CONFIG_OPTPROBES)		+= optprobes.o optprobes_head.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o
 obj-$(CONFIG_PPC_UDBG_16550)	+= legacy_serial.o udbg_16550.o
 obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
new file mode 100644
index 0000000..b569e08
--- /dev/null
+++ b/arch/powerpc/kernel/optprobes.c
@@ -0,0 +1,348 @@
+/*
+ * Code for Kernel probes Jump optimization.
+ *
+ * Copyright 2017, Anju T, IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/kprobes.h>
+#include <linux/jump_label.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <asm/kprobes.h>
+#include <asm/ptrace.h>
+#include <asm/cacheflush.h>
+#include <asm/code-patching.h>
+#include <asm/sstep.h>
+#include <asm/ppc-opcode.h>
+
+#define TMPL_CALL_HDLR_IDX	\
+	(optprobe_template_call_handler - optprobe_template_entry)
+#define TMPL_EMULATE_IDX	\
+	(optprobe_template_call_emulate - optprobe_template_entry)
+#define TMPL_RET_IDX		\
+	(optprobe_template_ret - optprobe_template_entry)
+#define TMPL_OP_IDX		\
+	(optprobe_template_op_address - optprobe_template_entry)
+#define TMPL_INSN_IDX		\
+	(optprobe_template_insn - optprobe_template_entry)
+#define TMPL_END_IDX		\
+	(optprobe_template_end - optprobe_template_entry)
+
+DEFINE_INSN_CACHE_OPS(ppc_optinsn);
+
+static bool insn_page_in_use;
+
+static void *__ppc_alloc_insn_page(void)
+{
+	if (insn_page_in_use)
+		return NULL;
+	insn_page_in_use = true;
+	return &optinsn_slot;
+}
+
+static void __ppc_free_insn_page(void *page __maybe_unused)
+{
+	insn_page_in_use = false;
+}
+
+struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
+	.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
+	.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
+	/* insn_size initialized later */
+	.alloc = __ppc_alloc_insn_page,
+	.free = __ppc_free_insn_page,
+	.nr_garbage = 0,
+};
+
+/*
+ * Check if we can optimize this probe. Returns NIP post-emulation if this can
+ * be optimized and 0 otherwise.
+ */
+static unsigned long can_optimize(struct kprobe *p)
+{
+	struct pt_regs regs;
+	struct instruction_op op;
+	unsigned long nip = 0;
+
+	/*
+	 * kprobe placed for kretprobe during boot time
+	 * is not optimizing now.
+	 *
+	 * TODO: Optimize kprobe in kretprobe_trampoline
+	 */
+	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
+		return 0;
+
+	/*
+	 * We only support optimizing kernel addresses, but not
+	 * module addresses.
+	 *
+	 * FIXME: Optimize kprobes placed in module addresses.
+	 */
+	if (!is_kernel_addr((unsigned long)p->addr))
+		return 0;
+
+	memset(&regs, 0, sizeof(struct pt_regs));
+	regs.nip = (unsigned long)p->addr;
+	regs.trap = 0x0;
+	regs.msr = MSR_KERNEL;
+
+	/*
+	 * Kprobe placed in conditional branch instructions are
+	 * not optimized, as we can't predict the nip prior with
+	 * dummy pt_regs and can not ensure that the return branch
+	 * from detour buffer falls in the range of address (i.e 32MB).
+	 * A branch back from trampoline is set up in the detour buffer
+	 * to the nip returned by the analyse_instr() here.
+	 *
+	 * Ensure that the instruction is not a conditional branch,
+	 * and that can be emulated.
+	 */
+	if (!is_conditional_branch(*p->ainsn.insn) &&
+			analyse_instr(&op, &regs, *p->ainsn.insn))
+		nip = regs.nip;
+
+	return nip;
+}
+
+static void optimized_callback(struct optimized_kprobe *op,
+			       struct pt_regs *regs)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
+
+	/* This is possible if op is under delayed unoptimizing */
+	if (kprobe_disabled(&op->kp))
+		return;
+
+	local_irq_save(flags);
+	hard_irq_disable();
+
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		__this_cpu_write(current_kprobe, &op->kp);
+		regs->nip = (unsigned long)op->kp.addr;
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__this_cpu_write(current_kprobe, NULL);
+	}
+
+	/*
+	 * No need for an explicit __hard_irq_enable() here.
+	 * local_irq_restore() will re-enable interrupts,
+	 * if they were hard disabled.
+	 */
+	local_irq_restore(flags);
+}
+NOKPROBE_SYMBOL(optimized_callback);
+
+void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	if (op->optinsn.insn) {
+		free_ppc_optinsn_slot(op->optinsn.insn, 1);
+		op->optinsn.insn = NULL;
+	}
+}
+
+/*
+ * emulate_step() requires insn to be emulated as
+ * second parameter. Load register 'r4' with the
+ * instruction.
+ */
+void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
+{
+	/* addis r4,0,(insn)@h */
+	*addr++ = PPC_INST_ADDIS | ___PPC_RT(4) |
+		  ((val >> 16) & 0xffff);
+
+	/* ori r4,r4,(insn)@l */
+	*addr = PPC_INST_ORI | ___PPC_RA(4) | ___PPC_RS(4) |
+		(val & 0xffff);
+}
+
+/*
+ * Generate instructions to load provided immediate 64-bit value
+ * to register 'r3' and patch these instructions at 'addr'.
+ */
+void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
+{
+	/* lis r3,(op)@highest */
+	*addr++ = PPC_INST_ADDIS | ___PPC_RT(3) |
+		  ((val >> 48) & 0xffff);
+
+	/* ori r3,r3,(op)@higher */
+	*addr++ = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
+		  ((val >> 32) & 0xffff);
+
+	/* rldicr r3,r3,32,31 */
+	*addr++ = PPC_INST_RLDICR | ___PPC_RA(3) | ___PPC_RS(3) |
+		  __PPC_SH64(32) | __PPC_ME64(31);
+
+	/* oris r3,r3,(op)@h */
+	*addr++ = PPC_INST_ORIS | ___PPC_RA(3) | ___PPC_RS(3) |
+		  ((val >> 16) & 0xffff);
+
+	/* ori r3,r3,(op)@l */
+	*addr = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
+		(val & 0xffff);
+}
+
+int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
+{
+	kprobe_opcode_t *buff, branch_op_callback, branch_emulate_step;
+	kprobe_opcode_t *op_callback_addr, *emulate_step_addr;
+	long b_offset;
+	unsigned long nip;
+
+	kprobe_ppc_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
+
+	nip = can_optimize(p);
+	if (!nip)
+		return -EILSEQ;
+
+	/* Allocate instruction slot for detour buffer */
+	buff = get_ppc_optinsn_slot();
+	if (!buff)
+		return -ENOMEM;
+
+	/*
+	 * OPTPROBE uses 'b' instruction to branch to optinsn.insn.
+	 *
+	 * The target address has to be relatively nearby, to permit use
+	 * of branch instruction in powerpc, because the address is specified
+	 * in an immediate field in the instruction opcode itself, ie 24 bits
+	 * in the opcode specify the address. Therefore the address should
+	 * be within 32MB on either side of the current instruction.
+	 */
+	b_offset = (unsigned long)buff - (unsigned long)p->addr;
+	if (!is_offset_in_branch_range(b_offset))
+		goto error;
+
+	/* Check if the return address is also within 32MB range */
+	b_offset = (unsigned long)(buff + TMPL_RET_IDX) -
+			(unsigned long)nip;
+	if (!is_offset_in_branch_range(b_offset))
+		goto error;
+
+	/* Setup template */
+	memcpy(buff, optprobe_template_entry,
+			TMPL_END_IDX * sizeof(kprobe_opcode_t));
+
+	/*
+	 * Fixup the template with instructions to:
+	 * 1. load the address of the actual probepoint
+	 */
+	patch_imm64_load_insns((unsigned long)op, buff + TMPL_OP_IDX);
+
+	/*
+	 * 2. branch to optimized_callback() and emulate_step()
+	 */
+	kprobe_lookup_name("optimized_callback", op_callback_addr);
+	kprobe_lookup_name("emulate_step", emulate_step_addr);
+	if (!op_callback_addr || !emulate_step_addr) {
+		WARN(1, "kprobe_lookup_name() failed\n");
+		goto error;
+	}
+
+	branch_op_callback = create_branch((unsigned int *)buff + TMPL_CALL_HDLR_IDX,
+				(unsigned long)op_callback_addr,
+				BRANCH_SET_LINK);
+
+	branch_emulate_step = create_branch((unsigned int *)buff + TMPL_EMULATE_IDX,
+				(unsigned long)emulate_step_addr,
+				BRANCH_SET_LINK);
+
+	if (!branch_op_callback || !branch_emulate_step)
+		goto error;
+
+	buff[TMPL_CALL_HDLR_IDX] = branch_op_callback;
+	buff[TMPL_EMULATE_IDX] = branch_emulate_step;
+
+	/*
+	 * 3. load instruction to be emulated into relevant register, and
+	 */
+	patch_imm32_load_insns(*p->ainsn.insn, buff + TMPL_INSN_IDX);
+
+	/*
+	 * 4. branch back from trampoline
+	 */
+	buff[TMPL_RET_IDX] = create_branch((unsigned int *)buff + TMPL_RET_IDX,
+				(unsigned long)nip, 0);
+
+	flush_icache_range((unsigned long)buff,
+			   (unsigned long)(&buff[TMPL_END_IDX]));
+
+	op->optinsn.insn = buff;
+
+	return 0;
+
+ error:
+	free_ppc_optinsn_slot(buff, 0);
+	return -ERANGE;
+
+}
+
+int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->insn != NULL;
+}
+
+/*
+ * On powerpc, Optprobes always replaces one instruction (4 bytes
+ * aligned and 4 bytes long). It is impossible to encounter another
+ * kprobe in this address range. So always return 0.
+ */
+int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	return 0;
+}
+
+void arch_optimize_kprobes(struct list_head *oplist)
+{
+	struct optimized_kprobe *op;
+	struct optimized_kprobe *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		/*
+		 * Backup instructions which will be replaced
+		 * by jump address
+		 */
+		memcpy(op->optinsn.copied_insn, op->kp.addr,
+					       RELATIVEJUMP_SIZE);
+		patch_instruction(op->kp.addr,
+			create_branch((unsigned int *)op->kp.addr,
+				      (unsigned long)op->optinsn.insn, 0));
+		list_del_init(&op->list);
+	}
+}
+
+void arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	arch_arm_kprobe(&op->kp);
+}
+
+void arch_unoptimize_kprobes(struct list_head *oplist,
+			     struct list_head *done_list)
+{
+	struct optimized_kprobe *op;
+	struct optimized_kprobe *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		arch_unoptimize_kprobe(op);
+		list_move(&op->list, done_list);
+	}
+}
+
+int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+				 unsigned long addr)
+{
+	return ((unsigned long)op->kp.addr <= addr &&
+		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
+}
diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
new file mode 100644
index 0000000..53e429b
--- /dev/null
+++ b/arch/powerpc/kernel/optprobes_head.S
@@ -0,0 +1,135 @@
+/*
+ * Code to prepare detour buffer for optprobes in Kernel.
+ *
+ * Copyright 2017, Anju T, IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <asm/ppc_asm.h>
+#include <asm/ptrace.h>
+#include <asm/asm-offsets.h>
+
+#define	OPT_SLOT_SIZE	65536
+
+	.balign	4
+
+	/*
+	 * Reserve an area to allocate slots for detour buffer.
+	 * This is part of .text section (rather than vmalloc area)
+	 * as this needs to be within 32MB of the probed address.
+	 */
+	.global optinsn_slot
+optinsn_slot:
+	.space	OPT_SLOT_SIZE
+
+	/*
+	 * Optprobe template:
+	 * This template gets copied into one of the slots in optinsn_slot
+	 * and gets fixed up with real optprobe structures et al.
+	 */
+	.global optprobe_template_entry
+optprobe_template_entry:
+	/* Create an in-memory pt_regs */
+	stdu	r1,-INT_FRAME_SIZE(r1)
+	SAVE_GPR(0,r1)
+	/* Save the previous SP into stack */
+	addi	r0,r1,INT_FRAME_SIZE
+	std	r0,GPR1(r1)
+	SAVE_10GPRS(2,r1)
+	SAVE_10GPRS(12,r1)
+	SAVE_10GPRS(22,r1)
+	/* Save SPRS */
+	mfmsr	r5
+	std	r5,_MSR(r1)
+	li	r5,0x700
+	std	r5,_TRAP(r1)
+	li	r5,0
+	std	r5,ORIG_GPR3(r1)
+	std	r5,RESULT(r1)
+	mfctr	r5
+	std	r5,_CTR(r1)
+	mflr	r5
+	std	r5,_LINK(r1)
+	mfspr	r5,SPRN_XER
+	std	r5,_XER(r1)
+	mfcr	r5
+	std	r5,_CCR(r1)
+	lbz     r5,PACASOFTIRQEN(r13)
+	std     r5,SOFTE(r1)
+	mfdar	r5
+	std	r5,_DAR(r1)
+	mfdsisr	r5
+	std	r5,_DSISR(r1)
+
+	.global optprobe_template_op_address
+optprobe_template_op_address:
+	/*
+	 * Parameters to optimized_callback():
+	 * 1. optimized_kprobe structure in r3
+	 */
+	nop
+	nop
+	nop
+	nop
+	nop
+	/* 2. pt_regs pointer in r4 */
+	addi	r4,r1,STACK_FRAME_OVERHEAD
+
+	.global optprobe_template_call_handler
+optprobe_template_call_handler:
+	/* Branch to optimized_callback() */
+	nop
+
+	/*
+	 * Parameters for instruction emulation:
+	 * 1. Pass SP in register r3.
+	 */
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+
+	.global optprobe_template_insn
+optprobe_template_insn:
+	/* 2, Pass instruction to be emulated in r4 */
+	nop
+	nop
+
+	.global optprobe_template_call_emulate
+optprobe_template_call_emulate:
+	/* Branch to emulate_step()  */
+	nop
+
+	/*
+	 * All done.
+	 * Now, restore the registers...
+	 */
+	ld	r5,_MSR(r1)
+	mtmsr	r5
+	ld	r5,_CTR(r1)
+	mtctr	r5
+	ld	r5,_LINK(r1)
+	mtlr	r5
+	ld	r5,_XER(r1)
+	mtxer	r5
+	ld	r5,_CCR(r1)
+	mtcr	r5
+	ld	r5,_DAR(r1)
+	mtdar	r5
+	ld	r5,_DSISR(r1)
+	mtdsisr	r5
+	REST_GPR(0,r1)
+	REST_10GPRS(2,r1)
+	REST_10GPRS(12,r1)
+	REST_10GPRS(22,r1)
+	/* Restore the previous SP */
+	addi	r1,r1,INT_FRAME_SIZE
+
+	.global optprobe_template_ret
+optprobe_template_ret:
+	/* ... and jump back from trampoline */
+	nop
+
+	.global optprobe_template_end
+optprobe_template_end:
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 4ccf16a..0899315 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -54,6 +54,27 @@ bool is_offset_in_branch_range(long offset)
 	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
 }
 
+/*
+ * Helper to check if a given instruction is a conditional branch
+ * Derived from the conditional checks in analyse_instr()
+ */
+bool __kprobes is_conditional_branch(unsigned int instr)
+{
+	unsigned int opcode = instr >> 26;
+
+	if (opcode == 16)       /* bc, bca, bcl, bcla */
+		return true;
+	if (opcode == 19) {
+		switch ((instr >> 1) & 0x3ff) {
+		case 16:        /* bclr, bclrl */
+		case 528:       /* bcctr, bcctrl */
+		case 560:       /* bctar, bctarl */
+			return true;
+		}
+	}
+	return false;
+}
+
 unsigned int create_branch(const unsigned int *addr,
 			   unsigned long target, int flags)
 {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 5+ messages in thread

* [PATCH v4 2/2] arch/powerpc: Optimize kprobe in kretprobe_trampoline
  2017-02-08  9:50 [PATCH v4 0/2] OPTPROBES for powerpc Anju T Sudhakar
  2017-02-08  9:50 ` [PATCH v4 1/2] arch/powerpc: Implement Optprobes Anju T Sudhakar
@ 2017-02-08  9:50 ` Anju T Sudhakar
  1 sibling, 0 replies; 5+ messages in thread
From: Anju T Sudhakar @ 2017-02-08  9:50 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, hemant, mahesh,
	mhiramat

The kprobe placed on the kretprobe_trampoline during boot can be
optimized, since the instruction at the probe point is a 'nop'.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/powerpc/kernel/kprobes.c   | 8 ++++++++
 arch/powerpc/kernel/optprobes.c | 7 +++----
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index e785cc9..5b0fd07 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -282,6 +282,7 @@ asm(".global kretprobe_trampoline\n"
 	".type kretprobe_trampoline, @function\n"
 	"kretprobe_trampoline:\n"
 	"nop\n"
+	"blr\n"
 	".size kretprobe_trampoline, .-kretprobe_trampoline\n");
 
 /*
@@ -334,6 +335,13 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
 
 	kretprobe_assert(ri, orig_ret_address, trampoline_address);
 	regs->nip = orig_ret_address;
+	/*
+	 * Make LR point to the orig_ret_address.
+	 * When the 'nop' inside the kretprobe_trampoline
+	 * is optimized, we can do a 'blr' after executing the
+	 * detour buffer code.
+	 */
+	regs->link = orig_ret_address;
 
 	reset_current_kprobe();
 	kretprobe_hash_unlock(current, &flags);
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
index ecba221..5e4c254 100644
--- a/arch/powerpc/kernel/optprobes.c
+++ b/arch/powerpc/kernel/optprobes.c
@@ -72,12 +72,11 @@ static unsigned long can_optimize(struct kprobe *p)
 
 	/*
 	 * kprobe placed for kretprobe during boot time
-	 * is not optimizing now.
-	 *
-	 * TODO: Optimize kprobe in kretprobe_trampoline
+	 * has a 'nop' instruction, which can be emulated.
+	 * So further checks can be skipped.
 	 */
 	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
-		return 0;
+		return (unsigned long)p->addr + sizeof(kprobe_opcode_t);
 
 	/*
 	 * We only support optimizing kernel addresses, but not
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [v4,1/2] arch/powerpc: Implement Optprobes
  2017-02-08  9:50 ` [PATCH v4 1/2] arch/powerpc: Implement Optprobes Anju T Sudhakar
@ 2017-02-14 12:40   ` Michael Ellerman
  2017-02-15  5:14     ` Anju T Sudhakar
  0 siblings, 1 reply; 5+ messages in thread
From: Michael Ellerman @ 2017-02-14 12:40 UTC (permalink / raw)
  To: Anju T, linux-kernel, linuxppc-dev
  Cc: ananth, mahesh, paulus, mhiramat, naveen.n.rao, hemant, srikar

On Wed, 2017-02-08 at 09:50:51 UTC, Anju T wrote:
> Current infrastructure of kprobe uses the unconditional trap instruction
> to probe a running kernel. Optprobe allows kprobe to replace the trap with
> a branch instruction to a detour buffer. Detour buffer contains instructions
> to create an in memory pt_regs. Detour buffer also has a call to
> optimized_callback() which in turn call the pre_handler().
> After the execution of the pre-handler, a call is made for instruction
> emulation. The NIP is determined in advanced through dummy instruction
> emulation and a branch instruction is created to the NIP at the end of
> the trampoline.
> 
> To address the limitation of branch instruction in POWER architecture,
> detour buffer slot is allocated from a reserved area. For the time being,
> 64KB is reserved in memory for this purpose.
> 
> Instructions which can be emulated using analyse_instr() are the candidates
> for optimization. Before optimization ensure that the address range
> between the detour buffer allocated and the instruction being probed
> is within +/- 32MB.
> 
> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/51c9c0843993528bffc920c54c2121

cheers

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [v4,1/2] arch/powerpc: Implement Optprobes
  2017-02-14 12:40   ` [v4,1/2] " Michael Ellerman
@ 2017-02-15  5:14     ` Anju T Sudhakar
  0 siblings, 0 replies; 5+ messages in thread
From: Anju T Sudhakar @ 2017-02-15  5:14 UTC (permalink / raw)
  To: Michael Ellerman, linux-kernel, linuxppc-dev
  Cc: ananth, mahesh, paulus, mhiramat, naveen.n.rao, hemant, srikar

Thank You Michael.  :)


On Tuesday 14 February 2017 06:10 PM, Michael Ellerman wrote:
> On Wed, 2017-02-08 at 09:50:51 UTC, Anju T wrote:
>> Current infrastructure of kprobe uses the unconditional trap instruction
>> to probe a running kernel. Optprobe allows kprobe to replace the trap with
>> a branch instruction to a detour buffer. Detour buffer contains instructions
>> to create an in memory pt_regs. Detour buffer also has a call to
>> optimized_callback() which in turn call the pre_handler().
>> After the execution of the pre-handler, a call is made for instruction
>> emulation. The NIP is determined in advanced through dummy instruction
>> emulation and a branch instruction is created to the NIP at the end of
>> the trampoline.
>>
>> To address the limitation of branch instruction in POWER architecture,
>> detour buffer slot is allocated from a reserved area. For the time being,
>> 64KB is reserved in memory for this purpose.
>>
>> Instructions which can be emulated using analyse_instr() are the candidates
>> for optimization. Before optimization ensure that the address range
>> between the detour buffer allocated and the instruction being probed
>> is within +/- 32MB.
>>
>> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
>> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
> Series applied to powerpc next, thanks.
>
> https://git.kernel.org/powerpc/c/51c9c0843993528bffc920c54c2121
>
> cheers
>

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2017-02-15  5:14 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-08  9:50 [PATCH v4 0/2] OPTPROBES for powerpc Anju T Sudhakar
2017-02-08  9:50 ` [PATCH v4 1/2] arch/powerpc: Implement Optprobes Anju T Sudhakar
2017-02-14 12:40   ` [v4,1/2] " Michael Ellerman
2017-02-15  5:14     ` Anju T Sudhakar
2017-02-08  9:50 ` [PATCH v4 2/2] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
