* [PATCH V3 0/4] OPTPROBES for powerpc
@ 2016-12-19 13:18 Anju T Sudhakar
  2016-12-19 13:18 ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Anju T Sudhakar @ 2016-12-19 13:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju


This is the V3 patchset for kprobes jump optimization (a.k.a.
OPTPROBES) on powerpc. Since kprobes is an indispensable tool for
kernel developers, improving its performance is of considerable
importance.

Currently kprobes inserts a trap instruction to probe a running kernel.
Jump optimization allows kprobes to replace the trap with a branch,
reducing the probe overhead drastically.

In this series, conditional branch instructions are not considered for
optimization as they have to be assessed carefully in SMP systems.

The kprobe placed on the kretprobe_trampoline during boot time is also
optimized in this series; patch 4/4 implements this.

The first two patches can go in independently of this series; the
helper functions they introduce are used in patch 3/4.

Performance:
============
An optimized kprobe on powerpc is 1.05 to 4.7 times faster than a
regular (trap-based) kprobe.
 
Example:
 
Placed a probe at offset 0x50 in _do_fork().
*Time Diff here is the difference between the timestamp taken before
hitting the probe and the one taken after the probed instruction.
mftb() is used in kernel/fork.c for this purpose.
 
# echo 0 > /proc/sys/debug/kprobes-optimization
Kprobes globally unoptimized
 [  233.607120] Time Diff = 0x1f0
 [  233.608273] Time Diff = 0x1ee
 [  233.609228] Time Diff = 0x203
 [  233.610400] Time Diff = 0x1ec
 [  233.611335] Time Diff = 0x200
 [  233.612552] Time Diff = 0x1f0
 [  233.613386] Time Diff = 0x1ee
 [  233.614547] Time Diff = 0x212
 [  233.615570] Time Diff = 0x206
 [  233.616819] Time Diff = 0x1f3
 [  233.617773] Time Diff = 0x1ec
 [  233.618944] Time Diff = 0x1fb
 [  233.619879] Time Diff = 0x1f0
 [  233.621066] Time Diff = 0x1f9
 [  233.621999] Time Diff = 0x283
 [  233.623281] Time Diff = 0x24d
 [  233.624172] Time Diff = 0x1ea
 [  233.625381] Time Diff = 0x1f0
 [  233.626358] Time Diff = 0x200
 [  233.627572] Time Diff = 0x1ed
 
# echo 1 > /proc/sys/debug/kprobes-optimization
Kprobes globally optimized
 [   70.797075] Time Diff = 0x103
 [   70.799102] Time Diff = 0x181
 [   70.801861] Time Diff = 0x15e
 [   70.803466] Time Diff = 0xf0
 [   70.804348] Time Diff = 0xd0
 [   70.805653] Time Diff = 0xad
 [   70.806477] Time Diff = 0xe0
 [   70.807725] Time Diff = 0xbe
 [   70.808541] Time Diff = 0xc3
 [   70.810191] Time Diff = 0xc7
 [   70.811007] Time Diff = 0xc0
 [   70.812629] Time Diff = 0xc0
 [   70.813640] Time Diff = 0xda
 [   70.814915] Time Diff = 0xbb
 [   70.815726] Time Diff = 0xc4
 [   70.816955] Time Diff = 0xc0
 [   70.817778] Time Diff = 0xcd
 [   70.818999] Time Diff = 0xcd
 [   70.820099] Time Diff = 0xcb
 [   70.821333] Time Diff = 0xf0

Implementation:
===================
 
The trap instruction is replaced by a branch to a detour buffer. To
work around the limited range of the powerpc branch instruction, the
detour buffer slot is allocated from a reserved area, which ensures
the branch is within the ±32MB range. The existing generic approach
for the kprobes insn cache uses module_alloc() to allocate the memory
area for insn slots, which will always be beyond the ±32MB range.
 
The detour buffer contains a call to optimized_callback(), which in
turn calls the pre_handler(). Once the pre-handler has run, the
original instruction is emulated from the detour buffer itself. The
detour buffer also ends with a branch back to the normal work flow
after the probed instruction has been emulated. Before preparing the
optimization, kprobes inserts the original kprobe (a breakpoint
instruction) at the specified address, so even if the kprobe cannot be
optimized, it simply falls back to a normal kprobe.
 
Limitations:
==============
- The number of probes that can be optimized is limited by the size of
  the reserved area.
- Only instructions that can be emulated using analyse_instr() are
  candidates for optimization.
- Conditional branch instructions are not optimized.
- Probes in the kernel module region are not currently considered for
  optimization.
 
Link for the V1 patchset: 
			https://lkml.org/lkml/2016/9/7/171
			https://lkml.org/lkml/2016/9/7/174
			https://lkml.org/lkml/2016/9/7/172
			https://lkml.org/lkml/2016/9/7/173

Changes from v1:

- Merged the three patches in V1 into a single patch.
- Comments by Masami are addressed.
- Some helper functions are implemented in separate patches.
- Optimization for kprobe placed on the kretprobe_trampoline during
  boot time is implemented.

Changes from v2:

- Comments by Masami are addressed.
- Description in the cover letter is modified a bit.


Kindly let me know your suggestions and comments.

Thanks,
-Anju


Anju T Sudhakar (2):
  arch/powerpc: Implement Optprobes
  arch/powerpc: Optimize kprobe in kretprobe_trampoline

Naveen N. Rao (2):
  powerpc: asm/ppc-opcode.h: introduce __PPC_SH64()
  powerpc: add helper to check if offset is within rel branch range

 .../features/debug/optprobes/arch-support.txt      |   2 +-
 arch/powerpc/Kconfig                               |   1 +
 arch/powerpc/include/asm/code-patching.h           |   1 +
 arch/powerpc/include/asm/kprobes.h                 |  24 +-
 arch/powerpc/include/asm/ppc-opcode.h              |   1 +
 arch/powerpc/include/asm/sstep.h                   |   1 +
 arch/powerpc/kernel/Makefile                       |   1 +
 arch/powerpc/kernel/kprobes.c                      |   8 +
 arch/powerpc/kernel/optprobes.c                    | 330 +++++++++++++++++++++
 arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
 arch/powerpc/lib/code-patching.c                   |  24 +-
 arch/powerpc/lib/sstep.c                           |  21 ++
 arch/powerpc/net/bpf_jit.h                         |  11 +-
 13 files changed, 551 insertions(+), 9 deletions(-)
 create mode 100644 arch/powerpc/kernel/optprobes.c
 create mode 100644 arch/powerpc/kernel/optprobes_head.S

-- 
2.7.4


* [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2016-12-19 13:18 [PATCH V3 0/4] OPTPROBES for powerpc Anju T Sudhakar
@ 2016-12-19 13:18 ` Anju T Sudhakar
  2016-12-25  2:54   ` Masami Hiramatsu
                     ` (2 more replies)
  2016-12-19 13:18 ` [PATCH V3 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 16+ messages in thread
From: Anju T Sudhakar @ 2016-12-19 13:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju


The detour buffer contains instructions to create an in-memory
pt_regs. After the pre-handler has executed, a call is made for
instruction emulation. The NIP is determined in advance through dummy
instruction emulation, and a branch instruction back to that NIP is
placed at the end of the trampoline.

The instruction slot for the detour buffer is allocated from the
reserved area. For the time being, 64KB is reserved in memory for
this purpose.

Only instructions that can be emulated using analyse_instr() are
candidates for optimization. Before optimizing, we ensure that the
address range between the allocated detour buffer and the instruction
being probed is within ±32MB.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 .../features/debug/optprobes/arch-support.txt      |   2 +-
 arch/powerpc/Kconfig                               |   1 +
 arch/powerpc/include/asm/kprobes.h                 |  24 +-
 arch/powerpc/include/asm/sstep.h                   |   1 +
 arch/powerpc/kernel/Makefile                       |   1 +
 arch/powerpc/kernel/optprobes.c                    | 331 +++++++++++++++++++++
 arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
 arch/powerpc/lib/sstep.c                           |  21 ++
 8 files changed, 514 insertions(+), 2 deletions(-)
 create mode 100644 arch/powerpc/kernel/optprobes.c
 create mode 100644 arch/powerpc/kernel/optprobes_head.S

diff --git a/Documentation/features/debug/optprobes/arch-support.txt b/Documentation/features/debug/optprobes/arch-support.txt
index b8999d8..45bc99d 100644
--- a/Documentation/features/debug/optprobes/arch-support.txt
+++ b/Documentation/features/debug/optprobes/arch-support.txt
@@ -27,7 +27,7 @@
     |       nios2: | TODO |
     |    openrisc: | TODO |
     |      parisc: | TODO |
-    |     powerpc: | TODO |
+    |     powerpc: |  ok  |
     |        s390: | TODO |
     |       score: | TODO |
     |          sh: | TODO |
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 65fba4c..f7e9296 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -98,6 +98,7 @@ config PPC
 	select HAVE_IOREMAP_PROT
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
 	select HAVE_KPROBES
+	select HAVE_OPTPROBES if PPC64
 	select HAVE_ARCH_KGDB
 	select HAVE_KRETPROBES
 	select HAVE_ARCH_TRACEHOOK
diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
index 2c9759bd..0cf640b 100644
--- a/arch/powerpc/include/asm/kprobes.h
+++ b/arch/powerpc/include/asm/kprobes.h
@@ -38,7 +38,23 @@ struct pt_regs;
 struct kprobe;
 
 typedef ppc_opcode_t kprobe_opcode_t;
-#define MAX_INSN_SIZE 1
+
+extern kprobe_opcode_t optinsn_slot;
+
+/* Optinsn template address */
+extern kprobe_opcode_t optprobe_template_entry[];
+extern kprobe_opcode_t optprobe_template_op_address[];
+extern kprobe_opcode_t optprobe_template_call_handler[];
+extern kprobe_opcode_t optprobe_template_insn[];
+extern kprobe_opcode_t optprobe_template_call_emulate[];
+extern kprobe_opcode_t optprobe_template_ret[];
+extern kprobe_opcode_t optprobe_template_end[];
+
+/* Fixed instruction size for powerpc */
+#define MAX_INSN_SIZE		1
+#define MAX_OPTIMIZED_LENGTH	sizeof(kprobe_opcode_t)	/* 4 bytes */
+#define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
+#define RELATIVEJUMP_SIZE	sizeof(kprobe_opcode_t)	/* 4 bytes */
 
 #ifdef PPC64_ELF_ABI_v2
 /* PPC64 ABIv2 needs local entry point */
@@ -124,6 +140,12 @@ struct kprobe_ctlblk {
 	struct prev_kprobe prev_kprobe;
 };
 
+struct arch_optimized_insn {
+	kprobe_opcode_t copied_insn[1];
+	/* detour buffer */
+	kprobe_opcode_t *insn;
+};
+
 extern int kprobe_exceptions_notify(struct notifier_block *self,
 					unsigned long val, void *data);
 extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
index d3a42cc..f7ad425 100644
--- a/arch/powerpc/include/asm/sstep.h
+++ b/arch/powerpc/include/asm/sstep.h
@@ -87,3 +87,4 @@ struct instruction_op {
 
 extern int analyse_instr(struct instruction_op *op, struct pt_regs *regs,
 			 unsigned int instr);
+extern bool is_conditional_branch(unsigned int instr);
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 1925341..54f0f47 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -98,6 +98,7 @@ obj-$(CONFIG_KGDB)		+= kgdb.o
 obj-$(CONFIG_BOOTX_TEXT)	+= btext.o
 obj-$(CONFIG_SMP)		+= smp.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
+obj-$(CONFIG_OPTPROBES)		+= optprobes.o optprobes_head.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o
 obj-$(CONFIG_PPC_UDBG_16550)	+= legacy_serial.o udbg_16550.o
 obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
new file mode 100644
index 0000000..fb5e62d
--- /dev/null
+++ b/arch/powerpc/kernel/optprobes.c
@@ -0,0 +1,331 @@
+/*
+ * Code for Kernel probes Jump optimization.
+ *
+ * Copyright 2016, Anju T, IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/kprobes.h>
+#include <linux/jump_label.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <asm/kprobes.h>
+#include <asm/ptrace.h>
+#include <asm/cacheflush.h>
+#include <asm/code-patching.h>
+#include <asm/sstep.h>
+#include <asm/ppc-opcode.h>
+
+#define TMPL_CALL_HDLR_IDX	\
+	(optprobe_template_call_handler - optprobe_template_entry)
+#define TMPL_EMULATE_IDX	\
+	(optprobe_template_call_emulate - optprobe_template_entry)
+#define TMPL_RET_IDX		\
+	(optprobe_template_ret - optprobe_template_entry)
+#define TMPL_OP_IDX		\
+	(optprobe_template_op_address - optprobe_template_entry)
+#define TMPL_INSN_IDX		\
+	(optprobe_template_insn - optprobe_template_entry)
+#define TMPL_END_IDX		\
+	(optprobe_template_end - optprobe_template_entry)
+
+DEFINE_INSN_CACHE_OPS(ppc_optinsn);
+
+static bool insn_page_in_use;
+
+static void *__ppc_alloc_insn_page(void)
+{
+	if (insn_page_in_use)
+		return NULL;
+	insn_page_in_use = true;
+	return &optinsn_slot;
+}
+
+static void __ppc_free_insn_page(void *page __maybe_unused)
+{
+	insn_page_in_use = false;
+}
+
+struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
+	.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
+	.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
+	/* insn_size initialized later */
+	.alloc = __ppc_alloc_insn_page,
+	.free = __ppc_free_insn_page,
+	.nr_garbage = 0,
+};
+
+/*
+ * Check if we can optimize this probe. Returns NIP post-emulation if this can
+ * be optimized and 0 otherwise.
+ */
+static unsigned long can_optimize(struct kprobe *p)
+{
+	struct pt_regs regs;
+	struct instruction_op op;
+	unsigned long nip = 0;
+
+	/*
+	 * kprobe placed for kretprobe during boot time
+	 * is not optimizing now.
+	 *
+	 * TODO: Optimize kprobe in kretprobe_trampoline
+	 */
+	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
+		return 0;
+
+	/*
+	 * We only support optimizing kernel addresses, but not
+	 * module addresses.
+	 */
+	if (!is_kernel_addr((unsigned long)p->addr))
+		return 0;
+
+	regs.nip = (unsigned long)p->addr;
+	regs.trap = 0x0;
+	regs.msr = MSR_KERNEL;
+
+	/*
+	 * Ensure that the instruction is not a conditional branch,
+	 * and that can be emulated.
+	 */
+	if (!is_conditional_branch(*p->ainsn.insn) &&
+			analyse_instr(&op, &regs, *p->ainsn.insn))
+		nip = regs.nip;
+
+	return nip;
+}
+
+static void optimized_callback(struct optimized_kprobe *op,
+			       struct pt_regs *regs)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
+
+	/* This is possible if op is under delayed unoptimizing */
+	if (kprobe_disabled(&op->kp))
+		return;
+
+	local_irq_save(flags);
+
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		__this_cpu_write(current_kprobe, &op->kp);
+		regs->nip = (unsigned long)op->kp.addr;
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__this_cpu_write(current_kprobe, NULL);
+	}
+	local_irq_restore(flags);
+}
+NOKPROBE_SYMBOL(optimized_callback);
+
+void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	if (op->optinsn.insn) {
+		free_ppc_optinsn_slot(op->optinsn.insn, 1);
+		op->optinsn.insn = NULL;
+	}
+}
+
+/*
+ * emulate_step() requires insn to be emulated as
+ * second parameter. Load register 'r4' with the
+ * instruction.
+ */
+void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
+{
+	/* addis r4,0,(insn)@h */
+	*addr++ = PPC_INST_ADDIS | ___PPC_RT(4) |
+		  ((val >> 16) & 0xffff);
+
+	/* ori r4,r4,(insn)@l */
+	*addr = PPC_INST_ORI | ___PPC_RA(4) | ___PPC_RS(4) |
+		(val & 0xffff);
+}
+
+/*
+ * Generate instructions to load provided immediate 64-bit value
+ * to register 'r3' and patch these instructions at 'addr'.
+ */
+void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
+{
+	/* lis r3,(op)@highest */
+	*addr++ = PPC_INST_ADDIS | ___PPC_RT(3) |
+		  ((val >> 48) & 0xffff);
+
+	/* ori r3,r3,(op)@higher */
+	*addr++ = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
+		  ((val >> 32) & 0xffff);
+
+	/* rldicr r3,r3,32,31 */
+	*addr++ = PPC_INST_RLDICR | ___PPC_RA(3) | ___PPC_RS(3) |
+		  __PPC_SH64(32) | __PPC_ME64(31);
+
+	/* oris r3,r3,(op)@h */
+	*addr++ = PPC_INST_ORIS | ___PPC_RA(3) | ___PPC_RS(3) |
+		  ((val >> 16) & 0xffff);
+
+	/* ori r3,r3,(op)@l */
+	*addr = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
+		(val & 0xffff);
+}
+
+int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
+{
+	kprobe_opcode_t *buff, branch_op_callback, branch_emulate_step;
+	kprobe_opcode_t *op_callback_addr, *emulate_step_addr;
+	long b_offset;
+	unsigned long nip;
+
+	kprobe_ppc_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
+
+	nip = can_optimize(p);
+	if (!nip)
+		return -EILSEQ;
+
+	/* Allocate instruction slot for detour buffer */
+	buff = get_ppc_optinsn_slot();
+	if (!buff)
+		return -ENOMEM;
+
+	/*
+	 * OPTPROBE uses 'b' instruction to branch to optinsn.insn.
+	 *
+	 * The target address has to be relatively nearby, to permit use
+	 * of branch instruction in powerpc, because the address is specified
+	 * in an immediate field in the instruction opcode itself, ie 24 bits
+	 * in the opcode specify the address. Therefore the address should
+	 * be within 32MB on either side of the current instruction.
+	 */
+	b_offset = (unsigned long)buff - (unsigned long)p->addr;
+	if (!is_offset_in_branch_range(b_offset))
+		goto error;
+
+	/* Check if the return address is also within 32MB range */
+	b_offset = (unsigned long)(buff + TMPL_RET_IDX) -
+			(unsigned long)nip;
+	if (!is_offset_in_branch_range(b_offset))
+		goto error;
+
+	/* Setup template */
+	memcpy(buff, optprobe_template_entry,
+			TMPL_END_IDX * sizeof(kprobe_opcode_t));
+
+	/*
+	 * Fixup the template with instructions to:
+	 * 1. load the address of the actual probepoint
+	 */
+	patch_imm64_load_insns((unsigned long)op, buff + TMPL_OP_IDX);
+
+	/*
+	 * 2. branch to optimized_callback() and emulate_step()
+	 */
+	kprobe_lookup_name("optimized_callback", op_callback_addr);
+	kprobe_lookup_name("emulate_step", emulate_step_addr);
+	if (!op_callback_addr || !emulate_step_addr) {
+		WARN(1, "kprobe_lookup_name() failed\n");
+		goto error;
+	}
+
+	branch_op_callback = create_branch((unsigned int *)buff + TMPL_CALL_HDLR_IDX,
+				(unsigned long)op_callback_addr,
+				BRANCH_SET_LINK);
+
+	branch_emulate_step = create_branch((unsigned int *)buff + TMPL_EMULATE_IDX,
+				(unsigned long)emulate_step_addr,
+				BRANCH_SET_LINK);
+
+	if (!branch_op_callback || !branch_emulate_step)
+		goto error;
+
+	buff[TMPL_CALL_HDLR_IDX] = branch_op_callback;
+	buff[TMPL_EMULATE_IDX] = branch_emulate_step;
+
+	/*
+	 * 3. load instruction to be emulated into relevant register, and
+	 */
+	patch_imm32_load_insns(*p->ainsn.insn, buff + TMPL_INSN_IDX);
+
+	/*
+	 * 4. branch back from trampoline
+	 */
+	buff[TMPL_RET_IDX] = create_branch((unsigned int *)buff + TMPL_RET_IDX,
+				(unsigned long)nip, 0);
+
+	flush_icache_range((unsigned long)buff,
+			   (unsigned long)(&buff[TMPL_END_IDX]));
+
+	op->optinsn.insn = buff;
+
+	return 0;
+
+ error:
+	free_ppc_optinsn_slot(buff, 0);
+	return -ERANGE;
+
+}
+
+int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->insn != NULL;
+}
+
+/*
+ * On powerpc, Optprobes always replaces one instruction (4 bytes
+ * aligned and 4 bytes long). It is impossible to encounter another
+ * kprobe in this address range. So always return 0.
+ */
+int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	return 0;
+}
+
+void arch_optimize_kprobes(struct list_head *oplist)
+{
+	struct optimized_kprobe *op;
+	struct optimized_kprobe *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		/*
+		 * Backup instructions which will be replaced
+		 * by jump address
+		 */
+		memcpy(op->optinsn.copied_insn, op->kp.addr,
+					       RELATIVEJUMP_SIZE);
+		patch_instruction(op->kp.addr,
+			create_branch((unsigned int *)op->kp.addr,
+				      (unsigned long)op->optinsn.insn, 0));
+		list_del_init(&op->list);
+	}
+}
+
+void arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	arch_arm_kprobe(&op->kp);
+}
+
+void arch_unoptimize_kprobes(struct list_head *oplist,
+			     struct list_head *done_list)
+{
+	struct optimized_kprobe *op;
+	struct optimized_kprobe *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		arch_unoptimize_kprobe(op);
+		list_move(&op->list, done_list);
+	}
+}
+
+int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+				 unsigned long addr)
+{
+	return ((unsigned long)op->kp.addr <= addr &&
+		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
+}
diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
new file mode 100644
index 0000000..c86976b
--- /dev/null
+++ b/arch/powerpc/kernel/optprobes_head.S
@@ -0,0 +1,135 @@
+/*
+ * Code to prepare detour buffer for optprobes in Kernel.
+ *
+ * Copyright 2016, Anju T, IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <asm/ppc_asm.h>
+#include <asm/ptrace.h>
+#include <asm/asm-offsets.h>
+
+#define	OPT_SLOT_SIZE	65536
+
+	.balign	4
+
+	/*
+	 * Reserve an area to allocate slots for detour buffer.
+	 * This is part of .text section (rather than vmalloc area)
+	 * as this needs to be within 32MB of the probed address.
+	 */
+	.global optinsn_slot
+optinsn_slot:
+	.space	OPT_SLOT_SIZE
+
+	/*
+	 * Optprobe template:
+	 * This template gets copied into one of the slots in optinsn_slot
+	 * and gets fixed up with real optprobe structures et al.
+	 */
+	.global optprobe_template_entry
+optprobe_template_entry:
+	/* Create an in-memory pt_regs */
+	stdu	r1,-INT_FRAME_SIZE(r1)
+	SAVE_GPR(0,r1)
+	/* Save the previous SP into stack */
+	addi	r0,r1,INT_FRAME_SIZE
+	std	r0,GPR1(r1)
+	SAVE_10GPRS(2,r1)
+	SAVE_10GPRS(12,r1)
+	SAVE_10GPRS(22,r1)
+	/* Save SPRS */
+	mfmsr	r5
+	std	r5,_MSR(r1)
+	li	r5,0x700
+	std	r5,_TRAP(r1)
+	li	r5,0
+	std	r5,ORIG_GPR3(r1)
+	std	r5,RESULT(r1)
+	mfctr	r5
+	std	r5,_CTR(r1)
+	mflr	r5
+	std	r5,_LINK(r1)
+	mfspr	r5,SPRN_XER
+	std	r5,_XER(r1)
+	mfcr	r5
+	std	r5,_CCR(r1)
+	lbz     r5,PACASOFTIRQEN(r13)
+	std     r5,SOFTE(r1)
+	mfdar	r5
+	std	r5,_DAR(r1)
+	mfdsisr	r5
+	std	r5,_DSISR(r1)
+
+	.global optprobe_template_op_address
+optprobe_template_op_address:
+	/*
+	 * Parameters to optimized_callback():
+	 * 1. optimized_kprobe structure in r3
+	 */
+	nop
+	nop
+	nop
+	nop
+	nop
+	/* 2. pt_regs pointer in r4 */
+	addi	r4,r1,STACK_FRAME_OVERHEAD
+
+	.global optprobe_template_call_handler
+optprobe_template_call_handler:
+	/* Branch to optimized_callback() */
+	nop
+
+	/*
+	 * Parameters for instruction emulation:
+	 * 1. Pass SP in register r3.
+	 */
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+
+	.global optprobe_template_insn
+optprobe_template_insn:
+	/* 2, Pass instruction to be emulated in r4 */
+	nop
+	nop
+
+	.global optprobe_template_call_emulate
+optprobe_template_call_emulate:
+	/* Branch to emulate_step()  */
+	nop
+
+	/*
+	 * All done.
+	 * Now, restore the registers...
+	 */
+	ld	r5,_MSR(r1)
+	mtmsr	r5
+	ld	r5,_CTR(r1)
+	mtctr	r5
+	ld	r5,_LINK(r1)
+	mtlr	r5
+	ld	r5,_XER(r1)
+	mtxer	r5
+	ld	r5,_CCR(r1)
+	mtcr	r5
+	ld	r5,_DAR(r1)
+	mtdar	r5
+	ld	r5,_DSISR(r1)
+	mtdsisr	r5
+	REST_GPR(0,r1)
+	REST_10GPRS(2,r1)
+	REST_10GPRS(12,r1)
+	REST_10GPRS(22,r1)
+	/* Restore the previous SP */
+	addi	r1,r1,INT_FRAME_SIZE
+
+	.global optprobe_template_ret
+optprobe_template_ret:
+	/* ... and jump back from trampoline */
+	nop
+
+	.global optprobe_template_end
+optprobe_template_end:
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 3362299..895dcdd 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -618,6 +618,27 @@ static int __kprobes trap_compare(long v1, long v2)
 }
 
 /*
+ * Helper to check if a given instruction is a conditional branch
+ * Derived from the conditional checks in analyse_instr()
+ */
+bool __kprobes is_conditional_branch(unsigned int instr)
+{
+	unsigned int opcode = instr >> 26;
+
+	if (opcode == 16)	/* bc, bca, bcl, bcla */
+		return true;
+	if (opcode == 19) {
+		switch ((instr >> 1) & 0x3ff) {
+		case 16:	/* bclr, bclrl */
+		case 528:	/* bcctr, bcctrl */
+		case 560:	/* bctar, bctarl */
+			return true;
+		}
+	}
+	return false;
+}
+
+/*
  * Elements of 32-bit rotate and mask instructions.
  */
 #define MASK32(mb, me)	((0xffffffffUL >> (mb)) + \
-- 
2.7.4


* [PATCH V3 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline
  2016-12-19 13:18 [PATCH V3 0/4] OPTPROBES for powerpc Anju T Sudhakar
  2016-12-19 13:18 ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
@ 2016-12-19 13:18 ` Anju T Sudhakar
  2016-12-19 13:18 ` [PATCH V3 1/4] powerpc: asm/ppc-opcode.h: introduce __PPC_SH64() Anju T Sudhakar
  2016-12-19 13:18 ` [PATCH V3 2/4] powerpc: add helper to check if offset is within rel branch range Anju T Sudhakar
  3 siblings, 0 replies; 16+ messages in thread
From: Anju T Sudhakar @ 2016-12-19 13:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju

A kprobe placed on the kretprobe_trampoline during boot time can be
optimized, since the instruction at the probe point is a 'nop'.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/powerpc/kernel/kprobes.c   | 8 ++++++++
 arch/powerpc/kernel/optprobes.c | 7 +++----
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index e785cc9..5b0fd07 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -282,6 +282,7 @@ asm(".global kretprobe_trampoline\n"
 	".type kretprobe_trampoline, @function\n"
 	"kretprobe_trampoline:\n"
 	"nop\n"
+	"blr\n"
 	".size kretprobe_trampoline, .-kretprobe_trampoline\n");
 
 /*
@@ -334,6 +335,13 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
 
 	kretprobe_assert(ri, orig_ret_address, trampoline_address);
 	regs->nip = orig_ret_address;
+	/*
+	 * Make LR point to the orig_ret_address.
+	 * When the 'nop' inside the kretprobe_trampoline
+	 * is optimized, we can do a 'blr' after executing the
+	 * detour buffer code.
+	 */
+	regs->link = orig_ret_address;
 
 	reset_current_kprobe();
 	kretprobe_hash_unlock(current, &flags);
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
index ecba221..5e4c254 100644
--- a/arch/powerpc/kernel/optprobes.c
+++ b/arch/powerpc/kernel/optprobes.c
@@ -72,12 +72,11 @@ static unsigned long can_optimize(struct kprobe *p)
 
 	/*
 	 * kprobe placed for kretprobe during boot time
-	 * is not optimizing now.
-	 *
-	 * TODO: Optimize kprobe in kretprobe_trampoline
+	 * has a 'nop' instruction, which can be emulated.
+	 * So further checks can be skipped.
 	 */
 	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
-		return 0;
+		return (unsigned long)p->addr + sizeof(kprobe_opcode_t);
 
 	/*
 	 * We only support optimizing kernel addresses, but not
-- 
2.7.4


* [PATCH V3 1/4] powerpc: asm/ppc-opcode.h: introduce __PPC_SH64()
  2016-12-19 13:18 [PATCH V3 0/4] OPTPROBES for powerpc Anju T Sudhakar
  2016-12-19 13:18 ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
  2016-12-19 13:18 ` [PATCH V3 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
@ 2016-12-19 13:18 ` Anju T Sudhakar
  2016-12-19 13:18 ` [PATCH V3 2/4] powerpc: add helper to check if offset is within rel branch range Anju T Sudhakar
  3 siblings, 0 replies; 16+ messages in thread
From: Anju T Sudhakar @ 2016-12-19 13:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju

From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>

Introduce __PPC_SH64() as a 64-bit variant to encode the shift field
in some of the shift and rotate instructions operating on
doublewords. Convert some of the BPF instruction macros to use it.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/ppc-opcode.h |  1 +
 arch/powerpc/net/bpf_jit.h            | 11 +++++------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 0132831..630127b 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -306,6 +306,7 @@
 #define __PPC_WC(w)	(((w) & 0x3) << 21)
 #define __PPC_WS(w)	(((w) & 0x1f) << 11)
 #define __PPC_SH(s)	__PPC_WS(s)
+#define __PPC_SH64(s)	(__PPC_SH(s) | (((s) & 0x20) >> 4))
 #define __PPC_MB(s)	(((s) & 0x1f) << 6)
 #define __PPC_ME(s)	(((s) & 0x1f) << 1)
 #define __PPC_MB64(s)	(__PPC_MB(s) | ((s) & 0x20))
diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 89f7007..30cf03f 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -157,8 +157,7 @@
 #define PPC_SRAD(d, a, s)	EMIT(PPC_INST_SRAD | ___PPC_RA(d) |	      \
 				     ___PPC_RS(a) | ___PPC_RB(s))
 #define PPC_SRADI(d, a, i)	EMIT(PPC_INST_SRADI | ___PPC_RA(d) |	      \
-				     ___PPC_RS(a) | __PPC_SH(i) |             \
-				     (((i) & 0x20) >> 4))
+				     ___PPC_RS(a) | __PPC_SH64(i))
 #define PPC_RLWINM(d, a, i, mb, me)	EMIT(PPC_INST_RLWINM | ___PPC_RA(d) | \
 					___PPC_RS(a) | __PPC_SH(i) |	      \
 					__PPC_MB(mb) | __PPC_ME(me))
@@ -166,11 +165,11 @@
 					___PPC_RS(a) | __PPC_SH(i) |	      \
 					__PPC_MB(mb) | __PPC_ME(me))
 #define PPC_RLDICL(d, a, i, mb)		EMIT(PPC_INST_RLDICL | ___PPC_RA(d) | \
-					___PPC_RS(a) | __PPC_SH(i) |	      \
-					__PPC_MB64(mb) | (((i) & 0x20) >> 4))
+					___PPC_RS(a) | __PPC_SH64(i) |	      \
+					__PPC_MB64(mb))
 #define PPC_RLDICR(d, a, i, me)		EMIT(PPC_INST_RLDICR | ___PPC_RA(d) | \
-					___PPC_RS(a) | __PPC_SH(i) |	      \
-					__PPC_ME64(me) | (((i) & 0x20) >> 4))
+					___PPC_RS(a) | __PPC_SH64(i) |	      \
+					__PPC_ME64(me))
 
 /* slwi = rlwinm Rx, Ry, n, 0, 31-n */
 #define PPC_SLWI(d, a, i)	PPC_RLWINM(d, a, i, 0, 31-(i))
-- 
2.7.4


* [PATCH V3 2/4] powerpc: add helper to check if offset is within rel branch range
  2016-12-19 13:18 [PATCH V3 0/4] OPTPROBES for powerpc Anju T Sudhakar
                   ` (2 preceding siblings ...)
  2016-12-19 13:18 ` [PATCH V3 1/4] powerpc: asm/ppc-opcode.h: introduce __PPC_SH64() Anju T Sudhakar
@ 2016-12-19 13:18 ` Anju T Sudhakar
  3 siblings, 0 replies; 16+ messages in thread
From: Anju T Sudhakar @ 2016-12-19 13:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mpe, mahesh, mhiramat, anju

From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>

To permit the use of a relative branch instruction on powerpc, the
target address has to be relatively nearby, since the address is
specified in an immediate field (a 24-bit field) in the instruction
opcode itself. Here, nearby means within 32MB on either side of the
current instruction.

This patch adds a helper to verify whether the target offset is
within the +/- 32MB range.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/code-patching.h |  1 +
 arch/powerpc/lib/code-patching.c         | 24 +++++++++++++++++++++++-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 2015b07..75ee4f4 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -22,6 +22,7 @@
 #define BRANCH_SET_LINK	0x1
 #define BRANCH_ABSOLUTE	0x2
 
+bool is_offset_in_branch_range(long offset);
 unsigned int create_branch(const unsigned int *addr,
 			   unsigned long target, int flags);
 unsigned int create_cond_branch(const unsigned int *addr,
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index d5edbeb..f643451 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -32,6 +32,28 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags)
 	return patch_instruction(addr, create_branch(addr, target, flags));
 }
 
+bool is_offset_in_branch_range(long offset)
+{
+	/*
+	 * The powerpc branch instruction is:
+	 *
+	 *  0         6                 30   31
+	 *  +---------+----------------+---+---+
+	 *  | opcode  |     LI         |AA |LK |
+	 *  +---------+----------------+---+---+
+	 *  Where AA = 0 and LK = 0
+	 *
+	 * LI is a signed 24-bit integer. The real branch offset is computed
+	 * by: imm32 = SignExtend(LI:'0b00', 32);
+	 *
+	 * So the maximum forward branch should be:
+	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
+	 * The maximum backward branch should be:
+	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
+	 */
+	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
+}
+
 unsigned int create_branch(const unsigned int *addr,
 			   unsigned long target, int flags)
 {
@@ -43,7 +65,7 @@ unsigned int create_branch(const unsigned int *addr,
 		offset = offset - (unsigned long)addr;
 
 	/* Check we can represent the target in the instruction format */
-	if (offset < -0x2000000 || offset > 0x1fffffc || offset & 0x3)
+	if (!is_offset_in_branch_range(offset))
 		return 0;
 
 	/* Mask out the flags and target, so they don't step on each other. */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2016-12-19 13:18 ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
@ 2016-12-25  2:54   ` Masami Hiramatsu
  2017-01-04 10:25     ` Naveen N. Rao
  2017-01-30 20:43     ` Michael Ellerman
  2017-02-01 10:53   ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Michael Ellerman
  2 siblings, 1 reply; 16+ messages in thread
From: Masami Hiramatsu @ 2016-12-25  2:54 UTC (permalink / raw)
  To: Anju T Sudhakar
  Cc: linux-kernel, linuxppc-dev, ananth, naveen.n.rao, paulus, srikar,
	benh, mpe, mahesh, mhiramat

On Mon, 19 Dec 2016 18:48:24 +0530
Anju T Sudhakar <anju@linux.vnet.ibm.com> wrote:

> The detour buffer contains instructions to create an in-memory pt_regs.
> After the execution of the pre-handler, a call is made for instruction emulation.
> The NIP is determined in advance through dummy instruction emulation, and a branch
> instruction is created to the NIP at the end of the trampoline.
> 
> Instruction slot for detour buffer is allocated from the reserved area.
> For the time being, 64KB is reserved in memory for this purpose.
> 
> Instructions which can be emulated using analyse_instr() are candidates
> for optimization. Before optimization, ensure that the address range
> between the detour buffer allocated and the instruction being probed
> is within ± 32MB.
> 
> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Looks good to me :)

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

> ---
>  .../features/debug/optprobes/arch-support.txt      |   2 +-
>  arch/powerpc/Kconfig                               |   1 +
>  arch/powerpc/include/asm/kprobes.h                 |  24 +-
>  arch/powerpc/include/asm/sstep.h                   |   1 +
>  arch/powerpc/kernel/Makefile                       |   1 +
>  arch/powerpc/kernel/optprobes.c                    | 331 +++++++++++++++++++++
>  arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
>  arch/powerpc/lib/sstep.c                           |  21 ++
>  8 files changed, 514 insertions(+), 2 deletions(-)
>  create mode 100644 arch/powerpc/kernel/optprobes.c
>  create mode 100644 arch/powerpc/kernel/optprobes_head.S
> 
> diff --git a/Documentation/features/debug/optprobes/arch-support.txt b/Documentation/features/debug/optprobes/arch-support.txt
> index b8999d8..45bc99d 100644
> --- a/Documentation/features/debug/optprobes/arch-support.txt
> +++ b/Documentation/features/debug/optprobes/arch-support.txt
> @@ -27,7 +27,7 @@
>      |       nios2: | TODO |
>      |    openrisc: | TODO |
>      |      parisc: | TODO |
> -    |     powerpc: | TODO |
> +    |     powerpc: |  ok  |
>      |        s390: | TODO |
>      |       score: | TODO |
>      |          sh: | TODO |
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 65fba4c..f7e9296 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -98,6 +98,7 @@ config PPC
>  	select HAVE_IOREMAP_PROT
>  	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
>  	select HAVE_KPROBES
> +	select HAVE_OPTPROBES if PPC64
>  	select HAVE_ARCH_KGDB
>  	select HAVE_KRETPROBES
>  	select HAVE_ARCH_TRACEHOOK
> diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
> index 2c9759bd..0cf640b 100644
> --- a/arch/powerpc/include/asm/kprobes.h
> +++ b/arch/powerpc/include/asm/kprobes.h
> @@ -38,7 +38,23 @@ struct pt_regs;
>  struct kprobe;
>  
>  typedef ppc_opcode_t kprobe_opcode_t;
> -#define MAX_INSN_SIZE 1
> +
> +extern kprobe_opcode_t optinsn_slot;
> +
> +/* Optinsn template address */
> +extern kprobe_opcode_t optprobe_template_entry[];
> +extern kprobe_opcode_t optprobe_template_op_address[];
> +extern kprobe_opcode_t optprobe_template_call_handler[];
> +extern kprobe_opcode_t optprobe_template_insn[];
> +extern kprobe_opcode_t optprobe_template_call_emulate[];
> +extern kprobe_opcode_t optprobe_template_ret[];
> +extern kprobe_opcode_t optprobe_template_end[];
> +
> +/* Fixed instruction size for powerpc */
> +#define MAX_INSN_SIZE		1
> +#define MAX_OPTIMIZED_LENGTH	sizeof(kprobe_opcode_t)	/* 4 bytes */
> +#define MAX_OPTINSN_SIZE	(optprobe_template_end - optprobe_template_entry)
> +#define RELATIVEJUMP_SIZE	sizeof(kprobe_opcode_t)	/* 4 bytes */
>  
>  #ifdef PPC64_ELF_ABI_v2
>  /* PPC64 ABIv2 needs local entry point */
> @@ -124,6 +140,12 @@ struct kprobe_ctlblk {
>  	struct prev_kprobe prev_kprobe;
>  };
>  
> +struct arch_optimized_insn {
> +	kprobe_opcode_t copied_insn[1];
> +	/* detour buffer */
> +	kprobe_opcode_t *insn;
> +};
> +
>  extern int kprobe_exceptions_notify(struct notifier_block *self,
>  					unsigned long val, void *data);
>  extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
> diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
> index d3a42cc..f7ad425 100644
> --- a/arch/powerpc/include/asm/sstep.h
> +++ b/arch/powerpc/include/asm/sstep.h
> @@ -87,3 +87,4 @@ struct instruction_op {
>  
>  extern int analyse_instr(struct instruction_op *op, struct pt_regs *regs,
>  			 unsigned int instr);
> +extern bool is_conditional_branch(unsigned int instr);
> diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
> index 1925341..54f0f47 100644
> --- a/arch/powerpc/kernel/Makefile
> +++ b/arch/powerpc/kernel/Makefile
> @@ -98,6 +98,7 @@ obj-$(CONFIG_KGDB)		+= kgdb.o
>  obj-$(CONFIG_BOOTX_TEXT)	+= btext.o
>  obj-$(CONFIG_SMP)		+= smp.o
>  obj-$(CONFIG_KPROBES)		+= kprobes.o
> +obj-$(CONFIG_OPTPROBES)		+= optprobes.o optprobes_head.o
>  obj-$(CONFIG_UPROBES)		+= uprobes.o
>  obj-$(CONFIG_PPC_UDBG_16550)	+= legacy_serial.o udbg_16550.o
>  obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
> new file mode 100644
> index 0000000..fb5e62d
> --- /dev/null
> +++ b/arch/powerpc/kernel/optprobes.c
> @@ -0,0 +1,331 @@
> +/*
> + * Code for Kernel probes Jump optimization.
> + *
> + * Copyright 2016, Anju T, IBM Corp.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version
> + * 2 of the License, or (at your option) any later version.
> + */
> +
> +#include <linux/kprobes.h>
> +#include <linux/jump_label.h>
> +#include <linux/types.h>
> +#include <linux/slab.h>
> +#include <linux/list.h>
> +#include <asm/kprobes.h>
> +#include <asm/ptrace.h>
> +#include <asm/cacheflush.h>
> +#include <asm/code-patching.h>
> +#include <asm/sstep.h>
> +#include <asm/ppc-opcode.h>
> +
> +#define TMPL_CALL_HDLR_IDX	\
> +	(optprobe_template_call_handler - optprobe_template_entry)
> +#define TMPL_EMULATE_IDX	\
> +	(optprobe_template_call_emulate - optprobe_template_entry)
> +#define TMPL_RET_IDX		\
> +	(optprobe_template_ret - optprobe_template_entry)
> +#define TMPL_OP_IDX		\
> +	(optprobe_template_op_address - optprobe_template_entry)
> +#define TMPL_INSN_IDX		\
> +	(optprobe_template_insn - optprobe_template_entry)
> +#define TMPL_END_IDX		\
> +	(optprobe_template_end - optprobe_template_entry)
> +
> +DEFINE_INSN_CACHE_OPS(ppc_optinsn);
> +
> +static bool insn_page_in_use;
> +
> +static void *__ppc_alloc_insn_page(void)
> +{
> +	if (insn_page_in_use)
> +		return NULL;
> +	insn_page_in_use = true;
> +	return &optinsn_slot;
> +}
> +
> +static void __ppc_free_insn_page(void *page __maybe_unused)
> +{
> +	insn_page_in_use = false;
> +}
> +
> +struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
> +	.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
> +	.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
> +	/* insn_size initialized later */
> +	.alloc = __ppc_alloc_insn_page,
> +	.free = __ppc_free_insn_page,
> +	.nr_garbage = 0,
> +};
> +
> +/*
> + * Check if we can optimize this probe. Returns NIP post-emulation if this can
> + * be optimized and 0 otherwise.
> + */
> +static unsigned long can_optimize(struct kprobe *p)
> +{
> +	struct pt_regs regs;
> +	struct instruction_op op;
> +	unsigned long nip = 0;
> +
> +	/*
> +	 * The kprobe placed on kretprobe_trampoline during boot
> +	 * is not optimized for now.
> +	 *
> +	 * TODO: Optimize kprobe in kretprobe_trampoline
> +	 */
> +	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
> +		return 0;
> +
> +	/*
> +	 * We only support optimizing kernel addresses, but not
> +	 * module addresses.
> +	 */
> +	if (!is_kernel_addr((unsigned long)p->addr))
> +		return 0;
> +
> +	regs.nip = (unsigned long)p->addr;
> +	regs.trap = 0x0;
> +	regs.msr = MSR_KERNEL;
> +
> +	/*
> +	 * Ensure that the instruction is not a conditional branch,
> +	 * and that it can be emulated.
> +	 */
> +	if (!is_conditional_branch(*p->ainsn.insn) &&
> +			analyse_instr(&op, &regs, *p->ainsn.insn))
> +		nip = regs.nip;
> +
> +	return nip;
> +}
> +
> +static void optimized_callback(struct optimized_kprobe *op,
> +			       struct pt_regs *regs)
> +{
> +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +	unsigned long flags;
> +
> +	/* This is possible if op is under delayed unoptimizing */
> +	if (kprobe_disabled(&op->kp))
> +		return;
> +
> +	local_irq_save(flags);
> +
> +	if (kprobe_running()) {
> +		kprobes_inc_nmissed_count(&op->kp);
> +	} else {
> +		__this_cpu_write(current_kprobe, &op->kp);
> +		regs->nip = (unsigned long)op->kp.addr;
> +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +		opt_pre_handler(&op->kp, regs);
> +		__this_cpu_write(current_kprobe, NULL);
> +	}
> +	local_irq_restore(flags);
> +}
> +NOKPROBE_SYMBOL(optimized_callback);
> +
> +void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	if (op->optinsn.insn) {
> +		free_ppc_optinsn_slot(op->optinsn.insn, 1);
> +		op->optinsn.insn = NULL;
> +	}
> +}
> +
> +/*
> + * emulate_step() requires insn to be emulated as
> + * second parameter. Load register 'r4' with the
> + * instruction.
> + */
> +void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
> +{
> +	/* addis r4,0,(insn)@h */
> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(4) |
> +		  ((val >> 16) & 0xffff);
> +
> +	/* ori r4,r4,(insn)@l */
> +	*addr = PPC_INST_ORI | ___PPC_RA(4) | ___PPC_RS(4) |
> +		(val & 0xffff);
> +}
> +
> +/*
> + * Generate instructions to load provided immediate 64-bit value
> + * to register 'r3' and patch these instructions at 'addr'.
> + */
> +void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
> +{
> +	/* lis r3,(op)@highest */
> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(3) |
> +		  ((val >> 48) & 0xffff);
> +
> +	/* ori r3,r3,(op)@higher */
> +	*addr++ = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
> +		  ((val >> 32) & 0xffff);
> +
> +	/* rldicr r3,r3,32,31 */
> +	*addr++ = PPC_INST_RLDICR | ___PPC_RA(3) | ___PPC_RS(3) |
> +		  __PPC_SH64(32) | __PPC_ME64(31);
> +
> +	/* oris r3,r3,(op)@h */
> +	*addr++ = PPC_INST_ORIS | ___PPC_RA(3) | ___PPC_RS(3) |
> +		  ((val >> 16) & 0xffff);
> +
> +	/* ori r3,r3,(op)@l */
> +	*addr = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
> +		(val & 0xffff);
> +}
> +
> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
> +{
> +	kprobe_opcode_t *buff, branch_op_callback, branch_emulate_step;
> +	kprobe_opcode_t *op_callback_addr, *emulate_step_addr;
> +	long b_offset;
> +	unsigned long nip;
> +
> +	kprobe_ppc_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
> +
> +	nip = can_optimize(p);
> +	if (!nip)
> +		return -EILSEQ;
> +
> +	/* Allocate instruction slot for detour buffer */
> +	buff = get_ppc_optinsn_slot();
> +	if (!buff)
> +		return -ENOMEM;
> +
> +	/*
> +	 * OPTPROBE uses 'b' instruction to branch to optinsn.insn.
> +	 *
> +	 * The target address has to be relatively nearby, to permit use of
> +	 * a branch instruction on powerpc, because the address is specified
> +	 * in an immediate field in the instruction opcode itself, i.e. 24
> +	 * bits in the opcode specify the address. Therefore the address
> +	 * should be within 32MB on either side of the current instruction.
> +	 */
> +	b_offset = (unsigned long)buff - (unsigned long)p->addr;
> +	if (!is_offset_in_branch_range(b_offset))
> +		goto error;
> +
> +	/* Check if the return address is also within 32MB range */
> +	b_offset = (unsigned long)(buff + TMPL_RET_IDX) -
> +			(unsigned long)nip;
> +	if (!is_offset_in_branch_range(b_offset))
> +		goto error;
> +
> +	/* Setup template */
> +	memcpy(buff, optprobe_template_entry,
> +			TMPL_END_IDX * sizeof(kprobe_opcode_t));
> +
> +	/*
> +	 * Fixup the template with instructions to:
> +	 * 1. load the address of the actual probepoint
> +	 */
> +	patch_imm64_load_insns((unsigned long)op, buff + TMPL_OP_IDX);
> +
> +	/*
> +	 * 2. branch to optimized_callback() and emulate_step()
> +	 */
> +	kprobe_lookup_name("optimized_callback", op_callback_addr);
> +	kprobe_lookup_name("emulate_step", emulate_step_addr);
> +	if (!op_callback_addr || !emulate_step_addr) {
> +		WARN(1, "kprobe_lookup_name() failed\n");
> +		goto error;
> +	}
> +
> +	branch_op_callback = create_branch((unsigned int *)buff + TMPL_CALL_HDLR_IDX,
> +				(unsigned long)op_callback_addr,
> +				BRANCH_SET_LINK);
> +
> +	branch_emulate_step = create_branch((unsigned int *)buff + TMPL_EMULATE_IDX,
> +				(unsigned long)emulate_step_addr,
> +				BRANCH_SET_LINK);
> +
> +	if (!branch_op_callback || !branch_emulate_step)
> +		goto error;
> +
> +	buff[TMPL_CALL_HDLR_IDX] = branch_op_callback;
> +	buff[TMPL_EMULATE_IDX] = branch_emulate_step;
> +
> +	/*
> +	 * 3. load instruction to be emulated into relevant register, and
> +	 */
> +	patch_imm32_load_insns(*p->ainsn.insn, buff + TMPL_INSN_IDX);
> +
> +	/*
> +	 * 4. branch back from trampoline
> +	 */
> +	buff[TMPL_RET_IDX] = create_branch((unsigned int *)buff + TMPL_RET_IDX,
> +				(unsigned long)nip, 0);
> +
> +	flush_icache_range((unsigned long)buff,
> +			   (unsigned long)(&buff[TMPL_END_IDX]));
> +
> +	op->optinsn.insn = buff;
> +
> +	return 0;
> +
> + error:
> +	free_ppc_optinsn_slot(buff, 0);
> +	return -ERANGE;
> +
> +}
> +
> +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
> +{
> +	return optinsn->insn != NULL;
> +}
> +
> +/*
> + * On powerpc, Optprobes always replaces one instruction (4 bytes
> + * aligned and 4 bytes long). It is impossible to encounter another
> + * kprobe in this address range. So always return 0.
> + */
> +int arch_check_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	return 0;
> +}
> +
> +void arch_optimize_kprobes(struct list_head *oplist)
> +{
> +	struct optimized_kprobe *op;
> +	struct optimized_kprobe *tmp;
> +
> +	list_for_each_entry_safe(op, tmp, oplist, list) {
> +		/*
> +		 * Backup instructions which will be replaced
> +		 * by jump address
> +		 */
> +		memcpy(op->optinsn.copied_insn, op->kp.addr,
> +					       RELATIVEJUMP_SIZE);
> +		patch_instruction(op->kp.addr,
> +			create_branch((unsigned int *)op->kp.addr,
> +				      (unsigned long)op->optinsn.insn, 0));
> +		list_del_init(&op->list);
> +	}
> +}
> +
> +void arch_unoptimize_kprobe(struct optimized_kprobe *op)
> +{
> +	arch_arm_kprobe(&op->kp);
> +}
> +
> +void arch_unoptimize_kprobes(struct list_head *oplist,
> +			     struct list_head *done_list)
> +{
> +	struct optimized_kprobe *op;
> +	struct optimized_kprobe *tmp;
> +
> +	list_for_each_entry_safe(op, tmp, oplist, list) {
> +		arch_unoptimize_kprobe(op);
> +		list_move(&op->list, done_list);
> +	}
> +}
> +
> +int arch_within_optimized_kprobe(struct optimized_kprobe *op,
> +				 unsigned long addr)
> +{
> +	return ((unsigned long)op->kp.addr <= addr &&
> +		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
> +}
> diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
> new file mode 100644
> index 0000000..c86976b
> --- /dev/null
> +++ b/arch/powerpc/kernel/optprobes_head.S
> @@ -0,0 +1,135 @@
> +/*
> + * Code to prepare detour buffer for optprobes in Kernel.
> + *
> + * Copyright 2016, Anju T, IBM Corp.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version
> + * 2 of the License, or (at your option) any later version.
> + */
> +
> +#include <asm/ppc_asm.h>
> +#include <asm/ptrace.h>
> +#include <asm/asm-offsets.h>
> +
> +#define	OPT_SLOT_SIZE	65536
> +
> +	.balign	4
> +
> +	/*
> +	 * Reserve an area to allocate slots for detour buffer.
> +	 * This is part of .text section (rather than vmalloc area)
> +	 * as this needs to be within 32MB of the probed address.
> +	 */
> +	.global optinsn_slot
> +optinsn_slot:
> +	.space	OPT_SLOT_SIZE
> +
> +	/*
> +	 * Optprobe template:
> +	 * This template gets copied into one of the slots in optinsn_slot
> +	 * and gets fixed up with real optprobe structures et al.
> +	 */
> +	.global optprobe_template_entry
> +optprobe_template_entry:
> +	/* Create an in-memory pt_regs */
> +	stdu	r1,-INT_FRAME_SIZE(r1)
> +	SAVE_GPR(0,r1)
> +	/* Save the previous SP into stack */
> +	addi	r0,r1,INT_FRAME_SIZE
> +	std	r0,GPR1(r1)
> +	SAVE_10GPRS(2,r1)
> +	SAVE_10GPRS(12,r1)
> +	SAVE_10GPRS(22,r1)
> +	/* Save SPRS */
> +	mfmsr	r5
> +	std	r5,_MSR(r1)
> +	li	r5,0x700
> +	std	r5,_TRAP(r1)
> +	li	r5,0
> +	std	r5,ORIG_GPR3(r1)
> +	std	r5,RESULT(r1)
> +	mfctr	r5
> +	std	r5,_CTR(r1)
> +	mflr	r5
> +	std	r5,_LINK(r1)
> +	mfspr	r5,SPRN_XER
> +	std	r5,_XER(r1)
> +	mfcr	r5
> +	std	r5,_CCR(r1)
> +	lbz     r5,PACASOFTIRQEN(r13)
> +	std     r5,SOFTE(r1)
> +	mfdar	r5
> +	std	r5,_DAR(r1)
> +	mfdsisr	r5
> +	std	r5,_DSISR(r1)
> +
> +	.global optprobe_template_op_address
> +optprobe_template_op_address:
> +	/*
> +	 * Parameters to optimized_callback():
> +	 * 1. optimized_kprobe structure in r3
> +	 */
> +	nop
> +	nop
> +	nop
> +	nop
> +	nop
> +	/* 2. pt_regs pointer in r4 */
> +	addi	r4,r1,STACK_FRAME_OVERHEAD
> +
> +	.global optprobe_template_call_handler
> +optprobe_template_call_handler:
> +	/* Branch to optimized_callback() */
> +	nop
> +
> +	/*
> +	 * Parameters for instruction emulation:
> +	 * 1. Pass SP in register r3.
> +	 */
> +	addi	r3,r1,STACK_FRAME_OVERHEAD
> +
> +	.global optprobe_template_insn
> +optprobe_template_insn:
> +	/* 2. Pass the instruction to be emulated in r4 */
> +	nop
> +	nop
> +
> +	.global optprobe_template_call_emulate
> +optprobe_template_call_emulate:
> +	/* Branch to emulate_step()  */
> +	nop
> +
> +	/*
> +	 * All done.
> +	 * Now, restore the registers...
> +	 */
> +	ld	r5,_MSR(r1)
> +	mtmsr	r5
> +	ld	r5,_CTR(r1)
> +	mtctr	r5
> +	ld	r5,_LINK(r1)
> +	mtlr	r5
> +	ld	r5,_XER(r1)
> +	mtxer	r5
> +	ld	r5,_CCR(r1)
> +	mtcr	r5
> +	ld	r5,_DAR(r1)
> +	mtdar	r5
> +	ld	r5,_DSISR(r1)
> +	mtdsisr	r5
> +	REST_GPR(0,r1)
> +	REST_10GPRS(2,r1)
> +	REST_10GPRS(12,r1)
> +	REST_10GPRS(22,r1)
> +	/* Restore the previous SP */
> +	addi	r1,r1,INT_FRAME_SIZE
> +
> +	.global optprobe_template_ret
> +optprobe_template_ret:
> +	/* ... and jump back from trampoline */
> +	nop
> +
> +	.global optprobe_template_end
> +optprobe_template_end:
> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
> index 3362299..895dcdd 100644
> --- a/arch/powerpc/lib/sstep.c
> +++ b/arch/powerpc/lib/sstep.c
> @@ -618,6 +618,27 @@ static int __kprobes trap_compare(long v1, long v2)
>  }
>  
>  /*
> + * Helper to check if a given instruction is a conditional branch
> + * Derived from the conditional checks in analyse_instr()
> + */
> +bool __kprobes is_conditional_branch(unsigned int instr)
> +{
> +	unsigned int opcode = instr >> 26;
> +
> +	if (opcode == 16)	/* bc, bca, bcl, bcla */
> +		return true;
> +	if (opcode == 19) {
> +		switch ((instr >> 1) & 0x3ff) {
> +		case 16:	/* bclr, bclrl */
> +		case 528:	/* bcctr, bcctrl */
> +		case 560:	/* bctar, bctarl */
> +			return true;
> +		}
> +	}
> +	return false;
> +}
> +
> +/*
>   * Elements of 32-bit rotate and mask instructions.
>   */
>  #define MASK32(mb, me)	((0xffffffffUL >> (mb)) + \
> -- 
> 2.7.4
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2016-12-25  2:54   ` Masami Hiramatsu
@ 2017-01-04 10:25     ` Naveen N. Rao
  0 siblings, 0 replies; 16+ messages in thread
From: Naveen N. Rao @ 2017-01-04 10:25 UTC (permalink / raw)
  To: Masami Hiramatsu, Benjamin Herrenschmidt, Michael Ellerman
  Cc: Anju T Sudhakar, linux-kernel, linuxppc-dev, ananth, paulus,
	srikar, mahesh

On 2016/12/25 11:54AM, Masami Hiramatsu wrote:
> On Mon, 19 Dec 2016 18:48:24 +0530
> Anju T Sudhakar <anju@linux.vnet.ibm.com> wrote:
> 
> > Detour buffer contains instructions to create an in memory pt_regs.
> > After the execution of the pre-handler, a call is made for instruction emulation.
> > The NIP is determined in advanced through dummy instruction emulation and a branch
> > instruction is created to the NIP at the end of the trampoline.
> > 
> > Instruction slot for detour buffer is allocated from the reserved area.
> > For the time being, 64KB is reserved in memory for this purpose.
> > 
> > Instructions which can be emulated using analyse_instr() are suppliants
> > for optimization. Before optimization ensure that the address range
> > between the detour buffer allocated and the instruction being probed
> > is within ± 32MB.
> > 
> > Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> > Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> 
> Looks good to me :)
> 
> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks, Masami!

Ben/Michael,
Can you please take a look and let us know your thoughts on this?

- Naveen

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2016-12-19 13:18 ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
@ 2017-01-30 20:43     ` Michael Ellerman
  2017-01-30 20:43     ` Michael Ellerman
  2017-02-01 10:53   ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Michael Ellerman
  2 siblings, 0 replies; 16+ messages in thread
From: Michael Ellerman @ 2017-01-30 20:43 UTC (permalink / raw)
  To: Anju T Sudhakar, linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mahesh, mhiramat, anju

Anju T Sudhakar <anju@linux.vnet.ibm.com> writes:

> Detour buffer contains instructions to create an in memory pt_regs.
> After the execution of the pre-handler, a call is made for instruction emulation.
> The NIP is determined in advanced through dummy instruction emulation and a branch
> instruction is created to the NIP at the end of the trampoline.
>
> Instruction slot for detour buffer is allocated from the reserved area.
> For the time being, 64KB is reserved in memory for this purpose.
>
> Instructions which can be emulated using analyse_instr() are suppliants
> for optimization. Before optimization ensure that the address range
> between the detour buffer allocated and the instruction being probed
> is within ± 32MB.
>
> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  .../features/debug/optprobes/arch-support.txt      |   2 +-
>  arch/powerpc/Kconfig                               |   1 +
>  arch/powerpc/include/asm/kprobes.h                 |  24 +-
>  arch/powerpc/include/asm/sstep.h                   |   1 +
>  arch/powerpc/kernel/Makefile                       |   1 +
>  arch/powerpc/kernel/optprobes.c                    | 331 +++++++++++++++++++++
>  arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
>  arch/powerpc/lib/sstep.c                           |  21 ++
>  8 files changed, 514 insertions(+), 2 deletions(-)
>  create mode 100644 arch/powerpc/kernel/optprobes.c
>  create mode 100644 arch/powerpc/kernel/optprobes_head.S

This breaks the pseries_defconfig (at least) build:

  In file included from ../include/linux/kprobes.h:45:0,
                   from ../arch/powerpc/kernel/optprobes.c:12:
  ../arch/powerpc/kernel/optprobes.c: In function ‘arch_prepare_optimized_kprobe’:
  ../arch/powerpc/include/asm/kprobes.h:79:16: error: ‘MODULE_NAME_LEN’ undeclared (first use in this function)
    char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
                  ^
  ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("optimized_callback", op_callback_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:79:16: note: each undeclared identifier is reported only once for each function it appears in
    char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
                  ^
  ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("optimized_callback", op_callback_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:82:14: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
    if ((modsym = strchr(name, ':')) != NULL) {   \
                ^
  ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("optimized_callback", op_callback_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:79:7: error: unused variable ‘dot_name’ [-Werror=unused-variable]
    char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
         ^
  ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("optimized_callback", op_callback_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:82:14: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
    if ((modsym = strchr(name, ':')) != NULL) {   \
                ^
  ../arch/powerpc/kernel/optprobes.c:231:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("emulate_step", emulate_step_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:79:7: error: unused variable ‘dot_name’ [-Werror=unused-variable]
    char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
         ^
  ../arch/powerpc/kernel/optprobes.c:231:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("emulate_step", emulate_step_addr);
    ^~~~~~~~~~~~~~~~~~
  cc1: all warnings being treated as errors
  make[2]: *** [arch/powerpc/kernel/optprobes.o] Error 1
  make[2]: *** Waiting for unfinished jobs....
  make[1]: *** [arch/powerpc/kernel] Error 2
  make[1]: *** Waiting for unfinished jobs....
  make: *** [sub-make] Error 2


This may not be your bug, but your patch exposes it unfortunately.

cheers

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
@ 2017-01-30 20:43     ` Michael Ellerman
  0 siblings, 0 replies; 16+ messages in thread
From: Michael Ellerman @ 2017-01-30 20:43 UTC (permalink / raw)
  To: Anju T Sudhakar, linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mahesh, mhiramat, anju

Anju T Sudhakar <anju@linux.vnet.ibm.com> writes:

> Detour buffer contains instructions to create an in memory pt_regs.
> After the execution of the pre-handler, a call is made for instruction em=
ulation.
> The NIP is determined in advanced through dummy instruction emulation and=
 a branch
> instruction is created to the NIP at the end of the trampoline.
>
> Instruction slot for detour buffer is allocated from the reserved area.
> For the time being, 64KB is reserved in memory for this purpose.
>
> Instructions which can be emulated using analyse_instr() are suppliants
> for optimization. Before optimization ensure that the address range
> between the detour buffer allocated and the instruction being probed
> is within =C2=B1 32MB.
>
> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  .../features/debug/optprobes/arch-support.txt      |   2 +-
>  arch/powerpc/Kconfig                               |   1 +
>  arch/powerpc/include/asm/kprobes.h                 |  24 +-
>  arch/powerpc/include/asm/sstep.h                   |   1 +
>  arch/powerpc/kernel/Makefile                       |   1 +
>  arch/powerpc/kernel/optprobes.c                    | 331 +++++++++++++++=
++++++
>  arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
>  arch/powerpc/lib/sstep.c                           |  21 ++
>  8 files changed, 514 insertions(+), 2 deletions(-)
>  create mode 100644 arch/powerpc/kernel/optprobes.c
>  create mode 100644 arch/powerpc/kernel/optprobes_head.S

This breaks the pseries_defconfig (at least) build:

  In file included from ../include/linux/kprobes.h:45:0,
                   from ../arch/powerpc/kernel/optprobes.c:12:
  ../arch/powerpc/kernel/optprobes.c: In function ‘arch_prepare_optimized_kprobe’:
  ../arch/powerpc/include/asm/kprobes.h:79:16: error: ‘MODULE_NAME_LEN’ undeclared (first use in this function)
    char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
                  ^
  ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("optimized_callback", op_callback_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:79:16: note: each undeclared identifier is reported only once for each function it appears in
    char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
                  ^
  ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("optimized_callback", op_callback_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:82:14: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
    if ((modsym = strchr(name, ':')) != NULL) {   \
                ^
  ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("optimized_callback", op_callback_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:79:7: error: unused variable ‘dot_name’ [-Werror=unused-variable]
    char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
         ^
  ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("optimized_callback", op_callback_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:82:14: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
    if ((modsym = strchr(name, ':')) != NULL) {   \
                ^
  ../arch/powerpc/kernel/optprobes.c:231:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("emulate_step", emulate_step_addr);
    ^~~~~~~~~~~~~~~~~~
  ../arch/powerpc/include/asm/kprobes.h:79:7: error: unused variable ‘dot_name’ [-Werror=unused-variable]
    char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
         ^
  ../arch/powerpc/kernel/optprobes.c:231:2: note: in expansion of macro ‘kprobe_lookup_name’
    kprobe_lookup_name("emulate_step", emulate_step_addr);
    ^~~~~~~~~~~~~~~~~~
  cc1: all warnings being treated as errors
  make[2]: *** [arch/powerpc/kernel/optprobes.o] Error 1
  make[2]: *** Waiting for unfinished jobs....
  make[1]: *** [arch/powerpc/kernel] Error 2
  make[1]: *** Waiting for unfinished jobs....
  make: *** [sub-make] Error 2


This may not be your bug, but your patch exposes it unfortunately.

cheers

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2017-01-30 20:43     ` Michael Ellerman
  (?)
@ 2017-01-31  7:55     ` Naveen N. Rao
  -1 siblings, 0 replies; 16+ messages in thread
From: Naveen N. Rao @ 2017-01-31  7:55 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Anju T Sudhakar, linux-kernel, linuxppc-dev, ananth, paulus,
	srikar, benh, mahesh, mhiramat

On 2017/01/31 07:43AM, Michael Ellerman wrote:
> Anju T Sudhakar <anju@linux.vnet.ibm.com> writes:
> 
> > Detour buffer contains instructions to create an in memory pt_regs.
> > After the execution of the pre-handler, a call is made for instruction emulation.
> > The NIP is determined in advanced through dummy instruction emulation and a branch
> > instruction is created to the NIP at the end of the trampoline.
> >
> > Instruction slot for detour buffer is allocated from the reserved area.
> > For the time being, 64KB is reserved in memory for this purpose.
> >
> > Instructions which can be emulated using analyse_instr() are suppliants
> > for optimization. Before optimization ensure that the address range
> > between the detour buffer allocated and the instruction being probed
> > is within ± 32MB.
> >
> > Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> > Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> > ---
> >  .../features/debug/optprobes/arch-support.txt      |   2 +-
> >  arch/powerpc/Kconfig                               |   1 +
> >  arch/powerpc/include/asm/kprobes.h                 |  24 +-
> >  arch/powerpc/include/asm/sstep.h                   |   1 +
> >  arch/powerpc/kernel/Makefile                       |   1 +
> >  arch/powerpc/kernel/optprobes.c                    | 331 +++++++++++++++++++++
> >  arch/powerpc/kernel/optprobes_head.S               | 135 +++++++++
> >  arch/powerpc/lib/sstep.c                           |  21 ++
> >  8 files changed, 514 insertions(+), 2 deletions(-)
> >  create mode 100644 arch/powerpc/kernel/optprobes.c
> >  create mode 100644 arch/powerpc/kernel/optprobes_head.S
> 
> This breaks the pseries_defconfig (at least) build:
> 
>   In file included from ../include/linux/kprobes.h:45:0,
>                    from ../arch/powerpc/kernel/optprobes.c:12:
>   ../arch/powerpc/kernel/optprobes.c: In function ‘arch_prepare_optimized_kprobe’:
>   ../arch/powerpc/include/asm/kprobes.h:79:16: error: ‘MODULE_NAME_LEN’ undeclared (first use in this function)
>     char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
>                   ^
>   ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
>     kprobe_lookup_name("optimized_callback", op_callback_addr);
>     ^~~~~~~~~~~~~~~~~~
>   ../arch/powerpc/include/asm/kprobes.h:79:16: note: each undeclared identifier is reported only once for each function it appears in
>     char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
>                   ^
>   ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
>     kprobe_lookup_name("optimized_callback", op_callback_addr);
>     ^~~~~~~~~~~~~~~~~~
>   ../arch/powerpc/include/asm/kprobes.h:82:14: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
>     if ((modsym = strchr(name, ':')) != NULL) {   \
>                 ^
>   ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
>     kprobe_lookup_name("optimized_callback", op_callback_addr);
>     ^~~~~~~~~~~~~~~~~~
>   ../arch/powerpc/include/asm/kprobes.h:79:7: error: unused variable ‘dot_name’ [-Werror=unused-variable]
>     char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
>          ^
>   ../arch/powerpc/kernel/optprobes.c:230:2: note: in expansion of macro ‘kprobe_lookup_name’
>     kprobe_lookup_name("optimized_callback", op_callback_addr);
>     ^~~~~~~~~~~~~~~~~~
>   ../arch/powerpc/include/asm/kprobes.h:82:14: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
>     if ((modsym = strchr(name, ':')) != NULL) {   \
>                 ^
>   ../arch/powerpc/kernel/optprobes.c:231:2: note: in expansion of macro ‘kprobe_lookup_name’
>     kprobe_lookup_name("emulate_step", emulate_step_addr);
>     ^~~~~~~~~~~~~~~~~~
>   ../arch/powerpc/include/asm/kprobes.h:79:7: error: unused variable ‘dot_name’ [-Werror=unused-variable]
>     char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];  \
>          ^
>   ../arch/powerpc/kernel/optprobes.c:231:2: note: in expansion of macro ‘kprobe_lookup_name’
>     kprobe_lookup_name("emulate_step", emulate_step_addr);
>     ^~~~~~~~~~~~~~~~~~
>   cc1: all warnings being treated as errors
>   make[2]: *** [arch/powerpc/kernel/optprobes.o] Error 1
>   make[2]: *** Waiting for unfinished jobs....
>   make[1]: *** [arch/powerpc/kernel] Error 2
>   make[1]: *** Waiting for unfinished jobs....
>   make: *** [sub-make] Error 2
> 
> 
> This may not be your bug, but your patch exposes it unfortunately.

Sorry for the trouble, we should have caught this.

- Naveen

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH] powerpc: kprobes: fixes for kprobe_lookup_name on BE
  2017-01-30 20:43     ` Michael Ellerman
  (?)
  (?)
@ 2017-01-31  7:59     ` Naveen N. Rao
  -1 siblings, 0 replies; 16+ messages in thread
From: Naveen N. Rao @ 2017-01-31  7:59 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Anju T Sudhakar, linux-kernel, linuxppc-dev, ananth, mhiramat, mahesh

Fix two issues with kprobes.h on BE which were exposed by the optprobes
work:
- one, a missing include of linux/module.h for MODULE_NAME_LEN -- this
didn't show up previously since the only users of kprobe_lookup_name
were in kprobes.c, which included linux/module.h through other headers,
and
- two, a missing const qualifier for a local variable which ends up
referring to a string literal. Again, this is unique to how
kprobe_lookup_name is being invoked in optprobes.c.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/kprobes.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
index 5fc728af9260..d821835ade86 100644
--- a/arch/powerpc/include/asm/kprobes.h
+++ b/arch/powerpc/include/asm/kprobes.h
@@ -29,6 +29,7 @@
 #include <linux/types.h>
 #include <linux/ptrace.h>
 #include <linux/percpu.h>
+#include <linux/module.h>
 #include <asm/probes.h>
 #include <asm/code-patching.h>
 
@@ -77,7 +78,7 @@ extern kprobe_opcode_t optprobe_template_end[];
 #define kprobe_lookup_name(name, addr)					\
 {									\
 	char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN];		\
-	char *modsym;							\
+	const char *modsym;							\
 	bool dot_appended = false;					\
 	if ((modsym = strchr(name, ':')) != NULL) {			\
 		modsym++;						\
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2016-12-19 13:18 ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
  2016-12-25  2:54   ` Masami Hiramatsu
  2017-01-30 20:43     ` Michael Ellerman
@ 2017-02-01 10:53   ` Michael Ellerman
  2017-02-03 19:39     ` Naveen N. Rao
  2017-02-08  5:37     ` Anju T Sudhakar
  2 siblings, 2 replies; 16+ messages in thread
From: Michael Ellerman @ 2017-02-01 10:53 UTC (permalink / raw)
  To: Anju T Sudhakar, linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mahesh, mhiramat, anju

Anju T Sudhakar <anju@linux.vnet.ibm.com> writes:

> Detour buffer contains instructions to create an in memory pt_regs.
> After the execution of the pre-handler, a call is made for instruction emulation.
> The NIP is determined in advanced through dummy instruction emulation and a branch
> instruction is created to the NIP at the end of the trampoline.

That's good detail, but it's hard to follow for someone who isn't
familiar with kprobes/optprobes etc. You don't even tell us what an
optprobe is :)

So can you provide a bit more background before diving into the specific
details.

> Instruction slot for detour buffer is allocated from the reserved area.
> For the time being, 64KB is reserved in memory for this purpose.
>
> Instructions which can be emulated using analyse_instr() are suppliants
                                                               ^
                                                               candidates ?
                                                               
> diff --git a/Documentation/features/debug/optprobes/arch-support.txt b/Documentation/features/debug/optprobes/arch-support.txt
> index b8999d8..45bc99d 100644
> --- a/Documentation/features/debug/optprobes/arch-support.txt
> +++ b/Documentation/features/debug/optprobes/arch-support.txt
> @@ -27,7 +27,7 @@
>      |       nios2: | TODO |
>      |    openrisc: | TODO |
>      |      parisc: | TODO |
> -    |     powerpc: | TODO |
> +    |     powerpc: |  ok  |

We don't support them for modules yet, so maybe it's premature to flip
that?

> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
> new file mode 100644
> index 0000000..fb5e62d
> --- /dev/null
> +++ b/arch/powerpc/kernel/optprobes.c
> @@ -0,0 +1,331 @@
> +/*
> + * Code for Kernel probes Jump optimization.
> + *
> + * Copyright 2016, Anju T, IBM Corp.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version
> + * 2 of the License, or (at your option) any later version.
> + */
> +
> +#include <linux/kprobes.h>
> +#include <linux/jump_label.h>
> +#include <linux/types.h>
> +#include <linux/slab.h>
> +#include <linux/list.h>
> +#include <asm/kprobes.h>
> +#include <asm/ptrace.h>
> +#include <asm/cacheflush.h>
> +#include <asm/code-patching.h>
> +#include <asm/sstep.h>
> +#include <asm/ppc-opcode.h>
> +
> +#define TMPL_CALL_HDLR_IDX	\
> +	(optprobe_template_call_handler - optprobe_template_entry)
> +#define TMPL_EMULATE_IDX	\
> +	(optprobe_template_call_emulate - optprobe_template_entry)
> +#define TMPL_RET_IDX		\
> +	(optprobe_template_ret - optprobe_template_entry)
> +#define TMPL_OP_IDX		\
> +	(optprobe_template_op_address - optprobe_template_entry)
> +#define TMPL_INSN_IDX		\
> +	(optprobe_template_insn - optprobe_template_entry)
> +#define TMPL_END_IDX		\
> +	(optprobe_template_end - optprobe_template_entry)
> +
> +DEFINE_INSN_CACHE_OPS(ppc_optinsn);
> +
> +static bool insn_page_in_use;
> +
> +static void *__ppc_alloc_insn_page(void)
> +{
> +	if (insn_page_in_use)
> +		return NULL;
> +	insn_page_in_use = true;

This sets off my "no-locking-visible" detector. I assume there's some
locking somewhere else that makes this work?

> +	return &optinsn_slot;
> +}
> +
> +static void __ppc_free_insn_page(void *page __maybe_unused)
> +{
> +	insn_page_in_use = false;
> +}
> +
> +struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
> +	.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
> +	.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
> +	/* insn_size initialized later */
> +	.alloc = __ppc_alloc_insn_page,
> +	.free = __ppc_free_insn_page,
> +	.nr_garbage = 0,
> +};
> +
> +/*
> + * Check if we can optimize this probe. Returns NIP post-emulation if this can
> + * be optimized and 0 otherwise.
> + */
> +static unsigned long can_optimize(struct kprobe *p)
> +{
> +	struct pt_regs regs;
> +	struct instruction_op op;
> +	unsigned long nip = 0;
> +
> +	/*
> +	 * kprobe placed for kretprobe during boot time
> +	 * is not optimizing now.
> +	 *
> +	 * TODO: Optimize kprobe in kretprobe_trampoline
> +	 */
> +	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
> +		return 0;
> +
> +	/*
> +	 * We only support optimizing kernel addresses, but not
> +	 * module addresses.

That probably warrants a TODO or FIXME.

> +	 */
> +	if (!is_kernel_addr((unsigned long)p->addr))
> +		return 0;
> +
> +	regs.nip = (unsigned long)p->addr;
> +	regs.trap = 0x0;
> +	regs.msr = MSR_KERNEL;

It may not matter in practice, but leaving most of regs uninitialised
seems a bit fishy. I'd prefer if we zeroed it first.

> +	/*
> +	 * Ensure that the instruction is not a conditional branch,

Can you spell out why it can't be a conditional branch.

> +	 * and that can be emulated.
> +	 */
> +	if (!is_conditional_branch(*p->ainsn.insn) &&
> +			analyse_instr(&op, &regs, *p->ainsn.insn))
> +		nip = regs.nip;
> +
> +	return nip;
> +}
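
[Editorial note: the "within ± 32MB" constraint behind can_optimize() comes
from the powerpc I-form branch encoding, which carries a 26-bit signed,
word-aligned displacement. A minimal userspace sketch of that reachability
check — modelled on the kernel's is_offset_in_branch_range() helper in
code-patching, but standalone here, so the function name is illustrative:]

```c
/*
 * Sketch of the reachability test behind the "within +/- 32MB" rule:
 * a powerpc I-form branch (b/bl) has a 26-bit signed displacement,
 * and the low 2 bits are zero since instructions are word aligned.
 */
int branch_offset_in_range(long offset)
{
	/* representable range: -2^25 .. 2^25 - 4, word aligned */
	return offset >= -0x2000000 && offset <= 0x1fffffc &&
	       (offset & 0x3) == 0;
}
```

So before patching the branch to the detour buffer, the distance between
the probed address and the allocated slot has to pass a check like this.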
> +
> +static void optimized_callback(struct optimized_kprobe *op,
> +			       struct pt_regs *regs)
> +{
> +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +	unsigned long flags;
> +
> +	/* This is possible if op is under delayed unoptimizing */
> +	if (kprobe_disabled(&op->kp))
> +		return;
> +
> +	local_irq_save(flags);

What is that protecting against? Because on powerpc it doesn't actually
disable interrupts, it just masks some of them, the perf interrupt for
example can still run.

> +
> +	if (kprobe_running()) {
> +		kprobes_inc_nmissed_count(&op->kp);
> +	} else {
> +		__this_cpu_write(current_kprobe, &op->kp);
> +		regs->nip = (unsigned long)op->kp.addr;
> +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +		opt_pre_handler(&op->kp, regs);
> +		__this_cpu_write(current_kprobe, NULL);
> +	}
> +	local_irq_restore(flags);
> +}
> +NOKPROBE_SYMBOL(optimized_callback);
> +
> +void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	if (op->optinsn.insn) {
> +		free_ppc_optinsn_slot(op->optinsn.insn, 1);
> +		op->optinsn.insn = NULL;
> +	}
> +}
> +
> +/*
> + * emulate_step() requires insn to be emulated as
> + * second parameter. Load register 'r4' with the
> + * instruction.
> + */
> +void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
> +{
> +	/* addis r4,0,(insn)@h */
> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(4) |
> +		  ((val >> 16) & 0xffff);
> +
> +	/* ori r4,r4,(insn)@l */
> +	*addr = PPC_INST_ORI | ___PPC_RA(4) | ___PPC_RS(4) |
> +		(val & 0xffff);
> +}
> +
> +/*
> + * Generate instructions to load provided immediate 64-bit value
> + * to register 'r3' and patch these instructions at 'addr'.
> + */
> +void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
> +{
> +	/* lis r3,(op)@highest */
> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(3) |
> +		  ((val >> 48) & 0xffff);
> +
> +	/* ori r3,r3,(op)@higher */
> +	*addr++ = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
> +		  ((val >> 32) & 0xffff);
> +
> +	/* rldicr r3,r3,32,31 */
> +	*addr++ = PPC_INST_RLDICR | ___PPC_RA(3) | ___PPC_RS(3) |
> +		  __PPC_SH64(32) | __PPC_ME64(31);
> +
> +	/* oris r3,r3,(op)@h */
> +	*addr++ = PPC_INST_ORIS | ___PPC_RA(3) | ___PPC_RS(3) |
> +		  ((val >> 16) & 0xffff);
> +
> +	/* ori r3,r3,(op)@l */
> +	*addr = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
> +		(val & 0xffff);
> +}

I can't say I love those patch functions. A PC-relative load would be
cleaner, but I guess that's ugly/expensive on current CPUs.
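
[Editorial note: for what it's worth, the five-instruction sequence above
does reassemble the full 64-bit value. A userspace model applying each
generated instruction's effect to a simulated r3 — the helper name is
hypothetical, this is not kernel code:]

```c
#include <stdint.h>

/*
 * Userspace model of patch_imm64_load_insns(): apply the effect of
 * each generated instruction to a simulated r3 and check that the
 * original 64-bit value comes back out.
 */
uint64_t emulate_imm64_load(uint64_t val)
{
	uint64_t r3;

	r3 = ((val >> 48) & 0xffff) << 16;	/* lis r3,(val)@highest */
	r3 |= (val >> 32) & 0xffff;		/* ori r3,r3,(val)@higher */
	r3 <<= 32;				/* rldicr r3,r3,32,31 */
	r3 |= ((val >> 16) & 0xffff) << 16;	/* oris r3,r3,(val)@h */
	r3 |= val & 0xffff;			/* ori r3,r3,(val)@l */
	return r3;
}
```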

> diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
> new file mode 100644
> index 0000000..c86976b
> --- /dev/null
> +++ b/arch/powerpc/kernel/optprobes_head.S
> @@ -0,0 +1,135 @@
> +/*
> + * Code to prepare detour buffer for optprobes in Kernel.
> + *
> + * Copyright 2016, Anju T, IBM Corp.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version
> + * 2 of the License, or (at your option) any later version.
> + */
> +
> +#include <asm/ppc_asm.h>
> +#include <asm/ptrace.h>
> +#include <asm/asm-offsets.h>
> +
> +#define	OPT_SLOT_SIZE	65536
> +
> +	.balign	4
> +
> +	/*
> +	 * Reserve an area to allocate slots for detour buffer.
> +	 * This is part of .text section (rather than vmalloc area)
> +	 * as this needs to be within 32MB of the probed address.
> +	 */
> +	.global optinsn_slot
> +optinsn_slot:
> +	.space	OPT_SLOT_SIZE
> +
> +	/*
> +	 * Optprobe template:
> +	 * This template gets copied into one of the slots in optinsn_slot
> +	 * and gets fixed up with real optprobe structures et al.
> +	 */
> +	.global optprobe_template_entry
> +optprobe_template_entry:
> +	/* Create an in-memory pt_regs */
> +	stdu	r1,-INT_FRAME_SIZE(r1)
> +	SAVE_GPR(0,r1)
> +	/* Save the previous SP into stack */
> +	addi	r0,r1,INT_FRAME_SIZE
> +	std	r0,GPR1(r1)
> +	SAVE_10GPRS(2,r1)
> +	SAVE_10GPRS(12,r1)
> +	SAVE_10GPRS(22,r1)
> +	/* Save SPRS */
> +	mfmsr	r5
> +	std	r5,_MSR(r1)
> +	li	r5,0x700
> +	std	r5,_TRAP(r1)
> +	li	r5,0
> +	std	r5,ORIG_GPR3(r1)
> +	std	r5,RESULT(r1)
> +	mfctr	r5
> +	std	r5,_CTR(r1)
> +	mflr	r5
> +	std	r5,_LINK(r1)
> +	mfspr	r5,SPRN_XER
> +	std	r5,_XER(r1)
> +	mfcr	r5
> +	std	r5,_CCR(r1)
> +	lbz     r5,PACASOFTIRQEN(r13)
> +	std     r5,SOFTE(r1)
> +	mfdar	r5
> +	std	r5,_DAR(r1)
> +	mfdsisr	r5
> +	std	r5,_DSISR(r1)

So this is what made me originally reply to this patch. This
save/restore sequence.

I'm not clear on why this is what we need to save/restore.

Aren't we essentially just interposing a function call? If so do we need
to save/restore all of these? eg. MSR/DAR/DSISR. Non-volatile GPRs? And
why are we pretending there was a 0x700 trap?

Is it because we're going to end up emulating the instruction and so we
need everything in pt_regs ?


> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
> index 3362299..895dcdd 100644
> --- a/arch/powerpc/lib/sstep.c
> +++ b/arch/powerpc/lib/sstep.c
> @@ -618,6 +618,27 @@ static int __kprobes trap_compare(long v1, long v2)
>  }
>  
>  /*
> + * Helper to check if a given instruction is a conditional branch
> + * Derived from the conditional checks in analyse_instr()
> + */
> +bool __kprobes is_conditional_branch(unsigned int instr)
> +{
> +	unsigned int opcode = instr >> 26;
> +

We already have some similar routines in code_patching.c/h. Can you move
this in there instead?
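
[Editorial note: the hunk is truncated above, but the check comes down to
the primary opcode plus, for the bclr/bcctr forms, the BO field. A
standalone sketch — the XL-form cases and the "branch always" BO test are
filled in from the ISA, not from the hunk, so treat them as assumptions:]

```c
/* Standalone sketch of the conditional-branch test */
int insn_is_conditional_branch(unsigned int instr)
{
	unsigned int opcode = instr >> 26;

	if (opcode == 16)			/* bc: always conditional */
		return 1;
	if (opcode == 19) {
		switch ((instr >> 1) & 0x3ff) {	/* extended opcode */
		case 16:			/* bclr */
		case 528:			/* bcctr */
			/* BO = 1z1zz means "branch always" */
			return ((instr >> 21) & 0x14) != 0x14;
		}
	}
	return 0;
}
```

e.g. bne (0x40820008) and bnelr (0x4c820020) are conditional, while
b (0x48000000) and blr (0x4e800020) are not.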

cheers

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2017-02-01 10:53   ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Michael Ellerman
@ 2017-02-03 19:39     ` Naveen N. Rao
  2017-02-07  1:05       ` Masami Hiramatsu
  2017-02-08  5:37     ` Anju T Sudhakar
  1 sibling, 1 reply; 16+ messages in thread
From: Naveen N. Rao @ 2017-02-03 19:39 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Anju T Sudhakar, linux-kernel, linuxppc-dev, ananth, mahesh,
	paulus, mhiramat, srikar

Hi Michael,
Thanks for the review! I'll defer to Anju on most of the aspects, but...

On 2017/02/01 09:53PM, Michael Ellerman wrote:
> Anju T Sudhakar <anju@linux.vnet.ibm.com> writes:
> 
> > +static void optimized_callback(struct optimized_kprobe *op,
> > +			       struct pt_regs *regs)
> > +{
> > +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +	unsigned long flags;
> > +
> > +	/* This is possible if op is under delayed unoptimizing */
> > +	if (kprobe_disabled(&op->kp))
> > +		return;
> > +
> > +	local_irq_save(flags);
> 
> What is that protecting against? Because on powerpc it doesn't actually
> disable interrupts, it just masks some of them, the perf interrupt for
> example can still run.

That's an excellent catch, as always! :)

This is meant to prevent us from missing kprobe hits while processing 
interrupts that arrive when this optprobe is being handled. And you are 
totally right -- we would miss kprobe hits during PMI handling with the 
current approach. We need a hard_irq_disable() there.

> > +	/*
> > +	 * Optprobe template:
> > +	 * This template gets copied into one of the slots in optinsn_slot
> > +	 * and gets fixed up with real optprobe structures et al.
> > +	 */
> > +	.global optprobe_template_entry
> > +optprobe_template_entry:
> > +	/* Create an in-memory pt_regs */
> > +	stdu	r1,-INT_FRAME_SIZE(r1)
> > +	SAVE_GPR(0,r1)
> > +	/* Save the previous SP into stack */
> > +	addi	r0,r1,INT_FRAME_SIZE
> > +	std	r0,GPR1(r1)
> > +	SAVE_10GPRS(2,r1)
> > +	SAVE_10GPRS(12,r1)
> > +	SAVE_10GPRS(22,r1)
> > +	/* Save SPRS */
> > +	mfmsr	r5
> > +	std	r5,_MSR(r1)
> > +	li	r5,0x700
> > +	std	r5,_TRAP(r1)
> > +	li	r5,0
> > +	std	r5,ORIG_GPR3(r1)
> > +	std	r5,RESULT(r1)
> > +	mfctr	r5
> > +	std	r5,_CTR(r1)
> > +	mflr	r5
> > +	std	r5,_LINK(r1)
> > +	mfspr	r5,SPRN_XER
> > +	std	r5,_XER(r1)
> > +	mfcr	r5
> > +	std	r5,_CCR(r1)
> > +	lbz     r5,PACASOFTIRQEN(r13)
> > +	std     r5,SOFTE(r1)
> > +	mfdar	r5
> > +	std	r5,_DAR(r1)
> > +	mfdsisr	r5
> > +	std	r5,_DSISR(r1)
> 
> So this is what made me originally reply to this patch. This
> save/restore sequence.
> 
> I'm not clear on why this is what we need to save/restore.
> 
> Aren't we essentially just interposing a function call? If so do we need
> to save/restore all of these? eg. MSR/DAR/DSISR. Non-volatile GPRs? And
> why are we pretending there was a 0x700 trap?
> 
> Is it because we're going to end up emulating the instruction and so we
> need everything in pt_regs ?

Yes, that and also for the kprobe pre_handler() which takes pt_regs.


Regards,
- Naveen

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2017-02-03 19:39     ` Naveen N. Rao
@ 2017-02-07  1:05       ` Masami Hiramatsu
  2017-02-07  7:51         ` Naveen N. Rao
  0 siblings, 1 reply; 16+ messages in thread
From: Masami Hiramatsu @ 2017-02-07  1:05 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, Anju T Sudhakar, linux-kernel, linuxppc-dev,
	ananth, mahesh, paulus, mhiramat, srikar

On Sat, 4 Feb 2017 01:09:49 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> Hi Michael,
> Thanks for the review! I'll defer to Anju on most of the aspects, but...
> 
> On 2017/02/01 09:53PM, Michael Ellerman wrote:
> > Anju T Sudhakar <anju@linux.vnet.ibm.com> writes:
> > 
> > > +static void optimized_callback(struct optimized_kprobe *op,
> > > +			       struct pt_regs *regs)
> > > +{
> > > +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > > +	unsigned long flags;
> > > +
> > > +	/* This is possible if op is under delayed unoptimizing */
> > > +	if (kprobe_disabled(&op->kp))
> > > +		return;
> > > +
> > > +	local_irq_save(flags);
> > 
> > What is that protecting against? Because on powerpc it doesn't actually
> > disable interrupts, it just masks some of them, the perf interrupt for
> > example can still run.
> 
> That's an excellent catch, as always! :)
> 
> This is meant to prevent us from missing kprobe hits while processing 
> interrupts that arrive when this optprobe is being handled. And you are 
> totally right -- we would miss kprobe hits during PMI handling with the 
> current approach. We need a hard_irq_disable() there.

One note: it depends on the arch implementation of kprobes, since this
is only for "emulating" the int3 behavior on x86 for compatibility.
On x86, int3 disables interrupts automatically, so all kprobes user
handlers run with interrupts disabled. This means users may write their
handlers to rely on that condition. They cannot even tell at the time
they write the code whether a probe will be optimized, because the
kprobe is only optimized a while after it is enabled.

So the important point is that you have to keep the behavior compatible
with that of unoptimized kprobes.

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2017-02-07  1:05       ` Masami Hiramatsu
@ 2017-02-07  7:51         ` Naveen N. Rao
  0 siblings, 0 replies; 16+ messages in thread
From: Naveen N. Rao @ 2017-02-07  7:51 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, Anju T Sudhakar, linux-kernel, linuxppc-dev,
	ananth, mahesh, paulus, srikar

On 2017/02/07 10:05AM, Masami Hiramatsu wrote:
> On Sat, 4 Feb 2017 01:09:49 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > Hi Michael,
> > Thanks for the review! I'll defer to Anju on most of the aspects, but...
> > 
> > On 2017/02/01 09:53PM, Michael Ellerman wrote:
> > > Anju T Sudhakar <anju@linux.vnet.ibm.com> writes:
> > > 
> > > > +static void optimized_callback(struct optimized_kprobe *op,
> > > > +			       struct pt_regs *regs)
> > > > +{
> > > > +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > > > +	unsigned long flags;
> > > > +
> > > > +	/* This is possible if op is under delayed unoptimizing */
> > > > +	if (kprobe_disabled(&op->kp))
> > > > +		return;
> > > > +
> > > > +	local_irq_save(flags);
> > > 
> > > What is that protecting against? Because on powerpc it doesn't actually
> > > disable interrupts, it just masks some of them, the perf interrupt for
> > > example can still run.
> > 
> > That's an excellent catch, as always! :)
> > 
> > This is meant to prevent us from missing kprobe hits while processing 
> > interrupts that arrive when this optprobe is being handled. And you are 
> > totally right -- we would miss kprobe hits during PMI handling with the 
> > current approach. We need a hard_irq_disable() there.
> 
> One note: it depends on the arch implementation of kprobes, since this
> is only for "emulating" the int3 behavior on x86 for compatibility.
> On x86, int3 disables interrupts automatically, so all kprobes user
> handlers run with interrupts disabled. This means users may write their
> handlers to rely on that condition. They cannot even tell at the time
> they write the code whether a probe will be optimized, because the
> kprobe is only optimized a while after it is enabled.
> 
> So the important point is that you have to keep the behavior compatible
> with that of unoptimized kprobes.

Thanks for the pointers, Masami.
Yes, with unoptimized kprobes on powerpc, we run with interrupts hard 
disabled as well. So, we need to hard disable for optprobes too.

Regards,
Naveen

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH V3 3/4] arch/powerpc: Implement Optprobes
  2017-02-01 10:53   ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Michael Ellerman
  2017-02-03 19:39     ` Naveen N. Rao
@ 2017-02-08  5:37     ` Anju T Sudhakar
  1 sibling, 0 replies; 16+ messages in thread
From: Anju T Sudhakar @ 2017-02-08  5:37 UTC (permalink / raw)
  To: Michael Ellerman, linux-kernel, linuxppc-dev
  Cc: ananth, naveen.n.rao, paulus, srikar, benh, mahesh, mhiramat

Hi Michael,


Thank you so much for the review.

On Wednesday 01 February 2017 04:23 PM, Michael Ellerman wrote:
> Anju T Sudhakar <anju@linux.vnet.ibm.com> writes:
>
>> Detour buffer contains instructions to create an in memory pt_regs.
>> After the execution of the pre-handler, a call is made for instruction emulation.
>> The NIP is determined in advanced through dummy instruction emulation and a branch
>> instruction is created to the NIP at the end of the trampoline.
> That's good detail, but it's hard to follow for someone who isn't
> familiar with kprobes/optprobes etc. You don't even tell us what an
> optprobe is :)
>
> So can you provide a bit more background before diving into the specific
> details.

Sure. I will provide sufficient details here.

>> Instruction slot for detour buffer is allocated from the reserved area.
>> For the time being, 64KB is reserved in memory for this purpose.
>>
>> Instructions which can be emulated using analyse_instr() are suppliants
>                                                                 ^
>                                                                 candidates ?

:-) Yes, 'candidates' will be more appropriate here.

>> diff --git a/Documentation/features/debug/optprobes/arch-support.txt b/Documentation/features/debug/optprobes/arch-support.txt
>> index b8999d8..45bc99d 100644
>> --- a/Documentation/features/debug/optprobes/arch-support.txt
>> +++ b/Documentation/features/debug/optprobes/arch-support.txt
>> @@ -27,7 +27,7 @@
>>       |       nios2: | TODO |
>>       |    openrisc: | TODO |
>>       |      parisc: | TODO |
>> -    |     powerpc: | TODO |
>> +    |     powerpc: |  ok  |
> We don't support them for modules yet, so maybe it's premature to flip
> that?

Ok.

>> diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
>> new file mode 100644
>> index 0000000..fb5e62d
>> --- /dev/null
>> +++ b/arch/powerpc/kernel/optprobes.c
>> @@ -0,0 +1,331 @@
>> +/*
>> + * Code for Kernel probes Jump optimization.
>> + *
>> + * Copyright 2016, Anju T, IBM Corp.
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public License
>> + * as published by the Free Software Foundation; either version
>> + * 2 of the License, or (at your option) any later version.
>> + */
>> +
>> +#include <linux/kprobes.h>
>> +#include <linux/jump_label.h>
>> +#include <linux/types.h>
>> +#include <linux/slab.h>
>> +#include <linux/list.h>
>> +#include <asm/kprobes.h>
>> +#include <asm/ptrace.h>
>> +#include <asm/cacheflush.h>
>> +#include <asm/code-patching.h>
>> +#include <asm/sstep.h>
>> +#include <asm/ppc-opcode.h>
>> +
>> +#define TMPL_CALL_HDLR_IDX	\
>> +	(optprobe_template_call_handler - optprobe_template_entry)
>> +#define TMPL_EMULATE_IDX	\
>> +	(optprobe_template_call_emulate - optprobe_template_entry)
>> +#define TMPL_RET_IDX		\
>> +	(optprobe_template_ret - optprobe_template_entry)
>> +#define TMPL_OP_IDX		\
>> +	(optprobe_template_op_address - optprobe_template_entry)
>> +#define TMPL_INSN_IDX		\
>> +	(optprobe_template_insn - optprobe_template_entry)
>> +#define TMPL_END_IDX		\
>> +	(optprobe_template_end - optprobe_template_entry)
>> +
>> +DEFINE_INSN_CACHE_OPS(ppc_optinsn);
>> +
>> +static bool insn_page_in_use;
>> +
>> +static void *__ppc_alloc_insn_page(void)
>> +{
>> +	if (insn_page_in_use)
>> +		return NULL;
>> +	insn_page_in_use = true;
> This sets off my "no-locking-visible" detector. I assume there's some
> locking somewhere else that makes this work?



Actually we don't need a lock here, since this function is invoked only by
__get_insn_slot() (in kernel/kprobes.c), which already holds the
kprobe_insn_cache mutex.
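
For illustration, here is a standalone (userspace) model of that scheme.
The kernel names are reused, but this is only a sketch of the invariant
-- one static slot page, guarded by a flag that is safe because every
caller holds the cache mutex -- not the kernel code itself:

```c
#include <stdbool.h>
#include <stddef.h>

/* One static page of slots, as reserved in optprobes_head.S. */
static char optinsn_slot[65536];
static bool insn_page_in_use;

/*
 * No lock needed here: all callers go through __get_insn_slot(),
 * which holds the kprobe_insn_cache mutex around alloc/free.
 */
static void *ppc_alloc_insn_page(void)
{
	if (insn_page_in_use)
		return NULL;
	insn_page_in_use = true;
	return optinsn_slot;
}

static void ppc_free_insn_page(void *page)
{
	(void)page;
	insn_page_in_use = false;
}
```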
>> +	return &optinsn_slot;
>> +}
>> +
>> +static void __ppc_free_insn_page(void *page __maybe_unused)
>> +{
>> +	insn_page_in_use = false;
>> +}
>> +
>> +struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
>> +	.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
>> +	.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
>> +	/* insn_size initialized later */
>> +	.alloc = __ppc_alloc_insn_page,
>> +	.free = __ppc_free_insn_page,
>> +	.nr_garbage = 0,
>> +};
>> +
>> +/*
>> + * Check if we can optimize this probe. Returns NIP post-emulation if this can
>> + * be optimized and 0 otherwise.
>> + */
>> +static unsigned long can_optimize(struct kprobe *p)
>> +{
>> +	struct pt_regs regs;
>> +	struct instruction_op op;
>> +	unsigned long nip = 0;
>> +
>> +	/*
>> +	 * kprobe placed for kretprobe during boot time
>> +	 * is not optimizing now.
>> +	 *
>> +	 * TODO: Optimize kprobe in kretprobe_trampoline
>> +	 */
>> +	if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
>> +		return 0;
>> +
>> +	/*
>> +	 * We only support optimizing kernel addresses, but not
>> +	 * module addresses.
> That probably warrants a TODO or FIXME.

sure.


>> +	 */
>> +	if (!is_kernel_addr((unsigned long)p->addr))
>> +		return 0;
>> +
>> +	regs.nip = (unsigned long)p->addr;
>> +	regs.trap = 0x0;
>> +	regs.msr = MSR_KERNEL;
> It may not matter in practice, but leaving most of regs uninitialised
> seems a bit fishy. I'd prefer if we zeroed it first.

Yes, will initialize the regs to zero here.

>> +	/*
>> +	 * Ensure that the instruction is not a conditional branch,
> Can you spell out why it can't be a conditional branch.



We are not optimizing conditional branches here because, with a dummy
pt_regs, we cannot predict the nip ahead of time, and so cannot ensure
that the branch back from the detour buffer falls within the ±32MB
relative branch range.

I will add proper comments here.
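
For illustration, the range constraint boils down to a check like the
one below. This mirrors the helper introduced in patch 2/4 of this
series (the I-form branch has a 26-bit signed, word-aligned offset,
i.e. a reach of ±32MB), though the exact kernel implementation may
differ:

```c
#include <stdbool.h>

/*
 * A relative branch ("b") can reach offsets in [-0x2000000, 0x1fffffc],
 * and the offset must be a multiple of 4 (instructions are word-aligned).
 */
static bool is_offset_in_branch_range(long offset)
{
	return (offset >= -0x2000000 && offset <= 0x1fffffc &&
		!(offset & 0x3));
}
```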




>> +	 * and that can be emulated.
>> +	 */
>> +	if (!is_conditional_branch(*p->ainsn.insn) &&
>> +			analyse_instr(&op, &regs, *p->ainsn.insn))
>> +		nip = regs.nip;
>> +
>> +	return nip;
>> +}
>> +
>> +static void optimized_callback(struct optimized_kprobe *op,
>> +			       struct pt_regs *regs)
>> +{
>> +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>> +	unsigned long flags;
>> +
>> +	/* This is possible if op is under delayed unoptimizing */
>> +	if (kprobe_disabled(&op->kp))
>> +		return;
>> +
>> +	local_irq_save(flags);
> What is that protecting against? Because on powerpc it doesn't actually
> disable interrupts, it just masks some of them, the perf interrupt for
> example can still run.



As pointed out by Naveen, we need hard_irq_disable() here.
Thanks for the catch.

:-)
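
To make the intended ordering concrete -- local_irq_save() only
soft-masks interrupts on powerpc, so hard_irq_disable() is needed as
well to clear MSR[EE] -- here is a userspace mock of the sequence; the
kernel primitives are stubbed purely to illustrate the order of calls:

```c
#include <string.h>

/* Stand-ins for the powerpc primitives, recording the call order. */
static char seq[64];
#define local_irq_save(flags)    do { (void)(flags); strcat(seq, "soft;"); } while (0)
#define hard_irq_disable()       strcat(seq, "hard;")
#define local_irq_restore(flags) do { (void)(flags); strcat(seq, "restore;"); } while (0)

static void callback_irq_sequence(void)
{
	unsigned long flags = 0;

	local_irq_save(flags);	/* soft-masks external interrupts */
	hard_irq_disable();	/* additionally clears MSR[EE] */
	/* ... run the optprobe pre-handler here ... */
	local_irq_restore(flags);
}
```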


>> +
>> +	if (kprobe_running()) {
>> +		kprobes_inc_nmissed_count(&op->kp);
>> +	} else {
>> +		__this_cpu_write(current_kprobe, &op->kp);
>> +		regs->nip = (unsigned long)op->kp.addr;
>> +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>> +		opt_pre_handler(&op->kp, regs);
>> +		__this_cpu_write(current_kprobe, NULL);
>> +	}
>> +	local_irq_restore(flags);
>> +}
>> +NOKPROBE_SYMBOL(optimized_callback);
>> +
>> +void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
>> +{
>> +	if (op->optinsn.insn) {
>> +		free_ppc_optinsn_slot(op->optinsn.insn, 1);
>> +		op->optinsn.insn = NULL;
>> +	}
>> +}
>> +
>> +/*
>> + * emulate_step() requires insn to be emulated as
>> + * second parameter. Load register 'r4' with the
>> + * instruction.
>> + */
>> +void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
>> +{
>> +	/* addis r4,0,(insn)@h */
>> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(4) |
>> +		  ((val >> 16) & 0xffff);
>> +
>> +	/* ori r4,r4,(insn)@l */
>> +	*addr = PPC_INST_ORI | ___PPC_RA(4) | ___PPC_RS(4) |
>> +		(val & 0xffff);
>> +}
>> +
>> +/*
>> + * Generate instructions to load provided immediate 64-bit value
>> + * to register 'r3' and patch these instructions at 'addr'.
>> + */
>> +void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
>> +{
>> +	/* lis r3,(op)@highest */
>> +	*addr++ = PPC_INST_ADDIS | ___PPC_RT(3) |
>> +		  ((val >> 48) & 0xffff);
>> +
>> +	/* ori r3,r3,(op)@higher */
>> +	*addr++ = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
>> +		  ((val >> 32) & 0xffff);
>> +
>> +	/* rldicr r3,r3,32,31 */
>> +	*addr++ = PPC_INST_RLDICR | ___PPC_RA(3) | ___PPC_RS(3) |
>> +		  __PPC_SH64(32) | __PPC_ME64(31);
>> +
>> +	/* oris r3,r3,(op)@h */
>> +	*addr++ = PPC_INST_ORIS | ___PPC_RA(3) | ___PPC_RS(3) |
>> +		  ((val >> 16) & 0xffff);
>> +
>> +	/* ori r3,r3,(op)@l */
>> +	*addr = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
>> +		(val & 0xffff);
>> +}
> I can't say I love those patch functions. A PC-relative load would be
> cleaner, but I guess that's ugly/expensive on current CPUs.
>
>> diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
>> new file mode 100644
>> index 0000000..c86976b
>> --- /dev/null
>> +++ b/arch/powerpc/kernel/optprobes_head.S
>> @@ -0,0 +1,135 @@
>> +/*
>> + * Code to prepare detour buffer for optprobes in Kernel.
>> + *
>> + * Copyright 2016, Anju T, IBM Corp.
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public License
>> + * as published by the Free Software Foundation; either version
>> + * 2 of the License, or (at your option) any later version.
>> + */
>> +
>> +#include <asm/ppc_asm.h>
>> +#include <asm/ptrace.h>
>> +#include <asm/asm-offsets.h>
>> +
>> +#define	OPT_SLOT_SIZE	65536
>> +
>> +	.balign	4
>> +
>> +	/*
>> +	 * Reserve an area to allocate slots for detour buffer.
>> +	 * This is part of .text section (rather than vmalloc area)
>> +	 * as this needs to be within 32MB of the probed address.
>> +	 */
>> +	.global optinsn_slot
>> +optinsn_slot:
>> +	.space	OPT_SLOT_SIZE
>> +
>> +	/*
>> +	 * Optprobe template:
>> +	 * This template gets copied into one of the slots in optinsn_slot
>> +	 * and gets fixed up with real optprobe structures et al.
>> +	 */
>> +	.global optprobe_template_entry
>> +optprobe_template_entry:
>> +	/* Create an in-memory pt_regs */
>> +	stdu	r1,-INT_FRAME_SIZE(r1)
>> +	SAVE_GPR(0,r1)
>> +	/* Save the previous SP into stack */
>> +	addi	r0,r1,INT_FRAME_SIZE
>> +	std	r0,GPR1(r1)
>> +	SAVE_10GPRS(2,r1)
>> +	SAVE_10GPRS(12,r1)
>> +	SAVE_10GPRS(22,r1)
>> +	/* Save SPRS */
>> +	mfmsr	r5
>> +	std	r5,_MSR(r1)
>> +	li	r5,0x700
>> +	std	r5,_TRAP(r1)
>> +	li	r5,0
>> +	std	r5,ORIG_GPR3(r1)
>> +	std	r5,RESULT(r1)
>> +	mfctr	r5
>> +	std	r5,_CTR(r1)
>> +	mflr	r5
>> +	std	r5,_LINK(r1)
>> +	mfspr	r5,SPRN_XER
>> +	std	r5,_XER(r1)
>> +	mfcr	r5
>> +	std	r5,_CCR(r1)
>> +	lbz     r5,PACASOFTIRQEN(r13)
>> +	std     r5,SOFTE(r1)
>> +	mfdar	r5
>> +	std	r5,_DAR(r1)
>> +	mfdsisr	r5
>> +	std	r5,_DSISR(r1)
> So this is what made me originally reply to this patch. This
> save/restore sequence.
>
> I'm not clear on why this is what we need to save/restore.
>
> Aren't we essentially just interposing a function call? If so do we need
> to save/restore all of these? eg. MSR/DAR/DSISR. Non-volatile GPRs? And
> why are we pretending there was a 0x700 trap?
>
> Is it because we're going to end up emulating the instruction and so we
> need everything in pt_regs ?
>



Yes. We need the full pt_regs saved in the detour buffer, since
emulate_step() operates on a pt_regs and may read or update any GPR
(and nip) while emulating the probed instruction.
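
As a toy, userspace stand-in (not the kernel's emulate_step()): even
emulating a simple "addi" needs arbitrary source/destination GPRs from
the saved frame, which is why the template stores the whole register
file:

```c
struct mock_regs {
	unsigned long gpr[32];
	unsigned long nip;
};

/*
 * Toy emulation of "addi rD,rA,SI" (primary opcode 14). Per the ISA,
 * rA == 0 means the literal value 0, not GPR0.
 */
static void emulate_addi(struct mock_regs *regs, unsigned int instr)
{
	unsigned int rd = (instr >> 21) & 0x1f;
	unsigned int ra = (instr >> 16) & 0x1f;
	long si = (short)(instr & 0xffff);	/* sign-extend SI field */

	regs->gpr[rd] = (ra ? regs->gpr[ra] : 0) + si;
	regs->nip += 4;
}
```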


>> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
>> index 3362299..895dcdd 100644
>> --- a/arch/powerpc/lib/sstep.c
>> +++ b/arch/powerpc/lib/sstep.c
>> @@ -618,6 +618,27 @@ static int __kprobes trap_compare(long v1, long v2)
>>   }
>>   
>>   /*
>> + * Helper to check if a given instruction is a conditional branch
>> + * Derived from the conditional checks in analyse_instr()
>> + */
>> +bool __kprobes is_conditional_branch(unsigned int instr)
>> +{
>> +	unsigned int opcode = instr >> 26;
>> +
> We already have some similar routines in code_patching.c/h. Can you move
> this in there instead?

sure.
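
For reference, the check itself reduces to the primary opcode plus the
bclr/bcctr extended opcodes; a standalone sketch (the kernel version
will live in code-patching and may differ in detail):

```c
#include <stdbool.h>

#define OP_BC	16	/* conditional branch (B-form) */
#define OP_19	19	/* bclr/bcctr family (XL-form) */

static bool is_conditional_branch(unsigned int instr)
{
	unsigned int opcode = instr >> 26;

	if (opcode == OP_BC)
		return true;
	if (opcode == OP_19) {
		switch ((instr >> 1) & 0x3ff) {	/* extended opcode */
		case 16:	/* bclr */
		case 528:	/* bcctr */
			return true;
		}
	}
	return false;
}
```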


> cheers
>
Thanks and regards,
Anju


end of thread, other threads:[~2017-02-08  5:37 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-19 13:18 [PATCH V3 0/4] OPTPROBES for powerpc Anju T Sudhakar
2016-12-19 13:18 ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Anju T Sudhakar
2016-12-25  2:54   ` Masami Hiramatsu
2017-01-04 10:25     ` Naveen N. Rao
2017-01-30 20:43   ` Michael Ellerman
2017-01-30 20:43     ` Michael Ellerman
2017-01-31  7:55     ` Naveen N. Rao
2017-01-31  7:59     ` [PATCH] powerpc: kprobes: fixes for kprobe_lookup_name on BE Naveen N. Rao
2017-02-01 10:53   ` [PATCH V3 3/4] arch/powerpc: Implement Optprobes Michael Ellerman
2017-02-03 19:39     ` Naveen N. Rao
2017-02-07  1:05       ` Masami Hiramatsu
2017-02-07  7:51         ` Naveen N. Rao
2017-02-08  5:37     ` Anju T Sudhakar
2016-12-19 13:18 ` [PATCH V3 4/4] arch/powerpc: Optimize kprobe in kretprobe_trampoline Anju T Sudhakar
2016-12-19 13:18 ` [PATCH V3 1/4] powerpc: asm/ppc-opcode.h: introduce __PPC_SH64() Anju T Sudhakar
2016-12-19 13:18 ` [PATCH V3 2/4] powerpc: add helper to check if offset is within rel branch range Anju T Sudhakar
