linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v5 0/3] kprobes: arm: enable OPTPROBES for ARM 32
@ 2014-08-27 13:02 Wang Nan
  2014-08-27 13:02 ` [PATCH v5 1/3] ARM: probes: check stack operation when decoding Wang Nan
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Wang Nan @ 2014-08-27 13:02 UTC (permalink / raw)
  To: Russell King, David A. Long, Jon Medhurst, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Masami Hiramatsu, Will Deacon
  Cc: Wang Nan, Pei Feiyue, linux-arm-kernel, linux-kernel

The following 3 patches are the 5th version of kprobe optimization for ARM.
The main difference from v4 is that optimizing stack store instructions,
such as "str r0, [sp]" and "push {r0 - r4}", is now disallowed.

The first patch improves the ARM instruction decoder to detect such
instructions; the following 2 patches make them unoptimizable.
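The decoder change can be pictured with a standalone sketch. The function name and the simplified bit tests below are my own illustration of the two cases above (str with sp as the base register, and push-style stm), not the patch's actual decode-table machinery:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: flag ARM stores whose base register is sp (r13).
 * The patch implements this via decode-table REGS() indicators and
 * kprobe_decode_ldmstm(), not via a single function like this one. */
static bool is_arm_stack_store(uint32_t insn)
{
	uint32_t rn = (insn >> 16) & 0xf;	/* base register field */

	/* STR/STRB immediate (cccc 010x xxx0) or register (cccc 011x xxx0) */
	if ((insn & 0x0e100000) == 0x04000000 ||
	    (insn & 0x0e100000) == 0x06000000)
		return rn == 13;

	/* STM (cccc 100x xxx0) with sp as base, e.g. push */
	if ((insn & 0x0e100000) == 0x08000000)
		return rn == 13;

	return false;
}
```

With this sketch, 0xe58d0000 ("str r0, [sp]") and 0xe92d001f ("push {r0 - r4}") are flagged, while 0xe5810000 ("str r0, [r1]") and 0xe8bd001f (a pop, i.e. a load) are not.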

Wang Nan (3):
  ARM: probes: check stack operation when decoding
  kprobes: copy ainsn after alloc aggr kprobe
  kprobes: arm: enable OPTPROBES for ARM 32

 arch/arm/Kconfig                 |   1 +
 arch/arm/include/asm/kprobes.h   |  28 +++++
 arch/arm/include/asm/probes.h    |   1 +
 arch/arm/kernel/Makefile         |   3 +-
 arch/arm/kernel/kprobes-common.c |   4 +
 arch/arm/kernel/kprobes-opt.c    | 259 +++++++++++++++++++++++++++++++++++++++
 arch/arm/kernel/probes-arm.c     |   4 +-
 arch/arm/kernel/probes-thumb.c   |   6 +-
 arch/arm/kernel/probes.c         |  20 ++-
 arch/arm/kernel/probes.h         |   6 +
 kernel/kprobes.c                 |   7 +-
 11 files changed, 330 insertions(+), 9 deletions(-)
 create mode 100644 arch/arm/kernel/kprobes-opt.c

Cc: Russell King <linux@arm.linux.org.uk>
Cc: "David A. Long" <dave.long@linaro.org> 
Cc: Jon Medhurst <tixy@linaro.org>
Cc: Taras Kondratiuk <taras.kondratiuk@linaro.org>
Cc: Ben Dooks <ben.dooks@codethink.co.uk>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Will Deacon <will.deacon@arm.com>

-- 
1.8.4


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v5 1/3] ARM: probes: check stack operation when decoding
  2014-08-27 13:02 [PATCH v5 0/3] kprobes: arm: enable OPTPROBES for ARM 32 Wang Nan
@ 2014-08-27 13:02 ` Wang Nan
  2014-08-28  9:51   ` Masami Hiramatsu
  2014-08-27 13:02 ` [PATCH v5 2/3] kprobes: copy ainsn after alloc aggr kprobe Wang Nan
  2014-08-27 13:02 ` [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32 Wang Nan
  2 siblings, 1 reply; 18+ messages in thread
From: Wang Nan @ 2014-08-27 13:02 UTC (permalink / raw)
  To: Russell King, David A. Long, Jon Medhurst, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Masami Hiramatsu, Will Deacon
  Cc: Wang Nan, Pei Feiyue, linux-arm-kernel, linux-kernel

This patch improves the ARM instruction decoder, allowing it to check
whether an instruction is a stack store operation. This information is
important for kprobe optimization.

For normal str instructions, this patch adds a series of _SP_STACK
register indicators to the decoder to test the base and offset registers
in str <Rt>, [<Rn>, <Rm>] against sp.

For the stm instruction, it checks the sp register in the
instruction-specific decoder.
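The sp test that the _SP_STACK types enable in decode_regs() boils down to an XOR against 0xdddddddd (13, the sp register number, repeated in every nibble). A minimal standalone rendering of that check, with a hypothetical helper name of my own:

```c
#include <stdint.h>

/* Each ARM register operand occupies one nibble of the instruction.
 * XORing with 0xdddddddd zeroes exactly those nibbles that hold 13
 * (sp), so masking the operand's nibble tests whether it names sp. */
static int reg_field_is_sp(uint32_t insn, uint32_t field_mask)
{
	return ((insn ^ 0xdddddddd) & field_mask) == 0;
}
```

For example, with field_mask 0x000f0000 (the Rn nibble), "str r0, [sp]" (0xe58d0000) tests true and "str r0, [r1]" (0xe5810000) tests false.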

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: "David A. Long" <dave.long@linaro.org> 
Cc: Jon Medhurst <tixy@linaro.org>
Cc: Taras Kondratiuk <taras.kondratiuk@linaro.org>
Cc: Ben Dooks <ben.dooks@codethink.co.uk>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Will Deacon <will.deacon@arm.com>

---
 arch/arm/include/asm/probes.h    |  1 +
 arch/arm/kernel/kprobes-common.c |  4 ++++
 arch/arm/kernel/probes-arm.c     |  4 ++--
 arch/arm/kernel/probes-thumb.c   |  6 +++---
 arch/arm/kernel/probes.c         | 20 ++++++++++++++++++--
 arch/arm/kernel/probes.h         |  6 ++++++
 6 files changed, 34 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/asm/probes.h b/arch/arm/include/asm/probes.h
index 806cfe6..3f6912c 100644
--- a/arch/arm/include/asm/probes.h
+++ b/arch/arm/include/asm/probes.h
@@ -38,6 +38,7 @@ struct arch_probes_insn {
 	probes_check_cc			*insn_check_cc;
 	probes_insn_singlestep_t	*insn_singlestep;
 	probes_insn_fn_t		*insn_fn;
+	bool				is_stack_operation;
 };
 
 #endif
diff --git a/arch/arm/kernel/kprobes-common.c b/arch/arm/kernel/kprobes-common.c
index 0bf5d64..4e8b918 100644
--- a/arch/arm/kernel/kprobes-common.c
+++ b/arch/arm/kernel/kprobes-common.c
@@ -133,6 +133,10 @@ kprobe_decode_ldmstm(probes_opcode_t insn, struct arch_probes_insn *asi,
 	int is_ldm = insn & 0x100000;
 	int rn = (insn >> 16) & 0xf;
 
+	/* Is this a push (store to stack) instruction? */
+	if ((rn == 0xd) && (!is_ldm))
+		asi->is_stack_operation = true;
+
 	if (rn <= 12 && (reglist & 0xe000) == 0) {
 		/* Instruction only uses registers in the range R0..R12 */
 		handler = emulate_generic_r0_12_noflags;
diff --git a/arch/arm/kernel/probes-arm.c b/arch/arm/kernel/probes-arm.c
index 8eaef81..5c187ba 100644
--- a/arch/arm/kernel/probes-arm.c
+++ b/arch/arm/kernel/probes-arm.c
@@ -577,7 +577,7 @@ static const union decode_item arm_cccc_01xx_table[] = {
 	/* STR (immediate)	cccc 010x x0x0 xxxx xxxx xxxx xxxx xxxx */
 	/* STRB (immediate)	cccc 010x x1x0 xxxx xxxx xxxx xxxx xxxx */
 	DECODE_EMULATEX	(0x0e100000, 0x04000000, PROBES_STORE,
-						 REGS(NOPCWB, ANY, 0, 0, 0)),
+						 REGS(NOPCWB_SP_STACK, ANY, 0, 0, 0)),
 
 	/* LDR (immediate)	cccc 010x x0x1 xxxx xxxx xxxx xxxx xxxx */
 	/* LDRB (immediate)	cccc 010x x1x1 xxxx xxxx xxxx xxxx xxxx */
@@ -587,7 +587,7 @@ static const union decode_item arm_cccc_01xx_table[] = {
 	/* STR (register)	cccc 011x x0x0 xxxx xxxx xxxx xxxx xxxx */
 	/* STRB (register)	cccc 011x x1x0 xxxx xxxx xxxx xxxx xxxx */
 	DECODE_EMULATEX	(0x0e100000, 0x06000000, PROBES_STORE,
-						 REGS(NOPCWB, ANY, 0, 0, NOPC)),
+						 REGS(NOPCWB_SP_STACK, ANY, 0, 0, NOPC_SP_STACK)),
 
 	/* LDR (register)	cccc 011x x0x1 xxxx xxxx xxxx xxxx xxxx */
 	/* LDRB (register)	cccc 011x x1x1 xxxx xxxx xxxx xxxx xxxx */
diff --git a/arch/arm/kernel/probes-thumb.c b/arch/arm/kernel/probes-thumb.c
index 4131351..d0d30d8 100644
--- a/arch/arm/kernel/probes-thumb.c
+++ b/arch/arm/kernel/probes-thumb.c
@@ -54,7 +54,7 @@ static const union decode_item t32_table_1110_100x_x1xx[] = {
 	/* STRD (immediate)	1110 1001 x1x0 xxxx xxxx xxxx xxxx xxxx */
 	/* LDRD (immediate)	1110 1001 x1x1 xxxx xxxx xxxx xxxx xxxx */
 	DECODE_EMULATEX	(0xff400000, 0xe9400000, PROBES_T32_LDRDSTRD,
-						 REGS(NOPCWB, NOSPPC, NOSPPC, 0, 0)),
+						 REGS(NOPCWB_SP_STACK, NOSPPC, NOSPPC, 0, 0)),
 
 	/* TBB			1110 1000 1101 xxxx xxxx xxxx 0000 xxxx */
 	/* TBH			1110 1000 1101 xxxx xxxx xxxx 0001 xxxx */
@@ -345,12 +345,12 @@ static const union decode_item t32_table_1111_100x[] = {
 	/* STR (immediate)	1111 1000 1100 xxxx xxxx xxxx xxxx xxxx */
 	/* LDR (immediate)	1111 1000 1101 xxxx xxxx xxxx xxxx xxxx */
 	DECODE_EMULATEX	(0xffe00000, 0xf8c00000, PROBES_T32_LDRSTR,
-						 REGS(NOPCX, ANY, 0, 0, 0)),
+						 REGS(NOPCX_SP_STACK, ANY, 0, 0, 0)),
 
 	/* STR (register)	1111 1000 0100 xxxx xxxx 0000 00xx xxxx */
 	/* LDR (register)	1111 1000 0101 xxxx xxxx 0000 00xx xxxx */
 	DECODE_EMULATEX	(0xffe00fc0, 0xf8400000, PROBES_T32_LDRSTR,
-						 REGS(NOPCX, ANY, 0, 0, NOSPPC)),
+						 REGS(NOPCX_SP_STACK, ANY, 0, 0, NOSPPC)),
 
 	/* LDRB (literal)	1111 1000 x001 1111 xxxx xxxx xxxx xxxx */
 	/* LDRSB (literal)	1111 1001 x001 1111 xxxx xxxx xxxx xxxx */
diff --git a/arch/arm/kernel/probes.c b/arch/arm/kernel/probes.c
index 1c77b8d..f811cac 100644
--- a/arch/arm/kernel/probes.c
+++ b/arch/arm/kernel/probes.c
@@ -258,7 +258,9 @@ set_emulated_insn(probes_opcode_t insn, struct arch_probes_insn *asi,
  * non-zero value, the corresponding nibble in pinsn is validated and modified
  * according to the type.
  */
-static bool __kprobes decode_regs(probes_opcode_t *pinsn, u32 regs, bool modify)
+static bool __kprobes decode_regs(probes_opcode_t *pinsn,
+		struct arch_probes_insn *asi,
+		u32 regs, bool modify)
 {
 	probes_opcode_t insn = *pinsn;
 	probes_opcode_t mask = 0xf; /* Start at least significant nibble */
@@ -307,11 +309,14 @@ static bool __kprobes decode_regs(probes_opcode_t *pinsn, u32 regs, bool modify)
 				goto reject;
 			break;
 
+		case REG_TYPE_NOPCWB_SP_STACK:
 		case REG_TYPE_NOPCWB:
 			if (!is_writeback(insn))
 				break; /* No writeback, so any register is OK */
 			/* fall through... */
+		case REG_TYPE_NOPC_SP_STACK:
 		case REG_TYPE_NOPC:
+		case REG_TYPE_NOPCX_SP_STACK:
 		case REG_TYPE_NOPCX:
 			/* Reject PC (R15) */
 			if (((insn ^ 0xffffffff) & mask) == 0)
@@ -319,6 +324,15 @@ static bool __kprobes decode_regs(probes_opcode_t *pinsn, u32 regs, bool modify)
 			break;
 		}
 
+		/* check stack operation */
+		switch (regs & 0xf) {
+			case REG_TYPE_NOPCWB_SP_STACK:
+			case REG_TYPE_NOPC_SP_STACK:
+			case REG_TYPE_NOPCX_SP_STACK:
+				if (((insn ^ 0xdddddddd) & mask) == 0)
+					asi->is_stack_operation = true;
+		}
+
 		/* Replace value of nibble with new register number... */
 		insn &= ~mask;
 		insn |= new_bits & mask;
@@ -394,6 +408,8 @@ probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi,
 	const struct decode_header *next;
 	bool matched = false;
 
+	asi->is_stack_operation = false;
+
 	if (emulate)
 		insn = prepare_emulated_insn(insn, asi, thumb);
 
@@ -410,7 +426,7 @@ probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi,
 		if (!matched && (insn & h->mask.bits) != h->value.bits)
 			continue;
 
-		if (!decode_regs(&insn, regs, emulate))
+		if (!decode_regs(&insn, asi, regs, emulate))
 			return INSN_REJECTED;
 
 		switch (type) {
diff --git a/arch/arm/kernel/probes.h b/arch/arm/kernel/probes.h
index dba9f24..568fd01 100644
--- a/arch/arm/kernel/probes.h
+++ b/arch/arm/kernel/probes.h
@@ -278,13 +278,19 @@ enum decode_reg_type {
 	REG_TYPE_NOSP,	   /* Register must not be SP */
 	REG_TYPE_NOSPPC,   /* Register must not be SP or PC */
 	REG_TYPE_NOPC,	   /* Register must not be PC */
+	REG_TYPE_NOPC_SP_STACK,	   /* REG_TYPE_NOPC and if this reg is sp
+				      then this is a stack operation */
 	REG_TYPE_NOPCWB,   /* No PC if load/store write-back flag also set */
+	REG_TYPE_NOPCWB_SP_STACK,   /* REG_TYPE_NOPCWB and, if this reg is sp
+				       then this is a stack operation */
 
 	/* The following types are used when the encoding for PC indicates
 	 * another instruction form. This distinction only matters for test
 	 * case coverage checks.
 	 */
 	REG_TYPE_NOPCX,	   /* Register must not be PC */
+	REG_TYPE_NOPCX_SP_STACK,	   /* REG_TYPE_NOPCX and if this reg is sp
+					      then this is a stack operation */
 	REG_TYPE_NOSPPCX,  /* Register must not be SP or PC */
 
 	/* Alias to allow '0' arg to be used in REGS macro. */
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v5 2/3] kprobes: copy ainsn after alloc aggr kprobe
  2014-08-27 13:02 [PATCH v5 0/3] kprobes: arm: enable OPTPROBES for ARM 32 Wang Nan
  2014-08-27 13:02 ` [PATCH v5 1/3] ARM: probes: check stack operation when decoding Wang Nan
@ 2014-08-27 13:02 ` Wang Nan
  2014-08-28  9:39   ` Masami Hiramatsu
  2014-08-27 13:02 ` [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32 Wang Nan
  2 siblings, 1 reply; 18+ messages in thread
From: Wang Nan @ 2014-08-27 13:02 UTC (permalink / raw)
  To: Russell King, David A. Long, Jon Medhurst, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Masami Hiramatsu, Will Deacon
  Cc: Wang Nan, Pei Feiyue, linux-arm-kernel, linux-kernel

Copy the old kprobe to the newly allocated optimized_kprobe before
arch_prepare_optimized_kprobe() is called. The original kprobe can
bring more information to the optimizer.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: "David A. Long" <dave.long@linaro.org> 
Cc: Jon Medhurst <tixy@linaro.org>
Cc: Taras Kondratiuk <taras.kondratiuk@linaro.org>
Cc: Ben Dooks <ben.dooks@codethink.co.uk>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 kernel/kprobes.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 3995f54..33cf568 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -730,7 +730,12 @@ static struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
 		return NULL;
 
 	INIT_LIST_HEAD(&op->list);
-	op->kp.addr = p->addr;
+
+	/*
+	 * copy gives arch_prepare_optimized_kprobe
+	 * more information
+	 */
+	copy_kprobe(p, &op->kp);
 	arch_prepare_optimized_kprobe(op);
 
 	return &op->kp;
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32
  2014-08-27 13:02 [PATCH v5 0/3] kprobes: arm: enable OPTPROBES for ARM 32 Wang Nan
  2014-08-27 13:02 ` [PATCH v5 1/3] ARM: probes: check stack operation when decoding Wang Nan
  2014-08-27 13:02 ` [PATCH v5 2/3] kprobes: copy ainsn after alloc aggr kprobe Wang Nan
@ 2014-08-27 13:02 ` Wang Nan
  2014-08-28 10:20   ` Masami Hiramatsu
  2014-09-02 13:49   ` Jon Medhurst (Tixy)
  2 siblings, 2 replies; 18+ messages in thread
From: Wang Nan @ 2014-08-27 13:02 UTC (permalink / raw)
  To: Russell King, David A. Long, Jon Medhurst, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Masami Hiramatsu, Will Deacon
  Cc: Wang Nan, Pei Feiyue, linux-arm-kernel, linux-kernel

This patch introduces kprobeopt for ARM 32.

Limitations:
 - Currently only kernels compiled with the ARM ISA are supported.

 - The offset between the probe point and the optinsn slot must not be
   larger than 32MiB. Masami Hiramatsu suggested replacing 2 words to
   lift this limit, but that would make things complex, so a further
   patch can make such an optimization.

Kprobe opt on ARM is relatively simpler than kprobe opt on x86 because
an ARM instruction is always 4 bytes long and 4-byte aligned. This patch
replaces the probed instruction with a 'b' instruction that branches to
trampoline code, which then calls optimized_callback().
optimized_callback() calls opt_pre_handler() to execute the kprobe
handler, and also emulates/simulates the replaced instruction.
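One encoding detail behind this: the top nibble of every ARM instruction is its condition code, so the generated branch can simply inherit the replaced instruction's cond field. A sketch of that masking (arch_optimize_kprobes() in the patch does the equivalent inline; the helper name is my own):

```c
#include <stdint.h>

/* Graft the replaced instruction's condition code (bits 31:28) onto
 * the generated branch's opcode and offset (bits 27:0). */
static uint32_t make_cond_branch(uint32_t replaced, uint32_t branch)
{
	return (replaced & 0xf0000000) | (branch & 0x0fffffff);
}
```

For instance, grafting the cond field of a strne (top nibble 0x1) onto an unconditional b (top nibble 0xe) yields a bne, so the branch fires only when the replaced instruction would have.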

When unregistering a kprobe, the deferred manner of the unoptimizer may
leave the branch instruction in place before the optimizer is called.
Unlike x86_64, which only copies the probed insn to the area after
optprobe_template_end and re-executes it, this patch calls singlestep
to emulate/simulate the insn directly. A further patch can optimize
this behavior.

v1 -> v2:

 - Improvement: if replaced instruction is conditional, generate a
   conditional branch instruction for it;

 - Introduces RELATIVEJUMP_OPCODES because ARM's kprobe_opcode_t is
   4 bytes;

 - Removes size field in struct arch_optimized_insn;

 - Use arm_gen_branch() to generate branch instruction;

 - Remove all recover logic: ARM doesn't use a tail buffer, so there is
   no need to recover replaced instructions as on x86;

 - Remove incorrect CONFIG_THUMB checking;

 - can_optimize() always returns true if the address is well aligned;

 - Improve optimized_callback: using opt_pre_handler();

 - Bugfix: correct range checking code and improve comments;

 - Fix commit message.

v2 -> v3:

 - Rename RELATIVEJUMP_OPCODES to MAX_COPIED_INSNS;

 - Remove unneeded checking:
      arch_check_optimized_kprobe(), can_optimize();

 - Add missing flush_icache_range() in arch_prepare_optimized_kprobe();

 - Remove unneeded 'return;'.

v3 -> v4:

 - Use __mem_to_opcode_arm() to translate copied_insn to ensure it
   works in a big-endian kernel;

 - Replace the 'nop' placeholder in the trampoline code template with
   '.long 0' to avoid confusion: a reader may regard 'nop' as an
   instruction, but it is in fact a value.

v4 -> v5:

 - Don't optimize stack store operations.

 - Introduce a 'prepared' field in arch_optimized_insn to indicate
   whether it is prepared, similar to the 'size' field on x86. See
   v1 -> v2.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: "David A. Long" <dave.long@linaro.org> 
Cc: Jon Medhurst <tixy@linaro.org>
Cc: Taras Kondratiuk <taras.kondratiuk@linaro.org>
Cc: Ben Dooks <ben.dooks@codethink.co.uk>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Will Deacon <will.deacon@arm.com>

---
 arch/arm/Kconfig               |   1 +
 arch/arm/include/asm/kprobes.h |  28 +++++
 arch/arm/kernel/Makefile       |   3 +-
 arch/arm/kernel/kprobes-opt.c  | 259 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 290 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm/kernel/kprobes-opt.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c49a775..7106fba 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -57,6 +57,7 @@ config ARM
 	select HAVE_MEMBLOCK
 	select HAVE_MOD_ARCH_SPECIFIC if ARM_UNWIND
 	select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
+	select HAVE_OPTPROBES if (!THUMB2_KERNEL)
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
diff --git a/arch/arm/include/asm/kprobes.h b/arch/arm/include/asm/kprobes.h
index 49fa0df..88a0345 100644
--- a/arch/arm/include/asm/kprobes.h
+++ b/arch/arm/include/asm/kprobes.h
@@ -51,5 +51,33 @@ int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
 int kprobe_exceptions_notify(struct notifier_block *self,
 			     unsigned long val, void *data);
 
+/* optinsn template addresses */
+extern __visible kprobe_opcode_t optprobe_template_entry;
+extern __visible kprobe_opcode_t optprobe_template_val;
+extern __visible kprobe_opcode_t optprobe_template_call;
+extern __visible kprobe_opcode_t optprobe_template_end;
+
+#define MAX_OPTIMIZED_LENGTH	(4)
+#define MAX_OPTINSN_SIZE				\
+	(((unsigned long)&optprobe_template_end -	\
+	  (unsigned long)&optprobe_template_entry))
+#define RELATIVEJUMP_SIZE	(4)
+
+struct arch_optimized_insn {
+	/*
+	 * copy of the original instructions.
+	 * Different from x86, ARM kprobe_opcode_t is u32.
+	 */
+#define MAX_COPIED_INSN	((RELATIVEJUMP_SIZE) / sizeof(kprobe_opcode_t))
+	kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
+	/* detour code buffer */
+	kprobe_opcode_t *insn;
+	/*
+	 * We always copy one instruction on arm32; its size is
+	 * always 4, so there is no size field.
+	 */
+	/* indicate whether this optimization is prepared */
+	bool prepared;
+};
 
 #endif /* _ARM_KPROBES_H */
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 38ddd9f..6a38ec1 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -52,11 +52,12 @@ obj-$(CONFIG_FUNCTION_GRAPH_TRACER)	+= ftrace.o insn.o
 obj-$(CONFIG_JUMP_LABEL)	+= jump_label.o insn.o patch.o
 obj-$(CONFIG_KEXEC)		+= machine_kexec.o relocate_kernel.o
 obj-$(CONFIG_UPROBES)		+= probes.o probes-arm.o uprobes.o uprobes-arm.o
-obj-$(CONFIG_KPROBES)		+= probes.o kprobes.o kprobes-common.o patch.o
+obj-$(CONFIG_KPROBES)		+= probes.o kprobes.o kprobes-common.o patch.o insn.o
 ifdef CONFIG_THUMB2_KERNEL
 obj-$(CONFIG_KPROBES)		+= kprobes-thumb.o probes-thumb.o
 else
 obj-$(CONFIG_KPROBES)		+= kprobes-arm.o probes-arm.o
+obj-$(CONFIG_OPTPROBES)		+= kprobes-opt.o
 endif
 obj-$(CONFIG_ARM_KPROBES_TEST)	+= test-kprobes.o
 test-kprobes-objs		:= kprobes-test.o
diff --git a/arch/arm/kernel/kprobes-opt.c b/arch/arm/kernel/kprobes-opt.c
new file mode 100644
index 0000000..8407858
--- /dev/null
+++ b/arch/arm/kernel/kprobes-opt.c
@@ -0,0 +1,259 @@
+/*
+ *  Kernel Probes Jump Optimization (Optprobes)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) IBM Corporation, 2002, 2004
+ * Copyright (C) Hitachi Ltd., 2012
+ * Copyright (C) Huawei Inc., 2014
+ */
+
+#include <linux/kprobes.h>
+#include <linux/jump_label.h>
+#include <asm/kprobes.h>
+#include <asm/cacheflush.h>
+/* for arm_gen_branch */
+#include "insn.h"
+/* for patch_text */
+#include "patch.h"
+
+asm (
+			".global optprobe_template_entry\n"
+			"optprobe_template_entry:\n"
+			"	sub	sp, sp, #80\n"
+			"	stmia	sp, {r0 - r14} \n"
+			"	add	r3, sp, #80\n"
+			"	str	r3, [sp, #52]\n"
+			"	mrs	r4, cpsr\n"
+			"	str	r4, [sp, #64]\n"
+			"	mov	r1, sp\n"
+			"	ldr	r0, 1f\n"
+			"	ldr	r2, 2f\n"
+			"	blx	r2\n"
+			"	ldr	r1, [sp, #64]\n"
+			"	msr	cpsr_fs, r1\n"
+			"	ldmia	sp, {r0 - r15}\n"
+			".global optprobe_template_val\n"
+			"optprobe_template_val:\n"
+			"1:	.long 0\n"
+			".global optprobe_template_call\n"
+			"optprobe_template_call:\n"
+			"2:	.long 0\n"
+			".global optprobe_template_end\n"
+			"optprobe_template_end:\n");
+
+#define TMPL_VAL_IDX \
+	((long)&optprobe_template_val - (long)&optprobe_template_entry)
+#define TMPL_CALL_IDX \
+	((long)&optprobe_template_call - (long)&optprobe_template_entry)
+#define TMPL_END_IDX \
+	((long)&optprobe_template_end - (long)&optprobe_template_entry)
+
+/*
+ * ARM can always optimize an instruction when using ARM ISA.
+ */
+int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->prepared;
+}
+
+/*
+ * In the ARM ISA, kprobe opt always replaces one instruction (4 bytes
+ * aligned and 4 bytes long). It is impossible to encounter another
+ * kprobe in the address range, so always return 0.
+ */
+int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	return 0;
+}
+
+/* Caller must ensure addr & 3 == 0 */
+static int can_optimize(struct optimized_kprobe *op)
+{
+	if (op->kp.ainsn.is_stack_operation)
+		return 0;
+	return 1;
+}
+
+/* Free optimized instruction slot */
+static void
+__arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
+{
+	if (op->optinsn.insn) {
+		free_optinsn_slot(op->optinsn.insn, dirty);
+		op->optinsn.insn = NULL;
+	}
+}
+
+extern void kprobe_handler(struct pt_regs *regs);
+
+static void
+optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+{
+	unsigned long flags;
+	struct kprobe *p = &op->kp;
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	/* Save skipped registers */
+	regs->ARM_pc = (unsigned long)op->kp.addr;
+	regs->ARM_ORIG_r0 = ~0UL;
+
+	local_irq_save(flags);
+
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		__this_cpu_write(current_kprobe, &op->kp);
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__this_cpu_write(current_kprobe, NULL);
+	}
+
+	/* In each case, we must singlestep the replaced instruction. */
+	op->kp.ainsn.insn_singlestep(p->opcode, &p->ainsn, regs);
+
+	local_irq_restore(flags);
+}
+
+int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
+{
+	u8 *buf;
+	unsigned long rel_chk;
+	unsigned long val;
+
+	if (!can_optimize(op))
+		return -EILSEQ;
+
+	op->optinsn.insn = get_optinsn_slot();
+	if (!op->optinsn.insn)
+		return -ENOMEM;
+
+	/*
+	 * Verify if the address gap is in 32MiB range, because this uses
+	 * a relative jump.
+	 *
+	 * kprobe opt uses a 'b' instruction to branch to optinsn.insn.
+	 * According to ARM manual, branch instruction is:
+	 *
+	 *   31  28 27           24 23             0
+	 *  +------+---+---+---+---+----------------+
+	 *  | cond | 1 | 0 | 1 | 0 |      imm24     |
+	 *  +------+---+---+---+---+----------------+
+	 *
+	 * imm24 is a signed 24-bit integer. The real branch offset is computed
+	 * by: imm32 = SignExtend(imm24:'00', 32);
+	 *
+	 * So the maximum forward branch should be:
+	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
+	 * The maximum backward branch should be:
+	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
+	 *
+	 * We can simply check (rel & 0xfe000003):
+	 *  if rel is positive, (rel & 0xfe000000) should be 0
+	 *  if rel is negative, (rel & 0xfe000000) should be 0xfe000000
+	 *  the last '3' is used for alignment checking.
+	 */
+	rel_chk = (unsigned long)((long)op->optinsn.insn -
+			(long)op->kp.addr + 8) & 0xfe000003;
+
+	if ((rel_chk != 0) && (rel_chk != 0xfe000000)) {
+		__arch_remove_optimized_kprobe(op, 0);
+		return -ERANGE;
+	}
+
+	buf = (u8 *)op->optinsn.insn;
+
+	/* Copy arch-dep-instance from template */
+	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
+
+	/* Set probe information */
+	val = (unsigned long)op;
+	memcpy(buf + TMPL_VAL_IDX, &val, sizeof(val));
+
+	/* Set probe function call */
+	val = (unsigned long)optimized_callback;
+	memcpy(buf + TMPL_CALL_IDX, &val, sizeof(val));
+
+	flush_icache_range((unsigned long)buf,
+			   (unsigned long)buf + TMPL_END_IDX);
+
+	op->optinsn.prepared = true;
+	return 0;
+}
+
+void arch_optimize_kprobes(struct list_head *oplist)
+{
+	struct optimized_kprobe *op, *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		unsigned long insn;
+		WARN_ON(kprobe_disabled(&op->kp));
+
+		/*
+		 * Backup instructions which will be replaced
+		 * by jump address
+		 */
+		memcpy(op->optinsn.copied_insn, op->kp.addr,
+				RELATIVEJUMP_SIZE);
+
+		insn = arm_gen_branch((unsigned long)op->kp.addr,
+				(unsigned long)op->optinsn.insn);
+		BUG_ON(insn == 0);
+
+		/*
+		 * Make it a conditional branch if the replaced insn
+		 * is conditional
+		 */
+		insn = (__mem_to_opcode_arm(
+			  op->optinsn.copied_insn[0]) & 0xf0000000) |
+			(insn & 0x0fffffff);
+
+		patch_text(op->kp.addr, insn);
+
+		list_del_init(&op->list);
+	}
+}
+
+void arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	arch_arm_kprobe(&op->kp);
+}
+
+/*
+ * Recover original instructions and breakpoints from relative jumps.
+ * The caller must hold kprobe_mutex.
+ */
+void arch_unoptimize_kprobes(struct list_head *oplist,
+ 			    struct list_head *done_list)
+{
+	struct optimized_kprobe *op, *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		arch_unoptimize_kprobe(op);
+		list_move(&op->list, done_list);
+	}
+}
+
+int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+ 				unsigned long addr)
+{
+	return ((unsigned long)op->kp.addr <= addr &&
+		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
+}
+
+void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	__arch_remove_optimized_kprobe(op, 1);
+}
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v5 2/3] kprobes: copy ainsn after alloc aggr kprobe
  2014-08-27 13:02 ` [PATCH v5 2/3] kprobes: copy ainsn after alloc aggr kprobe Wang Nan
@ 2014-08-28  9:39   ` Masami Hiramatsu
  2014-08-28 11:07     ` Wang Nan
  0 siblings, 1 reply; 18+ messages in thread
From: Masami Hiramatsu @ 2014-08-28  9:39 UTC (permalink / raw)
  To: Wang Nan
  Cc: Russell King, David A. Long, Jon Medhurst, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Will Deacon, Pei Feiyue, linux-arm-kernel,
	linux-kernel

(2014/08/27 22:02), Wang Nan wrote:
> Copy the old kprobe to the newly allocated optimized_kprobe before
> arch_prepare_optimized_kprobe() is called. The original kprobe can
> bring more information to the optimizer.
> 
> Signed-off-by: Wang Nan <wangnan0@huawei.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: "David A. Long" <dave.long@linaro.org> 
> Cc: Jon Medhurst <tixy@linaro.org>
> Cc: Taras Kondratiuk <taras.kondratiuk@linaro.org>
> Cc: Ben Dooks <ben.dooks@codethink.co.uk>
> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  kernel/kprobes.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 3995f54..33cf568 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -730,7 +730,12 @@ static struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
>  		return NULL;
>  
>  	INIT_LIST_HEAD(&op->list);
> -	op->kp.addr = p->addr;

Do not remove this, since copy_kprobe() doesn't copy kp.addr.

static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
{
        memcpy(&p->opcode, &ap->opcode, sizeof(kprobe_opcode_t));
        memcpy(&p->ainsn, &ap->ainsn, sizeof(struct arch_specific_insn));
}

Thank you,

> +
> +	/*
> +	 * copy gives arch_prepare_optimized_kprobe
> +	 * more information
> +	 */
> +	copy_kprobe(p, &op->kp);
>  	arch_prepare_optimized_kprobe(op);
>  
>  	return &op->kp;
> 


-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v5 1/3] ARM: probes: check stack operation when decoding
  2014-08-27 13:02 ` [PATCH v5 1/3] ARM: probes: check stack operation when decoding Wang Nan
@ 2014-08-28  9:51   ` Masami Hiramatsu
  2014-08-28 10:20     ` Russell King - ARM Linux
  0 siblings, 1 reply; 18+ messages in thread
From: Masami Hiramatsu @ 2014-08-28  9:51 UTC (permalink / raw)
  To: Wang Nan
  Cc: Russell King, David A. Long, Jon Medhurst, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Will Deacon, Pei Feiyue, linux-arm-kernel,
	linux-kernel

(2014/08/27 22:02), Wang Nan wrote:
> This patch improves the ARM instruction decoder, allowing it to check
> whether an instruction is a stack store operation. This information is
> important for kprobe optimization.
> 
> For normal str instructions, this patch adds a series of _SP_STACK
> register indicators to the decoder to test the base and offset
> registers in str <Rt>, [<Rn>, <Rm>] against sp.
> 
> For the stm instruction, it checks the sp register in the
> instruction-specific decoder.

OK, reviewed. but since I'm not so sure about arm32 ISA,
I need help from ARM32 maintainer to ack this.

Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>

Thank you,

> 
> Signed-off-by: Wang Nan <wangnan0@huawei.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: "David A. Long" <dave.long@linaro.org> 
> Cc: Jon Medhurst <tixy@linaro.org>
> Cc: Taras Kondratiuk <taras.kondratiuk@linaro.org>
> Cc: Ben Dooks <ben.dooks@codethink.co.uk>
> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
> Cc: Will Deacon <will.deacon@arm.com>
> 
> ---
>  arch/arm/include/asm/probes.h    |  1 +
>  arch/arm/kernel/kprobes-common.c |  4 ++++
>  arch/arm/kernel/probes-arm.c     |  4 ++--
>  arch/arm/kernel/probes-thumb.c   |  6 +++---
>  arch/arm/kernel/probes.c         | 20 ++++++++++++++++++--
>  arch/arm/kernel/probes.h         |  6 ++++++
>  6 files changed, 34 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm/include/asm/probes.h b/arch/arm/include/asm/probes.h
> index 806cfe6..3f6912c 100644
> --- a/arch/arm/include/asm/probes.h
> +++ b/arch/arm/include/asm/probes.h
> @@ -38,6 +38,7 @@ struct arch_probes_insn {
>  	probes_check_cc			*insn_check_cc;
>  	probes_insn_singlestep_t	*insn_singlestep;
>  	probes_insn_fn_t		*insn_fn;
> +	bool				is_stack_operation;
>  };
>  
>  #endif
> diff --git a/arch/arm/kernel/kprobes-common.c b/arch/arm/kernel/kprobes-common.c
> index 0bf5d64..4e8b918 100644
> --- a/arch/arm/kernel/kprobes-common.c
> +++ b/arch/arm/kernel/kprobes-common.c
> @@ -133,6 +133,10 @@ kprobe_decode_ldmstm(probes_opcode_t insn, struct arch_probes_insn *asi,
>  	int is_ldm = insn & 0x100000;
>  	int rn = (insn >> 16) & 0xf;
>  
> +	/* is this a push instruction? */
> +	if ((rn == 0xd) && (!is_ldm))
> +		asi->is_stack_operation = true;
> +
>  	if (rn <= 12 && (reglist & 0xe000) == 0) {
>  		/* Instruction only uses registers in the range R0..R12 */
>  		handler = emulate_generic_r0_12_noflags;
> diff --git a/arch/arm/kernel/probes-arm.c b/arch/arm/kernel/probes-arm.c
> index 8eaef81..5c187ba 100644
> --- a/arch/arm/kernel/probes-arm.c
> +++ b/arch/arm/kernel/probes-arm.c
> @@ -577,7 +577,7 @@ static const union decode_item arm_cccc_01xx_table[] = {
>  	/* STR (immediate)	cccc 010x x0x0 xxxx xxxx xxxx xxxx xxxx */
>  	/* STRB (immediate)	cccc 010x x1x0 xxxx xxxx xxxx xxxx xxxx */
>  	DECODE_EMULATEX	(0x0e100000, 0x04000000, PROBES_STORE,
> -						 REGS(NOPCWB, ANY, 0, 0, 0)),
> +						 REGS(NOPCWB_SP_STACK, ANY, 0, 0, 0)),
>  
>  	/* LDR (immediate)	cccc 010x x0x1 xxxx xxxx xxxx xxxx xxxx */
>  	/* LDRB (immediate)	cccc 010x x1x1 xxxx xxxx xxxx xxxx xxxx */
> @@ -587,7 +587,7 @@ static const union decode_item arm_cccc_01xx_table[] = {
>  	/* STR (register)	cccc 011x x0x0 xxxx xxxx xxxx xxxx xxxx */
>  	/* STRB (register)	cccc 011x x1x0 xxxx xxxx xxxx xxxx xxxx */
>  	DECODE_EMULATEX	(0x0e100000, 0x06000000, PROBES_STORE,
> -						 REGS(NOPCWB, ANY, 0, 0, NOPC)),
> +						 REGS(NOPCWB_SP_STACK, ANY, 0, 0, NOPC_SP_STACK)),
>  
>  	/* LDR (register)	cccc 011x x0x1 xxxx xxxx xxxx xxxx xxxx */
>  	/* LDRB (register)	cccc 011x x1x1 xxxx xxxx xxxx xxxx xxxx */
> diff --git a/arch/arm/kernel/probes-thumb.c b/arch/arm/kernel/probes-thumb.c
> index 4131351..d0d30d8 100644
> --- a/arch/arm/kernel/probes-thumb.c
> +++ b/arch/arm/kernel/probes-thumb.c
> @@ -54,7 +54,7 @@ static const union decode_item t32_table_1110_100x_x1xx[] = {
>  	/* STRD (immediate)	1110 1001 x1x0 xxxx xxxx xxxx xxxx xxxx */
>  	/* LDRD (immediate)	1110 1001 x1x1 xxxx xxxx xxxx xxxx xxxx */
>  	DECODE_EMULATEX	(0xff400000, 0xe9400000, PROBES_T32_LDRDSTRD,
> -						 REGS(NOPCWB, NOSPPC, NOSPPC, 0, 0)),
> +						 REGS(NOPCWB_SP_STACK, NOSPPC, NOSPPC, 0, 0)),
>  
>  	/* TBB			1110 1000 1101 xxxx xxxx xxxx 0000 xxxx */
>  	/* TBH			1110 1000 1101 xxxx xxxx xxxx 0001 xxxx */
> @@ -345,12 +345,12 @@ static const union decode_item t32_table_1111_100x[] = {
>  	/* STR (immediate)	1111 1000 1100 xxxx xxxx xxxx xxxx xxxx */
>  	/* LDR (immediate)	1111 1000 1101 xxxx xxxx xxxx xxxx xxxx */
>  	DECODE_EMULATEX	(0xffe00000, 0xf8c00000, PROBES_T32_LDRSTR,
> -						 REGS(NOPCX, ANY, 0, 0, 0)),
> +						 REGS(NOPCX_SP_STACK, ANY, 0, 0, 0)),
>  
>  	/* STR (register)	1111 1000 0100 xxxx xxxx 0000 00xx xxxx */
>  	/* LDR (register)	1111 1000 0101 xxxx xxxx 0000 00xx xxxx */
>  	DECODE_EMULATEX	(0xffe00fc0, 0xf8400000, PROBES_T32_LDRSTR,
> -						 REGS(NOPCX, ANY, 0, 0, NOSPPC)),
> +						 REGS(NOPCX_SP_STACK, ANY, 0, 0, NOSPPC)),
>  
>  	/* LDRB (literal)	1111 1000 x001 1111 xxxx xxxx xxxx xxxx */
>  	/* LDRSB (literal)	1111 1001 x001 1111 xxxx xxxx xxxx xxxx */
> diff --git a/arch/arm/kernel/probes.c b/arch/arm/kernel/probes.c
> index 1c77b8d..f811cac 100644
> --- a/arch/arm/kernel/probes.c
> +++ b/arch/arm/kernel/probes.c
> @@ -258,7 +258,9 @@ set_emulated_insn(probes_opcode_t insn, struct arch_probes_insn *asi,
>   * non-zero value, the corresponding nibble in pinsn is validated and modified
>   * according to the type.
>   */
> -static bool __kprobes decode_regs(probes_opcode_t *pinsn, u32 regs, bool modify)
> +static bool __kprobes decode_regs(probes_opcode_t *pinsn,
> +		struct arch_probes_insn *asi,
> +		u32 regs, bool modify)
>  {
>  	probes_opcode_t insn = *pinsn;
>  	probes_opcode_t mask = 0xf; /* Start at least significant nibble */
> @@ -307,11 +309,14 @@ static bool __kprobes decode_regs(probes_opcode_t *pinsn, u32 regs, bool modify)
>  				goto reject;
>  			break;
>  
> +		case REG_TYPE_NOPCWB_SP_STACK:
>  		case REG_TYPE_NOPCWB:
>  			if (!is_writeback(insn))
>  				break; /* No writeback, so any register is OK */
>  			/* fall through... */
> +		case REG_TYPE_NOPC_SP_STACK:
>  		case REG_TYPE_NOPC:
> +		case REG_TYPE_NOPCX_SP_STACK:
>  		case REG_TYPE_NOPCX:
>  			/* Reject PC (R15) */
>  			if (((insn ^ 0xffffffff) & mask) == 0)
> @@ -319,6 +324,15 @@ static bool __kprobes decode_regs(probes_opcode_t *pinsn, u32 regs, bool modify)
>  			break;
>  		}
>  
> +		/* check stack operation */
> +		switch (regs & 0xf) {
> +			case REG_TYPE_NOPCWB_SP_STACK:
> +			case REG_TYPE_NOPC_SP_STACK:
> +			case REG_TYPE_NOPCX_SP_STACK:
> +				if (((insn ^ 0xdddddddd) & mask) == 0)
> +					asi->is_stack_operation = true;
> +		}
> +
>  		/* Replace value of nibble with new register number... */
>  		insn &= ~mask;
>  		insn |= new_bits & mask;
> @@ -394,6 +408,8 @@ probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi,
>  	const struct decode_header *next;
>  	bool matched = false;
>  
> +	asi->is_stack_operation = false;
> +
>  	if (emulate)
>  		insn = prepare_emulated_insn(insn, asi, thumb);
>  
> @@ -410,7 +426,7 @@ probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi,
>  		if (!matched && (insn & h->mask.bits) != h->value.bits)
>  			continue;
>  
> -		if (!decode_regs(&insn, regs, emulate))
> +		if (!decode_regs(&insn, asi, regs, emulate))
>  			return INSN_REJECTED;
>  
>  		switch (type) {
> diff --git a/arch/arm/kernel/probes.h b/arch/arm/kernel/probes.h
> index dba9f24..568fd01 100644
> --- a/arch/arm/kernel/probes.h
> +++ b/arch/arm/kernel/probes.h
> @@ -278,13 +278,19 @@ enum decode_reg_type {
>  	REG_TYPE_NOSP,	   /* Register must not be SP */
>  	REG_TYPE_NOSPPC,   /* Register must not be SP or PC */
>  	REG_TYPE_NOPC,	   /* Register must not be PC */
> +	REG_TYPE_NOPC_SP_STACK,	   /* REG_TYPE_NOPC and if this reg is sp
> +				      then this is a stack operation */
>  	REG_TYPE_NOPCWB,   /* No PC if load/store write-back flag also set */
> +	REG_TYPE_NOPCWB_SP_STACK,   /* REG_TYPE_NOPCWB and, if this reg is sp
> +				       then this is a stack operation */
>  
>  	/* The following types are used when the encoding for PC indicates
>  	 * another instruction form. This distiction only matters for test
>  	 * case coverage checks.
>  	 */
>  	REG_TYPE_NOPCX,	   /* Register must not be PC */
> +	REG_TYPE_NOPCX_SP_STACK,	   /* REG_TYPE_NOPCX and if this reg is sp
> +					      then this is a stack operation */
>  	REG_TYPE_NOSPPCX,  /* Register must not be SP or PC */
>  
>  	/* Alias to allow '0' arg to be used in REGS macro. */
> 


-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32
  2014-08-27 13:02 ` [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32 Wang Nan
@ 2014-08-28 10:20   ` Masami Hiramatsu
  2014-09-02 13:49   ` Jon Medhurst (Tixy)
  1 sibling, 0 replies; 18+ messages in thread
From: Masami Hiramatsu @ 2014-08-28 10:20 UTC (permalink / raw)
  To: Wang Nan
  Cc: Russell King, David A. Long, Jon Medhurst, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Will Deacon, Pei Feiyue, linux-arm-kernel,
	linux-kernel

(2014/08/27 22:02), Wang Nan wrote:
> +/*
> + * ARM can always optimize an instruction when using ARM ISA.
> + */

Hmm, this comment doesn't look correct anymore :)

> +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
> +{
> +	return optinsn->prepared;
> +}

BTW, why don't you check optinsn->insn != NULL?
If it is not prepared for optimizing, optinsn->insn will always be NULL.

[...]
> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	u8 *buf;
> +	unsigned long rel_chk;
> +	unsigned long val;
> +
> +	if (!can_optimize(op))
> +		return -EILSEQ;
> +
> +	op->optinsn.insn = get_optinsn_slot();
> +	if (!op->optinsn.insn)
> +		return -ENOMEM;
> +
> +	/*
> +	 * Verify if the address gap is in 32MiB range, because this uses
> +	 * a relative jump.
> +	 *
> +	 * kprobe opt use a 'b' instruction to branch to optinsn.insn.
> +	 * According to ARM manual, branch instruction is:
> +	 *
> +	 *   31  28 27           24 23             0
> +	 *  +------+---+---+---+---+----------------+
> +	 *  | cond | 1 | 0 | 1 | 0 |      imm24     |
> +	 *  +------+---+---+---+---+----------------+
> +	 *
> +	 * imm24 is a signed 24 bits integer. The real branch offset is computed
> +	 * by: imm32 = SignExtend(imm24:'00', 32);
> +	 *
> +	 * So the maximum forward branch should be:
> +	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
> +	 * The maximum backward branch should be:
> +	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
> +	 *
> +	 * We can simply check (rel & 0xfe000003):
> +	 *  if rel is positive, (rel & 0xfe000000) should be 0
> +	 *  if rel is negative, (rel & 0xfe000000) should be 0xfe000000
> +	 *  the last '3' is used for alignment checking.
> +	 */
> +	rel_chk = (unsigned long)((long)op->optinsn.insn -
> +			(long)op->kp.addr + 8) & 0xfe000003;
> +
> +	if ((rel_chk != 0) && (rel_chk != 0xfe000000)) {
> +		__arch_remove_optimized_kprobe(op, 0);
> +		return -ERANGE;
> +	}
> +
> +	buf = (u8 *)op->optinsn.insn;
> +
> +	/* Copy arch-dep-instance from template */
> +	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
> +
> +	/* Set probe information */
> +	val = (unsigned long)op;
> +	memcpy(buf + TMPL_VAL_IDX, &val, sizeof(val));
> +
> +	/* Set probe function call */
> +	val = (unsigned long)optimized_callback;
> +	memcpy(buf + TMPL_CALL_IDX, &val, sizeof(val));
> +
> +	flush_icache_range((unsigned long)buf,
> +			   (unsigned long)buf + TMPL_END_IDX);
> +
> +	op->optinsn.prepared = true;
> +	return 0;
> +}
> +

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com




* Re: [PATCH v5 1/3] ARM: probes: check stack operation when decoding
  2014-08-28  9:51   ` Masami Hiramatsu
@ 2014-08-28 10:20     ` Russell King - ARM Linux
  2014-08-28 10:24       ` Will Deacon
  0 siblings, 1 reply; 18+ messages in thread
From: Russell King - ARM Linux @ 2014-08-28 10:20 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Wang Nan, David A. Long, Jon Medhurst, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Will Deacon, Pei Feiyue, linux-arm-kernel,
	linux-kernel

On Thu, Aug 28, 2014 at 06:51:15PM +0900, Masami Hiramatsu wrote:
> (2014/08/27 22:02), Wang Nan wrote:
> > This patch improves arm instruction decoder, allows it check whether an
> > instruction is a stack store operation. This information is important
> > for kprobe optimization.
> > 
> > For normal str instruction, this patch add a series of _SP_STACK
> > register indicator in the decoder to test the base and offset register
> > in ldr <Rt>, [<Rn>, <Rm>] against sp.
> > 
> > For stm instruction, it check sp register in instruction specific
> > decoder.
> 
> OK, reviewed. but since I'm not so sure about arm32 ISA,
> I need help from ARM32 maintainer to ack this.

What you actually need is an ack from the ARM kprobes people who
understand this code.  That would be much more meaningful than my
ack.  They're already on the Cc list.

-- 
FTTC broadband for 0.8mile line: currently at 9.5Mbps down 400kbps up
according to speedtest.net.


* Re: [PATCH v5 1/3] ARM: probes: check stack operation when decoding
  2014-08-28 10:20     ` Russell King - ARM Linux
@ 2014-08-28 10:24       ` Will Deacon
  2014-08-29  8:47         ` Jon Medhurst (Tixy)
  0 siblings, 1 reply; 18+ messages in thread
From: Will Deacon @ 2014-08-28 10:24 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Masami Hiramatsu, Wang Nan, David A. Long, Jon Medhurst,
	Taras Kondratiuk, Ben Dooks, Ananth N Mavinakayanahalli,
	Anil S Keshavamurthy, David S. Miller, Pei Feiyue,
	linux-arm-kernel, linux-kernel

On Thu, Aug 28, 2014 at 11:20:21AM +0100, Russell King - ARM Linux wrote:
> On Thu, Aug 28, 2014 at 06:51:15PM +0900, Masami Hiramatsu wrote:
> > (2014/08/27 22:02), Wang Nan wrote:
> > > This patch improves arm instruction decoder, allows it check whether an
> > > instruction is a stack store operation. This information is important
> > > for kprobe optimization.
> > > 
> > > For normal str instruction, this patch add a series of _SP_STACK
> > > register indicator in the decoder to test the base and offset register
> > > in ldr <Rt>, [<Rn>, <Rm>] against sp.
> > > 
> > > For stm instruction, it check sp register in instruction specific
> > > decoder.
> > 
> > OK, reviewed. but since I'm not so sure about arm32 ISA,
> > I need help from ARM32 maintainer to ack this.
> 
> What you actually need is an ack from the ARM kprobes people who
> understand this code.  That would be much more meaningful than my
> ack.  They're already on the Cc list.

Tixy, can you take a look please?

Will


* Re: [PATCH v5 2/3] kprobes: copy ainsn after alloc aggr kprobe
  2014-08-28  9:39   ` Masami Hiramatsu
@ 2014-08-28 11:07     ` Wang Nan
  0 siblings, 0 replies; 18+ messages in thread
From: Wang Nan @ 2014-08-28 11:07 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Russell King, David A. Long, Jon Medhurst, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Will Deacon, Pei Feiyue, linux-arm-kernel,
	linux-kernel

On 2014/8/28 17:39, Masami Hiramatsu wrote:
> (2014/08/27 22:02), Wang Nan wrote:
>> Copy the old kprobe to the newly allocated optimized_kprobe before
>> arch_prepare_optimized_kprobe(). The original kprobe can bring more
>> information to the optimizer.
>>
>> Signed-off-by: Wang Nan <wangnan0@huawei.com>
>> Cc: Russell King <linux@arm.linux.org.uk>
>> Cc: "David A. Long" <dave.long@linaro.org> 
>> Cc: Jon Medhurst <tixy@linaro.org>
>> Cc: Taras Kondratiuk <taras.kondratiuk@linaro.org>
>> Cc: Ben Dooks <ben.dooks@codethink.co.uk>
>> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
>> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
>> Cc: "David S. Miller" <davem@davemloft.net>
>> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> ---
>>  kernel/kprobes.c | 7 ++++++-
>>  1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
>> index 3995f54..33cf568 100644
>> --- a/kernel/kprobes.c
>> +++ b/kernel/kprobes.c
>> @@ -730,7 +730,12 @@ static struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
>>  		return NULL;
>>  
>>  	INIT_LIST_HEAD(&op->list);
>> -	op->kp.addr = p->addr;
> 
> Do not remove this, since copy_kprobe() doesn't copy kp.addr.
> 
> static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
> {
>         memcpy(&p->opcode, &ap->opcode, sizeof(kprobe_opcode_t));
>         memcpy(&p->ainsn, &ap->ainsn, sizeof(struct arch_specific_insn));
> }
> 
> Thank you,
> 

It is my fault. I'll fix it in the next version.

Thank you for your comment!

>> +
>> +	/*
>> +	 * copy gives arch_prepare_optimized_kprobe
>> +	 * more information
>> +	 */
>> +	copy_kprobe(p, &op->kp);
>>  	arch_prepare_optimized_kprobe(op);
>>  
>>  	return &op->kp;
>>
> 
> 




* Re: [PATCH v5 1/3] ARM: probes: check stack operation when decoding
  2014-08-28 10:24       ` Will Deacon
@ 2014-08-29  8:47         ` Jon Medhurst (Tixy)
  2014-08-30  1:28           ` Wang Nan
  0 siblings, 1 reply; 18+ messages in thread
From: Jon Medhurst (Tixy) @ 2014-08-29  8:47 UTC (permalink / raw)
  To: Will Deacon
  Cc: Russell King - ARM Linux, Masami Hiramatsu, Wang Nan,
	David A. Long, Taras Kondratiuk, Ben Dooks,
	Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Pei Feiyue, linux-arm-kernel, linux-kernel

On Thu, 2014-08-28 at 11:24 +0100, Will Deacon wrote:
> On Thu, Aug 28, 2014 at 11:20:21AM +0100, Russell King - ARM Linux wrote:
> > On Thu, Aug 28, 2014 at 06:51:15PM +0900, Masami Hiramatsu wrote:
> > > (2014/08/27 22:02), Wang Nan wrote:
> > > > This patch improves arm instruction decoder, allows it check whether an
> > > > instruction is a stack store operation. This information is important
> > > > for kprobe optimization.
> > > > 
> > > > For normal str instruction, this patch add a series of _SP_STACK
> > > > register indicator in the decoder to test the base and offset register
> > > > in ldr <Rt>, [<Rn>, <Rm>] against sp.
> > > > 
> > > > For stm instruction, it check sp register in instruction specific
> > > > decoder.
> > > 
> > > OK, reviewed. but since I'm not so sure about arm32 ISA,
> > > I need help from ARM32 maintainer to ack this.
> > 
> > What you actually need is an ack from the ARM kprobes people who
> > understand this code.  That would be much more meaningful than my
> > ack.  They're already on the Cc list.
> 
> Tixy, can you take a look please?

I'll take an in depth look on Monday as I'm currently on holiday, so for
now just some brief and possibly not well thought out comments...

- If the intent is to not optimise stack push operations, then this
actually excludes the main use of kprobes, which I believe is to insert
probes at the start of functions (there's even a specific jprobes API
for that); this is because functions usually start by saving registers on
the stack.

- Crowbarring in special-case testing for stack operations looks a bit
inelegant and not a sustainable way of doing this; what about the next
special case we need? However, stack push operations _are_ a general
special case for instruction emulation, so perhaps that's OK, and leads
me to...

- The current 'unoptimised' kprobes implementation allows for pushing on
the stack (see __und_svc and the unused (?) jprobe_return) but this is
just aimed at stm instructions, not things like "str r0, [sp, -imm]!"
that might be used to simultaneously save a register and reserve an
arbitrary amount of stack space. Probing such instructions could lead to
the kprobes code trashing the kernel stack.

-- 
Tixy




* Re: [PATCH v5 1/3] ARM: probes: check stack operation when decoding
  2014-08-29  8:47         ` Jon Medhurst (Tixy)
@ 2014-08-30  1:28           ` Wang Nan
  2014-09-01 17:29             ` Jon Medhurst (Tixy)
  0 siblings, 1 reply; 18+ messages in thread
From: Wang Nan @ 2014-08-30  1:28 UTC (permalink / raw)
  To: Jon Medhurst (Tixy), Will Deacon
  Cc: Russell King - ARM Linux, Masami Hiramatsu, David A. Long,
	Taras Kondratiuk, Ben Dooks, Ananth N Mavinakayanahalli,
	Anil S Keshavamurthy, David S. Miller, Pei Feiyue,
	linux-arm-kernel, linux-kernel

On 2014/8/29 16:47, Jon Medhurst (Tixy) wrote:
> On Thu, 2014-08-28 at 11:24 +0100, Will Deacon wrote:
>> On Thu, Aug 28, 2014 at 11:20:21AM +0100, Russell King - ARM Linux wrote:
>>> On Thu, Aug 28, 2014 at 06:51:15PM +0900, Masami Hiramatsu wrote:
>>>> (2014/08/27 22:02), Wang Nan wrote:
>>>>> This patch improves arm instruction decoder, allows it check whether an
>>>>> instruction is a stack store operation. This information is important
>>>>> for kprobe optimization.
>>>>>
>>>>> For normal str instruction, this patch add a series of _SP_STACK
>>>>> register indicator in the decoder to test the base and offset register
>>>>> in ldr <Rt>, [<Rn>, <Rm>] against sp.
>>>>>
>>>>> For stm instruction, it check sp register in instruction specific
>>>>> decoder.
>>>>
>>>> OK, reviewed. but since I'm not so sure about arm32 ISA,
>>>> I need help from ARM32 maintainer to ack this.
>>>
>>> What you actually need is an ack from the ARM kprobes people who
>>> understand this code.  That would be much more meaningful than my
>>> ack.  They're already on the Cc list.
>>
>> Tixy, can you take a look please?
> 
> I'll take an in depth look on Monday as I'm currently on holiday, so for
> now just some brief and possibly not well thought out comments...
> 
> - If the intent is to not optimise stack push operations, then this
> actually excludes the main use of kprobes which I believe is to insert
> probes at the start of functions (there's even a specific jprobes API
> for that) this is because functions usually start by saving registers on
> the stack.

Agree. If the decoder can bring up more information, kprobeopt can dynamically
compute the stack range an instruction requires and then adjust the stack
protection range. For example: for a "push {r4, r5}" instruction, the decoder
should report that it is a stack store operation requiring 8 bytes of stack;
then, when composing the trampoline code, we can put registers at [sp, #-8].
Only instructions such as "str r0, [sp, r1]" should be prevented.

However, this needs more improvement to the decoder: all store operations
would then need a special decoder. What do you think?

> 
> - Crowbarring in special case testing for stack operations looks a bit
> inelegant and not a sustainable way of doing this, what about the next
> special case we need? However, stack push operations _are_ a general
> special cases for instruction emulation so perhaps that's OK, and leads
> me to...
> 
> - The current 'unoptimised' kprobes implementation allows for pushing on
> the stack (see __und_svc and the unused (?) jprobe_return) but this is
> just aimed at stm instructions, not things like "str r0, [sp, -imm]!"
> that might be used to simultaneously save a register and reserve an
> arbitrary amount of stack space. Probing such instructions could lead to
> the kprobes code trashing the kernel stack.

By a quick search I found just two instructions matching "str.*\[sp,[^\]]*-[^4]",
one in Ldiv0_64, another in Ldiv0; both are "str     lr, [sp, #-8]!". So I think
such instructions are very rare.

Furthermore, I thought "unoptimised" kprobes used another stack; could you please
explain how such probing trashes the normal kernel stack?

Thank you.




* Re: [PATCH v5 1/3] ARM: probes: check stack operation when decoding
  2014-08-30  1:28           ` Wang Nan
@ 2014-09-01 17:29             ` Jon Medhurst (Tixy)
  0 siblings, 0 replies; 18+ messages in thread
From: Jon Medhurst (Tixy) @ 2014-09-01 17:29 UTC (permalink / raw)
  To: Wang Nan
  Cc: Will Deacon, Russell King - ARM Linux, Masami Hiramatsu,
	David A. Long, Taras Kondratiuk, Ben Dooks,
	Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Pei Feiyue, linux-arm-kernel, linux-kernel

On Sat, 2014-08-30 at 09:28 +0800, Wang Nan wrote:
> On 2014/8/29 16:47, Jon Medhurst (Tixy) wrote:
> > On Thu, 2014-08-28 at 11:24 +0100, Will Deacon wrote:
> >> On Thu, Aug 28, 2014 at 11:20:21AM +0100, Russell King - ARM Linux wrote:
> >>> On Thu, Aug 28, 2014 at 06:51:15PM +0900, Masami Hiramatsu wrote:
> >>>> (2014/08/27 22:02), Wang Nan wrote:
> >>>>> This patch improves arm instruction decoder, allows it check whether an
> >>>>> instruction is a stack store operation. This information is important
> >>>>> for kprobe optimization.
> >>>>>
> >>>>> For normal str instruction, this patch add a series of _SP_STACK
> >>>>> register indicator in the decoder to test the base and offset register
> >>>>> in ldr <Rt>, [<Rn>, <Rm>] against sp.
> >>>>>
> >>>>> For stm instruction, it check sp register in instruction specific
> >>>>> decoder.
> >>>>
> >>>> OK, reviewed. but since I'm not so sure about arm32 ISA,
> >>>> I need help from ARM32 maintainer to ack this.
> >>>
> >>> What you actually need is an ack from the ARM kprobes people who
> >>> understand this code.  That would be much more meaningful than my
> >>> ack.  They're already on the Cc list.
> >>
> >> Tixy, can you take a look please?
> > 
> > I'll take an in depth look on Monday as I'm currently on holiday, so for
> > now just some brief and possibly not well thought out comments...
> > 
> > - If the intent is to not optimise stack push operations, then this
> > actually excludes the main use of kprobes which I believe is to insert
> > probes at the start of functions (there's even a specific jprobes API
> > for that) this is because functions usually start by saving registers on
> > the stack.
> 
> Agree. If the decoder can bring up more information, kprobeopt can dynamically
> compute the range of stack an instruction require, then adjust stack protection range.
> This need ARM decoder bring up more information. For example: for a "push {r4, r5}"
> instruction, decoder should report it is a stack store operation, require 8 bytes
> of stack, then when composing trampoline code, we can put registers at
> [sp, #-8]. Only instructions such as "str r0, [sp, r1]" should be prevented.
> 
> However, this need more improvement on decoder: all store operations should use
> a special decorer then. What do you think?

This doesn't work for the non-optimised kprobes case because, when a
probe is hit, we couldn't know what stack addresses to reserve until
we're several calls deep in the exception handler and possibly already
using those addresses. Anyway, perhaps we don't need to worry about
these instructions after all, more below...

> 
> > 
> > - Crowbarring in special case testing for stack operations looks a bit
> > inelegant and not a sustainable way of doing this, what about the next
> > special case we need? However, stack push operations _are_ a general
> > special cases for instruction emulation so perhaps that's OK, and leads
> > me to...
> > 
> > - The current 'unoptimised' kprobes implementation allows for pushing on
> > the stack (see __und_svc and the unused (?) jprobe_return) but this is
> > just aimed at stm instructions, not things like "str r0, [sp, -imm]!"
> > that might be used to simultaneously save a register and reserve an
> > arbitrary amount of stack space. Probing such instructions could lead to
> > the kprobes code trashing the kernel stack.
> 
> By a quick search I just find tow instructions matching "str.*\[sp,[^\]]*-[^4]",
> one in Ldiv0_64, another in Ldiv0, both are "str     lr, [sp, #-8]!". So I think
> such instructions are very special.

Yes, I built a multi_v7_defconfig kernel with GCC 4.9 and I too could
only find those occurrences of the problematic instructions, which come
from hand-written assembler, so we probably aren't restricting any kprobes
users if we don't support probing those types of str instruction.

That would just leave us to support stm instructions which push
registers onto the stack, and the optimised kprobes could take the same
approach as the unoptimised ones and just unconditionally reserve 64
bytes of stack on every probe (see __und_svc in entry-armv.S).

> Furthermore, I thought "unoptimised" kprobe use another stack, could you please
> explain how such probing trashing normal kernel stack?

No, unoptimised probes doesn't use another stack, they use the stack of
the current kernel task.

-- 
Tixy



* Re: [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32
  2014-08-27 13:02 ` [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32 Wang Nan
  2014-08-28 10:20   ` Masami Hiramatsu
@ 2014-09-02 13:49   ` Jon Medhurst (Tixy)
  2014-09-03 10:18     ` Masami Hiramatsu
  1 sibling, 1 reply; 18+ messages in thread
From: Jon Medhurst (Tixy) @ 2014-09-02 13:49 UTC (permalink / raw)
  To: Wang Nan
  Cc: Russell King, David A. Long, Taras Kondratiuk, Ben Dooks,
	Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Masami Hiramatsu, Will Deacon, Pei Feiyue,
	linux-arm-kernel, linux-kernel

I gave the patches a quick test and in doing so found a bug which stops
any probes actually being optimised, and the same bug should affect X86,
see comment below...

On Wed, 2014-08-27 at 21:02 +0800, Wang Nan wrote:
[...]
> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	u8 *buf;
> +	unsigned long rel_chk;
> +	unsigned long val;
> +
> +	if (!can_optimize(op))
> +		return -EILSEQ;
> +
> +	op->optinsn.insn = get_optinsn_slot();
> +	if (!op->optinsn.insn)
> +		return -ENOMEM;
> +
> +	/*
> +	 * Verify if the address gap is in 32MiB range, because this uses
> +	 * a relative jump.
> +	 *
> +	 * kprobe opt use a 'b' instruction to branch to optinsn.insn.
> +	 * According to ARM manual, branch instruction is:
> +	 *
> +	 *   31  28 27           24 23             0
> +	 *  +------+---+---+---+---+----------------+
> +	 *  | cond | 1 | 0 | 1 | 0 |      imm24     |
> +	 *  +------+---+---+---+---+----------------+
> +	 *
> +	 * imm24 is a signed 24 bits integer. The real branch offset is computed
> +	 * by: imm32 = SignExtend(imm24:'00', 32);
> +	 *
> +	 * So the maximum forward branch should be:
> +	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
> +	 * The maximum backword branch should be:
> +	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
> +	 *
> +	 * We can simply check (rel & 0xfe000003):
> +	 *  if rel is positive, (rel & 0xfe000000) shoule be 0
> +	 *  if rel is negitive, (rel & 0xfe000000) should be 0xfe000000
> +	 *  the last '3' is used for alignment checking.
> +	 */
> +	rel_chk = (unsigned long)((long)op->optinsn.insn -
> +			(long)op->kp.addr + 8) & 0xfe000003;

This check always fails because op->kp.addr is zero. Debugging this I
found that this function is called from alloc_aggr_kprobe() and that
copies the real kprobe into op->kp using copy_kprobe(), which doesn't
actually copy the 'addr' value...

        static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
        {
        	memcpy(&p->opcode, &ap->opcode, sizeof(kprobe_opcode_t));
        	memcpy(&p->ainsn, &ap->ainsn, sizeof(struct arch_specific_insn));
        }

The thing is, the new ARM code is a close copy of the existing x86 version,
so that would also suffer the same problem of kp.addr always being zero.
So either I've misunderstood something or this is a fundamental bug no
one has noticed before.

Throwing in 'p->addr = ap->addr' into the copy_kprobe function fixed the
behaviour of arch_prepare_optimized_kprobe().

I was testing this by running the kprobes tests
(CONFIG_ARM_KPROBES_TEST=y) and putting a few printk's in strategic
places in kprobes-opt.c to check to see what code paths got executed,
which is how I discovered the problem.

Two things to note when running kprobes tests...

1. On SMP systems it's very slow because of kprobes' use of stop_machine
for applying and removing probes; this forces the system to idle and
wait for the next scheduler tick for each probe change.

2. It's a good idea to enable VERBOSE in kprobes-test.h to get output
for each test case instruction; this reassures you that things are
progressing and, if things explode, lets you know which instruction type
triggered it.

-- 
Tixy



* Re: [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32
  2014-09-02 13:49   ` Jon Medhurst (Tixy)
@ 2014-09-03 10:18     ` Masami Hiramatsu
  2014-09-03 10:30       ` Will Deacon
  0 siblings, 1 reply; 18+ messages in thread
From: Masami Hiramatsu @ 2014-09-03 10:18 UTC (permalink / raw)
  To: Jon Medhurst (Tixy)
  Cc: Wang Nan, Russell King, David A. Long, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Will Deacon, Pei Feiyue, linux-arm-kernel,
	linux-kernel

(2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> I gave the patches a quick test and in doing so found a bug which stops
> any probes actually being optimised, and the same bug should affect X86,
> see comment below...
> 
> On Wed, 2014-08-27 at 21:02 +0800, Wang Nan wrote:
> [...]
>> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
>> +{
>> +	u8 *buf;
>> +	unsigned long rel_chk;
>> +	unsigned long val;
>> +
>> +	if (!can_optimize(op))
>> +		return -EILSEQ;
>> +
>> +	op->optinsn.insn = get_optinsn_slot();
>> +	if (!op->optinsn.insn)
>> +		return -ENOMEM;
>> +
>> +	/*
>> +	 * Verify if the address gap is in 32MiB range, because this uses
>> +	 * a relative jump.
>> +	 *
>> +	 * kprobe opt use a 'b' instruction to branch to optinsn.insn.
>> +	 * According to ARM manual, branch instruction is:
>> +	 *
>> +	 *   31  28 27           24 23             0
>> +	 *  +------+---+---+---+---+----------------+
>> +	 *  | cond | 1 | 0 | 1 | 0 |      imm24     |
>> +	 *  +------+---+---+---+---+----------------+
>> +	 *
>> +	 * imm24 is a signed 24 bits integer. The real branch offset is computed
>> +	 * by: imm32 = SignExtend(imm24:'00', 32);
>> +	 *
>> +	 * So the maximum forward branch should be:
>> +	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
>> +	 * The maximum backward branch should be:
>> +	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
>> +	 *
>> +	 * We can simply check (rel & 0xfe000003):
>> +	 *  if rel is positive, (rel & 0xfe000000) should be 0
>> +	 *  if rel is negative, (rel & 0xfe000000) should be 0xfe000000
>> +	 *  the last '3' is used for alignment checking.
>> +	 */
>> +	rel_chk = (unsigned long)((long)op->optinsn.insn -
>> +			(long)op->kp.addr + 8) & 0xfe000003;
> 
> This check always fails because op->kp.addr is zero. Debugging this I
> found that this function is called from alloc_aggr_kprobe() and that
> copies the real kprobe into op->kp using copy_kprobe(), which doesn't
> actually copy the 'addr' value...

Right, I've already pointed that :)

https://lkml.org/lkml/2014/8/28/114

> 
>         static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
>         {
>         	memcpy(&p->opcode, &ap->opcode, sizeof(kprobe_opcode_t));
>         	memcpy(&p->ainsn, &ap->ainsn, sizeof(struct arch_specific_insn));
>         }
> 
> Thing is, the new ARM code is a close copy of the existing X86 version,
> so that would also suffer the same problem of kp.addr always being zero.
> So either I've misunderstood something, or this is a fundamental bug no
> one has noticed before.
> 
> Throwing 'p->addr = ap->addr' into the copy_kprobe function fixed the
> behaviour of arch_prepare_optimized_kprobe.
> 
> I was testing this by running the kprobes tests
> (CONFIG_ARM_KPROBES_TEST=y) and putting a few printk's in strategic
> places in kprobes-opt.c to check to see what code paths got executed,
> which is how I discovered the problem.
> 
> Two things to note when running kprobes tests...
> 
> 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> for applying and removing probes, this forces the system to idle and
> wait for the next scheduler tick for each probe change.

Hmm, agreed. That seems to be an arm32 limitation on self-modifying code
under SMP. I'm not sure how we can handle it, but I guess:
 - for some processors which have better cache coherency for SMP, we can
   atomically replace the breakpoint code with the original code.
 - even if we get an "undefined instruction" exception, its handler can
   ask kprobes whether the address is being modified; if it is, we can
   just return from the exception to retry the execution.

Thank you,

> 2. It's a good idea to enable VERBOSE in kprobes-test.h to get output
> for each test case instruction, this reassures you things are
> progressing and if things explode lets you know what instruction type
> triggered it.
> 


-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com




* Re: [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32
  2014-09-03 10:18     ` Masami Hiramatsu
@ 2014-09-03 10:30       ` Will Deacon
  2014-09-04 10:40         ` Jon Medhurst (Tixy)
  0 siblings, 1 reply; 18+ messages in thread
From: Will Deacon @ 2014-09-03 10:30 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Jon Medhurst (Tixy),
	Wang Nan, Russell King, David A. Long, Taras Kondratiuk,
	Ben Dooks, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	David S. Miller, Pei Feiyue, linux-arm-kernel, linux-kernel

On Wed, Sep 03, 2014 at 11:18:04AM +0100, Masami Hiramatsu wrote:
> (2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> > 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> > for applying and removing probes, this forces the system to idle and
> > wait for the next scheduler tick for each probe change.
> 
> Hmm, agreed. It seems that arm32 limitation of self-modifying code on SMP.
> I'm not sure how we can handle it, but I guess;
>  - for some processors which have better coherent cache for SMP, we can
>    atomically replace the breakpoint code with original code.

Except that it's not an architected breakpoint instruction, as I mentioned
before. It's also not really a property of the cache.

>  - Even if we get an "undefined instruction" exception, its handler can
>    ask kprobes if the address is under modifying or not. And if it is,
>    we can just return from the exception to retry the execution.

It's not as simple as that -- you could potentially see an interleaving of
the two instructions. The architecture is even broader than that:

 Concurrent modification and execution of instructions can lead to the
 resulting instruction performing any behavior that can be achieved by
 executing any sequence of instructions that can be executed from the
 same Exception level,

There are additional guarantees for some instructions (like the architected
BKPT instruction).

Will


* Re: [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32
  2014-09-03 10:30       ` Will Deacon
@ 2014-09-04 10:40         ` Jon Medhurst (Tixy)
  2014-09-04 10:52           ` Will Deacon
  0 siblings, 1 reply; 18+ messages in thread
From: Jon Medhurst (Tixy) @ 2014-09-04 10:40 UTC (permalink / raw)
  To: Will Deacon
  Cc: Masami Hiramatsu, Wang Nan, Russell King, David A. Long,
	Taras Kondratiuk, Ben Dooks, Ananth N Mavinakayanahalli,
	Anil S Keshavamurthy, David S. Miller, Pei Feiyue,
	linux-arm-kernel, linux-kernel

On Wed, 2014-09-03 at 11:30 +0100, Will Deacon wrote:
> On Wed, Sep 03, 2014 at 11:18:04AM +0100, Masami Hiramatsu wrote:
> > (2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> > > 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> > > for applying and removing probes, this forces the system to idle and
> > > wait for the next scheduler tick for each probe change.
> > 
> > Hmm, agreed. It seems that arm32 limitation of self-modifying code on SMP.
> > I'm not sure how we can handle it, but I guess;
> >  - for some processors which have better coherent cache for SMP, we can
> >    atomically replace the breakpoint code with original code.
> 
> Except that it's not an architected breakpoint instruction, as I mentioned
> before. It's also not really a property of the cache.
> 
> >  - Even if we get an "undefined instruction" exception, its handler can
> >    ask kprobes if the address is under modifying or not. And if it is,
> >    we can just return from the exception to retry the execution.
> 
> It's not as simple as that -- you could potentially see an interleaving of
> the two instructions. The architecture is even broader than that:
> 
>  Concurrent modification and execution of instructions can lead to the
>  resulting instruction performing any behavior that can be achieved by
>  executing any sequence of instructions that can be executed from the
>  same Exception level,
> 
> There are additional guarantees for some instructions (like the architected
> BKPT instruction).

I should point out that the current implementation of kprobes doesn't
use stop_machine in order to meet the above architecture restrictions,
and that arming kprobes (changing the probed instruction to an
undefined instruction) isn't usually done under stop_machine, so other
CPUs could be executing the original instruction as it's being modified.

So, should we make patch_text unconditionally use stop_machine and
remove all direct uses of __patch_text? (E.g. by jump labels.)

-- 
Tixy



* Re: [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32
  2014-09-04 10:40         ` Jon Medhurst (Tixy)
@ 2014-09-04 10:52           ` Will Deacon
  0 siblings, 0 replies; 18+ messages in thread
From: Will Deacon @ 2014-09-04 10:52 UTC (permalink / raw)
  To: Jon Medhurst (Tixy)
  Cc: Masami Hiramatsu, Wang Nan, Russell King, David A. Long,
	Taras Kondratiuk, Ben Dooks, Ananth N Mavinakayanahalli,
	Anil S Keshavamurthy, David S. Miller, Pei Feiyue,
	linux-arm-kernel, linux-kernel

On Thu, Sep 04, 2014 at 11:40:35AM +0100, Jon Medhurst (Tixy) wrote:
> On Wed, 2014-09-03 at 11:30 +0100, Will Deacon wrote:
> > On Wed, Sep 03, 2014 at 11:18:04AM +0100, Masami Hiramatsu wrote:
> > > (2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> > > > 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> > > > for applying and removing probes, this forces the system to idle and
> > > > wait for the next scheduler tick for each probe change.
> > > 
> > > Hmm, agreed. It seems that arm32 limitation of self-modifying code on SMP.
> > > I'm not sure how we can handle it, but I guess;
> > >  - for some processors which have better coherent cache for SMP, we can
> > >    atomically replace the breakpoint code with original code.
> > 
> > Except that it's not an architected breakpoint instruction, as I mentioned
> > before. It's also not really a property of the cache.
> > 
> > >  - Even if we get an "undefined instruction" exception, its handler can
> > >    ask kprobes if the address is under modifying or not. And if it is,
> > >    we can just return from the exception to retry the execution.
> > 
> > It's not as simple as that -- you could potentially see an interleaving of
> > the two instructions. The architecture is even broader than that:
> > 
> >  Concurrent modification and execution of instructions can lead to the
> >  resulting instruction performing any behavior that can be achieved by
> >  executing any sequence of instructions that can be executed from the
> >  same Exception level,
> > 
> > There are additional guarantees for some instructions (like the architected
> > BKPT instruction).
> 
> I should point out that the current implementation of kprobes doesn't
> use stop_machine because it's trying to meet the above architecture
> restrictions, and that arming kprobes (changing probed instruction to an
> undefined instruction) isn't usually done under stop_machine, so other
> CPUs could be executing the original instruction as it's being modified.
> 
> So, should we be making patch_text unconditionally use stop machine and
> remove all direct use of __patch_text? (E.g. by jump labels.)

You could take a look at what we do for arm64 (see aarch64_insn_hotpatch_safe)
for inspiration.

Will


end of thread, other threads:[~2014-09-04 10:53 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-08-27 13:02 [PATCH v5 0/3] kprobes: arm: enable OPTPROBES for ARM 32 Wang Nan
2014-08-27 13:02 ` [PATCH v5 1/3] ARM: probes: check stack operation when decoding Wang Nan
2014-08-28  9:51   ` Masami Hiramatsu
2014-08-28 10:20     ` Russell King - ARM Linux
2014-08-28 10:24       ` Will Deacon
2014-08-29  8:47         ` Jon Medhurst (Tixy)
2014-08-30  1:28           ` Wang Nan
2014-09-01 17:29             ` Jon Medhurst (Tixy)
2014-08-27 13:02 ` [PATCH v5 2/3] kprobes: copy ainsn after alloc aggr kprobe Wang Nan
2014-08-28  9:39   ` Masami Hiramatsu
2014-08-28 11:07     ` Wang Nan
2014-08-27 13:02 ` [PATCH v5 3/3] kprobes: arm: enable OPTPROBES for ARM 32 Wang Nan
2014-08-28 10:20   ` Masami Hiramatsu
2014-09-02 13:49   ` Jon Medhurst (Tixy)
2014-09-03 10:18     ` Masami Hiramatsu
2014-09-03 10:30       ` Will Deacon
2014-09-04 10:40         ` Jon Medhurst (Tixy)
2014-09-04 10:52           ` Will Deacon
