* [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support
@ 2016-06-27  3:06 David Long
  2016-06-27  3:06 ` [PATCH v14 01/10] arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature David Long
                   ` (11 more replies)
  0 siblings, 12 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: "David A. Long" <dave.long@linaro.org>

This patchset is heavily based on Sandeepa Prabhu's ARM v8 kprobes patches,
first seen in October 2013. This version attempts to address concerns
raised by reviewers and also fixes problems discovered during testing.

This patchset adds kernel probes (kprobes), jump probes (jprobes), and
return probes (kretprobes) support for ARM64.

The kprobes mechanism makes use of software breakpoint and single stepping
support available in the ARM v8 kernel.
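
For reference, probes added by this series are used through the generic
kprobes API. A minimal sketch of a module that places a kprobe (the probed
symbol "_do_fork" is only an example, and error handling is minimal) looks
like this:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
	.symbol_name = "_do_fork",	/* example symbol only */
};

/* Runs just before the probed instruction is executed. */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe hit: addr = %p, pc = 0x%llx\n", p->addr,
		(unsigned long long)instruction_pointer(regs));
	return 0;
}

static int __init kprobe_test_init(void)
{
	kp.pre_handler = handler_pre;
	return register_kprobe(&kp);
}

static void __exit kprobe_test_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kprobe_test_init);
module_exit(kprobe_test_exit);
MODULE_LICENSE("GPL");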

Changes since v2 include:

1) Removal of NOP padding in kprobe XOL slots. Slots are now exactly one
instruction long.
2) Disabling of interrupts during execution in single-step mode.
3) Fixing of numerous problems in instruction simulation code (mostly
thanks to Will Cohen).
4) Support for the HAVE_REGS_AND_STACK_ACCESS_API feature is added, to
allow access to kprobes through debugfs.
5) kprobes is *not* enabled in defconfig.
6) Numerous complaints from checkpatch have been cleaned up, although a
couple remain as removing the function pointer typedefs results in ugly
code.

Changes since v3 include:

1) Remove table-driven instruction parsing and replace with an if statement
calling out to old and new instruction test functions in insn.c.
2) I removed the addition of orig_x0 to ptrace.h.
3) Reorder the patches.
4) Replace the previous interrupt disabling (from Will Cohen) with
an improved solution (from Steve Capper).

Changes since v4 include:

1) Added insn.c functions to detect exception instructions and DAIF
   read/write instructions, and use them to reject probing of such instructions.
2) Changed adr detect function to also recognize adrp. Reject both.
3) Added missing __kprobes for some new functions.
4) Added call to kprobes_fault_handler from mm do_page_fault.
5) Reject all non-simulated branch/ret instructions, not just those
   that use an immediate offset.
6) Moved software breakpoint definitions into debug-monitors.h.
7) Removed "!XIP_KERNEL" from Kconfig.
8) Changed kprobes_condition_check_t and kprobes_prepare_t to probes_*,
   for future sharing with uprobes.
9) Removed bogus call to kprobes_restore_local_irqflag() from 
   trampoline_probe_handler().

Changes since v5 include:

1) Replaced installation of breakpoint hook with direct call from the
handlers in debug-monitors.c, as requested.
2) Reject probing of instructions that read the interrupt mask, in
addition to instructions that set it.
3) Cleaned up comments describing usage of Debug Mask.
4) Added KPROBE_REENTER case in reenter_kprobe.
5) Corrected the ifdef'd definitions for notify_page_fault() to be
consistent when KPROBES is not configured.
6) Changed "cpsr" to "pstate" for HAVE_REGS_AND_STACK_ACCESS_API feature.
7) Added back in missing new files in previous patch.
8) Changed two instances of pr_warning() to pr_warn().

Note that there seems to be at least a potential issue with kprobes
on multiple (possibly all) platforms having to do with use of kfree
inside of the kretprobes trampoline handler.  This has manifested
occasionally in systemtap testing on arm64.  There does not appear to
be a simple solution to the problem.

Changes since v6 include:

1) New trampoline code from Will Cohen fixes the occasional failure seen
when processing kretprobes by replacing the software breakpoint with
assembly code to implement the return to the original execution stream.
2) Changed ip0, ip1, fp, and lr to plain numbered registers for purposes
of recognizing them as an ascii string in the stack/reg access code.
3) Removed orig_x0.
4) Moved ARM_x* defines from arch/arm64/include/uapi/asm/ptrace.h to
arch/arm64/kernel/ptrace.c.

Changes since v7 include:

1) Move trampoline entry/return code into separate ".S" file instead
of making it a macro in a header file.
2) Add missing register name definitions in asm-offsets.c and use them
in place of hard-coded integer offsets in the trampoline code.
3) Correct the values used to decode MSR immediate instructions, in insn.h.
4) Remove the currently unused simulate_none() function.

Changes since v8 include:

1) Replaced use of REG_OFFSET_NAME with GPR_OFFSET_NAME for numbered
registers.
2) Added an alias for "lr" in the register name lookup table, which perf
tools need to be able to recognize.
3) Changed the code for checking instruction types for probeability and
steppability as per review feedback.
4) Fixed the size of cache being flushed when filling single-step slot.
5) Fixed big-endian issues.
6) Blacklisted copy_to/from_user to avoid aborts while single-stepping.
7) Record conditional instructions that fail the conditional test just
like any other probed (non-conditional) instruction.
8) Removed use of magic number for detecting jprobe return and just
check the breakpoint address instead.
9) Got rid of the unnecessary arch/arm64/kprobes.h.
10) The PSTATE and SP are now properly saved in the kretprobe trampoline
code.
11) This patch no longer depends on the "Consolidate redundant
register/stack access code" patch set.
12) Remove call to fixup_exception from kprobe_fault_handler.

Changes since v9 include:

1) Remove arch/arm/opcodes.c from the arm64 build and move the renamed
arm64_check_condition() function to armv8_deprecated.c. Remove the
asmlinkage.
2) Various other type and style changes suggested by Marc Zyngier.
3) Put back the call to fixup_exception from kprobe_fault_handler.
It proved to be necessary for correct operation.

Changes since v10 include:

1) Rename arm64_check_condition() to arm32_check_condition().
2) Remove redundant define of ARM_OPCODE_CONDITION_UNCOND.
3) Use accessor functions to read and write registers by number
in the simulation code, to avoid accidentally overwriting parts of
the pt_regs structure (e.g. when the reg is xzr); see the sketch after
this list.
4) Remove unused register offset defines.
5) Replace instance of "(void *) 0" with NULL.
6) Rewrite the kretprobe trampoline code using arch/arm64/kvm/hyp/entry.S
as an example. Construct a more complete saved PSTATE in this code.
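
As a sketch of the accessor approach in item 3 (hypothetical helper names,
not necessarily the exact code in the patches; the point is that register
31 is xzr, so writes to it must be discarded rather than spilling into
neighbouring pt_regs fields):

static void set_x_reg(struct pt_regs *regs, int reg, u64 val)
{
	if (reg < 31)				/* writes to xzr are discarded */
		regs->regs[reg] = val;
}

static u64 get_x_reg(struct pt_regs *regs, int reg)
{
	return (reg < 31) ? regs->regs[reg] : 0;	/* xzr reads as zero */
}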

Changes since v11 include:

1) Add a check for addresses within the irq stack in regs_within_kernel_stack().
2) Replaced inappropriate use of user_pt_regs with pt_regs.
3) Added comments to opcode_condition_checks table explaining equivalence
of "nv" and "al" condition codes.
4) Cleaned up some subtle problems in the instruction simulation code.
5) Readability improvements in kprobes_trampoline.S.
6) Additional blacklisting for entry code, exception handling code, and
select debug functions.
7) Check address to be probed for proper alignment.
8) Add rodata section to areas where kprobes may not be placed.

Changes since v12 include:

1) Changed regs_get_register() to explicitly reference pt_regs structure
fields instead of just using an address offset.
2) Reject probing of eret.
3) Correctly handle addresses on the interrupt stack.
4) Add kprobe_ctlblk argument to static irqflag handling functions to avoid
doing extra calls to get_kprobe_ctlblk().
5) Removed a couple of logically redundant assignments to kprobe_status.
6) Added calls to pause_graph_tracing/unpause_graph_tracing to avoid
disaster when kprobe'ing and tracing at the same time.
7) Added the idmap and hypervisor text sections to the blacklisted regions.
8) Numerous additional comments, formatting changes, and rearranging
of if-else statements.

Changes since v13 include:

1) Fixed regs_get_register() from previous version to correctly calculate
the offset of registers in struct pt_regs.
2) I removed the removal of the typecast inside the instruction_pointer()
define in ptrace.h, and added a define for instruction_pointer_set(). This
was necessary to correct warnings that were being emitted when compiling
kgdb code.
3) Removed a redundant/bogus "NOKPROBE_SYMBOL(do_debug_exception)"
statement.
4) Fixed aarch64_insn_extract_system_reg() from previous version to use the
correct name "aarch64_insn_extract_system_reg()".
5) Changed opcode_condition_checks[] to aarch32_opcode_cond_checks[] and
arm32_check_condition() to aarch32_check_condition().
6) I switched the order of the main kprobes patch and the symbol function
blacklisting patch back to the order they were done in the earlier patches.
7) I got rid of struct kprobe_pc_restore and now just use a non-zero saved
PC as the flag to restore the PC.
8) I changed the names of some of the arm64 kprobes source files and moved
them into their own "kprobes" subdirectory under arch/arm64/kernel.
9) I moved the INSN_GOOD_NO_SLOT enum value to the later commit that makes
use of it.
10) I added kernel_disable_single_step() and spsr_set_debug_flag() calls in
kprobe_fault_handler() for the KPROBE_REENTER case.
11) I brought trampoline_probe_handler() up to date with x86 sources to
pick up a fix from Syuhei (commit 737480a0d525).
12) I changed samples/kprobes/kprobe_example.c modifications to more
closely match what is currently done for other architectures.

David A. Long (3):
  arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature
  arm64: Add more test functions to insn.c
  arm64: add conditional instruction simulation support

Pratyush Anand (2):
  arm64: Blacklist non-kprobe-able symbol
  arm64: Treat all entry code as non-kprobe-able

Sandeepa Prabhu (4):
  arm64: Kprobes with single stepping support
  arm64: kprobes instruction simulation support
  arm64: Add kernel return probes support (kretprobes)
  kprobes: Add arm64 case in kprobe example module

William Cohen (1):
  arm64: Add trampoline code for kretprobes

 arch/arm64/Kconfig                             |   3 +
 arch/arm64/include/asm/debug-monitors.h        |   5 +
 arch/arm64/include/asm/insn.h                  |  41 ++
 arch/arm64/include/asm/kprobes.h               |  62 +++
 arch/arm64/include/asm/probes.h                |  35 ++
 arch/arm64/include/asm/ptrace.h                |  53 ++
 arch/arm64/kernel/Makefile                     |   5 +-
 arch/arm64/kernel/arm64ksyms.c                 |   2 +
 arch/arm64/kernel/armv8_deprecated.c           |  19 +-
 arch/arm64/kernel/asm-offsets.c                |  11 +
 arch/arm64/kernel/debug-monitors.c             |  33 +-
 arch/arm64/kernel/entry.S                      |   3 +
 arch/arm64/kernel/hw_breakpoint.c              |   8 +
 arch/arm64/kernel/insn.c                       | 133 +++++
 arch/arm64/kernel/kgdb.c                       |   4 +
 arch/arm64/kernel/kprobes/Makefile             |   3 +
 arch/arm64/kernel/kprobes/decode-insn.c        | 174 +++++++
 arch/arm64/kernel/kprobes/decode-insn.h        |  35 ++
 arch/arm64/kernel/kprobes/kprobes.c            | 675 +++++++++++++++++++++++++
 arch/arm64/kernel/kprobes/kprobes_trampoline.S |  85 ++++
 arch/arm64/kernel/kprobes/simulate-insn.c      | 218 ++++++++
 arch/arm64/kernel/kprobes/simulate-insn.h      |  28 +
 arch/arm64/kernel/ptrace.c                     | 118 +++++
 arch/arm64/kernel/vmlinux.lds.S                |   2 +
 arch/arm64/mm/fault.c                          |  26 +
 samples/kprobes/kprobe_example.c               |   9 +
 26 files changed, 1783 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/include/asm/kprobes.h
 create mode 100644 arch/arm64/include/asm/probes.h
 create mode 100644 arch/arm64/kernel/kprobes/Makefile
 create mode 100644 arch/arm64/kernel/kprobes/decode-insn.c
 create mode 100644 arch/arm64/kernel/kprobes/decode-insn.h
 create mode 100644 arch/arm64/kernel/kprobes/kprobes.c
 create mode 100644 arch/arm64/kernel/kprobes/kprobes_trampoline.S
 create mode 100644 arch/arm64/kernel/kprobes/simulate-insn.c
 create mode 100644 arch/arm64/kernel/kprobes/simulate-insn.h

-- 
2.5.0


* [PATCH v14 01/10] arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-27  3:06 ` [PATCH v14 02/10] arm64: Add more test functions to insn.c David Long
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: "David A. Long" <dave.long@linaro.org>

Add HAVE_REGS_AND_STACK_ACCESS_API feature for arm64, including supporting
functions and defines.
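
As a rough illustration (not part of this patch) of how a consumer such as
the kprobes-based event tracer might use these helpers:

/*
 * Sketch only: read a register from a pt_regs by name, using the lookup
 * and accessor functions added by this patch.
 */
static u64 read_named_reg(struct pt_regs *regs, const char *name)
{
	int offset = regs_query_register_offset(name);

	if (offset < 0)		/* unknown register name */
		return 0;
	return regs_get_register(regs, offset);
}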

Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm64/Kconfig              |   1 +
 arch/arm64/include/asm/ptrace.h |  52 ++++++++++++++++++
 arch/arm64/kernel/ptrace.c      | 118 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 171 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691..fab133c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -85,6 +85,7 @@ config ARM64
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
+	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RCU_TABLE_FREE
 	select HAVE_SYSCALL_TRACEPOINTS
 	select IOMMU_DMA if IOMMU_SUPPORT
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index a307eb6..6c0c7d3 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -74,6 +74,7 @@
 #define COMPAT_PT_DATA_ADDR		0x10004
 #define COMPAT_PT_TEXT_END_ADDR		0x10008
 #ifndef __ASSEMBLY__
+#include <linux/bug.h>
 
 /* sizeof(struct user) for AArch32 */
 #define COMPAT_USER_SZ	296
@@ -119,6 +120,8 @@ struct pt_regs {
 	u64 syscallno;
 };
 
+#define MAX_REG_OFFSET offsetof(struct pt_regs, pstate)
+
 #define arch_has_single_step()	(1)
 
 #ifdef CONFIG_COMPAT
@@ -147,6 +150,55 @@ struct pt_regs {
 #define user_stack_pointer(regs) \
 	(!compat_user_mode(regs) ? (regs)->sp : (regs)->compat_sp)
 
+extern int regs_query_register_offset(const char *name);
+extern const char *regs_query_register_name(unsigned int offset);
+extern bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr);
+extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
+					       unsigned int n);
+
+/**
+ * regs_get_register() - get register value from its offset
+ * @regs:	   pt_regs from which register value is gotten
+ * @offset:    offset of the register.
+ *
+ * regs_get_register() returns the value of the register located at @offset
+ * bytes from the start of @regs, i.e. its byte offset in struct pt_regs.
+ * If @offset is bigger than MAX_REG_OFFSET, this returns 0.
+ */
+static inline u64 regs_get_register(struct pt_regs *regs,
+					      unsigned int offset)
+{
+	u64 val = 0;
+
+	WARN_ON(offset & 7);
+
+	offset >>= 3;
+	switch (offset) {
+	case	0 ... 30:
+		val = regs->regs[offset];
+		break;
+	case offsetof(struct pt_regs, sp) >> 3:
+		val = regs->sp;
+		break;
+	case offsetof(struct pt_regs, pc) >> 3:
+		val = regs->pc;
+		break;
+	case offsetof(struct pt_regs, pstate) >> 3:
+		val = regs->pstate;
+		break;
+	default:
+		val = 0;
+	}
+
+	return val;
+}
+
+/* Valid only for Kernel mode traps. */
+static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+{
+	return regs->sp;
+}
+
 static inline unsigned long regs_return_value(struct pt_regs *regs)
 {
 	return regs->regs[0];
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 3f6cd5c..2c88c33 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -48,6 +48,124 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/syscalls.h>
 
+struct pt_regs_offset {
+	const char *name;
+	int offset;
+};
+
+#define REG_OFFSET_NAME(r) {.name = #r, .offset = offsetof(struct pt_regs, r)}
+#define REG_OFFSET_END {.name = NULL, .offset = 0}
+#define	GPR_OFFSET_NAME(r)	\
+	{.name = "x" #r, .offset = offsetof(struct pt_regs, regs[r])}
+
+static const struct pt_regs_offset regoffset_table[] = {
+	GPR_OFFSET_NAME(0),
+	GPR_OFFSET_NAME(1),
+	GPR_OFFSET_NAME(2),
+	GPR_OFFSET_NAME(3),
+	GPR_OFFSET_NAME(4),
+	GPR_OFFSET_NAME(5),
+	GPR_OFFSET_NAME(6),
+	GPR_OFFSET_NAME(7),
+	GPR_OFFSET_NAME(8),
+	GPR_OFFSET_NAME(9),
+	GPR_OFFSET_NAME(10),
+	GPR_OFFSET_NAME(11),
+	GPR_OFFSET_NAME(12),
+	GPR_OFFSET_NAME(13),
+	GPR_OFFSET_NAME(14),
+	GPR_OFFSET_NAME(15),
+	GPR_OFFSET_NAME(16),
+	GPR_OFFSET_NAME(17),
+	GPR_OFFSET_NAME(18),
+	GPR_OFFSET_NAME(19),
+	GPR_OFFSET_NAME(20),
+	GPR_OFFSET_NAME(21),
+	GPR_OFFSET_NAME(22),
+	GPR_OFFSET_NAME(23),
+	GPR_OFFSET_NAME(24),
+	GPR_OFFSET_NAME(25),
+	GPR_OFFSET_NAME(26),
+	GPR_OFFSET_NAME(27),
+	GPR_OFFSET_NAME(28),
+	GPR_OFFSET_NAME(29),
+	GPR_OFFSET_NAME(30),
+	{.name = "lr", .offset = offsetof(struct pt_regs, regs[30])},
+	REG_OFFSET_NAME(sp),
+	REG_OFFSET_NAME(pc),
+	REG_OFFSET_NAME(pstate),
+	REG_OFFSET_END,
+};
+
+/**
+ * regs_query_register_offset() - query register offset from its name
+ * @name:	the name of a register
+ *
+ * regs_query_register_offset() returns the offset of a register in struct
+ * pt_regs from its name. If the name is invalid, this returns -EINVAL;
+ */
+int regs_query_register_offset(const char *name)
+{
+	const struct pt_regs_offset *roff;
+
+	for (roff = regoffset_table; roff->name != NULL; roff++)
+		if (!strcmp(roff->name, name))
+			return roff->offset;
+	return -EINVAL;
+}
+
+/**
+ * regs_query_register_name() - query register name from its offset
+ * @offset:	the offset of a register in struct pt_regs.
+ *
+ * regs_query_register_name() returns the name of a register from its
+ * offset in struct pt_regs. If the @offset is invalid, this returns NULL;
+ */
+const char *regs_query_register_name(unsigned int offset)
+{
+	const struct pt_regs_offset *roff;
+
+	for (roff = regoffset_table; roff->name != NULL; roff++)
+		if (roff->offset == offset)
+			return roff->name;
+	return NULL;
+}
+
+/**
+ * regs_within_kernel_stack() - check the address in the stack
+ * @regs:      pt_regs which contains kernel stack pointer.
+ * @addr:      address which is checked.
+ *
+ * regs_within_kernel_stack() checks @addr is within the kernel stack page(s).
+ * If @addr is within the kernel stack, it returns true. If not, returns false.
+ */
+bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
+{
+	return ((addr & ~(THREAD_SIZE - 1))  ==
+		(kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1))) ||
+		on_irq_stack(addr, raw_smp_processor_id());
+}
+
+/**
+ * regs_get_kernel_stack_nth() - get Nth entry of the stack
+ * @regs:	pt_regs which contains kernel stack pointer.
+ * @n:		stack entry number.
+ *
+ * regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which
+ * is specified by @regs. If the @n th entry is NOT in the kernel stack,
+ * this returns 0.
+ */
+unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
+{
+	unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
+
+	addr += n;
+	if (regs_within_kernel_stack(regs, (unsigned long)addr))
+		return *addr;
+	else
+		return 0;
+}
+
 /*
  * TODO: does not yet catch signals sent when the child dies.
  * in exit.c or in signal.c.
-- 
2.5.0


* [PATCH v14 02/10] arm64: Add more test functions to insn.c
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
  2016-06-27  3:06 ` [PATCH v14 01/10] arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-27  3:06 ` [PATCH v14 03/10] arm64: add conditional instruction simulation support David Long
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: "David A. Long" <dave.long@linaro.org>

Certain instructions are hard to execute correctly out-of-line (as in
kprobes).  Test functions are added to insn.[hc] to identify these.  The
instructions include any that use PC-relative addressing, change the PC,
or change interrupt masking. For efficiency and simplicity, test
functions are also added for small collections of related instructions.
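
As a rough sketch of how a caller might combine these tests (the real
policy lives in the kprobes decode code added later in this series, which
also rejects exclusives, exception-generating instructions, and most
MSR/MRS accesses):

/*
 * Sketch only: decide whether an instruction is a candidate for
 * out-of-line single stepping, using the tests added by this patch.
 */
static bool insn_is_xol_candidate(u32 insn)
{
	/* PC-relative literal loads and ADR/ADRP will not relocate. */
	if (aarch64_insn_uses_literal(insn))
		return false;

	/* Anything that writes the PC must be simulated instead. */
	if (aarch64_insn_is_branch(insn))
		return false;

	return true;
}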

Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm64/include/asm/insn.h | 36 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/insn.c      | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 30e50eb..497f7a2 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -120,6 +120,29 @@ enum aarch64_insn_register {
 	AARCH64_INSN_REG_SP = 31  /* Stack pointer: as load/store base reg */
 };
 
+enum aarch64_insn_special_register {
+	AARCH64_INSN_SPCLREG_SPSR_EL1	= 0xC200,
+	AARCH64_INSN_SPCLREG_ELR_EL1	= 0xC201,
+	AARCH64_INSN_SPCLREG_SP_EL0	= 0xC208,
+	AARCH64_INSN_SPCLREG_SPSEL	= 0xC210,
+	AARCH64_INSN_SPCLREG_CURRENTEL	= 0xC212,
+	AARCH64_INSN_SPCLREG_DAIF	= 0xDA11,
+	AARCH64_INSN_SPCLREG_NZCV	= 0xDA10,
+	AARCH64_INSN_SPCLREG_FPCR	= 0xDA20,
+	AARCH64_INSN_SPCLREG_DSPSR_EL0	= 0xDA28,
+	AARCH64_INSN_SPCLREG_DLR_EL0	= 0xDA29,
+	AARCH64_INSN_SPCLREG_SPSR_EL2	= 0xE200,
+	AARCH64_INSN_SPCLREG_ELR_EL2	= 0xE201,
+	AARCH64_INSN_SPCLREG_SP_EL1	= 0xE208,
+	AARCH64_INSN_SPCLREG_SPSR_INQ	= 0xE218,
+	AARCH64_INSN_SPCLREG_SPSR_ABT	= 0xE219,
+	AARCH64_INSN_SPCLREG_SPSR_UND	= 0xE21A,
+	AARCH64_INSN_SPCLREG_SPSR_FIQ	= 0xE21B,
+	AARCH64_INSN_SPCLREG_SPSR_EL3	= 0xF200,
+	AARCH64_INSN_SPCLREG_ELR_EL3	= 0xF201,
+	AARCH64_INSN_SPCLREG_SP_EL2	= 0xF210
+};
+
 enum aarch64_insn_variant {
 	AARCH64_INSN_VARIANT_32BIT,
 	AARCH64_INSN_VARIANT_64BIT
@@ -223,8 +246,13 @@ static __always_inline bool aarch64_insn_is_##abbr(u32 code) \
 static __always_inline u32 aarch64_insn_get_##abbr##_value(void) \
 { return (val); }
 
+__AARCH64_INSN_FUNCS(adr_adrp,	0x1F000000, 0x10000000)
+__AARCH64_INSN_FUNCS(prfm_lit,	0xFF000000, 0xD8000000)
 __AARCH64_INSN_FUNCS(str_reg,	0x3FE0EC00, 0x38206800)
 __AARCH64_INSN_FUNCS(ldr_reg,	0x3FE0EC00, 0x38606800)
+__AARCH64_INSN_FUNCS(ldr_lit,	0xBF000000, 0x18000000)
+__AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
+__AARCH64_INSN_FUNCS(exclusive,	0x3F800000, 0x08000000)
 __AARCH64_INSN_FUNCS(stp_post,	0x7FC00000, 0x28800000)
 __AARCH64_INSN_FUNCS(ldp_post,	0x7FC00000, 0x28C00000)
 __AARCH64_INSN_FUNCS(stp_pre,	0x7FC00000, 0x29800000)
@@ -273,10 +301,15 @@ __AARCH64_INSN_FUNCS(svc,	0xFFE0001F, 0xD4000001)
 __AARCH64_INSN_FUNCS(hvc,	0xFFE0001F, 0xD4000002)
 __AARCH64_INSN_FUNCS(smc,	0xFFE0001F, 0xD4000003)
 __AARCH64_INSN_FUNCS(brk,	0xFFE0001F, 0xD4200000)
+__AARCH64_INSN_FUNCS(exception,	0xFF000000, 0xD4000000)
 __AARCH64_INSN_FUNCS(hint,	0xFFFFF01F, 0xD503201F)
 __AARCH64_INSN_FUNCS(br,	0xFFFFFC1F, 0xD61F0000)
 __AARCH64_INSN_FUNCS(blr,	0xFFFFFC1F, 0xD63F0000)
 __AARCH64_INSN_FUNCS(ret,	0xFFFFFC1F, 0xD65F0000)
+__AARCH64_INSN_FUNCS(eret,	0xFFFFFFFF, 0xD69F03E0)
+__AARCH64_INSN_FUNCS(mrs,	0xFFF00000, 0xD5300000)
+__AARCH64_INSN_FUNCS(msr_imm,	0xFFF8F01F, 0xD500401F)
+__AARCH64_INSN_FUNCS(msr_reg,	0xFFF00000, 0xD5100000)
 
 #undef	__AARCH64_INSN_FUNCS
 
@@ -286,6 +319,8 @@ bool aarch64_insn_is_branch_imm(u32 insn);
 int aarch64_insn_read(void *addr, u32 *insnp);
 int aarch64_insn_write(void *addr, u32 insn);
 enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn);
+bool aarch64_insn_uses_literal(u32 insn);
+bool aarch64_insn_is_branch(u32 insn);
 u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn);
 u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
 				  u32 insn, u64 imm);
@@ -367,6 +402,7 @@ bool aarch32_insn_is_wide(u32 insn);
 #define A32_RT_OFFSET	12
 #define A32_RT2_OFFSET	 0
 
+u32 aarch64_insn_extract_system_reg(u32 insn);
 u32 aarch32_insn_extract_reg_num(u32 insn, int offset);
 u32 aarch32_insn_mcr_extract_opc2(u32 insn);
 u32 aarch32_insn_mcr_extract_crm(u32 insn);
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 368c082..28c6110f 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -162,6 +162,32 @@ static bool __kprobes __aarch64_insn_hotpatch_safe(u32 insn)
 		aarch64_insn_is_nop(insn);
 }
 
+bool __kprobes aarch64_insn_uses_literal(u32 insn)
+{
+	/* ldr/ldrsw (literal), prfm */
+
+	return aarch64_insn_is_ldr_lit(insn) ||
+		aarch64_insn_is_ldrsw_lit(insn) ||
+		aarch64_insn_is_adr_adrp(insn) ||
+		aarch64_insn_is_prfm_lit(insn);
+}
+
+bool __kprobes aarch64_insn_is_branch(u32 insn)
+{
+	/* b, bl, cb*, tb*, b.cond, br, blr */
+
+	return aarch64_insn_is_b(insn) ||
+		aarch64_insn_is_bl(insn) ||
+		aarch64_insn_is_cbz(insn) ||
+		aarch64_insn_is_cbnz(insn) ||
+		aarch64_insn_is_tbz(insn) ||
+		aarch64_insn_is_tbnz(insn) ||
+		aarch64_insn_is_ret(insn) ||
+		aarch64_insn_is_br(insn) ||
+		aarch64_insn_is_blr(insn) ||
+		aarch64_insn_is_bcond(insn);
+}
+
 /*
  * ARM Architecture Reference Manual for ARMv8 Profile-A, Issue A.a
  * Section B2.6.5 "Concurrent modification and execution of instructions":
@@ -1175,6 +1201,14 @@ u32 aarch64_set_branch_offset(u32 insn, s32 offset)
 	BUG();
 }
 
+/*
+ * Extract the Op/CR data from a msr/mrs instruction.
+ */
+u32 aarch64_insn_extract_system_reg(u32 insn)
+{
+	return (insn & 0x1FFFE0) >> 5;
+}
+
 bool aarch32_insn_is_wide(u32 insn)
 {
 	return insn >= 0xe800;
-- 
2.5.0


* [PATCH v14 03/10] arm64: add conditional instruction simulation support
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
  2016-06-27  3:06 ` [PATCH v14 01/10] arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature David Long
  2016-06-27  3:06 ` [PATCH v14 02/10] arm64: Add more test functions to insn.c David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-27  3:06 ` [PATCH v14 04/10] arm64: Kprobes with single stepping support David Long
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: "David A. Long" <dave.long@linaro.org>

Cease using the arm32 arm_check_condition() function and replace it with
a local version for use in deprecated instruction support on arm64. Also
make the function table it uses available for future use by kprobes
and/or uprobes.

This function is derived from code written by Sandeepa Prabhu.
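
For illustration, the table is meant to be indexed by the condition field
of an AArch32 opcode; a sketch of such a check (mirroring the
aarch32_check_condition() helper added by this patch, minus the handling
of the unconditional encoding):

/* Sketch only: does the condition field of an AArch32 opcode pass? */
static bool aarch32_insn_cond_passes(u32 opcode, unsigned long pstate)
{
	u32 cc = opcode >> 28;		/* condition field, bits [31:28] */

	return aarch32_opcode_cond_checks[cc](pstate);
}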

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm64/include/asm/insn.h        |  3 ++
 arch/arm64/kernel/Makefile           |  3 +-
 arch/arm64/kernel/armv8_deprecated.c | 19 ++++++-
 arch/arm64/kernel/insn.c             | 98 ++++++++++++++++++++++++++++++++++++
 4 files changed, 119 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 497f7a2..a44abbd 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -406,6 +406,9 @@ u32 aarch64_insn_extract_system_reg(u32 insn);
 u32 aarch32_insn_extract_reg_num(u32 insn, int offset);
 u32 aarch32_insn_mcr_extract_opc2(u32 insn);
 u32 aarch32_insn_mcr_extract_crm(u32 insn);
+
+typedef bool (pstate_check_t)(unsigned long);
+extern pstate_check_t * const aarch32_opcode_cond_checks[16];
 #endif /* __ASSEMBLY__ */
 
 #endif	/* __ASM_INSN_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 2173149..4653aca 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -26,8 +26,7 @@ $(obj)/%.stub.o: $(obj)/%.o FORCE
 	$(call if_changed,objcopy)
 
 arm64-obj-$(CONFIG_COMPAT)		+= sys32.o kuser32.o signal32.o 	\
-					   sys_compat.o entry32.o		\
-					   ../../arm/kernel/opcodes.o
+					   sys_compat.o entry32.o
 arm64-obj-$(CONFIG_FUNCTION_TRACER)	+= ftrace.o entry-ftrace.o
 arm64-obj-$(CONFIG_MODULES)		+= arm64ksyms.o module.o
 arm64-obj-$(CONFIG_ARM64_MODULE_PLTS)	+= module-plts.o
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index c37202c..2934894 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -366,6 +366,21 @@ static int emulate_swpX(unsigned int address, unsigned int *data,
 	return res;
 }
 
+#define	ARM_OPCODE_CONDITION_UNCOND	0xf
+
+static unsigned int __kprobes aarch32_check_condition(u32 opcode, u32 psr)
+{
+	u32 cc_bits  = opcode >> 28;
+
+	if (cc_bits != ARM_OPCODE_CONDITION_UNCOND) {
+		if ((*aarch32_opcode_cond_checks[cc_bits])(psr))
+			return ARM_OPCODE_CONDTEST_PASS;
+		else
+			return ARM_OPCODE_CONDTEST_FAIL;
+	}
+	return ARM_OPCODE_CONDTEST_UNCOND;
+}
+
 /*
  * swp_handler logs the id of calling process, dissects the instruction, sanity
  * checks the memory location, calls emulate_swpX for the actual operation and
@@ -380,7 +395,7 @@ static int swp_handler(struct pt_regs *regs, u32 instr)
 
 	type = instr & TYPE_SWPB;
 
-	switch (arm_check_condition(instr, regs->pstate)) {
+	switch (aarch32_check_condition(instr, regs->pstate)) {
 	case ARM_OPCODE_CONDTEST_PASS:
 		break;
 	case ARM_OPCODE_CONDTEST_FAIL:
@@ -461,7 +476,7 @@ static int cp15barrier_handler(struct pt_regs *regs, u32 instr)
 {
 	perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs, regs->pc);
 
-	switch (arm_check_condition(instr, regs->pstate)) {
+	switch (aarch32_check_condition(instr, regs->pstate)) {
 	case ARM_OPCODE_CONDTEST_PASS:
 		break;
 	case ARM_OPCODE_CONDTEST_FAIL:
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 28c6110f..5cb2f3d 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1234,3 +1234,101 @@ u32 aarch32_insn_mcr_extract_crm(u32 insn)
 {
 	return insn & CRM_MASK;
 }
+
+static bool __kprobes __check_eq(unsigned long pstate)
+{
+	return (pstate & PSR_Z_BIT) != 0;
+}
+
+static bool __kprobes __check_ne(unsigned long pstate)
+{
+	return (pstate & PSR_Z_BIT) == 0;
+}
+
+static bool __kprobes __check_cs(unsigned long pstate)
+{
+	return (pstate & PSR_C_BIT) != 0;
+}
+
+static bool __kprobes __check_cc(unsigned long pstate)
+{
+	return (pstate & PSR_C_BIT) == 0;
+}
+
+static bool __kprobes __check_mi(unsigned long pstate)
+{
+	return (pstate & PSR_N_BIT) != 0;
+}
+
+static bool __kprobes __check_pl(unsigned long pstate)
+{
+	return (pstate & PSR_N_BIT) == 0;
+}
+
+static bool __kprobes __check_vs(unsigned long pstate)
+{
+	return (pstate & PSR_V_BIT) != 0;
+}
+
+static bool __kprobes __check_vc(unsigned long pstate)
+{
+	return (pstate & PSR_V_BIT) == 0;
+}
+
+static bool __kprobes __check_hi(unsigned long pstate)
+{
+	pstate &= ~(pstate >> 1);	/* PSR_C_BIT &= ~PSR_Z_BIT */
+	return (pstate & PSR_C_BIT) != 0;
+}
+
+static bool __kprobes __check_ls(unsigned long pstate)
+{
+	pstate &= ~(pstate >> 1);	/* PSR_C_BIT &= ~PSR_Z_BIT */
+	return (pstate & PSR_C_BIT) == 0;
+}
+
+static bool __kprobes __check_ge(unsigned long pstate)
+{
+	pstate ^= (pstate << 3);	/* PSR_N_BIT ^= PSR_V_BIT */
+	return (pstate & PSR_N_BIT) == 0;
+}
+
+static bool __kprobes __check_lt(unsigned long pstate)
+{
+	pstate ^= (pstate << 3);	/* PSR_N_BIT ^= PSR_V_BIT */
+	return (pstate & PSR_N_BIT) != 0;
+}
+
+static bool __kprobes __check_gt(unsigned long pstate)
+{
+	/*PSR_N_BIT ^= PSR_V_BIT */
+	unsigned long temp = pstate ^ (pstate << 3);
+
+	temp |= (pstate << 1);	/*PSR_N_BIT |= PSR_Z_BIT */
+	return (temp & PSR_N_BIT) == 0;
+}
+
+static bool __kprobes __check_le(unsigned long pstate)
+{
+	/*PSR_N_BIT ^= PSR_V_BIT */
+	unsigned long temp = pstate ^ (pstate << 3);
+
+	temp |= (pstate << 1);	/*PSR_N_BIT |= PSR_Z_BIT */
+	return (temp & PSR_N_BIT) != 0;
+}
+
+static bool __kprobes __check_al(unsigned long pstate)
+{
+	return true;
+}
+
+/*
+ * Note that the ARMv8 ARM calls condition code 0b1111 "nv", but states that
+ * it behaves identically to 0b1110 ("al").
+ */
+pstate_check_t * const aarch32_opcode_cond_checks[16] = {
+	__check_eq, __check_ne, __check_cs, __check_cc,
+	__check_mi, __check_pl, __check_vs, __check_vc,
+	__check_hi, __check_ls, __check_ge, __check_lt,
+	__check_gt, __check_le, __check_al, __check_al
+};
-- 
2.5.0


* [PATCH v14 04/10] arm64: Kprobes with single stepping support
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
                   ` (2 preceding siblings ...)
  2016-06-27  3:06 ` [PATCH v14 03/10] arm64: add conditional instruction simulation support David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-27  6:57   ` Pratyush Anand
  2016-06-27  3:06 ` [PATCH v14 05/10] arm64: Blacklist non-kprobe-able symbol David Long
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>

Add support for basic kernel probes (kprobes) and jump probes
(jprobes) for ARM64.

Kprobes utilizes software breakpoint and single step debug
exceptions supported on ARM v8.

A software breakpoint is placed at the probe address to trap the
kernel execution into the kprobe handler.

ARM v8 supports enabling single stepping before the break exception
return (ERET), with the next PC in the exception return address (ELR_EL1). The
kprobe handler prepares an executable memory slot for out-of-line
execution with a copy of the original instruction being probed, and
enables single stepping. The PC is set to the out-of-line slot address
before the ERET. With this scheme, the instruction is executed with the
exact same register context except for the PC (and DAIF) registers.
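
In outline, the breakpoint handler's single-step setup looks like the
condensed sketch below (the real code is setup_singlestep() in kprobes.c
in this patch; re-enter bookkeeping and the D-flag handling are omitted):

static void setup_singlestep_sketch(struct kprobe *p, struct pt_regs *regs,
				    struct kprobe_ctlblk *kcb)
{
	unsigned long slot = (unsigned long)p->ainsn.insn;

	kcb->kprobe_status = KPROBE_HIT_SS;
	set_ss_context(kcb, slot);		/* remember the slot being stepped */
	kprobes_save_local_irqflag(kcb, regs);	/* keep IRQs masked across the step */
	kernel_enable_single_step(regs);
	instruction_pointer_set(regs, slot);	/* ERET will land on the XOL slot */
}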

The debug mask (PSTATE.D) is enabled only when single stepping a recursive
kprobe, i.e. during kprobe re-entry, so that the probed instruction can be
single stepped within the kprobe handler's exception context.
The kprobe recursion depth is limited to 2: upon probe re-entry, any
further re-entry is prevented by not calling the handlers, and the case
is counted as a missed kprobe.

Single stepping from the XOL slot has a drawback for PC-relative accesses
such as branches and literal loads, because the offset from the new PC
(the slot address) is not guaranteed to fit in the immediate field of
the opcode. Such instructions need simulation, so probing them is
rejected.

Instructions that generate exceptions or change CPU mode are rejected
for probing.

Exclusive load/store instructions are rejected too.  Additionally, the
code is checked to see if it is inside an exclusive load/store sequence
(code from Pratyush).

System instructions are mostly enabled for stepping, except MSR/MRS
accesses to "DAIF" flags in PSTATE, which are not safe for
probing.

Thanks to Steve Capper and Pratyush Anand for several suggested
changes.

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
---
 arch/arm64/Kconfig                      |   1 +
 arch/arm64/include/asm/debug-monitors.h |   5 +
 arch/arm64/include/asm/insn.h           |   2 +
 arch/arm64/include/asm/kprobes.h        |  60 ++++
 arch/arm64/include/asm/probes.h         |  34 +++
 arch/arm64/include/asm/ptrace.h         |   1 +
 arch/arm64/kernel/Makefile              |   2 +-
 arch/arm64/kernel/debug-monitors.c      |  16 +-
 arch/arm64/kernel/kprobes/Makefile      |   1 +
 arch/arm64/kernel/kprobes/decode-insn.c | 143 +++++++++
 arch/arm64/kernel/kprobes/decode-insn.h |  34 +++
 arch/arm64/kernel/kprobes/kprobes.c     | 525 ++++++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S         |   1 +
 arch/arm64/mm/fault.c                   |  26 ++
 14 files changed, 848 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/include/asm/kprobes.h
 create mode 100644 arch/arm64/include/asm/probes.h
 create mode 100644 arch/arm64/kernel/kprobes/Makefile
 create mode 100644 arch/arm64/kernel/kprobes/decode-insn.c
 create mode 100644 arch/arm64/kernel/kprobes/decode-insn.h
 create mode 100644 arch/arm64/kernel/kprobes/kprobes.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fab133c..1f7d644 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -88,6 +88,7 @@ config ARM64
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RCU_TABLE_FREE
 	select HAVE_SYSCALL_TRACEPOINTS
+	select HAVE_KPROBES
 	select IOMMU_DMA if IOMMU_SUPPORT
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index 2fcb9b7..4b6b3f7 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -66,6 +66,11 @@
 
 #define CACHE_FLUSH_IS_SAFE		1
 
+/* kprobes BRK opcodes with ESR encoding  */
+#define BRK64_ESR_MASK		0xFFFF
+#define BRK64_ESR_KPROBES	0x0004
+#define BRK64_OPCODE_KPROBES	(AARCH64_BREAK_MON | (BRK64_ESR_KPROBES << 5))
+
 /* AArch32 */
 #define DBG_ESR_EVT_BKPT	0x4
 #define DBG_ESR_EVT_VECC	0x5
diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index a44abbd..1dbaa90 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -253,6 +253,8 @@ __AARCH64_INSN_FUNCS(ldr_reg,	0x3FE0EC00, 0x38606800)
 __AARCH64_INSN_FUNCS(ldr_lit,	0xBF000000, 0x18000000)
 __AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
 __AARCH64_INSN_FUNCS(exclusive,	0x3F800000, 0x08000000)
+__AARCH64_INSN_FUNCS(load_ex,	0x3F400000, 0x08400000)
+__AARCH64_INSN_FUNCS(store_ex,	0x3F400000, 0x08000000)
 __AARCH64_INSN_FUNCS(stp_post,	0x7FC00000, 0x28800000)
 __AARCH64_INSN_FUNCS(ldp_post,	0x7FC00000, 0x28C00000)
 __AARCH64_INSN_FUNCS(stp_pre,	0x7FC00000, 0x29800000)
diff --git a/arch/arm64/include/asm/kprobes.h b/arch/arm64/include/asm/kprobes.h
new file mode 100644
index 0000000..79c9511
--- /dev/null
+++ b/arch/arm64/include/asm/kprobes.h
@@ -0,0 +1,60 @@
+/*
+ * arch/arm64/include/asm/kprobes.h
+ *
+ * Copyright (C) 2013 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef _ARM_KPROBES_H
+#define _ARM_KPROBES_H
+
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/percpu.h>
+
+#define __ARCH_WANT_KPROBES_INSN_SLOT
+#define MAX_INSN_SIZE			1
+#define MAX_STACK_SIZE			128
+
+#define flush_insn_slot(p)		do { } while (0)
+#define kretprobe_blacklist_size	0
+
+#include <asm/probes.h>
+
+struct prev_kprobe {
+	struct kprobe *kp;
+	unsigned int status;
+};
+
+/* Single step context for kprobe */
+struct kprobe_step_ctx {
+	unsigned long ss_pending;
+	unsigned long match_addr;
+};
+
+/* per-cpu kprobe control block */
+struct kprobe_ctlblk {
+	unsigned int kprobe_status;
+	unsigned long saved_irqflag;
+	struct prev_kprobe prev_kprobe;
+	struct kprobe_step_ctx ss_ctx;
+	struct pt_regs jprobe_saved_regs;
+	char jprobes_stack[MAX_STACK_SIZE];
+};
+
+void arch_remove_kprobe(struct kprobe *);
+int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
+int kprobe_exceptions_notify(struct notifier_block *self,
+			     unsigned long val, void *data);
+int kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr);
+int kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr);
+
+#endif /* _ARM_KPROBES_H */
diff --git a/arch/arm64/include/asm/probes.h b/arch/arm64/include/asm/probes.h
new file mode 100644
index 0000000..1e8a21a
--- /dev/null
+++ b/arch/arm64/include/asm/probes.h
@@ -0,0 +1,34 @@
+/*
+ * arch/arm64/include/asm/probes.h
+ *
+ * Copyright (C) 2013 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+#ifndef _ARM_PROBES_H
+#define _ARM_PROBES_H
+
+struct kprobe;
+struct arch_specific_insn;
+
+typedef u32 kprobe_opcode_t;
+typedef unsigned long (kprobes_pstate_check_t)(unsigned long);
+typedef void (kprobes_handler_t) (u32 opcode, long addr, struct pt_regs *);
+
+/* architecture specific copy of original instruction */
+struct arch_specific_insn {
+	kprobe_opcode_t *insn;
+	kprobes_pstate_check_t *pstate_cc;
+	kprobes_handler_t *handler;
+	/* restore address after step xol */
+	unsigned long restore;
+};
+
+#endif
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 6c0c7d3..c7bbeed 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -209,6 +209,7 @@ struct task_struct;
 int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task);
 
 #define instruction_pointer(regs)	((unsigned long)(regs)->pc)
+#define instruction_pointer_set(regs, value)	((regs)->pc = ((u64) (value)))
 
 extern unsigned long profile_pc(struct pt_regs *regs);
 
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 4653aca..75b3ae7 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -46,7 +46,7 @@ arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
 arm64-obj-$(CONFIG_RANDOMIZE_BASE)	+= kaslr.o
 arm64-obj-$(CONFIG_HIBERNATION)		+= hibernate.o hibernate-asm.o
 
-obj-y					+= $(arm64-obj-y) vdso/
+obj-y					+= $(arm64-obj-y) vdso/ kprobes/
 obj-m					+= $(arm64-obj-m)
 head-y					:= head.o
 extra-y					+= $(head-y) vmlinux.lds
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 4fbf3c5..395de61 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -23,6 +23,7 @@
 #include <linux/hardirq.h>
 #include <linux/init.h>
 #include <linux/ptrace.h>
+#include <linux/kprobes.h>
 #include <linux/stat.h>
 #include <linux/uaccess.h>
 
@@ -266,6 +267,10 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
 		 */
 		user_rewind_single_step(current);
 	} else {
+#ifdef	CONFIG_KPROBES
+		if (kprobe_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
+			return 0;
+#endif
 		if (call_step_hook(regs, esr) == DBG_HOOK_HANDLED)
 			return 0;
 
@@ -322,8 +327,15 @@ static int brk_handler(unsigned long addr, unsigned int esr,
 {
 	if (user_mode(regs)) {
 		send_user_sigtrap(TRAP_BRKPT);
-	} else if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED) {
-		pr_warning("Unexpected kernel BRK exception at EL1\n");
+	}
+#ifdef	CONFIG_KPROBES
+	else if ((esr & BRK64_ESR_MASK) == BRK64_ESR_KPROBES) {
+		if (kprobe_breakpoint_handler(regs, esr) != DBG_HOOK_HANDLED)
+			return -EFAULT;
+	}
+#endif
+	else if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED) {
+		pr_warn("Unexpected kernel BRK exception at EL1\n");
 		return -EFAULT;
 	}
 
diff --git a/arch/arm64/kernel/kprobes/Makefile b/arch/arm64/kernel/kprobes/Makefile
new file mode 100644
index 0000000..bc159bf
--- /dev/null
+++ b/arch/arm64/kernel/kprobes/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o
diff --git a/arch/arm64/kernel/kprobes/decode-insn.c b/arch/arm64/kernel/kprobes/decode-insn.c
new file mode 100644
index 0000000..0ca1584
--- /dev/null
+++ b/arch/arm64/kernel/kprobes/decode-insn.c
@@ -0,0 +1,143 @@
+/*
+ * arch/arm64/kernel/kprobes/decode-insn.c
+ *
+ * Copyright (C) 2013 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/kprobes.h>
+#include <linux/module.h>
+#include <asm/kprobes.h>
+#include <asm/insn.h>
+#include <asm/sections.h>
+
+#include "decode-insn.h"
+
+static bool __kprobes aarch64_insn_is_steppable(u32 insn)
+{
+	/*
+	 * Branch instructions will write a new value into the PC which is
+	 * likely to be relative to the XOL address and therefore invalid.
+	 * Deliberate generation of an exception during stepping is also not
+	 * currently safe. Lastly, MSR instructions can do any number of nasty
+	 * things we can't handle during single-stepping.
+	 */
+	if (aarch64_get_insn_class(insn) == AARCH64_INSN_CLS_BR_SYS) {
+		if (aarch64_insn_is_branch(insn) ||
+		    aarch64_insn_is_msr_imm(insn) ||
+		    aarch64_insn_is_msr_reg(insn) ||
+		    aarch64_insn_is_exception(insn) ||
+		    aarch64_insn_is_eret(insn))
+			return false;
+
+		/*
+		 * The MRS instruction may not return a correct value when
+		 * executing in the single-stepping environment. We do make one
+		 * exception, for reading the DAIF bits.
+		 */
+		if (aarch64_insn_is_mrs(insn))
+			return aarch64_insn_extract_system_reg(insn)
+			     != AARCH64_INSN_SPCLREG_DAIF;
+
+		/*
+		 * The HINT instruction is problematic when single-stepping,
+		 * except for the NOP case.
+		 */
+		if (aarch64_insn_is_hint(insn))
+			return aarch64_insn_is_nop(insn);
+
+		return true;
+	}
+
+	/*
+	 * Instructions which load PC relative literals are not going to work
+	 * when executed from an XOL slot. Instructions doing an exclusive
+	 * load/store are not going to complete successfully when single-step
+	 * exception handling happens in the middle of the sequence.
+	 */
+	if (aarch64_insn_uses_literal(insn) ||
+	    aarch64_insn_is_exclusive(insn))
+		return false;
+
+	return true;
+}
+
+/* Return:
+ *   INSN_REJECTED     If instruction is one not allowed to kprobe,
+ *   INSN_GOOD         If instruction is supported and uses instruction slot,
+ */
+static enum kprobe_insn __kprobes
+arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
+{
+	/*
+	 * Instructions reading or modifying the PC won't work from the XOL
+	 * slot.
+	 */
+	if (aarch64_insn_is_steppable(insn))
+		return INSN_GOOD;
+	else
+		return INSN_REJECTED;
+}
+
+static bool __kprobes
+is_probed_address_atomic(kprobe_opcode_t *scan_start, kprobe_opcode_t *scan_end)
+{
+	while (scan_start > scan_end) {
+		/*
+		 * atomic region starts from exclusive load and ends with
+		 * exclusive store.
+		 */
+		if (aarch64_insn_is_store_ex(le32_to_cpu(*scan_start)))
+			return false;
+		else if (aarch64_insn_is_load_ex(le32_to_cpu(*scan_start)))
+			return true;
+		scan_start--;
+	}
+
+	return false;
+}
+
+enum kprobe_insn __kprobes
+arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
+{
+	enum kprobe_insn decoded;
+	kprobe_opcode_t insn = le32_to_cpu(*addr);
+	kprobe_opcode_t *scan_start = addr - 1;
+	kprobe_opcode_t *scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
+#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
+	struct module *mod;
+#endif
+
+	if (addr >= (kprobe_opcode_t *)_text &&
+	    scan_end < (kprobe_opcode_t *)_text)
+		scan_end = (kprobe_opcode_t *)_text;
+#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
+	else {
+		preempt_disable();
+		mod = __module_address((unsigned long)addr);
+		if (mod && within_module_init((unsigned long)addr, mod) &&
+			!within_module_init((unsigned long)scan_end, mod))
+			scan_end = (kprobe_opcode_t *)mod->init_layout.base;
+		else if (mod && within_module_core((unsigned long)addr, mod) &&
+			!within_module_core((unsigned long)scan_end, mod))
+			scan_end = (kprobe_opcode_t *)mod->core_layout.base;
+		preempt_enable();
+	}
+#endif
+	decoded = arm_probe_decode_insn(insn, asi);
+
+	if (decoded == INSN_REJECTED ||
+			is_probed_address_atomic(scan_start, scan_end))
+		return INSN_REJECTED;
+
+	return decoded;
+}
diff --git a/arch/arm64/kernel/kprobes/decode-insn.h b/arch/arm64/kernel/kprobes/decode-insn.h
new file mode 100644
index 0000000..b98774d
--- /dev/null
+++ b/arch/arm64/kernel/kprobes/decode-insn.h
@@ -0,0 +1,34 @@
+/*
+ * arch/arm64/kernel/kprobes/decode-insn.h
+ *
+ * Copyright (C) 2013 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef _ARM_KERNEL_KPROBES_ARM64_H
+#define _ARM_KERNEL_KPROBES_ARM64_H
+
+/*
+ * ARM strongly recommends a limit of 128 bytes between LoadExcl and
+ * StoreExcl instructions in a single thread of execution. So keep the
+ * max atomic context size as 32.
+ */
+#define MAX_ATOMIC_CONTEXT_SIZE	(128 / sizeof(kprobe_opcode_t))
+
+enum kprobe_insn {
+	INSN_REJECTED,
+	INSN_GOOD,
+};
+
+enum kprobe_insn __kprobes
+arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi);
+
+#endif /* _ARM_KERNEL_KPROBES_ARM64_H */
diff --git a/arch/arm64/kernel/kprobes/kprobes.c b/arch/arm64/kernel/kprobes/kprobes.c
new file mode 100644
index 0000000..189b0d2
--- /dev/null
+++ b/arch/arm64/kernel/kprobes/kprobes.c
@@ -0,0 +1,525 @@
+/*
+ * arch/arm64/kernel/kprobes.c
+ *
+ * Kprobes support for ARM64
+ *
+ * Copyright (C) 2013 Linaro Limited.
+ * Author: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/kprobes.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/stop_machine.h>
+#include <linux/stringify.h>
+#include <asm/traps.h>
+#include <asm/ptrace.h>
+#include <asm/cacheflush.h>
+#include <asm/debug-monitors.h>
+#include <asm/system_misc.h>
+#include <asm/insn.h>
+#include <asm/uaccess.h>
+#include <asm/irq.h>
+
+#include "decode-insn.h"
+
+#define MIN_STACK_SIZE(addr)	(on_irq_stack(addr, raw_smp_processor_id()) ? \
+	min((unsigned long)IRQ_STACK_SIZE,	\
+	IRQ_STACK_PTR(raw_smp_processor_id()) - (addr)) : \
+	min((unsigned long)MAX_STACK_SIZE,	\
+	(unsigned long)current_thread_info() + THREAD_START_SP - (addr)))
+
+void jprobe_return_break(void);
+
+DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
+DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
+
+static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
+{
+	/* prepare insn slot */
+	p->ainsn.insn[0] = cpu_to_le32(p->opcode);
+
+	flush_icache_range((uintptr_t) (p->ainsn.insn),
+			   (uintptr_t) (p->ainsn.insn) +
+			   MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+
+	/*
+	 * Needs restoring of return address after stepping xol.
+	 */
+	p->ainsn.restore = (unsigned long) p->addr +
+	  sizeof(kprobe_opcode_t);
+}
+
+int __kprobes arch_prepare_kprobe(struct kprobe *p)
+{
+	unsigned long probe_addr = (unsigned long)p->addr;
+	extern char __start_rodata[];
+	extern char __end_rodata[];
+
+	if (probe_addr & 0x3)
+		return -EINVAL;
+
+	/* copy instruction */
+	p->opcode = le32_to_cpu(*p->addr);
+
+	if (in_exception_text(probe_addr))
+		return -EINVAL;
+	if (probe_addr >= (unsigned long) __start_rodata &&
+	    probe_addr <= (unsigned long) __end_rodata)
+		return -EINVAL;
+
+	/* decode instruction */
+	switch (arm_kprobe_decode_insn(p->addr, &p->ainsn)) {
+	case INSN_REJECTED:	/* insn not supported */
+		return -EINVAL;
+
+	case INSN_GOOD:	/* instruction uses slot */
+		p->ainsn.insn = get_insn_slot();
+		if (!p->ainsn.insn)
+			return -ENOMEM;
+		break;
+	};
+
+	/* prepare the instruction */
+	arch_prepare_ss_slot(p);
+
+	return 0;
+}
+
+static int __kprobes patch_text(kprobe_opcode_t *addr, u32 opcode)
+{
+	void *addrs[1];
+	u32 insns[1];
+
+	addrs[0] = (void *)addr;
+	insns[0] = (u32)opcode;
+
+	return aarch64_insn_patch_text(addrs, insns, 1);
+}
+
+/* arm kprobe: install breakpoint in text */
+void __kprobes arch_arm_kprobe(struct kprobe *p)
+{
+	patch_text(p->addr, BRK64_OPCODE_KPROBES);
+}
+
+/* disarm kprobe: remove breakpoint from text */
+void __kprobes arch_disarm_kprobe(struct kprobe *p)
+{
+	patch_text(p->addr, p->opcode);
+}
+
+void __kprobes arch_remove_kprobe(struct kprobe *p)
+{
+	if (p->ainsn.insn) {
+		free_insn_slot(p->ainsn.insn, 0);
+		p->ainsn.insn = NULL;
+	}
+}
+
+static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
+{
+	kcb->prev_kprobe.kp = kprobe_running();
+	kcb->prev_kprobe.status = kcb->kprobe_status;
+}
+
+static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
+{
+	__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
+	kcb->kprobe_status = kcb->prev_kprobe.status;
+}
+
+static void __kprobes set_current_kprobe(struct kprobe *p)
+{
+	__this_cpu_write(current_kprobe, p);
+}
+
+/*
+ * The D-flag (Debug mask) is set (masked) upon debug exception entry.
+ * Kprobes needs to clear (unmask) D-flag -ONLY- in case of recursive
+ * probe i.e. when probe hit from kprobe handler context upon
+ * executing the pre/post handlers. In this case we return with
+ * D-flag clear so that single-stepping can be carried-out.
+ *
+ * Leave D-flag set in all other cases.
+ */
+static void __kprobes
+spsr_set_debug_flag(struct pt_regs *regs, int mask)
+{
+	unsigned long spsr = regs->pstate;
+
+	if (mask)
+		spsr |= PSR_D_BIT;
+	else
+		spsr &= ~PSR_D_BIT;
+
+	regs->pstate = spsr;
+}
+
+/*
+ * Interrupts need to be disabled before single-step mode is set, and not
+ * reenabled until after single-step mode ends.
+ * Without disabling interrupts on the local CPU, there is a chance of an
+ * interrupt occurring between the exception return and the start of the
+ * out-of-line single step, which would result in wrongly single stepping
+ * into the interrupt handler.
+ */
+static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
+						struct pt_regs *regs)
+{
+	kcb->saved_irqflag = regs->pstate;
+	regs->pstate |= PSR_I_BIT;
+}
+
+static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
+						struct pt_regs *regs)
+{
+	if (kcb->saved_irqflag & PSR_I_BIT)
+		regs->pstate |= PSR_I_BIT;
+	else
+		regs->pstate &= ~PSR_I_BIT;
+}
+
+static void __kprobes
+set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr)
+{
+	kcb->ss_ctx.ss_pending = true;
+	kcb->ss_ctx.match_addr = addr + sizeof(kprobe_opcode_t);
+}
+
+static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
+{
+	kcb->ss_ctx.ss_pending = false;
+	kcb->ss_ctx.match_addr = 0;
+}
+
+static void __kprobes setup_singlestep(struct kprobe *p,
+				       struct pt_regs *regs,
+				       struct kprobe_ctlblk *kcb, int reenter)
+{
+	unsigned long slot;
+
+	if (reenter) {
+		save_previous_kprobe(kcb);
+		set_current_kprobe(p);
+		kcb->kprobe_status = KPROBE_REENTER;
+	} else {
+		kcb->kprobe_status = KPROBE_HIT_SS;
+	}
+
+	BUG_ON(!p->ainsn.insn);
+
+	/* prepare for single stepping */
+	slot = (unsigned long)p->ainsn.insn;
+
+	set_ss_context(kcb, slot);	/* mark pending ss */
+
+	if (kcb->kprobe_status == KPROBE_REENTER)
+		spsr_set_debug_flag(regs, 0);
+
+	/* IRQs and single stepping do not mix well. */
+	kprobes_save_local_irqflag(kcb, regs);
+	kernel_enable_single_step(regs);
+	instruction_pointer_set(regs, slot);
+}
+
+static int __kprobes reenter_kprobe(struct kprobe *p,
+				    struct pt_regs *regs,
+				    struct kprobe_ctlblk *kcb)
+{
+	switch (kcb->kprobe_status) {
+	case KPROBE_HIT_SSDONE:
+	case KPROBE_HIT_ACTIVE:
+		kprobes_inc_nmissed_count(p);
+		setup_singlestep(p, regs, kcb, 1);
+		break;
+	case KPROBE_HIT_SS:
+	case KPROBE_REENTER:
+		pr_warn("Unrecoverable kprobe detected at %p.\n", p->addr);
+		dump_kprobe(p);
+		BUG();
+		break;
+	default:
+		WARN_ON(1);
+		return 0;
+	}
+
+	return 1;
+}
+
+static void __kprobes
+post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
+{
+	struct kprobe *cur = kprobe_running();
+
+	if (!cur)
+		return;
+
+	/* return addr restore if non-branching insn */
+	if (cur->ainsn.restore != 0)
+		instruction_pointer_set(regs, cur->ainsn.restore);
+
+	/* restore back original saved kprobe variables and continue */
+	if (kcb->kprobe_status == KPROBE_REENTER) {
+		restore_previous_kprobe(kcb);
+		return;
+	}
+	/* call post handler */
+	kcb->kprobe_status = KPROBE_HIT_SSDONE;
+	if (cur->post_handler) {
+		/*
+		 * post_handler can hit a breakpoint and single step
+		 * again, so we enable the D-flag for a recursive exception.
+		 */
+		cur->post_handler(cur, regs, 0);
+	}
+
+	reset_current_kprobe();
+}
+
+int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
+{
+	struct kprobe *cur = kprobe_running();
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	switch (kcb->kprobe_status) {
+	case KPROBE_HIT_SS:
+	case KPROBE_REENTER:
+		/*
+		 * We are here because the instruction being single
+		 * stepped caused a page fault. We reset the current
+		 * kprobe, point the ip back to the probe address,
+		 * and allow the page fault handler to continue as a
+		 * normal page fault.
+		 */
+		instruction_pointer_set(regs, cur->addr);
+		if (!instruction_pointer(regs))
+			BUG();
+
+		kernel_disable_single_step();
+		if (kcb->kprobe_status == KPROBE_REENTER)
+			spsr_set_debug_flag(regs, 1);
+
+		if (kcb->kprobe_status == KPROBE_REENTER)
+			restore_previous_kprobe(kcb);
+		else
+			reset_current_kprobe();
+
+		break;
+	case KPROBE_HIT_ACTIVE:
+	case KPROBE_HIT_SSDONE:
+		/*
+		 * We increment the nmissed count for accounting;
+		 * we could also use the npre/npostfault counts to
+		 * account for these specific fault cases.
+		 */
+		kprobes_inc_nmissed_count(cur);
+
+		/*
+		 * We come here because instructions in the pre/post
+		 * handler caused the page fault. This could happen
+		 * if the handler tried to access user space via
+		 * copy_from_user(), get_user() etc. Let the
+		 * user-specified handler try to fix it first.
+		 */
+		if (cur->fault_handler && cur->fault_handler(cur, regs, fsr))
+			return 1;
+
+		/*
+		 * In case the user-specified fault handler returned
+		 * zero, try to fix up.
+		 */
+		if (fixup_exception(regs))
+			return 1;
+	}
+	return 0;
+}
+
+int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
+				       unsigned long val, void *data)
+{
+	return NOTIFY_DONE;
+}
+
+static void __kprobes kprobe_handler(struct pt_regs *regs)
+{
+	struct kprobe *p, *cur_kprobe;
+	struct kprobe_ctlblk *kcb;
+	unsigned long addr = instruction_pointer(regs);
+
+	kcb = get_kprobe_ctlblk();
+	cur_kprobe = kprobe_running();
+
+	p = get_kprobe((kprobe_opcode_t *) addr);
+
+	if (p) {
+		if (cur_kprobe) {
+			if (reenter_kprobe(p, regs, kcb))
+				return;
+		} else {
+			/* Probe hit */
+			set_current_kprobe(p);
+			kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+
+			/*
+			 * If we have no pre-handler or it returned 0, we
+			 * continue with normal processing.  If we have a
+			 * pre-handler and it returned non-zero, it prepped
+			 * for calling the break_handler below on re-entry,
+			 * so get out doing nothing more here.
+			 *
+			 * The pre_handler can itself hit a breakpoint and
+			 * single step before returning, so keep the PSTATE
+			 * D-flag enabled until the pre_handler returns.
+			 */
+			if (!p->pre_handler || !p->pre_handler(p, regs)) {
+				setup_singlestep(p, regs, kcb, 0);
+				return;
+			}
+		}
+	} else if ((le32_to_cpu(*(kprobe_opcode_t *) addr) ==
+	    BRK64_OPCODE_KPROBES) && cur_kprobe) {
+		/* We probably hit a jprobe.  Call its break handler. */
+		if (cur_kprobe->break_handler  &&
+		     cur_kprobe->break_handler(cur_kprobe, regs)) {
+			setup_singlestep(cur_kprobe, regs, kcb, 0);
+			return;
+		}
+	}
+	/*
+	 * The breakpoint instruction was removed right
+	 * after we hit it.  Another cpu has removed
+	 * either a probepoint or a debugger breakpoint
+	 * at this address.  In either case, no further
+	 * handling of this interrupt is appropriate.
+	 * Return to the original instruction and continue.
+	 */
+}
+
+static int __kprobes
+kprobe_ss_hit(struct kprobe_ctlblk *kcb, unsigned long addr)
+{
+	if ((kcb->ss_ctx.ss_pending)
+	    && (kcb->ss_ctx.match_addr == addr)) {
+		clear_ss_context(kcb);	/* clear pending ss */
+		return DBG_HOOK_HANDLED;
+	}
+	/* not ours, kprobes should ignore it */
+	return DBG_HOOK_ERROR;
+}
+
+int __kprobes
+kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	int retval;
+
+	/* return error if this is not our step */
+	retval = kprobe_ss_hit(kcb, instruction_pointer(regs));
+
+	if (retval == DBG_HOOK_HANDLED) {
+		kprobes_restore_local_irqflag(kcb, regs);
+		kernel_disable_single_step();
+
+		if (kcb->kprobe_status == KPROBE_REENTER)
+			spsr_set_debug_flag(regs, 1);
+
+		post_kprobe_handler(kcb, regs);
+	}
+
+	return retval;
+}
+
+int __kprobes
+kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr)
+{
+	kprobe_handler(regs);
+	return DBG_HOOK_HANDLED;
+}
+
+int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
+{
+	struct jprobe *jp = container_of(p, struct jprobe, kp);
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	long stack_ptr = kernel_stack_pointer(regs);
+
+	kcb->jprobe_saved_regs = *regs;
+	/*
+	 * As Linus pointed out, gcc assumes that the callee
+	 * owns the argument space and could overwrite it, e.g.
+	 * tailcall optimization. So, to be absolutely safe
+	 * we also save and restore enough stack bytes to cover
+	 * the argument area.
+	 */
+	memcpy(kcb->jprobes_stack, (void *)stack_ptr,
+	       MIN_STACK_SIZE(stack_ptr));
+
+	instruction_pointer_set(regs, jp->entry);
+	preempt_disable();
+	pause_graph_tracing();
+	return 1;
+}
+
+void __kprobes jprobe_return(void)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	/*
+	 * Jprobe handlers return by entering a break exception,
+	 * encoded the same as a kprobe breakpoint, but with the
+	 * following conditions:
+	 * - a magic number in x0 to identify it from other kprobes;
+	 * - the stack address restored to the originally saved pt_regs.
+	 */
+	asm volatile ("ldr x0, [%0]\n\t"
+		      "mov sp, x0\n\t"
+		      ".globl jprobe_return_break\n\t"
+		      "jprobe_return_break:\n\t"
+		      "brk %1\n\t"
+		      :
+		      : "r"(&kcb->jprobe_saved_regs.sp),
+		      "I"(BRK64_ESR_KPROBES)
+		      : "memory");
+}
+
+int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	long stack_addr = kcb->jprobe_saved_regs.sp;
+	long orig_sp = kernel_stack_pointer(regs);
+	struct jprobe *jp = container_of(p, struct jprobe, kp);
+
+	if (instruction_pointer(regs) != (u64) jprobe_return_break)
+		return 0;
+
+	if (orig_sp != stack_addr) {
+		struct pt_regs *saved_regs =
+		    (struct pt_regs *)kcb->jprobe_saved_regs.sp;
+		pr_err("current sp %lx does not match saved sp %lx\n",
+		       orig_sp, stack_addr);
+		pr_err("Saved registers for jprobe %p\n", jp);
+		show_regs(saved_regs);
+		pr_err("Current registers\n");
+		show_regs(regs);
+		BUG();
+	}
+	unpause_graph_tracing();
+	*regs = kcb->jprobe_saved_regs;
+	memcpy((void *)stack_addr, kcb->jprobes_stack,
+	       MIN_STACK_SIZE(stack_addr));
+	preempt_enable_no_resched();
+	return 1;
+}
+
+int __init arch_init_kprobes(void)
+{
+	return 0;
+}
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 435e820..075ce32 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -121,6 +121,7 @@ SECTIONS
 			TEXT_TEXT
 			SCHED_TEXT
 			LOCK_TEXT
+			KPROBES_TEXT
 			HYPERVISOR_TEXT
 			IDMAP_TEXT
 			HIBERNATE_TEXT
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 013e2cb..2408e51 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -41,6 +41,28 @@
 
 static const char *fault_name(unsigned int esr);
 
+#ifdef CONFIG_KPROBES
+static inline int notify_page_fault(struct pt_regs *regs, unsigned int esr)
+{
+	int ret = 0;
+
+	/* kprobe_running() needs smp_processor_id() */
+	if (!user_mode(regs)) {
+		preempt_disable();
+		if (kprobe_running() && kprobe_fault_handler(regs, esr))
+			ret = 1;
+		preempt_enable();
+	}
+
+	return ret;
+}
+#else
+static inline int notify_page_fault(struct pt_regs *regs, unsigned int esr)
+{
+	return 0;
+}
+#endif
+
 /*
  * Dump out the page tables associated with 'addr' in mm 'mm'.
  */
@@ -259,6 +281,9 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	unsigned long vm_flags = VM_READ | VM_WRITE | VM_EXEC;
 	unsigned int mm_flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
+	if (notify_page_fault(regs, esr))
+		return 0;
+
 	tsk = current;
 	mm  = tsk->mm;
 
@@ -629,6 +654,7 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
 
 	return rv;
 }
+NOKPROBE_SYMBOL(do_debug_exception);
 
 #ifdef CONFIG_ARM64_PAN
 void cpu_enable_pan(void *__unused)
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v14 05/10] arm64: Blacklist non-kprobe-able symbol
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
                   ` (3 preceding siblings ...)
  2016-06-27  3:06 ` [PATCH v14 04/10] arm64: Kprobes with single stepping support David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-27  3:06 ` [PATCH v14 06/10] arm64: Treat all entry code as non-kprobe-able David Long
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: Pratyush Anand <panand@redhat.com>

Mark all function symbols which are called from do_debug_exception with
NOKPROBE_SYMBOL, as they cannot be kprobed.
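
For reference, a minimal sketch of what the annotation does (the helper name
below is hypothetical, not part of this patch): NOKPROBE_SYMBOL() records the
symbol's address in the _kprobe_blacklist section, which the generic kprobes
core consults before allowing a probe to be registered.

#include <linux/kprobes.h>

/* Hypothetical helper that runs in debug-exception context. */
static int my_debug_helper(struct pt_regs *regs, unsigned int esr)
{
	/* ...called under do_debug_exception(), so it must never be probed... */
	return 0;
}
NOKPROBE_SYMBOL(my_debug_helper);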

Signed-off-by: Pratyush Anand <panand@redhat.com>
---
 arch/arm64/kernel/arm64ksyms.c     |  2 ++
 arch/arm64/kernel/debug-monitors.c | 17 +++++++++++++++++
 arch/arm64/kernel/hw_breakpoint.c  |  8 ++++++++
 arch/arm64/kernel/kgdb.c           |  4 ++++
 4 files changed, 31 insertions(+)

diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index 678f30b0..b96ff1a 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -27,6 +27,7 @@
 #include <linux/uaccess.h>
 #include <linux/io.h>
 #include <linux/arm-smccc.h>
+#include <linux/kprobes.h>
 
 #include <asm/checksum.h>
 
@@ -68,6 +69,7 @@ EXPORT_SYMBOL(test_and_change_bit);
 
 #ifdef CONFIG_FUNCTION_TRACER
 EXPORT_SYMBOL(_mcount);
+NOKPROBE_SYMBOL(_mcount);
 #endif
 
 	/* arm-smccc */
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 395de61..2fbc1b9 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -49,6 +49,7 @@ static void mdscr_write(u32 mdscr)
 	asm volatile("msr mdscr_el1, %0" :: "r" (mdscr));
 	local_dbg_restore(flags);
 }
+NOKPROBE_SYMBOL(mdscr_write);
 
 static u32 mdscr_read(void)
 {
@@ -56,6 +57,7 @@ static u32 mdscr_read(void)
 	asm volatile("mrs %0, mdscr_el1" : "=r" (mdscr));
 	return mdscr;
 }
+NOKPROBE_SYMBOL(mdscr_read);
 
 /*
  * Allow root to disable self-hosted debug from userspace.
@@ -104,6 +106,7 @@ void enable_debug_monitors(enum dbg_active_el el)
 		mdscr_write(mdscr);
 	}
 }
+NOKPROBE_SYMBOL(enable_debug_monitors);
 
 void disable_debug_monitors(enum dbg_active_el el)
 {
@@ -124,6 +127,7 @@ void disable_debug_monitors(enum dbg_active_el el)
 		mdscr_write(mdscr);
 	}
 }
+NOKPROBE_SYMBOL(disable_debug_monitors);
 
 /*
  * OS lock clearing.
@@ -174,6 +178,7 @@ static void set_regs_spsr_ss(struct pt_regs *regs)
 	spsr |= DBG_SPSR_SS;
 	regs->pstate = spsr;
 }
+NOKPROBE_SYMBOL(set_regs_spsr_ss);
 
 static void clear_regs_spsr_ss(struct pt_regs *regs)
 {
@@ -183,6 +188,7 @@ static void clear_regs_spsr_ss(struct pt_regs *regs)
 	spsr &= ~DBG_SPSR_SS;
 	regs->pstate = spsr;
 }
+NOKPROBE_SYMBOL(clear_regs_spsr_ss);
 
 /* EL1 Single Step Handler hooks */
 static LIST_HEAD(step_hook);
@@ -226,6 +232,7 @@ static int call_step_hook(struct pt_regs *regs, unsigned int esr)
 
 	return retval;
 }
+NOKPROBE_SYMBOL(call_step_hook);
 
 static void send_user_sigtrap(int si_code)
 {
@@ -284,6 +291,7 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
 
 	return 0;
 }
+NOKPROBE_SYMBOL(single_step_handler);
 
 /*
  * Breakpoint handler is re-entrant as another breakpoint can
@@ -321,6 +329,7 @@ static int call_break_hook(struct pt_regs *regs, unsigned int esr)
 
 	return fn ? fn(regs, esr) : DBG_HOOK_ERROR;
 }
+NOKPROBE_SYMBOL(call_break_hook);
 
 static int brk_handler(unsigned long addr, unsigned int esr,
 		       struct pt_regs *regs)
@@ -341,6 +350,7 @@ static int brk_handler(unsigned long addr, unsigned int esr,
 
 	return 0;
 }
+NOKPROBE_SYMBOL(brk_handler);
 
 int aarch32_break_handler(struct pt_regs *regs)
 {
@@ -377,6 +387,7 @@ int aarch32_break_handler(struct pt_regs *regs)
 	send_user_sigtrap(TRAP_BRKPT);
 	return 0;
 }
+NOKPROBE_SYMBOL(aarch32_break_handler);
 
 static int __init debug_traps_init(void)
 {
@@ -398,6 +409,7 @@ void user_rewind_single_step(struct task_struct *task)
 	if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
 		set_regs_spsr_ss(task_pt_regs(task));
 }
+NOKPROBE_SYMBOL(user_rewind_single_step);
 
 void user_fastforward_single_step(struct task_struct *task)
 {
@@ -413,6 +425,7 @@ void kernel_enable_single_step(struct pt_regs *regs)
 	mdscr_write(mdscr_read() | DBG_MDSCR_SS);
 	enable_debug_monitors(DBG_ACTIVE_EL1);
 }
+NOKPROBE_SYMBOL(kernel_enable_single_step);
 
 void kernel_disable_single_step(void)
 {
@@ -420,12 +433,14 @@ void kernel_disable_single_step(void)
 	mdscr_write(mdscr_read() & ~DBG_MDSCR_SS);
 	disable_debug_monitors(DBG_ACTIVE_EL1);
 }
+NOKPROBE_SYMBOL(kernel_disable_single_step);
 
 int kernel_active_single_step(void)
 {
 	WARN_ON(!irqs_disabled());
 	return mdscr_read() & DBG_MDSCR_SS;
 }
+NOKPROBE_SYMBOL(kernel_active_single_step);
 
 /* ptrace API */
 void user_enable_single_step(struct task_struct *task)
@@ -433,8 +448,10 @@ void user_enable_single_step(struct task_struct *task)
 	set_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
 	set_regs_spsr_ss(task_pt_regs(task));
 }
+NOKPROBE_SYMBOL(user_enable_single_step);
 
 void user_disable_single_step(struct task_struct *task)
 {
 	clear_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
 }
+NOKPROBE_SYMBOL(user_disable_single_step);
diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
index ce21aa8..26a6bf7 100644
--- a/arch/arm64/kernel/hw_breakpoint.c
+++ b/arch/arm64/kernel/hw_breakpoint.c
@@ -24,6 +24,7 @@
 #include <linux/cpu_pm.h>
 #include <linux/errno.h>
 #include <linux/hw_breakpoint.h>
+#include <linux/kprobes.h>
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
 #include <linux/smp.h>
@@ -127,6 +128,7 @@ static u64 read_wb_reg(int reg, int n)
 
 	return val;
 }
+NOKPROBE_SYMBOL(read_wb_reg);
 
 static void write_wb_reg(int reg, int n, u64 val)
 {
@@ -140,6 +142,7 @@ static void write_wb_reg(int reg, int n, u64 val)
 	}
 	isb();
 }
+NOKPROBE_SYMBOL(write_wb_reg);
 
 /*
  * Convert a breakpoint privilege level to the corresponding exception
@@ -157,6 +160,7 @@ static enum dbg_active_el debug_exception_level(int privilege)
 		return -EINVAL;
 	}
 }
+NOKPROBE_SYMBOL(debug_exception_level);
 
 enum hw_breakpoint_ops {
 	HW_BREAKPOINT_INSTALL,
@@ -575,6 +579,7 @@ static void toggle_bp_registers(int reg, enum dbg_active_el el, int enable)
 		write_wb_reg(reg, i, ctrl);
 	}
 }
+NOKPROBE_SYMBOL(toggle_bp_registers);
 
 /*
  * Debug exception handlers.
@@ -654,6 +659,7 @@ unlock:
 
 	return 0;
 }
+NOKPROBE_SYMBOL(breakpoint_handler);
 
 static int watchpoint_handler(unsigned long addr, unsigned int esr,
 			      struct pt_regs *regs)
@@ -756,6 +762,7 @@ unlock:
 
 	return 0;
 }
+NOKPROBE_SYMBOL(watchpoint_handler);
 
 /*
  * Handle single-step exception.
@@ -813,6 +820,7 @@ int reinstall_suspended_bps(struct pt_regs *regs)
 
 	return !handled_exception;
 }
+NOKPROBE_SYMBOL(reinstall_suspended_bps);
 
 /*
  * Context-switcher for restoring suspended breakpoints.
diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
index b5f063e..8c57f64 100644
--- a/arch/arm64/kernel/kgdb.c
+++ b/arch/arm64/kernel/kgdb.c
@@ -22,6 +22,7 @@
 #include <linux/irq.h>
 #include <linux/kdebug.h>
 #include <linux/kgdb.h>
+#include <linux/kprobes.h>
 #include <asm/traps.h>
 
 struct dbg_reg_def_t dbg_reg_def[DBG_MAX_REG_NUM] = {
@@ -230,6 +231,7 @@ static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr)
 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
 	return 0;
 }
+NOKPROBE_SYMBOL(kgdb_brk_fn);
 
 static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
 {
@@ -238,12 +240,14 @@ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
 
 	return 0;
 }
+NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
 
 static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
 {
 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
 	return 0;
 }
+NOKPROBE_SYMBOL(kgdb_step_brk_fn);
 
 static struct break_hook kgdb_brkpt_hook = {
 	.esr_mask	= 0xffffffff,
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v14 06/10] arm64: Treat all entry code as non-kprobe-able
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
                   ` (4 preceding siblings ...)
  2016-06-27  3:06 ` [PATCH v14 05/10] arm64: Blacklist non-kprobe-able symbol David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-27  3:06 ` [PATCH v14 07/10] arm64: kprobes instruction simulation support David Long
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: Pratyush Anand <panand@redhat.com>

Entry symbols are not kprobe-safe, so blacklist them for kprobing.
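
For reference, a simplified sketch of how the generic kprobes core ends up
consulting the new arch hook at registration time (paraphrased from
kernel/kprobes.c, not part of this patch):

/* Registration fails when the probed address is blacklisted. */
static bool within_kprobe_blacklist(unsigned long addr)
{
	if (arch_within_kprobe_blacklist(addr))
		return true;
	/* ...otherwise the generic NOKPROBE_SYMBOL blacklist is searched... */
	return false;
}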

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/arm64/kernel/entry.S           |  3 +++
 arch/arm64/kernel/kprobes/kprobes.c | 26 ++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S     |  1 +
 3 files changed, 30 insertions(+)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 12e8d2b..7d99bed 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -243,6 +243,7 @@ tsk	.req	x28		// current thread_info
  * Exception vectors.
  */
 
+	.pushsection ".entry.text", "ax"
 	.align	11
 ENTRY(vectors)
 	ventry	el1_sync_invalid		// Synchronous EL1t
@@ -781,3 +782,5 @@ ENTRY(sys_rt_sigreturn_wrapper)
 	mov	x0, sp
 	b	sys_rt_sigreturn
 ENDPROC(sys_rt_sigreturn_wrapper)
+
+	.popsection
diff --git a/arch/arm64/kernel/kprobes/kprobes.c b/arch/arm64/kernel/kprobes/kprobes.c
index 189b0d2..ca0c0c9 100644
--- a/arch/arm64/kernel/kprobes/kprobes.c
+++ b/arch/arm64/kernel/kprobes/kprobes.c
@@ -30,6 +30,7 @@
 #include <asm/insn.h>
 #include <asm/uaccess.h>
 #include <asm/irq.h>
+#include <asm-generic/sections.h>
 
 #include "decode-insn.h"
 
@@ -519,6 +520,31 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 	return 1;
 }
 
+bool arch_within_kprobe_blacklist(unsigned long addr)
+{
+	extern char __idmap_text_start[], __idmap_text_end[];
+	extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
+
+	if ((addr >= (unsigned long)__kprobes_text_start &&
+	    addr < (unsigned long)__kprobes_text_end) ||
+	    (addr >= (unsigned long)__entry_text_start &&
+	    addr < (unsigned long)__entry_text_end) ||
+	    (addr >= (unsigned long)__idmap_text_start &&
+	    addr < (unsigned long)__idmap_text_end) ||
+	    !!search_exception_tables(addr))
+		return true;
+
+	if (!is_kernel_in_hyp_mode()) {
+		if ((addr >= (unsigned long)__hyp_text_start &&
+		    addr < (unsigned long)__hyp_text_end) ||
+		    (addr >= (unsigned long)__hyp_idmap_text_start &&
+		    addr < (unsigned long)__hyp_idmap_text_end))
+			return true;
+	}
+
+	return false;
+}
+
 int __init arch_init_kprobes(void)
 {
 	return 0;
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 075ce32..9f59394 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -118,6 +118,7 @@ SECTIONS
 			__exception_text_end = .;
 			IRQENTRY_TEXT
 			SOFTIRQENTRY_TEXT
+			ENTRY_TEXT
 			TEXT_TEXT
 			SCHED_TEXT
 			LOCK_TEXT
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v14 07/10] arm64: kprobes instruction simulation support
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
                   ` (5 preceding siblings ...)
  2016-06-27  3:06 ` [PATCH v14 06/10] arm64: Treat all entry code as non-kprobe-able David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-27  3:06 ` [PATCH v14 08/10] arm64: Add trampoline code for kretprobes David Long
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>

Kprobes needs simulation of instructions that cannot be stepped
from a different memory location, e.g. those instructions
that use PC-relative addressing. In simulation, the behaviour
of the instruction is implemented using a copy of pt_regs.

The following instruction categories are simulated:
 - All branching instructions (conditional, register, and immediate)
 - Literal access instructions (load-literal, adr/adrp)

Conditional execution is limited to branching instructions in
ARM v8. If the PSTATE condition flags do not match the condition field
of the opcode, the instruction is effectively a NOP.
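
As a rough illustration of the condition check (the series reuses the shared
aarch32_opcode_cond_checks[] table, see simulate_b_cond() below; the helper
here is only a hand-written sketch, not code from this series):

/* Sketch: does a conditional branch with condition 'cond' pass, given PSTATE? */
static bool cond_passes(u32 cond, unsigned long pstate)
{
	bool z = pstate & PSR_Z_BIT;
	bool c = pstate & PSR_C_BIT;

	switch (cond) {
	case 0x0: return z;	/* EQ */
	case 0x1: return !z;	/* NE */
	case 0x2: return c;	/* CS/HS */
	case 0x3: return !c;	/* CC/LO */
	/* ...remaining conditions elided; AL/NV always pass... */
	default: return true;
	}
}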

Thanks to Will Cohen for assorted suggested changes.

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm64/include/asm/probes.h           |   5 +-
 arch/arm64/kernel/insn.c                  |   1 +
 arch/arm64/kernel/kprobes/Makefile        |   3 +-
 arch/arm64/kernel/kprobes/decode-insn.c   |  33 ++++-
 arch/arm64/kernel/kprobes/decode-insn.h   |   1 +
 arch/arm64/kernel/kprobes/kprobes.c       |  53 ++++++--
 arch/arm64/kernel/kprobes/simulate-insn.c | 218 ++++++++++++++++++++++++++++++
 arch/arm64/kernel/kprobes/simulate-insn.h |  28 ++++
 8 files changed, 327 insertions(+), 15 deletions(-)
 create mode 100644 arch/arm64/kernel/kprobes/simulate-insn.c
 create mode 100644 arch/arm64/kernel/kprobes/simulate-insn.h

diff --git a/arch/arm64/include/asm/probes.h b/arch/arm64/include/asm/probes.h
index 1e8a21a..5af574d 100644
--- a/arch/arm64/include/asm/probes.h
+++ b/arch/arm64/include/asm/probes.h
@@ -15,17 +15,18 @@
 #ifndef _ARM_PROBES_H
 #define _ARM_PROBES_H
 
+#include <asm/opcodes.h>
+
 struct kprobe;
 struct arch_specific_insn;
 
 typedef u32 kprobe_opcode_t;
-typedef unsigned long (kprobes_pstate_check_t)(unsigned long);
 typedef void (kprobes_handler_t) (u32 opcode, long addr, struct pt_regs *);
 
 /* architecture specific copy of original instruction */
 struct arch_specific_insn {
 	kprobe_opcode_t *insn;
-	kprobes_pstate_check_t *pstate_cc;
+	pstate_check_t *pstate_cc;
 	kprobes_handler_t *handler;
 	/* restore address after step xol */
 	unsigned long restore;
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 5cb2f3d..63f9432 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -30,6 +30,7 @@
 #include <asm/cacheflush.h>
 #include <asm/debug-monitors.h>
 #include <asm/fixmap.h>
+#include <asm/opcodes.h>
 #include <asm/insn.h>
 
 #define AARCH64_INSN_SF_BIT	BIT(31)
diff --git a/arch/arm64/kernel/kprobes/Makefile b/arch/arm64/kernel/kprobes/Makefile
index bc159bf..e184d00 100644
--- a/arch/arm64/kernel/kprobes/Makefile
+++ b/arch/arm64/kernel/kprobes/Makefile
@@ -1 +1,2 @@
-obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o
+obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o	\
+				   simulate-insn.o
diff --git a/arch/arm64/kernel/kprobes/decode-insn.c b/arch/arm64/kernel/kprobes/decode-insn.c
index 0ca1584..aff06cf 100644
--- a/arch/arm64/kernel/kprobes/decode-insn.c
+++ b/arch/arm64/kernel/kprobes/decode-insn.c
@@ -21,6 +21,7 @@
 #include <asm/sections.h>
 
 #include "decode-insn.h"
+#include "simulate-insn.h"
 
 static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 {
@@ -74,6 +75,7 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 /* Return:
  *   INSN_REJECTED     If instruction is one not allowed to kprobe,
  *   INSN_GOOD         If instruction is supported and uses instruction slot,
+ *   INSN_GOOD_NO_SLOT If instruction is supported but doesn't use its slot.
  */
 static enum kprobe_insn __kprobes
 arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
@@ -84,8 +86,37 @@ arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
 	 */
 	if (aarch64_insn_is_steppable(insn))
 		return INSN_GOOD;
-	else
+
+	if (aarch64_insn_is_bcond(insn)) {
+		asi->handler = simulate_b_cond;
+	} else if (aarch64_insn_is_cbz(insn) ||
+	    aarch64_insn_is_cbnz(insn)) {
+		asi->handler = simulate_cbz_cbnz;
+	} else if (aarch64_insn_is_tbz(insn) ||
+	    aarch64_insn_is_tbnz(insn)) {
+		asi->handler = simulate_tbz_tbnz;
+	} else if (aarch64_insn_is_adr_adrp(insn)) {
+		asi->handler = simulate_adr_adrp;
+	} else if (aarch64_insn_is_b(insn) ||
+	    aarch64_insn_is_bl(insn)) {
+		asi->handler = simulate_b_bl;
+	} else if (aarch64_insn_is_br(insn) ||
+	    aarch64_insn_is_blr(insn) ||
+	    aarch64_insn_is_ret(insn)) {
+		asi->handler = simulate_br_blr_ret;
+	} else if (aarch64_insn_is_ldr_lit(insn)) {
+		asi->handler = simulate_ldr_literal;
+	} else if (aarch64_insn_is_ldrsw_lit(insn)) {
+		asi->handler = simulate_ldrsw_literal;
+	} else {
+		/*
+		 * Instruction cannot be stepped out-of-line and we don't
+		 * (yet) simulate it.
+		 */
 		return INSN_REJECTED;
+	}
+
+	return INSN_GOOD_NO_SLOT;
 }
 
 static bool __kprobes
diff --git a/arch/arm64/kernel/kprobes/decode-insn.h b/arch/arm64/kernel/kprobes/decode-insn.h
index b98774d..ecacc42 100644
--- a/arch/arm64/kernel/kprobes/decode-insn.h
+++ b/arch/arm64/kernel/kprobes/decode-insn.h
@@ -25,6 +25,7 @@
 
 enum kprobe_insn {
 	INSN_REJECTED,
+	INSN_GOOD_NO_SLOT,
 	INSN_GOOD,
 };
 
diff --git a/arch/arm64/kernel/kprobes/kprobes.c b/arch/arm64/kernel/kprobes/kprobes.c
index ca0c0c9..4dca25b 100644
--- a/arch/arm64/kernel/kprobes/kprobes.c
+++ b/arch/arm64/kernel/kprobes/kprobes.c
@@ -45,6 +45,9 @@ void jprobe_return_break(void);
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
 
+static void __kprobes
+post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
+
 static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
 {
 	/* prepare insn slot */
@@ -61,6 +64,23 @@ static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
 	  sizeof(kprobe_opcode_t);
 }
 
+static void __kprobes arch_prepare_simulate(struct kprobe *p)
+{
+	/* This instruction is not executed xol. No need to adjust the PC */
+	p->ainsn.restore = 0;
+}
+
+static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	if (p->ainsn.handler)
+		p->ainsn.handler((u32)p->opcode, (long)p->addr, regs);
+
+	/* single step simulated, now go for post processing */
+	post_kprobe_handler(kcb, regs);
+}
+
 int __kprobes arch_prepare_kprobe(struct kprobe *p)
 {
 	unsigned long probe_addr = (unsigned long)p->addr;
@@ -84,6 +104,10 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 	case INSN_REJECTED:	/* insn not supported */
 		return -EINVAL;
 
+	case INSN_GOOD_NO_SLOT:	/* insn need simulation */
+		p->ainsn.insn = NULL;
+		break;
+
 	case INSN_GOOD:	/* instruction uses slot */
 		p->ainsn.insn = get_insn_slot();
 		if (!p->ainsn.insn)
@@ -92,7 +116,10 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 	};
 
 	/* prepare the instruction */
-	arch_prepare_ss_slot(p);
+	if (p->ainsn.insn)
+		arch_prepare_ss_slot(p);
+	else
+		arch_prepare_simulate(p);
 
 	return 0;
 }
@@ -218,20 +245,24 @@ static void __kprobes setup_singlestep(struct kprobe *p,
 		kcb->kprobe_status = KPROBE_HIT_SS;
 	}
 
-	BUG_ON(!p->ainsn.insn);
 
-	/* prepare for single stepping */
-	slot = (unsigned long)p->ainsn.insn;
+	if (p->ainsn.insn) {
+		/* prepare for single stepping */
+		slot = (unsigned long)p->ainsn.insn;
 
-	set_ss_context(kcb, slot);	/* mark pending ss */
+		set_ss_context(kcb, slot);	/* mark pending ss */
 
-	if (kcb->kprobe_status == KPROBE_REENTER)
-		spsr_set_debug_flag(regs, 0);
+		if (kcb->kprobe_status == KPROBE_REENTER)
+			spsr_set_debug_flag(regs, 0);
 
-	/* IRQs and single stepping do not mix well. */
-	kprobes_save_local_irqflag(kcb, regs);
-	kernel_enable_single_step(regs);
-	instruction_pointer_set(regs, slot);
+		/* IRQs and single stepping do not mix well. */
+		kprobes_save_local_irqflag(kcb, regs);
+		kernel_enable_single_step(regs);
+		instruction_pointer_set(regs, slot);
+	} else {
+		/* insn simulation */
+		arch_simulate_insn(p, regs);
+	}
 }
 
 static int __kprobes reenter_kprobe(struct kprobe *p,
diff --git a/arch/arm64/kernel/kprobes/simulate-insn.c b/arch/arm64/kernel/kprobes/simulate-insn.c
new file mode 100644
index 0000000..f2b1a23a
--- /dev/null
+++ b/arch/arm64/kernel/kprobes/simulate-insn.c
@@ -0,0 +1,218 @@
+/*
+ * arch/arm64/kernel/kprobes/simulate-insn.c
+ *
+ * Copyright (C) 2013 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/kprobes.h>
+#include <linux/module.h>
+
+#include "simulate-insn.h"
+
+#define sign_extend(x, signbit)		\
+	((x) | (0 - ((x) & (1 << (signbit)))))
+
+#define bbl_displacement(insn)		\
+	sign_extend(((insn) & 0x3ffffff) << 2, 27)
+
+#define bcond_displacement(insn)	\
+	sign_extend(((insn >> 5) & 0x7ffff) << 2, 20)
+
+#define cbz_displacement(insn)	\
+	sign_extend(((insn >> 5) & 0x7ffff) << 2, 20)
+
+#define tbz_displacement(insn)	\
+	sign_extend(((insn >> 5) & 0x3fff) << 2, 15)
+
+#define ldr_displacement(insn)	\
+	sign_extend(((insn >> 5) & 0x7ffff) << 2, 20)
+
+static inline void set_x_reg(struct pt_regs *regs, int reg, u64 val)
+{
+	if (reg < 31)
+		regs->regs[reg] = val;
+}
+
+static inline void set_w_reg(struct pt_regs *regs, int reg, u64 val)
+{
+	if (reg < 31)
+		regs->regs[reg] = lower_32_bits(val);
+}
+
+static inline u64 get_x_reg(struct pt_regs *regs, int reg)
+{
+	if (reg < 31)
+		return regs->regs[reg];
+	else
+		return 0;
+}
+
+static inline u32 get_w_reg(struct pt_regs *regs, int reg)
+{
+	if (reg < 31)
+		return lower_32_bits(regs->regs[reg]);
+	else
+		return 0;
+}
+
+static bool __kprobes check_cbz(u32 opcode, struct pt_regs *regs)
+{
+	int xn = opcode & 0x1f;
+
+	return (opcode & (1 << 31)) ?
+	    (get_x_reg(regs, xn) == 0) : (get_w_reg(regs, xn) == 0);
+}
+
+static bool __kprobes check_cbnz(u32 opcode, struct pt_regs *regs)
+{
+	int xn = opcode & 0x1f;
+
+	return (opcode & (1 << 31)) ?
+	    (get_x_reg(regs, xn) != 0) : (get_w_reg(regs, xn) != 0);
+}
+
+static bool __kprobes check_tbz(u32 opcode, struct pt_regs *regs)
+{
+	int xn = opcode & 0x1f;
+	int bit_pos = ((opcode & (1 << 31)) >> 26) | ((opcode >> 19) & 0x1f);
+
+	return ((get_x_reg(regs, xn) >> bit_pos) & 0x1) == 0;
+}
+
+static bool __kprobes check_tbnz(u32 opcode, struct pt_regs *regs)
+{
+	int xn = opcode & 0x1f;
+	int bit_pos = ((opcode & (1 << 31)) >> 26) | ((opcode >> 19) & 0x1f);
+
+	return ((get_x_reg(regs, xn) >> bit_pos) & 0x1) != 0;
+}
+
+/*
+ * instruction simulation functions
+ */
+void __kprobes
+simulate_adr_adrp(u32 opcode, long addr, struct pt_regs *regs)
+{
+	long imm, xn, val;
+
+	xn = opcode & 0x1f;
+	imm = ((opcode >> 3) & 0x1ffffc) | ((opcode >> 29) & 0x3);
+	imm = sign_extend(imm, 20);
+	if (opcode & 0x80000000)
+		val = (imm<<12) + (addr & 0xfffffffffffff000);
+	else
+		val = imm + addr;
+
+	set_x_reg(regs, xn, val);
+
+	instruction_pointer_set(regs, instruction_pointer(regs) + 4);
+}
+
+void __kprobes
+simulate_b_bl(u32 opcode, long addr, struct pt_regs *regs)
+{
+	int disp = bbl_displacement(opcode);
+
+	/* Link register is x30 */
+	if (opcode & (1 << 31))
+		set_x_reg(regs, 30, addr + 4);
+
+	instruction_pointer_set(regs, addr + disp);
+}
+
+void __kprobes
+simulate_b_cond(u32 opcode, long addr, struct pt_regs *regs)
+{
+	int disp = 4;
+
+	if (aarch32_opcode_cond_checks[opcode & 0xf](regs->pstate & 0xffffffff))
+		disp = bcond_displacement(opcode);
+
+	instruction_pointer_set(regs, addr + disp);
+}
+
+void __kprobes
+simulate_br_blr_ret(u32 opcode, long addr, struct pt_regs *regs)
+{
+	int xn = (opcode >> 5) & 0x1f;
+
+	/* update pc first in case we're doing a "blr lr" */
+	instruction_pointer_set(regs, get_x_reg(regs, xn));
+
+	/* Link register is x30 */
+	if (((opcode >> 21) & 0x3) == 1)
+		set_x_reg(regs, 30, addr + 4);
+}
+
+void __kprobes
+simulate_cbz_cbnz(u32 opcode, long addr, struct pt_regs *regs)
+{
+	int disp = 4;
+
+	if (opcode & (1 << 24)) {
+		if (check_cbnz(opcode, regs))
+			disp = cbz_displacement(opcode);
+	} else {
+		if (check_cbz(opcode, regs))
+			disp = cbz_displacement(opcode);
+	}
+	instruction_pointer_set(regs, addr + disp);
+}
+
+void __kprobes
+simulate_tbz_tbnz(u32 opcode, long addr, struct pt_regs *regs)
+{
+	int disp = 4;
+
+	if (opcode & (1 << 24)) {
+		if (check_tbnz(opcode, regs))
+			disp = tbz_displacement(opcode);
+	} else {
+		if (check_tbz(opcode, regs))
+			disp = tbz_displacement(opcode);
+	}
+	instruction_pointer_set(regs, addr + disp);
+}
+
+void __kprobes
+simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
+{
+	u64 *load_addr;
+	int xn = opcode & 0x1f;
+	int disp;
+
+	disp = ldr_displacement(opcode);
+	load_addr = (u64 *) (addr + disp);
+
+	if (opcode & (1 << 30))	/* x0-x30 */
+		set_x_reg(regs, xn, *load_addr);
+	else			/* w0-w30 */
+		set_w_reg(regs, xn, *load_addr);
+
+	instruction_pointer_set(regs, instruction_pointer(regs) + 4);
+}
+
+void __kprobes
+simulate_ldrsw_literal(u32 opcode, long addr, struct pt_regs *regs)
+{
+	s32 *load_addr;
+	int xn = opcode & 0x1f;
+	int disp;
+
+	disp = ldr_displacement(opcode);
+	load_addr = (s32 *) (addr + disp);
+
+	set_x_reg(regs, xn, *load_addr);
+
+	instruction_pointer_set(regs, instruction_pointer(regs) + 4);
+}
diff --git a/arch/arm64/kernel/kprobes/simulate-insn.h b/arch/arm64/kernel/kprobes/simulate-insn.h
new file mode 100644
index 0000000..1a9d49a
--- /dev/null
+++ b/arch/arm64/kernel/kprobes/simulate-insn.h
@@ -0,0 +1,28 @@
+/*
+ * arch/arm64/kernel/kprobes/simulate-insn.h
+ *
+ * Copyright (C) 2013 Linaro Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef _ARM_KERNEL_KPROBES_SIMULATE_INSN_H
+#define _ARM_KERNEL_KPROBES_SIMULATE_INSN_H
+
+void simulate_adr_adrp(u32 opcode, long addr, struct pt_regs *regs);
+void simulate_b_bl(u32 opcode, long addr, struct pt_regs *regs);
+void simulate_b_cond(u32 opcode, long addr, struct pt_regs *regs);
+void simulate_br_blr_ret(u32 opcode, long addr, struct pt_regs *regs);
+void simulate_cbz_cbnz(u32 opcode, long addr, struct pt_regs *regs);
+void simulate_tbz_tbnz(u32 opcode, long addr, struct pt_regs *regs);
+void simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs);
+void simulate_ldrsw_literal(u32 opcode, long addr, struct pt_regs *regs);
+
+#endif /* _ARM_KERNEL_KPROBES_SIMULATE_INSN_H */
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v14 08/10] arm64: Add trampoline code for kretprobes
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
                   ` (6 preceding siblings ...)
  2016-06-27  3:06 ` [PATCH v14 07/10] arm64: kprobes instruction simulation support David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-27  3:06 ` [PATCH v14 09/10] arm64: Add kernel return probes support (kretprobes) David Long
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: William Cohen <wcohen@redhat.com>

The trampoline code is used by kretprobes to capture a return from a probed
function.  This is done by saving the registers, calling the handler, and
restoring the registers. The code then returns to the original saved caller
return address. It is necessary to do this directly instead of using a
software breakpoint because the code used in processing that breakpoint
could itself be kprobe'd and cause a problematic reentry into the debug
exception handler.

Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/arm64/include/asm/kprobes.h               |  2 +
 arch/arm64/kernel/asm-offsets.c                | 11 ++++
 arch/arm64/kernel/kprobes/Makefile             |  1 +
 arch/arm64/kernel/kprobes/kprobes.c            |  5 ++
 arch/arm64/kernel/kprobes/kprobes_trampoline.S | 85 ++++++++++++++++++++++++++
 5 files changed, 104 insertions(+)
 create mode 100644 arch/arm64/kernel/kprobes/kprobes_trampoline.S

diff --git a/arch/arm64/include/asm/kprobes.h b/arch/arm64/include/asm/kprobes.h
index 79c9511..61b4915 100644
--- a/arch/arm64/include/asm/kprobes.h
+++ b/arch/arm64/include/asm/kprobes.h
@@ -56,5 +56,7 @@ int kprobe_exceptions_notify(struct notifier_block *self,
 			     unsigned long val, void *data);
 int kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr);
 int kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr);
+void kretprobe_trampoline(void);
+void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
 
 #endif /* _ARM_KPROBES_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index f8e5d47..03dfa27 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -51,6 +51,17 @@ int main(void)
   DEFINE(S_X5,			offsetof(struct pt_regs, regs[5]));
   DEFINE(S_X6,			offsetof(struct pt_regs, regs[6]));
   DEFINE(S_X7,			offsetof(struct pt_regs, regs[7]));
+  DEFINE(S_X8,			offsetof(struct pt_regs, regs[8]));
+  DEFINE(S_X10,			offsetof(struct pt_regs, regs[10]));
+  DEFINE(S_X12,			offsetof(struct pt_regs, regs[12]));
+  DEFINE(S_X14,			offsetof(struct pt_regs, regs[14]));
+  DEFINE(S_X16,			offsetof(struct pt_regs, regs[16]));
+  DEFINE(S_X18,			offsetof(struct pt_regs, regs[18]));
+  DEFINE(S_X20,			offsetof(struct pt_regs, regs[20]));
+  DEFINE(S_X22,			offsetof(struct pt_regs, regs[22]));
+  DEFINE(S_X24,			offsetof(struct pt_regs, regs[24]));
+  DEFINE(S_X26,			offsetof(struct pt_regs, regs[26]));
+  DEFINE(S_X28,			offsetof(struct pt_regs, regs[28]));
   DEFINE(S_LR,			offsetof(struct pt_regs, regs[30]));
   DEFINE(S_SP,			offsetof(struct pt_regs, sp));
 #ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/kprobes/Makefile b/arch/arm64/kernel/kprobes/Makefile
index e184d00..ce06312 100644
--- a/arch/arm64/kernel/kprobes/Makefile
+++ b/arch/arm64/kernel/kprobes/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o	\
+				   kprobes_trampoline.o		\
 				   simulate-insn.o
diff --git a/arch/arm64/kernel/kprobes/kprobes.c b/arch/arm64/kernel/kprobes/kprobes.c
index 4dca25b..89936d2 100644
--- a/arch/arm64/kernel/kprobes/kprobes.c
+++ b/arch/arm64/kernel/kprobes/kprobes.c
@@ -576,6 +576,11 @@ bool arch_within_kprobe_blacklist(unsigned long addr)
 	return false;
 }
 
+void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
+{
+	return NULL;
+}
+
 int __init arch_init_kprobes(void)
 {
 	return 0;
diff --git a/arch/arm64/kernel/kprobes/kprobes_trampoline.S b/arch/arm64/kernel/kprobes/kprobes_trampoline.S
new file mode 100644
index 0000000..ba37d85
--- /dev/null
+++ b/arch/arm64/kernel/kprobes/kprobes_trampoline.S
@@ -0,0 +1,85 @@
+/*
+ * trampoline entry and return code for kretprobes.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm-offsets.h>
+#include <asm/assembler.h>
+
+	.text
+
+.macro save_all_base_regs
+	stp x0, x1, [sp, #S_X0]
+	stp x2, x3, [sp, #S_X2]
+	stp x4, x5, [sp, #S_X4]
+	stp x6, x7, [sp, #S_X6]
+	stp x8, x9, [sp, #S_X8]
+	stp x10, x11, [sp, #S_X10]
+	stp x12, x13, [sp, #S_X12]
+	stp x14, x15, [sp, #S_X14]
+	stp x16, x17, [sp, #S_X16]
+	stp x18, x19, [sp, #S_X18]
+	stp x20, x21, [sp, #S_X20]
+	stp x22, x23, [sp, #S_X22]
+	stp x24, x25, [sp, #S_X24]
+	stp x26, x27, [sp, #S_X26]
+	stp x28, x29, [sp, #S_X28]
+	add x0, sp, #S_FRAME_SIZE
+	stp lr, x0, [sp, #S_LR]
+/*
+ * Construct a useful saved PSTATE
+ */
+	mrs x0, nzcv
+	and x0, x0, #(PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT)
+	mrs x1, daif
+	and x1, x1, #(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+	orr x0, x0, x1
+	mrs x1, CurrentEL
+	and x1, x1, #(3 << 2)
+	orr x0, x1, x0
+	mrs x1, SPSel
+	and x1, x1, #1
+	orr x0, x1, x0
+	str x0, [sp, #S_PSTATE]
+.endm
+
+.macro restore_all_base_regs
+	ldr x0, [sp, #S_PSTATE]
+	and x0, x0, #(PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT)
+	msr nzcv, x0
+	ldp x0, x1, [sp, #S_X0]
+	ldp x2, x3, [sp, #S_X2]
+	ldp x4, x5, [sp, #S_X4]
+	ldp x6, x7, [sp, #S_X6]
+	ldp x8, x9, [sp, #S_X8]
+	ldp x10, x11, [sp, #S_X10]
+	ldp x12, x13, [sp, #S_X12]
+	ldp x14, x15, [sp, #S_X14]
+	ldp x16, x17, [sp, #S_X16]
+	ldp x18, x19, [sp, #S_X18]
+	ldp x20, x21, [sp, #S_X20]
+	ldp x22, x23, [sp, #S_X22]
+	ldp x24, x25, [sp, #S_X24]
+	ldp x26, x27, [sp, #S_X26]
+	ldp x28, x29, [sp, #S_X28]
+.endm
+
+ENTRY(kretprobe_trampoline)
+
+	sub sp, sp, #S_FRAME_SIZE
+
+	save_all_base_regs
+
+	mov x0, sp
+	bl trampoline_probe_handler
+	/* Replace trampoline address in lr with actual
+	   orig_ret_addr return address. */
+	mov lr, x0
+
+	restore_all_base_regs
+
+	add sp, sp, #S_FRAME_SIZE
+
+	ret
+
+ENDPROC(kretprobe_trampoline)
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v14 09/10] arm64: Add kernel return probes support (kretprobes)
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
                   ` (7 preceding siblings ...)
  2016-06-27  3:06 ` [PATCH v14 08/10] arm64: Add trampoline code for kretprobes David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-27  3:06 ` [PATCH v14 10/10] kprobes: Add arm64 case in kprobe example module David Long
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>

The pre-handler of this special 'trampoline' kprobe executes the return
probe handler functions and restores the original return address in ELR_EL1.
This way the saved pt_regs still hold the original register context to be
carried back to the probed kernel function.
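
For illustration, a minimal kretprobe user once this support is in place (the
probed symbol and the handler below are hypothetical, not part of the patch;
on arm64 the probed function's return value is found in regs->regs[0]):

#include <linux/module.h>
#include <linux/kprobes.h>

static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	pr_info("%s returned 0x%llx\n", ri->rp->kp.symbol_name, regs->regs[0]);
	return 0;
}

static struct kretprobe my_kretprobe = {
	.handler	= ret_handler,
	.kp.symbol_name	= "do_fork",	/* hypothetical target */
	.maxactive	= 16,
};

/* register_kretprobe(&my_kretprobe) from module init,
 * unregister_kretprobe(&my_kretprobe) from module exit. */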

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 arch/arm64/Kconfig                  |  1 +
 arch/arm64/kernel/kprobes/kprobes.c | 90 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 90 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1f7d644..6af0e2e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -89,6 +89,7 @@ config ARM64
 	select HAVE_RCU_TABLE_FREE
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_KPROBES
+	select HAVE_KRETPROBES if HAVE_KPROBES
 	select IOMMU_DMA if IOMMU_SUPPORT
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
diff --git a/arch/arm64/kernel/kprobes/kprobes.c b/arch/arm64/kernel/kprobes/kprobes.c
index 89936d2..b5d9b9c 100644
--- a/arch/arm64/kernel/kprobes/kprobes.c
+++ b/arch/arm64/kernel/kprobes/kprobes.c
@@ -578,7 +578,95 @@ bool arch_within_kprobe_blacklist(unsigned long addr)
 
 void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
 {
-	return NULL;
+	struct kretprobe_instance *ri = NULL;
+	struct hlist_head *head, empty_rp;
+	struct hlist_node *tmp;
+	unsigned long flags, orig_ret_address = 0;
+	unsigned long trampoline_address =
+		(unsigned long)&kretprobe_trampoline;
+	kprobe_opcode_t *correct_ret_addr = NULL;
+
+	INIT_HLIST_HEAD(&empty_rp);
+	kretprobe_hash_lock(current, &head, &flags);
+
+	/*
+	 * It is possible to have multiple instances associated with a given
+	 * task either because multiple functions in the call path have
+	 * return probes installed on them, and/or more than one
+	 * return probe was registered for a target function.
+	 *
+	 * We can handle this because:
+	 *     - instances are always pushed into the head of the list
+	 *     - when multiple return probes are registered for the same
+	 *	 function, the (chronologically) first instance's ret_addr
+	 *	 will be the real return address, and all the rest will
+	 *	 point to kretprobe_trampoline.
+	 */
+	hlist_for_each_entry_safe(ri, tmp, head, hlist) {
+		if (ri->task != current)
+			/* another task is sharing our hash bucket */
+			continue;
+
+		orig_ret_address = (unsigned long)ri->ret_addr;
+
+		if (orig_ret_address != trampoline_address)
+			/*
+			 * This is the real return address. Any other
+			 * instances associated with this task are for
+			 * other calls deeper on the call stack
+			 */
+			break;
+	}
+
+	kretprobe_assert(ri, orig_ret_address, trampoline_address);
+
+	correct_ret_addr = ri->ret_addr;
+	hlist_for_each_entry_safe(ri, tmp, head, hlist) {
+		if (ri->task != current)
+			/* another task is sharing our hash bucket */
+			continue;
+
+		orig_ret_address = (unsigned long)ri->ret_addr;
+		if (ri->rp && ri->rp->handler) {
+			__this_cpu_write(current_kprobe, &ri->rp->kp);
+			get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
+			ri->ret_addr = correct_ret_addr;
+			ri->rp->handler(ri, regs);
+			__this_cpu_write(current_kprobe, NULL);
+		}
+
+		recycle_rp_inst(ri, &empty_rp);
+
+		if (orig_ret_address != trampoline_address)
+			/*
+			 * This is the real return address. Any other
+			 * instances associated with this task are for
+			 * other calls deeper on the call stack
+			 */
+			break;
+	}
+
+	kretprobe_hash_unlock(current, &flags);
+
+	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
+		hlist_del(&ri->hlist);
+		kfree(ri);
+	}
+	return (void *)orig_ret_address;
+}
+
+void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
+				      struct pt_regs *regs)
+{
+	ri->ret_addr = (kprobe_opcode_t *)regs->regs[30];
+
+	/* replace return addr (x30) with trampoline */
+	regs->regs[30] = (long)&kretprobe_trampoline;
+}
+
+int __kprobes arch_trampoline_kprobe(struct kprobe *p)
+{
+	return 0;
 }
 
 int __init arch_init_kprobes(void)
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v14 10/10] kprobes: Add arm64 case in kprobe example module
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
                   ` (8 preceding siblings ...)
  2016-06-27  3:06 ` [PATCH v14 09/10] arm64: Add kernel return probes support (kretprobes) David Long
@ 2016-06-27  3:06 ` David Long
  2016-06-28  7:31 ` [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support Masami Hiramatsu
  2016-06-28  8:13 ` Huang Shijie
  11 siblings, 0 replies; 16+ messages in thread
From: David Long @ 2016-06-27  3:06 UTC (permalink / raw)
  To: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Masami Hiramatsu,
	Li Bin
  Cc: Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

From: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>

Add info prints in sample kprobe handlers for ARM64

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
---
 samples/kprobes/kprobe_example.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/samples/kprobes/kprobe_example.c b/samples/kprobes/kprobe_example.c
index ed0ca0c..f3b61b4 100644
--- a/samples/kprobes/kprobe_example.c
+++ b/samples/kprobes/kprobe_example.c
@@ -46,6 +46,11 @@ static int handler_pre(struct kprobe *p, struct pt_regs *regs)
 			" ex1 = 0x%lx\n",
 		p->symbol_name, p->addr, regs->pc, regs->ex1);
 #endif
+#ifdef CONFIG_ARM64
+	pr_info("<%s> pre_handler: p->addr = 0x%p, pc = 0x%lx,"
+			" pstate = 0x%lx\n",
+		p->symbol_name, p->addr, (long)regs->pc, (long)regs->pstate);
+#endif
 
 	/* A dump_stack() here will give a stack backtrace */
 	return 0;
@@ -71,6 +76,10 @@ static void handler_post(struct kprobe *p, struct pt_regs *regs,
 	printk(KERN_INFO "<%s> post_handler: p->addr = 0x%p, ex1 = 0x%lx\n",
 		p->symbol_name, p->addr, regs->ex1);
 #endif
+#ifdef CONFIG_ARM64
+	pr_info("<%s> post_handler: p->addr = 0x%p, pstate = 0x%lx\n",
+		p->symbol_name, p->addr, (long)regs->pstate);
+#endif
 }
 
 /*
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v14 04/10] arm64: Kprobes with single stepping support
  2016-06-27  3:06 ` [PATCH v14 04/10] arm64: Kprobes with single stepping support David Long
@ 2016-06-27  6:57   ` Pratyush Anand
  2016-06-27 14:06     ` David Long
  0 siblings, 1 reply; 16+ messages in thread
From: Pratyush Anand @ 2016-06-27  6:57 UTC (permalink / raw)
  To: David Long
  Cc: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Sandeepa Prabhu, Will Deacon, William Cohen, linux-arm-kernel,
	linux-kernel, Steve Capper, Masami Hiramatsu, Li Bin,
	Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

Hi David,

On 26/06/2016:11:06:47 PM, David Long wrote:
> From: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
> 
> Add support for basic kernel probes(kprobes) and jump probes
> (jprobes) for ARM64.
> 
> Kprobes utilizes software breakpoint and single step debug
> exceptions supported on ARM v8.
> 
> A software breakpoint is placed at the probe address to trap the
> kernel execution into the kprobe handler.
> 
> ARM v8 supports enabling single stepping before the break exception
> return (ERET), with next PC in exception return address (ELR_EL1). The
> kprobe handler prepares an executable memory slot for out-of-line
> execution with a copy of the original instruction being probed, and
> enables single stepping. The PC is set to the out-of-line slot address
> before the ERET. With this scheme, the instruction is executed with the
> exact same register context except for the PC (and DAIF) registers.
> 
> Debug mask (PSTATE.D) is enabled only when single stepping a recursive
> kprobe, e.g.: during kprobes reenter so that probed instruction can be
> single stepped within the kprobe handler -exception- context.
> The recursion depth of kprobe is always 2, i.e. upon probe re-entry,
> any further re-entry is prevented by not calling handlers and the case
> counted as a missed kprobe).
> 
> Single stepping from the x-o-l slot has a drawback for PC-relative accesses
> like branching and symbolic literals access as the offset from the new PC
> (slot address) may not be ensured to fit in the immediate value of
> the opcode. Such instructions need simulation, so reject
> probing them.
> 
> Instructions generating exceptions or cpu mode change are rejected
> for probing.
> 
> Exclusive load/store instructions are rejected too.  Additionally, the
> code is checked to see if it is inside an exclusive load/store sequence
> (code from Pratyush).
> 
> System instructions are mostly enabled for stepping, except MSR/MRS
> accesses to "DAIF" flags in PSTATE, which are not safe for
> probing.
> 
> Thanks to Steve Capper and Pratyush Anand for several suggested
> Changes.
> 
> Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
> Signed-off-by: David A. Long <dave.long@linaro.org>
> Signed-off-by: Pratyush Anand <panand@redhat.com>
> ---
>  arch/arm64/Kconfig                      |   1 +
>  arch/arm64/include/asm/debug-monitors.h |   5 +
>  arch/arm64/include/asm/insn.h           |   2 +
>  arch/arm64/include/asm/kprobes.h        |  60 ++++
>  arch/arm64/include/asm/probes.h         |  34 +++
>  arch/arm64/include/asm/ptrace.h         |   1 +
>  arch/arm64/kernel/Makefile              |   2 +-
>  arch/arm64/kernel/debug-monitors.c      |  16 +-
>  arch/arm64/kernel/kprobes/Makefile      |   1 +
>  arch/arm64/kernel/kprobes/decode-insn.c | 143 +++++++++
>  arch/arm64/kernel/kprobes/decode-insn.h |  34 +++
>  arch/arm64/kernel/kprobes/kprobes.c     | 525 ++++++++++++++++++++++++++++++++
>  arch/arm64/kernel/vmlinux.lds.S         |   1 +
>  arch/arm64/mm/fault.c                   |  26 ++
>  14 files changed, 848 insertions(+), 3 deletions(-)
>  create mode 100644 arch/arm64/include/asm/kprobes.h
>  create mode 100644 arch/arm64/include/asm/probes.h
>  create mode 100644 arch/arm64/kernel/kprobes/Makefile
>  create mode 100644 arch/arm64/kernel/kprobes/decode-insn.c
>  create mode 100644 arch/arm64/kernel/kprobes/decode-insn.h

Can we rename kernel/kprobes to kernel/probes? The uprobes code will use
decode-insn.c, and later simulate-insn.c as well, so I would like to place my
uprobes.c in the same directory.

>  create mode 100644 arch/arm64/kernel/kprobes/kprobes.c
> 

[...]

> diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
> index 6c0c7d3..c7bbeed 100644
> --- a/arch/arm64/include/asm/ptrace.h
> +++ b/arch/arm64/include/asm/ptrace.h
> @@ -209,6 +209,7 @@ struct task_struct;
>  int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task);
>  
>  #define instruction_pointer(regs)	((unsigned long)(regs)->pc)
> +#define instruction_pointer_set(regs, value)	((regs)->pc = ((u64) (value)))

IIRC, Will Deacon had asked to include asm-generic/ptrace.h in asm/ptrace.h when
I made similar changes for uprobes. Maybe you can pick up the patch from
my uprobe tree:

https://github.com/pratyushanand/linux/commit/bb3e114797c2888ed8ad528cca20e569dd2d818e

~Pratyush
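
For context, the asm-generic/ptrace.h approach mentioned here amounts to
defining a couple of arch-specific accessor macros and letting the generic
header generate instruction_pointer_set() and friends.  A rough sketch of
what that would look like for arm64 follows; this is from memory of the
generic header of that era, not the exact patch in the tree above:

/* arch/arm64/include/asm/ptrace.h (sketch) */
#define GET_IP(regs)		((unsigned long)(regs)->pc)
#define SET_IP(regs, val)	((regs)->pc = (val))

#include <asm-generic/ptrace.h>

/*
 * asm-generic/ptrace.h then provides inline helpers built on those
 * macros, roughly:
 *
 *	static inline void instruction_pointer_set(struct pt_regs *regs,
 *						   unsigned long val)
 *	{
 *		SET_IP(regs, val);
 *	}
 */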

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v14 04/10] arm64: Kprobes with single stepping support
  2016-06-27  6:57   ` Pratyush Anand
@ 2016-06-27 14:06     ` David Long
  2016-06-28  7:25       ` Masami Hiramatsu
  0 siblings, 1 reply; 16+ messages in thread
From: David Long @ 2016-06-27 14:06 UTC (permalink / raw)
  To: Pratyush Anand, Masami Hiramatsu, Mark Brown
  Cc: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Sandeepa Prabhu, Will Deacon, William Cohen, linux-arm-kernel,
	linux-kernel, Steve Capper, Li Bin, Adam Buchbinder,
	Alex Bennée, Andrew Morton, Andrey Ryabinin, Ard Biesheuvel,
	Christoffer Dall, Daniel Thompson, Dave P Martin, Jens Wiklander,
	Jisheng Zhang, John Blackwood, Mark Rutland, Petr Mladek,
	Robin Murphy, Suzuki K Poulose, Vladimir Murzin, Yang Shi,
	Zi Shen Lim, yalin wang

On 06/27/2016 02:57 AM, Pratyush Anand wrote:
> Hi David,
>
> On 26/06/2016:11:06:47 PM, David Long wrote:
>> From: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
>>
>> Add support for basic kernel probes(kprobes) and jump probes
>> (jprobes) for ARM64.
>>
>> Kprobes utilizes software breakpoint and single step debug
>> exceptions supported on ARM v8.
>>
>> A software breakpoint is placed at the probe address to trap the
>> kernel execution into the kprobe handler.
>>
>> ARM v8 supports enabling single stepping before the break exception
>> return (ERET), with next PC in exception return address (ELR_EL1). The
>> kprobe handler prepares an executable memory slot for out-of-line
>> execution with a copy of the original instruction being probed, and
>> enables single stepping. The PC is set to the out-of-line slot address
>> before the ERET. With this scheme, the instruction is executed with the
>> exact same register context except for the PC (and DAIF) registers.
>>
>> Debug mask (PSTATE.D) is enabled only when single stepping a recursive
>> kprobe, e.g.: during kprobes reenter so that probed instruction can be
>> single stepped within the kprobe handler -exception- context.
>> The recursion depth of kprobe is always 2, i.e. upon probe re-entry,
>> any further re-entry is prevented by not calling handlers and the case
>> counted as a missed kprobe).
>>
>> Single stepping from the x-o-l slot has a drawback for PC-relative accesses
>> like branching and symbolic literals access as the offset from the new PC
>> (slot address) may not be ensured to fit in the immediate value of
>> the opcode. Such instructions need simulation, so reject
>> probing them.
>>
>> Instructions generating exceptions or cpu mode change are rejected
>> for probing.
>>
>> Exclusive load/store instructions are rejected too.  Additionally, the
>> code is checked to see if it is inside an exclusive load/store sequence
>> (code from Pratyush).
>>
>> System instructions are mostly enabled for stepping, except MSR/MRS
>> accesses to "DAIF" flags in PSTATE, which are not safe for
>> probing.
>>
>> Thanks to Steve Capper and Pratyush Anand for several suggested
>> Changes.
>>
>> Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
>> Signed-off-by: David A. Long <dave.long@linaro.org>
>> Signed-off-by: Pratyush Anand <panand@redhat.com>
>> ---
>>   arch/arm64/Kconfig                      |   1 +
>>   arch/arm64/include/asm/debug-monitors.h |   5 +
>>   arch/arm64/include/asm/insn.h           |   2 +
>>   arch/arm64/include/asm/kprobes.h        |  60 ++++
>>   arch/arm64/include/asm/probes.h         |  34 +++
>>   arch/arm64/include/asm/ptrace.h         |   1 +
>>   arch/arm64/kernel/Makefile              |   2 +-
>>   arch/arm64/kernel/debug-monitors.c      |  16 +-
>>   arch/arm64/kernel/kprobes/Makefile      |   1 +
>>   arch/arm64/kernel/kprobes/decode-insn.c | 143 +++++++++
>>   arch/arm64/kernel/kprobes/decode-insn.h |  34 +++
>>   arch/arm64/kernel/kprobes/kprobes.c     | 525 ++++++++++++++++++++++++++++++++
>>   arch/arm64/kernel/vmlinux.lds.S         |   1 +
>>   arch/arm64/mm/fault.c                   |  26 ++
>>   14 files changed, 848 insertions(+), 3 deletions(-)
>>   create mode 100644 arch/arm64/include/asm/kprobes.h
>>   create mode 100644 arch/arm64/include/asm/probes.h
>>   create mode 100644 arch/arm64/kernel/kprobes/Makefile
>>   create mode 100644 arch/arm64/kernel/kprobes/decode-insn.c
>>   create mode 100644 arch/arm64/kernel/kprobes/decode-insn.h
>
> Can we rename kernel/kprobes as kernel/probes? uprobes code will use
> decode-insn.c and further simulate-insn.c as well. So, I would like to place my
> uprobes.c in same directory.

I had some reservations about making it a "kprobes" subdir for that 
reason, but the advice I got was to simply go with Masami Hiramatsu's 
specific feedback and have a subsequent uprobes patch move these files 
to a replacement "probes" subdir.  If there ends up being yet another 
revision of this patch, maybe he will consider allowing it to be called 
"probes" from the start.

>
>>   create mode 100644 arch/arm64/kernel/kprobes/kprobes.c
>>
>
> [...]
>
>> diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
>> index 6c0c7d3..c7bbeed 100644
>> --- a/arch/arm64/include/asm/ptrace.h
>> +++ b/arch/arm64/include/asm/ptrace.h
>> @@ -209,6 +209,7 @@ struct task_struct;
>>   int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task);
>>
>>   #define instruction_pointer(regs)	((unsigned long)(regs)->pc)
>> +#define instruction_pointer_set(regs, value)	((regs)->pc = ((u64) (value)))
>
> IIRC, Will Daecon had asked to include asm-generic/ptrace.h into asm/ptrace.h when
> I had done similar changes for uprobe needs. May be you can pick patch from
> my uprobe tree.
>
> https://github.com/pratyushanand/linux/commit/bb3e114797c2888ed8ad528cca20e569dd2d818e

I will look at this.

>
> ~Pratyush
>

Thanks,
-dl

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v14 04/10] arm64: Kprobes with single stepping support
  2016-06-27 14:06     ` David Long
@ 2016-06-28  7:25       ` Masami Hiramatsu
  0 siblings, 0 replies; 16+ messages in thread
From: Masami Hiramatsu @ 2016-06-28  7:25 UTC (permalink / raw)
  To: David Long
  Cc: Pratyush Anand, Mark Brown, Catalin Marinas, Huang Shijie,
	James Morse, Marc Zyngier, Sandeepa Prabhu, Will Deacon,
	William Cohen, linux-arm-kernel, linux-kernel, Steve Capper,
	Li Bin, Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang

On Mon, 27 Jun 2016 10:06:57 -0400
David Long <dave.long@linaro.org> wrote:

> On 06/27/2016 02:57 AM, Pratyush Anand wrote:
> > Hi David,
> >
> > On 26/06/2016:11:06:47 PM, David Long wrote:
> >> From: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
> >>
> >> Add support for basic kernel probes(kprobes) and jump probes
> >> (jprobes) for ARM64.
> >>
> >> Kprobes utilizes software breakpoint and single step debug
> >> exceptions supported on ARM v8.
> >>
> >> A software breakpoint is placed at the probe address to trap the
> >> kernel execution into the kprobe handler.
> >>
> >> ARM v8 supports enabling single stepping before the break exception
> >> return (ERET), with next PC in exception return address (ELR_EL1). The
> >> kprobe handler prepares an executable memory slot for out-of-line
> >> execution with a copy of the original instruction being probed, and
> >> enables single stepping. The PC is set to the out-of-line slot address
> >> before the ERET. With this scheme, the instruction is executed with the
> >> exact same register context except for the PC (and DAIF) registers.
> >>
> >> Debug mask (PSTATE.D) is enabled only when single stepping a recursive
> >> kprobe, e.g.: during kprobes reenter so that probed instruction can be
> >> single stepped within the kprobe handler -exception- context.
> >> The recursion depth of kprobe is always 2, i.e. upon probe re-entry,
> >> any further re-entry is prevented by not calling handlers and the case
> >> counted as a missed kprobe).
> >>
> >> Single stepping from the x-o-l slot has a drawback for PC-relative accesses
> >> like branching and symbolic literals access as the offset from the new PC
> >> (slot address) may not be ensured to fit in the immediate value of
> >> the opcode. Such instructions need simulation, so reject
> >> probing them.
> >>
> >> Instructions generating exceptions or cpu mode change are rejected
> >> for probing.
> >>
> >> Exclusive load/store instructions are rejected too.  Additionally, the
> >> code is checked to see if it is inside an exclusive load/store sequence
> >> (code from Pratyush).
> >>
> >> System instructions are mostly enabled for stepping, except MSR/MRS
> >> accesses to "DAIF" flags in PSTATE, which are not safe for
> >> probing.
> >>
> >> Thanks to Steve Capper and Pratyush Anand for several suggested
> >> Changes.
> >>
> >> Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
> >> Signed-off-by: David A. Long <dave.long@linaro.org>
> >> Signed-off-by: Pratyush Anand <panand@redhat.com>
> >> ---
> >>   arch/arm64/Kconfig                      |   1 +
> >>   arch/arm64/include/asm/debug-monitors.h |   5 +
> >>   arch/arm64/include/asm/insn.h           |   2 +
> >>   arch/arm64/include/asm/kprobes.h        |  60 ++++
> >>   arch/arm64/include/asm/probes.h         |  34 +++
> >>   arch/arm64/include/asm/ptrace.h         |   1 +
> >>   arch/arm64/kernel/Makefile              |   2 +-
> >>   arch/arm64/kernel/debug-monitors.c      |  16 +-
> >>   arch/arm64/kernel/kprobes/Makefile      |   1 +
> >>   arch/arm64/kernel/kprobes/decode-insn.c | 143 +++++++++
> >>   arch/arm64/kernel/kprobes/decode-insn.h |  34 +++
> >>   arch/arm64/kernel/kprobes/kprobes.c     | 525 ++++++++++++++++++++++++++++++++
> >>   arch/arm64/kernel/vmlinux.lds.S         |   1 +
> >>   arch/arm64/mm/fault.c                   |  26 ++
> >>   14 files changed, 848 insertions(+), 3 deletions(-)
> >>   create mode 100644 arch/arm64/include/asm/kprobes.h
> >>   create mode 100644 arch/arm64/include/asm/probes.h
> >>   create mode 100644 arch/arm64/kernel/kprobes/Makefile
> >>   create mode 100644 arch/arm64/kernel/kprobes/decode-insn.c
> >>   create mode 100644 arch/arm64/kernel/kprobes/decode-insn.h
> >
> > Can we rename kernel/kprobes as kernel/probes? uprobes code will use
> > decode-insn.c and further simulate-insn.c as well. So, I would like to place my
> > uprobes.c in same directory.
> 
> I had some reservations about making it a "kprobes" subdir for that 
> reason but the advice I got was to simply go with Masami Hiramatsu's 
> specific feedback and have a subsequent uprobes patch move these files 
> to a replacement "probes" subdir.  If there ends up being yet another 
> revision of this patch maybe he will consider allowing it to be called 
> "probes" from the start.

Yeah, if you're already sure it will be done,
I'm OK with renaming it to "probes" :) I just didn't know about your work.


> 
> >
> >>   create mode 100644 arch/arm64/kernel/kprobes/kprobes.c
> >>
> >
> > [...]
> >
> >> diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
> >> index 6c0c7d3..c7bbeed 100644
> >> --- a/arch/arm64/include/asm/ptrace.h
> >> +++ b/arch/arm64/include/asm/ptrace.h
> >> @@ -209,6 +209,7 @@ struct task_struct;
> >>   int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task);
> >>
> >>   #define instruction_pointer(regs)	((unsigned long)(regs)->pc)
> >> +#define instruction_pointer_set(regs, value)	((regs)->pc = ((u64) (value)))
> >
> > IIRC, Will Daecon had asked to include asm-generic/ptrace.h into asm/ptrace.h when
> > I had done similar changes for uprobe needs. May be you can pick patch from
> > my uprobe tree.

Ah, right, we have similar routines in asm-generic/ptrace.h.

Feel free to include it.

Thanks,

> >
> > https://github.com/pratyushanand/linux/commit/bb3e114797c2888ed8ad528cca20e569dd2d818e
> 
> I will look at this.
> 
> >
> > ~Pratyush
> >
> 
> Thanks,
> -dl
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
                   ` (9 preceding siblings ...)
  2016-06-27  3:06 ` [PATCH v14 10/10] kprobes: Add arm64 case in kprobe example module David Long
@ 2016-06-28  7:31 ` Masami Hiramatsu
  2016-06-28  8:13 ` Huang Shijie
  11 siblings, 0 replies; 16+ messages in thread
From: Masami Hiramatsu @ 2016-06-28  7:31 UTC (permalink / raw)
  To: David Long
  Cc: Catalin Marinas, Huang Shijie, James Morse, Marc Zyngier,
	Pratyush Anand, Sandeepa Prabhu, Will Deacon, William Cohen,
	linux-arm-kernel, linux-kernel, Steve Capper, Li Bin,
	Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown

Hi David,

On Sun, 26 Jun 2016 23:06:43 -0400
David Long <dave.long@linaro.org> wrote:

> From: "David A. Long" <dave.long@linaro.org>
> 
> This patchset is heavily based on Sandeepa Prabhu's ARM v8 kprobes patches,
> first seen in October 2013. This version attempts to address concerns
> raised by reviewers and also fixes problems discovered during testing.
> 
> This patchset adds support for kernel probes(kprobes), jump probes(jprobes)
> and return probes(kretprobes) support for ARM64.
> 
> The kprobes mechanism makes use of software breakpoint and single stepping
> support available in the ARM v8 kernel.

Great! All the patches in this series look OK to me :)
If you'd like to update the series with the directory renaming and
the "asm-generic/ptrace.h" include, feel free to do so and resend
with my Ack.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
for this series.

Thank you!

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support
  2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
                   ` (10 preceding siblings ...)
  2016-06-28  7:31 ` [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support Masami Hiramatsu
@ 2016-06-28  8:13 ` Huang Shijie
  11 siblings, 0 replies; 16+ messages in thread
From: Huang Shijie @ 2016-06-28  8:13 UTC (permalink / raw)
  To: David Long
  Cc: Catalin Marinas, James Morse, Marc Zyngier, Pratyush Anand,
	Sandeepa Prabhu, Will Deacon, William Cohen, linux-arm-kernel,
	linux-kernel, Steve Capper, Masami Hiramatsu, Li Bin,
	Adam Buchbinder, Alex Bennée, Andrew Morton,
	Andrey Ryabinin, Ard Biesheuvel, Christoffer Dall,
	Daniel Thompson, Dave P Martin, Jens Wiklander, Jisheng Zhang,
	John Blackwood, Mark Rutland, Petr Mladek, Robin Murphy,
	Suzuki K Poulose, Vladimir Murzin, Yang Shi, Zi Shen Lim,
	yalin wang, Mark Brown, nd

On Sun, Jun 26, 2016 at 11:06:43PM -0400, David Long wrote:
> From: "David A. Long" <dave.long@linaro.org>
> 
> This patchset is heavily based on Sandeepa Prabhu's ARM v8 kprobes patches,
> first seen in October 2013. This version attempts to address concerns
> raised by reviewers and also fixes problems discovered during testing.
> 
> This patchset adds support for kernel probes(kprobes), jump probes(jprobes)
> and return probes(kretprobes) support for ARM64.
> 
> The kprobes mechanism makes use of software breakpoint and single stepping
> support available in the ARM v8 kernel.
> 
I tested the whole patch set with kprobe/jprobe/kretprobe on my Juno-r1 board,
and it works fine.

Tested-by: Huang Shijie <shijie.huang@arm.com>

thanks
Huang Shijie
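
For anyone wanting to run a similar smoke test of the kretprobes side, a
minimal return-probe module looks roughly like the sketch below; the probed
symbol and maxactive value are illustrative, and the in-tree
kretprobe_example.c sample is the authoritative version:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>

/* Print the return value each time the probed function returns. */
static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	pr_info("%s returned %ld\n", ri->rp->kp.symbol_name,
		(long)regs_return_value(regs));
	return 0;
}

static struct kretprobe my_kretprobe = {
	.handler	= ret_handler,
	.kp.symbol_name	= "_do_fork",	/* illustrative */
	.maxactive	= 20,		/* concurrent activations to track */
};

static int __init kretprobe_init(void)
{
	return register_kretprobe(&my_kretprobe);
}

static void __exit kretprobe_exit(void)
{
	unregister_kretprobe(&my_kretprobe);
	pr_info("missed %d kretprobe instances\n", my_kretprobe.nmissed);
}

module_init(kretprobe_init);
module_exit(kretprobe_exit);
MODULE_LICENSE("GPL");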

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2016-06-28  8:14 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-27  3:06 [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support David Long
2016-06-27  3:06 ` [PATCH v14 01/10] arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature David Long
2016-06-27  3:06 ` [PATCH v14 02/10] arm64: Add more test functions to insn.c David Long
2016-06-27  3:06 ` [PATCH v14 03/10] arm64: add conditional instruction simulation support David Long
2016-06-27  3:06 ` [PATCH v14 04/10] arm64: Kprobes with single stepping support David Long
2016-06-27  6:57   ` Pratyush Anand
2016-06-27 14:06     ` David Long
2016-06-28  7:25       ` Masami Hiramatsu
2016-06-27  3:06 ` [PATCH v14 05/10] arm64: Blacklist non-kprobe-able symbol David Long
2016-06-27  3:06 ` [PATCH v14 06/10] arm64: Treat all entry code as non-kprobe-able David Long
2016-06-27  3:06 ` [PATCH v14 07/10] arm64: kprobes instruction simulation support David Long
2016-06-27  3:06 ` [PATCH v14 08/10] arm64: Add trampoline code for kretprobes David Long
2016-06-27  3:06 ` [PATCH v14 09/10] arm64: Add kernel return probes support (kretprobes) David Long
2016-06-27  3:06 ` [PATCH v14 10/10] kprobes: Add arm64 case in kprobe example module David Long
2016-06-28  7:31 ` [PATCH v14 00/10] arm64: Add kernel probes (kprobes) support Masami Hiramatsu
2016-06-28  8:13 ` Huang Shijie
