* [PATCH V2 0/6] ARM64: Uprobe support added
@ 2016-09-27  7:43 ` Pratyush Anand
  0 siblings, 0 replies; 24+ messages in thread
From: Pratyush Anand @ 2016-09-27  7:43 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas
  Cc: linux, will.deacon, linux-kernel, wcohen, oleg, dave.long,
	steve.capper, srikar, vijaya.kumar, panand

Changes since v1:
* Exposed sync_icache_aliases() and used it instead of flush_uprobe_xol_access()
* Assigned 0x0005 to BRK64_ESR_UPROBES instead of 0x0008
* Moved uprobe_opcode_t from probes.h to uprobes.h
* Assigned 4 to TIF_UPROBE instead of 5
* Assigned AARCH64_INSN_SIZE to UPROBE_SWBP_INSN_SIZE instead of hard-coding 4
* Removed saved_fault_code from struct arch_uprobe_task
* Removed preempt_dis(en)able() from arch_uprobe_copy_ixol()
* Removed case INSN_GOOD from arch_uprobe_analyze_insn()
* Now we check that the probe point is not in a 32-bit task
* Return false from is_trap_insn()
* Changes for rebase conflict resolution

V1 was here: https://lkml.org/lkml/2016/8/2/29
Patches have been rebased on next-20160927 so that there are no
conflicts with other arm64/for-next/core patches.

Patches have been tested for the following:
1. Single-steppable instructions such as sub, ldr, add, etc.
2. Simulatable instructions such as ret, cbnz, cbz, etc.
3. uretprobe
4. Rejected instructions such as sev, wfe, etc.
5. Trapped and aborted XOL paths
6. Probe at an unaligned user address
7. Longjump test cases (a minimal user-space target for such tests is
   sketched below)
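
A minimal user-space target of the kind these tests assume could look
like the sketch below (hypothetical function names, not part of this
series); probes would be placed on probed_func() and its return:

	#include <setjmp.h>
	#include <stdio.h>

	static jmp_buf env;

	/* hypothetical probe target for uprobe/uretprobe/longjump tests */
	static int __attribute__((noinline)) probed_func(int x)
	{
		if (x < 0)
			longjmp(env, 1);	/* exercises the longjump case */
		return x + 1;
	}

	int main(void)
	{
		if (setjmp(env) == 0)
			printf("%d\n", probed_func(-1));
		else
			printf("returned via longjmp\n");
		return 0;
	}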

aarch32 task probing is not yet supported.

Pratyush Anand (6):
  arm64: kprobe: protect/rename few definitions to be reused by uprobe
  arm64: kgdb_step_brk_fn: ignore other's exception
  arm64: Handle TRAP_TRACE for user mode as well
  arm64: Handle TRAP_BRKPT for user mode as well
  arm64: introduce mm context flag to keep 32 bit task information
  arm64: Add uprobe support

 arch/arm64/Kconfig                      |   3 +
 arch/arm64/include/asm/cacheflush.h     |   1 +
 arch/arm64/include/asm/debug-monitors.h |   3 +
 arch/arm64/include/asm/elf.h            |  12 +-
 arch/arm64/include/asm/mmu.h            |   1 +
 arch/arm64/include/asm/probes.h         |  19 +--
 arch/arm64/include/asm/ptrace.h         |   8 ++
 arch/arm64/include/asm/thread_info.h    |   5 +-
 arch/arm64/include/asm/uprobes.h        |  36 ++++++
 arch/arm64/kernel/debug-monitors.c      |  40 +++---
 arch/arm64/kernel/kgdb.c                |   3 +
 arch/arm64/kernel/probes/Makefile       |   2 +
 arch/arm64/kernel/probes/decode-insn.c  |  32 ++---
 arch/arm64/kernel/probes/decode-insn.h  |   8 +-
 arch/arm64/kernel/probes/kprobes.c      |  36 +++---
 arch/arm64/kernel/probes/uprobes.c      | 221 ++++++++++++++++++++++++++++++++
 arch/arm64/kernel/signal.c              |   3 +
 arch/arm64/mm/flush.c                   |   2 +-
 18 files changed, 371 insertions(+), 64 deletions(-)
 create mode 100644 arch/arm64/include/asm/uprobes.h
 create mode 100644 arch/arm64/kernel/probes/uprobes.c

-- 
2.7.4

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH V2 1/6] arm64: kprobe: protect/rename few definitions to be reused by uprobe
  2016-09-27  7:43 ` Pratyush Anand
@ 2016-09-27  7:47   ` Pratyush Anand
  -1 siblings, 0 replies; 24+ messages in thread
From: Pratyush Anand @ 2016-09-27  7:47 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas
  Cc: linux, will.deacon, linux-kernel, wcohen, oleg, dave.long,
	steve.capper, srikar, vijaya.kumar, panand

The decode-insn code has to be reused by the arm64 uprobe implementation
as well. Therefore, this patch protects some portions of the kprobe code
and renames a few others, so that the decode-insn functionality can be
reused by uprobes even when CONFIG_KPROBES is not defined.

kprobe_opcode_t and struct arch_specific_insn are also defined by
linux/kprobes.h, when CONFIG_KPROBES is not defined. So, protect these
definitions in asm/probes.h.

linux/kprobes.h already includes asm/kprobes.h. Therefore, remove inclusion
of asm/kprobes.h from decode-insn.c.

Some definitions, such as kprobe_insn and kprobes_handler_t, can be
re-used by uprobes, so remove the 'k' from their names.

struct arch_specific_insn is specific to kprobe. Therefore, introduce a new
struct arch_probe_insn which will be common for both kprobe and uprobe, so
that decode-insn code can be shared. Modify kprobe code accordingly.

The function arm_probe_decode_insn() will be needed by uprobes as well,
so make it global.
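
For illustration, a non-kprobes user can then drive the decoder through
the shared struct alone; a minimal sketch (check_probe_insn() is a
hypothetical caller, mirroring what the uprobe patch later in this
series does; names come from probes.h and decode-insn.h):

	static int check_probe_insn(probe_opcode_t insn,
				    struct arch_probe_insn *api)
	{
		switch (arm_probe_decode_insn(insn, api)) {
		case INSN_REJECTED:	/* unsupported: refuse the probe */
			return -EINVAL;
		case INSN_GOOD_NO_SLOT:	/* simulate via api->handler */
			return 0;
		case INSN_GOOD:		/* step it out-of-line from a slot */
		default:
			return 0;
		}
	}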

Signed-off-by: Pratyush Anand <panand@redhat.com>
---
 arch/arm64/include/asm/probes.h        | 19 ++++++++++--------
 arch/arm64/kernel/probes/decode-insn.c | 32 ++++++++++++++++--------------
 arch/arm64/kernel/probes/decode-insn.h |  8 ++++++--
 arch/arm64/kernel/probes/kprobes.c     | 36 +++++++++++++++++-----------------
 4 files changed, 52 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/include/asm/probes.h b/arch/arm64/include/asm/probes.h
index 5af574d632fa..e175a825b187 100644
--- a/arch/arm64/include/asm/probes.h
+++ b/arch/arm64/include/asm/probes.h
@@ -17,19 +17,22 @@
 
 #include <asm/opcodes.h>
 
-struct kprobe;
-struct arch_specific_insn;
-
-typedef u32 kprobe_opcode_t;
-typedef void (kprobes_handler_t) (u32 opcode, long addr, struct pt_regs *);
+typedef u32 probe_opcode_t;
+typedef void (probes_handler_t) (u32 opcode, long addr, struct pt_regs *);
 
 /* architecture specific copy of original instruction */
-struct arch_specific_insn {
-	kprobe_opcode_t *insn;
+struct arch_probe_insn {
+	probe_opcode_t *insn;
 	pstate_check_t *pstate_cc;
-	kprobes_handler_t *handler;
+	probes_handler_t *handler;
 	/* restore address after step xol */
 	unsigned long restore;
 };
+#ifdef CONFIG_KPROBES
+typedef u32 kprobe_opcode_t;
+struct arch_specific_insn {
+	struct arch_probe_insn api;
+};
+#endif
 
 #endif
diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index d1731bf977ef..8a29d2982eec 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -78,8 +78,8 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
  *   INSN_GOOD         If instruction is supported and uses instruction slot,
  *   INSN_GOOD_NO_SLOT If instruction is supported but doesn't use its slot.
  */
-static enum kprobe_insn __kprobes
-arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
+enum probe_insn __kprobes
+arm_probe_decode_insn(probe_opcode_t insn, struct arch_probe_insn *api)
 {
 	/*
 	 * Instructions reading or modifying the PC won't work from the XOL
@@ -89,26 +89,26 @@ arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
 		return INSN_GOOD;
 
 	if (aarch64_insn_is_bcond(insn)) {
-		asi->handler = simulate_b_cond;
+		api->handler = simulate_b_cond;
 	} else if (aarch64_insn_is_cbz(insn) ||
 	    aarch64_insn_is_cbnz(insn)) {
-		asi->handler = simulate_cbz_cbnz;
+		api->handler = simulate_cbz_cbnz;
 	} else if (aarch64_insn_is_tbz(insn) ||
 	    aarch64_insn_is_tbnz(insn)) {
-		asi->handler = simulate_tbz_tbnz;
+		api->handler = simulate_tbz_tbnz;
 	} else if (aarch64_insn_is_adr_adrp(insn)) {
-		asi->handler = simulate_adr_adrp;
+		api->handler = simulate_adr_adrp;
 	} else if (aarch64_insn_is_b(insn) ||
 	    aarch64_insn_is_bl(insn)) {
-		asi->handler = simulate_b_bl;
+		api->handler = simulate_b_bl;
 	} else if (aarch64_insn_is_br(insn) ||
 	    aarch64_insn_is_blr(insn) ||
 	    aarch64_insn_is_ret(insn)) {
-		asi->handler = simulate_br_blr_ret;
+		api->handler = simulate_br_blr_ret;
 	} else if (aarch64_insn_is_ldr_lit(insn)) {
-		asi->handler = simulate_ldr_literal;
+		api->handler = simulate_ldr_literal;
 	} else if (aarch64_insn_is_ldrsw_lit(insn)) {
-		asi->handler = simulate_ldrsw_literal;
+		api->handler = simulate_ldrsw_literal;
 	} else {
 		/*
 		 * Instruction cannot be stepped out-of-line and we don't
@@ -120,6 +120,7 @@ arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
 	return INSN_GOOD_NO_SLOT;
 }
 
+#ifdef CONFIG_KPROBES
 static bool __kprobes
 is_probed_address_atomic(kprobe_opcode_t *scan_start, kprobe_opcode_t *scan_end)
 {
@@ -138,12 +139,12 @@ is_probed_address_atomic(kprobe_opcode_t *scan_start, kprobe_opcode_t *scan_end)
 	return false;
 }
 
-enum kprobe_insn __kprobes
+enum probe_insn __kprobes
 arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 {
-	enum kprobe_insn decoded;
-	kprobe_opcode_t insn = le32_to_cpu(*addr);
-	kprobe_opcode_t *scan_end = NULL;
+	enum probe_insn decoded;
+	probe_opcode_t insn = le32_to_cpu(*addr);
+	probe_opcode_t *scan_end = NULL;
 	unsigned long size = 0, offset = 0;
 
 	/*
@@ -162,7 +163,7 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 		else
 			scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
 	}
-	decoded = arm_probe_decode_insn(insn, asi);
+	decoded = arm_probe_decode_insn(insn, &asi->api);
 
 	if (decoded != INSN_REJECTED && scan_end)
 		if (is_probed_address_atomic(addr - 1, scan_end))
@@ -170,3 +171,4 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 
 	return decoded;
 }
+#endif
diff --git a/arch/arm64/kernel/probes/decode-insn.h b/arch/arm64/kernel/probes/decode-insn.h
index d438289646a6..76d3f315407f 100644
--- a/arch/arm64/kernel/probes/decode-insn.h
+++ b/arch/arm64/kernel/probes/decode-insn.h
@@ -23,13 +23,17 @@
  */
 #define MAX_ATOMIC_CONTEXT_SIZE	(128 / sizeof(kprobe_opcode_t))
 
-enum kprobe_insn {
+enum probe_insn {
 	INSN_REJECTED,
 	INSN_GOOD_NO_SLOT,
 	INSN_GOOD,
 };
 
-enum kprobe_insn __kprobes
+#ifdef CONFIG_KPROBES
+enum probe_insn __kprobes
 arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi);
+#endif
+enum probe_insn __kprobes
+arm_probe_decode_insn(probe_opcode_t insn, struct arch_probe_insn *asi);
 
 #endif /* _ARM_KERNEL_KPROBES_ARM64_H */
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index f5077ea7af6d..1decd2b2c730 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -44,31 +44,31 @@ post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
 static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
 {
 	/* prepare insn slot */
-	p->ainsn.insn[0] = cpu_to_le32(p->opcode);
+	p->ainsn.api.insn[0] = cpu_to_le32(p->opcode);
 
-	flush_icache_range((uintptr_t) (p->ainsn.insn),
-			   (uintptr_t) (p->ainsn.insn) +
+	flush_icache_range((uintptr_t) (p->ainsn.api.insn),
+			   (uintptr_t) (p->ainsn.api.insn) +
 			   MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
 
 	/*
 	 * Needs restoring of return address after stepping xol.
 	 */
-	p->ainsn.restore = (unsigned long) p->addr +
+	p->ainsn.api.restore = (unsigned long) p->addr +
 	  sizeof(kprobe_opcode_t);
 }
 
 static void __kprobes arch_prepare_simulate(struct kprobe *p)
 {
 	/* This instructions is not executed xol. No need to adjust the PC */
-	p->ainsn.restore = 0;
+	p->ainsn.api.restore = 0;
 }
 
 static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 
-	if (p->ainsn.handler)
-		p->ainsn.handler((u32)p->opcode, (long)p->addr, regs);
+	if (p->ainsn.api.handler)
+		p->ainsn.api.handler((u32)p->opcode, (long)p->addr, regs);
 
 	/* single step simulated, now go for post processing */
 	post_kprobe_handler(kcb, regs);
@@ -98,18 +98,18 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 		return -EINVAL;
 
 	case INSN_GOOD_NO_SLOT:	/* insn need simulation */
-		p->ainsn.insn = NULL;
+		p->ainsn.api.insn = NULL;
 		break;
 
 	case INSN_GOOD:	/* instruction uses slot */
-		p->ainsn.insn = get_insn_slot();
-		if (!p->ainsn.insn)
+		p->ainsn.api.insn = get_insn_slot();
+		if (!p->ainsn.api.insn)
 			return -ENOMEM;
 		break;
 	};
 
 	/* prepare the instruction */
-	if (p->ainsn.insn)
+	if (p->ainsn.api.insn)
 		arch_prepare_ss_slot(p);
 	else
 		arch_prepare_simulate(p);
@@ -142,9 +142,9 @@ void __kprobes arch_disarm_kprobe(struct kprobe *p)
 
 void __kprobes arch_remove_kprobe(struct kprobe *p)
 {
-	if (p->ainsn.insn) {
-		free_insn_slot(p->ainsn.insn, 0);
-		p->ainsn.insn = NULL;
+	if (p->ainsn.api.insn) {
+		free_insn_slot(p->ainsn.api.insn, 0);
+		p->ainsn.api.insn = NULL;
 	}
 }
 
@@ -244,9 +244,9 @@ static void __kprobes setup_singlestep(struct kprobe *p,
 	}
 
 
-	if (p->ainsn.insn) {
+	if (p->ainsn.api.insn) {
 		/* prepare for single stepping */
-		slot = (unsigned long)p->ainsn.insn;
+		slot = (unsigned long)p->ainsn.api.insn;
 
 		set_ss_context(kcb, slot);	/* mark pending ss */
 
@@ -295,8 +295,8 @@ post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
 		return;
 
 	/* return addr restore if non-branching insn */
-	if (cur->ainsn.restore != 0)
-		instruction_pointer_set(regs, cur->ainsn.restore);
+	if (cur->ainsn.api.restore != 0)
+		instruction_pointer_set(regs, cur->ainsn.api.restore);
 
 	/* restore back original saved kprobe variables and continue */
 	if (kcb->kprobe_status == KPROBE_REENTER) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 2/6] arm64: kgdb_step_brk_fn: ignore other's exception
  2016-09-27  7:43 ` Pratyush Anand
@ 2016-09-27  7:47   ` Pratyush Anand
  -1 siblings, 0 replies; 24+ messages in thread
From: Pratyush Anand @ 2016-09-27  7:47 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas
  Cc: linux, will.deacon, linux-kernel, wcohen, oleg, dave.long,
	steve.capper, srikar, vijaya.kumar, panand

The ARM64 step exception does not carry any syndrome information, so it
is the responsibility of each exception handler to make sure that it
handles the exception only if the exception was raised for it.

Since kgdb_step_brk_fn() always returns 0, we would have a problem once
other step handlers are registered as well.

This patch fixes kgdb_step_brk_fn() to return an error when the step
exception was not meant for kgdb.

Signed-off-by: Pratyush Anand <panand@redhat.com>
---
 arch/arm64/kernel/kgdb.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
index e017a9493b92..d217c9e95b06 100644
--- a/arch/arm64/kernel/kgdb.c
+++ b/arch/arm64/kernel/kgdb.c
@@ -247,6 +247,9 @@ NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
 
 static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
 {
+	if (!kgdb_single_step)
+		return DBG_HOOK_ERROR;
+
 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
 	return 0;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 3/6] arm64: Handle TRAP_TRACE for user mode as well
  2016-09-27  7:43 ` Pratyush Anand
@ 2016-09-27  7:47   ` Pratyush Anand
  -1 siblings, 0 replies; 24+ messages in thread
From: Pratyush Anand @ 2016-09-27  7:47 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas
  Cc: linux, will.deacon, linux-kernel, wcohen, oleg, dave.long,
	steve.capper, srikar, vijaya.kumar, panand

uprobes registers a handler at step_hook, so single_step_handler() now
consults the registered step hooks for user mode as well, and only falls
back to a SIGTRAP when no hook handles the exception.
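
As a sketch of such a hook, this is essentially (simplified) what the
uprobe patch 6/6 registers; the handler claims only user-mode steps it
owns and returns DBG_HOOK_ERROR otherwise:

	static int uprobe_single_step_handler(struct pt_regs *regs,
					      unsigned int esr)
	{
		if (user_mode(regs) && uprobe_post_sstep_notifier(regs))
			return DBG_HOOK_HANDLED;

		return DBG_HOOK_ERROR;	/* not ours, let others look at it */
	}

	static struct step_hook uprobes_step_hook = {
		.fn = uprobe_single_step_handler,
	};

	/* registered at init time with register_step_hook() */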

Signed-off-by: Pratyush Anand <panand@redhat.com>
---
 arch/arm64/kernel/debug-monitors.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 73ae90ef434c..a8f8de012250 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -226,6 +226,8 @@ static void send_user_sigtrap(int si_code)
 static int single_step_handler(unsigned long addr, unsigned int esr,
 			       struct pt_regs *regs)
 {
+	bool handler_found = false;
+
 	/*
 	 * If we are stepping a pending breakpoint, call the hw_breakpoint
 	 * handler first.
@@ -233,7 +235,14 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
 	if (!reinstall_suspended_bps(regs))
 		return 0;
 
-	if (user_mode(regs)) {
+#ifdef	CONFIG_KPROBES
+	if (kprobe_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
+		handler_found = true;
+#endif
+	if (!handler_found && call_step_hook(regs, esr) == DBG_HOOK_HANDLED)
+		handler_found = true;
+
+	if (!handler_found && user_mode(regs)) {
 		send_user_sigtrap(TRAP_TRACE);
 
 		/*
@@ -243,15 +252,8 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
 		 * to the active-not-pending state).
 		 */
 		user_rewind_single_step(current);
-	} else {
-#ifdef	CONFIG_KPROBES
-		if (kprobe_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
-			return 0;
-#endif
-		if (call_step_hook(regs, esr) == DBG_HOOK_HANDLED)
-			return 0;
-
-		pr_warning("Unexpected kernel single-step exception at EL1\n");
+	} else if (!handler_found) {
+		pr_warn("Unexpected kernel single-step exception at EL1\n");
 		/*
 		 * Re-enable stepping since we know that we will be
 		 * returning to regs.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 4/6] arm64: Handle TRAP_BRKPT for user mode as well
  2016-09-27  7:43 ` Pratyush Anand
@ 2016-09-27  7:47   ` Pratyush Anand
  -1 siblings, 0 replies; 24+ messages in thread
From: Pratyush Anand @ 2016-09-27  7:47 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas
  Cc: linux, will.deacon, linux-kernel, wcohen, oleg, dave.long,
	steve.capper, srikar, vijaya.kumar, panand

uprobes is registered at break_hook with a unique ESR code, so when a
TRAP_BRKPT occurs, call_break_hook() checks whether it was for uprobes.
If not, a SIGTRAP is sent to the user task.
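
The matching is done on the BRK immediate carried in the ESR; the hook
that patch 6/6 registers for this looks roughly like the sketch below
(only the esr_mask/esr_val keying matters for this patch):

	static struct break_hook uprobes_break_hook = {
		.esr_mask = BRK64_ESR_MASK,	/* compare the BRK immediate ... */
		.esr_val  = BRK64_ESR_UPROBES,	/* ... against the uprobes value */
		.fn	  = uprobe_breakpoint_handler,
	};

	/* registered at init time with register_break_hook() */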

Signed-off-by: Pratyush Anand <panand@redhat.com>
---
 arch/arm64/kernel/debug-monitors.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index a8f8de012250..605df76f0a06 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -306,16 +306,20 @@ NOKPROBE_SYMBOL(call_break_hook);
 static int brk_handler(unsigned long addr, unsigned int esr,
 		       struct pt_regs *regs)
 {
-	if (user_mode(regs)) {
-		send_user_sigtrap(TRAP_BRKPT);
-	}
+	bool handler_found = false;
+
 #ifdef	CONFIG_KPROBES
-	else if ((esr & BRK64_ESR_MASK) == BRK64_ESR_KPROBES) {
-		if (kprobe_breakpoint_handler(regs, esr) != DBG_HOOK_HANDLED)
-			return -EFAULT;
+	if ((esr & BRK64_ESR_MASK) == BRK64_ESR_KPROBES) {
+		if (kprobe_breakpoint_handler(regs, esr) == DBG_HOOK_HANDLED)
+			handler_found = true;
 	}
 #endif
-	else if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED) {
+	if (!handler_found && call_break_hook(regs, esr) == DBG_HOOK_HANDLED)
+		handler_found = true;
+
+	if (!handler_found && user_mode(regs)) {
+		send_user_sigtrap(TRAP_BRKPT);
+	} else if (!handler_found) {
 		pr_warn("Unexpected kernel BRK exception at EL1\n");
 		return -EFAULT;
 	}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 5/6] arm64: introduce mm context flag to keep 32 bit task information
  2016-09-27  7:43 ` Pratyush Anand
@ 2016-09-27  7:47   ` Pratyush Anand
  -1 siblings, 0 replies; 24+ messages in thread
From: Pratyush Anand @ 2016-09-27  7:47 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas
  Cc: linux, will.deacon, linux-kernel, wcohen, oleg, dave.long,
	steve.capper, srikar, vijaya.kumar, panand

In some cases, such as uprobe instruction analysis, we need to decide
whether the current mm context belongs to a 32-bit or a 64-bit task.

This patch introduces an unsigned long flags field in mm_context_t.
Currently, we set and clear the TIF_32BIT bit in it depending on whether
an ELF binary load sets the personality to 32-bit or 64-bit,
respectively.
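
A consumer then only needs the mm to tell the two apart; for example,
patch 6/6 rejects probes in 32-bit tasks with:

	/* TODO: Currently we do not support AARCH32 instruction probing */
	if (test_bit(TIF_32BIT, &mm->context.flags))
		return -EINVAL;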

Signed-off-by: Pratyush Anand <panand@redhat.com>
---
 arch/arm64/include/asm/elf.h | 12 ++++++++++--
 arch/arm64/include/asm/mmu.h |  1 +
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index a55384f4a5d7..5d1700425efe 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -138,7 +138,11 @@ typedef struct user_fpsimd_state elf_fpregset_t;
  */
 #define ELF_PLAT_INIT(_r, load_addr)	(_r)->regs[0] = 0
 
-#define SET_PERSONALITY(ex)		clear_thread_flag(TIF_32BIT);
+#define SET_PERSONALITY(ex)						\
+({									\
+	clear_bit(TIF_32BIT, &current->mm->context.flags);		\
+	clear_thread_flag(TIF_32BIT);					\
+})
 
 /* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
 #define ARCH_DLINFO							\
@@ -183,7 +187,11 @@ typedef compat_elf_greg_t		compat_elf_gregset_t[COMPAT_ELF_NGREG];
 					 ((x)->e_flags & EF_ARM_EABI_MASK))
 
 #define compat_start_thread		compat_start_thread
-#define COMPAT_SET_PERSONALITY(ex)	set_thread_flag(TIF_32BIT);
+#define COMPAT_SET_PERSONALITY(ex)					\
+({									\
+	set_bit(TIF_32BIT, &current->mm->context.flags);		\
+	set_thread_flag(TIF_32BIT);					\
+ })
 #define COMPAT_ARCH_DLINFO
 extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
 				      int uses_interp);
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 8d9fce037b2f..d4fa21543771 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -19,6 +19,7 @@
 typedef struct {
 	atomic64_t	id;
 	void		*vdso;
+	unsigned long	flags;
 } mm_context_t;
 
 /*
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH V2 6/6] arm64: Add uprobe support
  2016-09-27  7:43 ` Pratyush Anand
@ 2016-09-27  7:48   ` Pratyush Anand
  -1 siblings, 0 replies; 24+ messages in thread
From: Pratyush Anand @ 2016-09-27  7:48 UTC (permalink / raw)
  To: linux-arm-kernel, catalin.marinas
  Cc: linux, will.deacon, linux-kernel, wcohen, oleg, dave.long,
	steve.capper, srikar, vijaya.kumar, panand

This patch adds support for uprobes on the ARM64 architecture.

Unit tests for the following have been done so far, and they have been
found to work:
    1. Single-steppable instructions such as sub, ldr, add, etc.
    2. Simulatable instructions such as ret, cbnz, cbz, etc.
    3. uretprobe
    4. Rejected instructions such as sev, wfe, etc.
    5. Trapped and aborted XOL paths
    6. Probe at an unaligned user address
    7. Longjump test cases

Currently it does not support aarch32 instruction probing.
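
The probed instruction is replaced with a BRK carrying a dedicated
immediate, so the break handler can distinguish uprobe traps from
kprobe/kgdb ones. Assuming AARCH64_BREAK_MON is the plain A64 BRK base
encoding (0xd4200000, with the 16-bit immediate at bits [20:5]), the new
opcode added below works out as:

	/* BRK #5: 0xd4200000 | (0x0005 << 5) == 0xd42000a0 (assumption,
	 * derived from the standard A64 BRK encoding) */
	#define BRK64_ESR_UPROBES	0x0005
	#define BRK64_OPCODE_UPROBES	(AARCH64_BREAK_MON | (BRK64_ESR_UPROBES << 5))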

Signed-off-by: Pratyush Anand <panand@redhat.com>
---
 arch/arm64/Kconfig                      |   3 +
 arch/arm64/include/asm/cacheflush.h     |   1 +
 arch/arm64/include/asm/debug-monitors.h |   3 +
 arch/arm64/include/asm/ptrace.h         |   8 ++
 arch/arm64/include/asm/thread_info.h    |   5 +-
 arch/arm64/include/asm/uprobes.h        |  36 ++++++
 arch/arm64/kernel/probes/Makefile       |   2 +
 arch/arm64/kernel/probes/uprobes.c      | 221 ++++++++++++++++++++++++++++++++
 arch/arm64/kernel/signal.c              |   3 +
 arch/arm64/mm/flush.c                   |   2 +-
 10 files changed, 282 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/uprobes.h
 create mode 100644 arch/arm64/kernel/probes/uprobes.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 113326e637f0..2bee1b0b99af 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -236,6 +236,9 @@ config PGTABLE_LEVELS
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
 
+config ARCH_SUPPORTS_UPROBES
+	def_bool y
+
 source "init/Kconfig"
 
 source "kernel/Kconfig.freezer"
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 2e5fb976a572..e9f64ecb75ce 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -71,6 +71,7 @@ extern void __flush_dcache_area(void *addr, size_t len);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
+extern void sync_icache_aliases(void *kaddr, unsigned long len);
 
 static inline void flush_cache_mm(struct mm_struct *mm)
 {
diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index b71420a12f26..a44cf5225429 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -68,6 +68,9 @@
 #define BRK64_ESR_MASK		0xFFFF
 #define BRK64_ESR_KPROBES	0x0004
 #define BRK64_OPCODE_KPROBES	(AARCH64_BREAK_MON | (BRK64_ESR_KPROBES << 5))
+/* uprobes BRK opcodes with ESR encoding  */
+#define BRK64_ESR_UPROBES	0x0005
+#define BRK64_OPCODE_UPROBES	(AARCH64_BREAK_MON | (BRK64_ESR_UPROBES << 5))
 
 /* AArch32 */
 #define DBG_ESR_EVT_BKPT	0x4
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index ada08b5b036d..513daf050e84 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -217,6 +217,14 @@ int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task);
 
 #include <asm-generic/ptrace.h>
 
+#define procedure_link_pointer(regs)	((regs)->regs[30])
+
+static inline void procedure_link_pointer_set(struct pt_regs *regs,
+					   unsigned long val)
+{
+	procedure_link_pointer(regs) = val;
+}
+
 #undef profile_pc
 extern unsigned long profile_pc(struct pt_regs *regs);
 
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index e9ea5a6bd449..f6859831462e 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -112,6 +112,7 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_NEED_RESCHED	1
 #define TIF_NOTIFY_RESUME	2	/* callback before returning to user */
 #define TIF_FOREIGN_FPSTATE	3	/* CPU's FP state is not current's */
+#define TIF_UPROBE		4	/* uprobe breakpoint or singlestep */
 #define TIF_NOHZ		7
 #define TIF_SYSCALL_TRACE	8
 #define TIF_SYSCALL_AUDIT	9
@@ -132,10 +133,12 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+#define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_32BIT		(1 << TIF_32BIT)
 
 #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
-				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
+				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
+				 _TIF_UPROBE)
 
 #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
diff --git a/arch/arm64/include/asm/uprobes.h b/arch/arm64/include/asm/uprobes.h
new file mode 100644
index 000000000000..8d004073d0e8
--- /dev/null
+++ b/arch/arm64/include/asm/uprobes.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2014-2016 Pratyush Anand <panand@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _ASM_UPROBES_H
+#define _ASM_UPROBES_H
+
+#include <asm/debug-monitors.h>
+#include <asm/insn.h>
+#include <asm/probes.h>
+
+#define MAX_UINSN_BYTES		AARCH64_INSN_SIZE
+
+#define UPROBE_SWBP_INSN	BRK64_OPCODE_UPROBES
+#define UPROBE_SWBP_INSN_SIZE	AARCH64_INSN_SIZE
+#define UPROBE_XOL_SLOT_BYTES	MAX_UINSN_BYTES
+
+typedef u32 uprobe_opcode_t;
+
+struct arch_uprobe_task {
+};
+
+struct arch_uprobe {
+	union {
+		u8 insn[MAX_UINSN_BYTES];
+		u8 ixol[MAX_UINSN_BYTES];
+	};
+	struct arch_probe_insn api;
+	bool simulate;
+};
+
+#endif
diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
index ce06312e3d34..89b6df613dde 100644
--- a/arch/arm64/kernel/probes/Makefile
+++ b/arch/arm64/kernel/probes/Makefile
@@ -1,3 +1,5 @@
 obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o	\
 				   kprobes_trampoline.o		\
 				   simulate-insn.o
+obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o	\
+				   simulate-insn.o
diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
new file mode 100644
index 000000000000..d30d13040346
--- /dev/null
+++ b/arch/arm64/kernel/probes/uprobes.c
@@ -0,0 +1,221 @@
+/*
+ * Copyright (C) 2014-2016 Pratyush Anand <panand@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/highmem.h>
+#include <linux/ptrace.h>
+#include <linux/uprobes.h>
+#include <asm/cacheflush.h>
+
+#include "decode-insn.h"
+
+#define UPROBE_INV_FAULT_CODE	UINT_MAX
+
+bool is_trap_insn(uprobe_opcode_t *insn)
+{
+	return false;
+}
+
+void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+		void *src, unsigned long len)
+{
+	void *xol_page_kaddr = kmap_atomic(page);
+	void *dst = xol_page_kaddr + (vaddr & ~PAGE_MASK);
+
+	/* Initialize the slot */
+	memcpy(dst, src, len);
+
+	/* flush caches (dcache/icache) */
+	sync_icache_aliases(dst, len);
+
+	kunmap_atomic(xol_page_kaddr);
+}
+
+unsigned long uprobe_get_swbp_addr(struct pt_regs *regs)
+{
+	return instruction_pointer(regs);
+}
+
+int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
+		unsigned long addr)
+{
+	probe_opcode_t insn;
+
+	/* TODO: Currently we do not support AARCH32 instruction probing */
+	if (test_bit(TIF_32BIT, &mm->context.flags))
+		return -EINVAL;
+	else if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
+		return -EINVAL;
+
+	insn = *(probe_opcode_t *)(&auprobe->insn[0]);
+
+	switch (arm_probe_decode_insn(insn, &auprobe->api)) {
+	case INSN_REJECTED:
+		return -EINVAL;
+
+	case INSN_GOOD_NO_SLOT:
+		auprobe->simulate = true;
+		break;
+
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	/* Initialize with an invalid fault code to detect if ol insn trapped */
+	current->thread.fault_code = UPROBE_INV_FAULT_CODE;
+
+	/* Instruction points to execute ol */
+	instruction_pointer_set(regs, utask->xol_vaddr);
+
+	user_enable_single_step(current);
+
+	return 0;
+}
+
+int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	WARN_ON_ONCE(current->thread.fault_code != UPROBE_INV_FAULT_CODE);
+
+	/* Instruction points to execute next to breakpoint address */
+	instruction_pointer_set(regs, utask->vaddr + 4);
+
+	user_disable_single_step(current);
+
+	return 0;
+}
+bool arch_uprobe_xol_was_trapped(struct task_struct *t)
+{
+	/*
+	 * Between arch_uprobe_pre_xol and arch_uprobe_post_xol, if an xol
+	 * insn itself is trapped, then detect the case with the help of
+	 * invalid fault code which is being set in arch_uprobe_pre_xol
+	 */
+	if (t->thread.fault_code != UPROBE_INV_FAULT_CODE)
+		return true;
+
+	return false;
+}
+
+bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	probe_opcode_t insn;
+	unsigned long addr;
+
+	if (!auprobe->simulate)
+		return false;
+
+	insn = *(probe_opcode_t *)(&auprobe->insn[0]);
+	addr = instruction_pointer(regs);
+
+	if (auprobe->api.handler)
+		auprobe->api.handler(insn, addr, regs);
+
+	return true;
+}
+
+void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	/*
+	 * Task has received a fatal signal, so reset back to probed
+	 * address.
+	 */
+	instruction_pointer_set(regs, utask->vaddr);
+
+	user_disable_single_step(current);
+}
+
+bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx,
+		struct pt_regs *regs)
+{
+	/*
+	 * If a simple branch instruction (B) was called for retprobed
+	 * assembly label then return true even when regs->sp and ret->stack
+	 * are same. It will ensure that cleanup and reporting of return
+	 * instances corresponding to callee label is done when
+	 * handle_trampoline for called function is executed.
+	 */
+	if (ctx == RP_CHECK_CHAIN_CALL)
+		return regs->sp <= ret->stack;
+	else
+		return regs->sp < ret->stack;
+}
+
+unsigned long
+arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
+				  struct pt_regs *regs)
+{
+	unsigned long orig_ret_vaddr;
+
+	orig_ret_vaddr = procedure_link_pointer(regs);
+	/* Replace the return addr with trampoline addr */
+	procedure_link_pointer_set(regs, trampoline_vaddr);
+
+	return orig_ret_vaddr;
+}
+
+int arch_uprobe_exception_notify(struct notifier_block *self,
+				 unsigned long val, void *data)
+{
+	return NOTIFY_DONE;
+}
+
+static int uprobe_breakpoint_handler(struct pt_regs *regs,
+		unsigned int esr)
+{
+	if (user_mode(regs) && uprobe_pre_sstep_notifier(regs))
+		return DBG_HOOK_HANDLED;
+
+	return DBG_HOOK_ERROR;
+}
+
+static int uprobe_single_step_handler(struct pt_regs *regs,
+		unsigned int esr)
+{
+	struct uprobe_task *utask = current->utask;
+
+	if (user_mode(regs)) {
+		WARN_ON(utask &&
+			(instruction_pointer(regs) != utask->xol_vaddr + 4));
+
+		if (uprobe_post_sstep_notifier(regs))
+			return DBG_HOOK_HANDLED;
+	}
+
+	return DBG_HOOK_ERROR;
+}
+
+/* uprobe breakpoint handler hook */
+static struct break_hook uprobes_break_hook = {
+	.esr_mask = BRK64_ESR_MASK,
+	.esr_val = BRK64_ESR_UPROBES,
+	.fn = uprobe_breakpoint_handler,
+};
+
+/* uprobe single step handler hook */
+static struct step_hook uprobes_step_hook = {
+	.fn = uprobe_single_step_handler,
+};
+
+static int __init arch_init_uprobes(void)
+{
+	register_break_hook(&uprobes_break_hook);
+	register_step_hook(&uprobes_step_hook);
+
+	return 0;
+}
+
+device_initcall(arch_init_uprobes);
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 404dd67080b9..c7b6de62f9d3 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -414,6 +414,9 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 		} else {
 			local_irq_enable();
 
+			if (thread_flags & _TIF_UPROBE)
+				uprobe_notify_resume(regs);
+
 			if (thread_flags & _TIF_SIGPENDING)
 				do_signal(regs);
 
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 8377329d8c97..2d78d5a9b89f 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -32,7 +32,7 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 		__flush_icache_all();
 }
 
-static void sync_icache_aliases(void *kaddr, unsigned long len)
+void sync_icache_aliases(void *kaddr, unsigned long len)
 {
 	unsigned long addr = (unsigned long)kaddr;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

+};
+
+struct arch_uprobe {
+	union {
+		u8 insn[MAX_UINSN_BYTES];
+		u8 ixol[MAX_UINSN_BYTES];
+	};
+	struct arch_probe_insn api;
+	bool simulate;
+};
+
+#endif
diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
index ce06312e3d34..89b6df613dde 100644
--- a/arch/arm64/kernel/probes/Makefile
+++ b/arch/arm64/kernel/probes/Makefile
@@ -1,3 +1,5 @@
 obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o	\
 				   kprobes_trampoline.o		\
 				   simulate-insn.o
+obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o	\
+				   simulate-insn.o
diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
new file mode 100644
index 000000000000..d30d13040346
--- /dev/null
+++ b/arch/arm64/kernel/probes/uprobes.c
@@ -0,0 +1,221 @@
+/*
+ * Copyright (C) 2014-2016 Pratyush Anand <panand@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/highmem.h>
+#include <linux/ptrace.h>
+#include <linux/uprobes.h>
+#include <asm/cacheflush.h>
+
+#include "decode-insn.h"
+
+#define UPROBE_INV_FAULT_CODE	UINT_MAX
+
+bool is_trap_insn(uprobe_opcode_t *insn)
+{
+	return false;
+}
+
+void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+		void *src, unsigned long len)
+{
+	void *xol_page_kaddr = kmap_atomic(page);
+	void *dst = xol_page_kaddr + (vaddr & ~PAGE_MASK);
+
+	/* Initialize the slot */
+	memcpy(dst, src, len);
+
+	/* flush caches (dcache/icache) */
+	sync_icache_aliases(dst, len);
+
+	kunmap_atomic(xol_page_kaddr);
+}
+
+unsigned long uprobe_get_swbp_addr(struct pt_regs *regs)
+{
+	return instruction_pointer(regs);
+}
+
+int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
+		unsigned long addr)
+{
+	probe_opcode_t insn;
+
+	/* TODO: Currently we do not support AARCH32 instruction probing */
+	if (test_bit(TIF_32BIT, &mm->context.flags))
+		return -EINVAL;
+	else if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
+		return -EINVAL;
+
+	insn = *(probe_opcode_t *)(&auprobe->insn[0]);
+
+	switch (arm_probe_decode_insn(insn, &auprobe->api)) {
+	case INSN_REJECTED:
+		return -EINVAL;
+
+	case INSN_GOOD_NO_SLOT:
+		auprobe->simulate = true;
+		break;
+
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	/* Initialize with an invalid fault code to detect if the XOL insn trapped */
+	current->thread.fault_code = UPROBE_INV_FAULT_CODE;
+
+	/* Point the instruction pointer at the XOL slot */
+	instruction_pointer_set(regs, utask->xol_vaddr);
+
+	user_enable_single_step(current);
+
+	return 0;
+}
+
+int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	WARN_ON_ONCE(current->thread.fault_code != UPROBE_INV_FAULT_CODE);
+
+	/* Point the instruction pointer at the insn after the breakpoint */
+	instruction_pointer_set(regs, utask->vaddr + 4);
+
+	user_disable_single_step(current);
+
+	return 0;
+}
+bool arch_uprobe_xol_was_trapped(struct task_struct *t)
+{
+	/*
+	 * Between arch_uprobe_pre_xol() and arch_uprobe_post_xol(), if the
+	 * XOL insn itself traps, detect it via the invalid fault code that
+	 * is set in arch_uprobe_pre_xol().
+	 */
+	if (t->thread.fault_code != UPROBE_INV_FAULT_CODE)
+		return true;
+
+	return false;
+}
+
+bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	probe_opcode_t insn;
+	unsigned long addr;
+
+	if (!auprobe->simulate)
+		return false;
+
+	insn = *(probe_opcode_t *)(&auprobe->insn[0]);
+	addr = instruction_pointer(regs);
+
+	if (auprobe->api.handler)
+		auprobe->api.handler(insn, addr, regs);
+
+	return true;
+}
+
+void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	/*
+	 * Task has received a fatal signal, so reset back to the probed
+	 * address.
+	 */
+	instruction_pointer_set(regs, utask->vaddr);
+
+	user_disable_single_step(current);
+}
+
+bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx,
+		struct pt_regs *regs)
+{
+	/*
+	 * If a simple branch instruction (B) was used for the retprobed
+	 * assembly label, return true even when regs->sp and ret->stack
+	 * are the same. This ensures that cleanup and reporting of return
+	 * instances corresponding to the callee label are done when
+	 * handle_trampoline() for the called function is executed.
+	 */
+	if (ctx == RP_CHECK_CHAIN_CALL)
+		return regs->sp <= ret->stack;
+	else
+		return regs->sp < ret->stack;
+}
+
+unsigned long
+arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
+				  struct pt_regs *regs)
+{
+	unsigned long orig_ret_vaddr;
+
+	orig_ret_vaddr = procedure_link_pointer(regs);
+	/* Replace the return addr with trampoline addr */
+	procedure_link_pointer_set(regs, trampoline_vaddr);
+
+	return orig_ret_vaddr;
+}
+
+int arch_uprobe_exception_notify(struct notifier_block *self,
+				 unsigned long val, void *data)
+{
+	return NOTIFY_DONE;
+}
+
+static int uprobe_breakpoint_handler(struct pt_regs *regs,
+		unsigned int esr)
+{
+	if (user_mode(regs) && uprobe_pre_sstep_notifier(regs))
+		return DBG_HOOK_HANDLED;
+
+	return DBG_HOOK_ERROR;
+}
+
+static int uprobe_single_step_handler(struct pt_regs *regs,
+		unsigned int esr)
+{
+	struct uprobe_task *utask = current->utask;
+
+	if (user_mode(regs)) {
+		WARN_ON(utask &&
+			(instruction_pointer(regs) != utask->xol_vaddr + 4));
+
+		if (uprobe_post_sstep_notifier(regs))
+			return DBG_HOOK_HANDLED;
+	}
+
+	return DBG_HOOK_ERROR;
+}
+
+/* uprobe breakpoint handler hook */
+static struct break_hook uprobes_break_hook = {
+	.esr_mask = BRK64_ESR_MASK,
+	.esr_val = BRK64_ESR_UPROBES,
+	.fn = uprobe_breakpoint_handler,
+};
+
+/* uprobe single step handler hook */
+static struct step_hook uprobes_step_hook = {
+	.fn = uprobe_single_step_handler,
+};
+
+static int __init arch_init_uprobes(void)
+{
+	register_break_hook(&uprobes_break_hook);
+	register_step_hook(&uprobes_step_hook);
+
+	return 0;
+}
+
+device_initcall(arch_init_uprobes);
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 404dd67080b9..c7b6de62f9d3 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -414,6 +414,9 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 		} else {
 			local_irq_enable();
 
+			if (thread_flags & _TIF_UPROBE)
+				uprobe_notify_resume(regs);
+
 			if (thread_flags & _TIF_SIGPENDING)
 				do_signal(regs);
 
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 8377329d8c97..2d78d5a9b89f 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -32,7 +32,7 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 		__flush_icache_all();
 }
 
-static void sync_icache_aliases(void *kaddr, unsigned long len)
+void sync_icache_aliases(void *kaddr, unsigned long len)
 {
 	unsigned long addr = (unsigned long)kaddr;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 0/6] ARM64: Uprobe support added
  2016-09-27  7:43 ` Pratyush Anand
@ 2016-10-26  3:17   ` Pratyush Anand
  -1 siblings, 0 replies; 24+ messages in thread
From: Pratyush Anand @ 2016-10-26  3:17 UTC (permalink / raw)
  To: linux-arm-kernel, Catalin Marinas
  Cc: Russell King - ARM Linux, Will Deacon, open list, William Cohen,
	Oleg Nesterov, David Long, Steve Capper, srikar, vijaya.kumar,
	Pratyush Anand

Hi Catalin,

Please let me know if everything other than is_trap_insn() looks fine
to you; then I may be able to rework it well in time. It would be great
if we could make it into v4.9.


~Pratyush


On Tue, Sep 27, 2016 at 1:17 PM, Pratyush Anand <panand@redhat.com> wrote:
> Changes since v1:
> * Exposed sync_icache_aliases() and used that in stead of flush_uprobe_xol_access()
> * Assigned 0x0005 to BRK64_ESR_UPROBES in stead of 0x0008
> * moved uprobe_opcode_t from probes.h to uprobes.h
> * Assigned 4 to TIF_UPROBE instead of 5
> * Assigned AARCH64_INSN_SIZE to UPROBE_SWBP_INSN_SIZE instead of hard code 4.
> * Removed saved_fault_code from struct arch_uprobe_task
> * Removed preempt_dis(en)able() from arch_uprobe_copy_ixol()
> * Removed case INSN_GOOD from arch_uprobe_analyze_insn()
> * Now we do check that probe point is not for a 32 bit task.
> * Return a false positive from is_tarp_insn()
> * Changes for rebase conflict resolution
>
> V1 was here: https://lkml.org/lkml/2016/8/2/29
> Patches have been rebased on next-20160927, so that there would be no
> conflicts with other arm64/for-next/core patches.
>
> Patches have been tested for following:
> 1. Step-able instructions, like sub, ldr, add etc.
> 2. Simulation-able like ret, cbnz, cbz etc.
> 3. uretprobe
> 4. Reject-able instructions like sev, wfe etc.
> 5. trapped and abort xol path
> 6. probe at unaligned user address.
> 7. longjump test cases
>
> aarch32 task probing is not yet supported.
>
> Pratyush Anand (6):
>   arm64: kprobe: protect/rename few definitions to be reused by uprobe
>   arm64: kgdb_step_brk_fn: ignore other's exception
>   arm64: Handle TRAP_TRACE for user mode as well
>   arm64: Handle TRAP_BRKPT for user mode as well
>   arm64: introduce mm context flag to keep 32 bit task information
>   arm64: Add uprobe support
>
>  arch/arm64/Kconfig                      |   3 +
>  arch/arm64/include/asm/cacheflush.h     |   1 +
>  arch/arm64/include/asm/debug-monitors.h |   3 +
>  arch/arm64/include/asm/elf.h            |  12 +-
>  arch/arm64/include/asm/mmu.h            |   1 +
>  arch/arm64/include/asm/probes.h         |  19 +--
>  arch/arm64/include/asm/ptrace.h         |   8 ++
>  arch/arm64/include/asm/thread_info.h    |   5 +-
>  arch/arm64/include/asm/uprobes.h        |  36 ++++++
>  arch/arm64/kernel/debug-monitors.c      |  40 +++---
>  arch/arm64/kernel/kgdb.c                |   3 +
>  arch/arm64/kernel/probes/Makefile       |   2 +
>  arch/arm64/kernel/probes/decode-insn.c  |  32 ++---
>  arch/arm64/kernel/probes/decode-insn.h  |   8 +-
>  arch/arm64/kernel/probes/kprobes.c      |  36 +++---
>  arch/arm64/kernel/probes/uprobes.c      | 221 ++++++++++++++++++++++++++++++++
>  arch/arm64/kernel/signal.c              |   3 +
>  arch/arm64/mm/flush.c                   |   2 +-
>  18 files changed, 371 insertions(+), 64 deletions(-)
>  create mode 100644 arch/arm64/include/asm/uprobes.h
>  create mode 100644 arch/arm64/kernel/probes/uprobes.c
>
> --
> 2.7.4
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 6/6] arm64: Add uprobe support
  2016-09-27  7:48   ` Pratyush Anand
@ 2016-10-30 14:09     ` Catalin Marinas
  -1 siblings, 0 replies; 24+ messages in thread
From: Catalin Marinas @ 2016-10-30 14:09 UTC (permalink / raw)
  To: Pratyush Anand
  Cc: linux-arm-kernel, linux, srikar, vijaya.kumar, will.deacon,
	linux-kernel, oleg, dave.long, steve.capper, wcohen

On Tue, Sep 27, 2016 at 01:18:00PM +0530, Pratyush Anand wrote:
> --- /dev/null
> +++ b/arch/arm64/kernel/probes/uprobes.c
> @@ -0,0 +1,221 @@
> +/*
> + * Copyright (C) 2014-2016 Pratyush Anand <panand@redhat.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +#include <linux/highmem.h>
> +#include <linux/ptrace.h>
> +#include <linux/uprobes.h>
> +#include <asm/cacheflush.h>
> +
> +#include "decode-insn.h"
> +
> +#define UPROBE_INV_FAULT_CODE	UINT_MAX
> +
> +bool is_trap_insn(uprobe_opcode_t *insn)
> +{
> +	return false;
> +}

On the previous series, I had a comment left unanswered with regards to
always returning false in is_trap_insn():

Looking at handle_swbp(), if we hit a breakpoint for which we don't have
a valid uprobe, this function currently sends a SIGTRAP. But if
is_trap_insn() returns false always, is_trap_at_addr() would return 0 in
this case so the SIGTRAP is never issued.
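
For reference, the generic weak helpers are roughly as follows
(paraphrased from kernel/events/uprobes.c, so details may differ):

bool __weak is_swbp_insn(uprobe_opcode_t *insn)
{
	return *insn == UPROBE_SWBP_INSN;
}

bool __weak is_trap_insn(uprobe_opcode_t *insn)
{
	return is_swbp_insn(insn);
}

An arch override that always returns false therefore means
is_trap_at_addr() can never report a trap instruction at that address.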

-- 
Catalin

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 6/6] arm64: Add uprobe support
  2016-10-30 14:09     ` Catalin Marinas
@ 2016-10-31  8:40       ` Pratyush Anand
  -1 siblings, 0 replies; 24+ messages in thread
From: Pratyush Anand @ 2016-10-31  8:40 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Russell King - ARM Linux, srikar, vijaya.kumar, Will Deacon,
	open list, Oleg Nesterov, David Long, Steve Capper,
	William Cohen, linux-arm-kernel

Hi Catalin,

On Sun, Oct 30, 2016 at 7:39 PM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> On Tue, Sep 27, 2016 at 01:18:00PM +0530, Pratyush Anand wrote:
>> --- /dev/null
>> +++ b/arch/arm64/kernel/probes/uprobes.c
>> @@ -0,0 +1,221 @@
>> +/*
>> + * Copyright (C) 2014-2016 Pratyush Anand <panand@redhat.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + */
>> +#include <linux/highmem.h>
>> +#include <linux/ptrace.h>
>> +#include <linux/uprobes.h>
>> +#include <asm/cacheflush.h>
>> +
>> +#include "decode-insn.h"
>> +
>> +#define UPROBE_INV_FAULT_CODE        UINT_MAX
>> +
>> +bool is_trap_insn(uprobe_opcode_t *insn)
>> +{
>> +     return false;
>> +}
>
> On the previous series, I had a comment left unanswered with regards to
> always returning false in is_trap_insn():
>
> Looking at handle_swbp(), if we hit a breakpoint for which we don't have
> a valid uprobe, this function currently sends a SIGTRAP. But if
> is_trap_insn() returns false always, is_trap_at_addr() would return 0 in
> this case so the SIGTRAP is never issued.

I agree that the older implementation, i.e. the default is_trap_insn(),
is better for the time being. I sent out V2 before your last comment on
it in V1 :(.

'strtle r0, [r0], #160' is probably the closest matching aarch32
instruction to BRK64_OPCODE_UPROBES (0xd42000A0), but that is itself an
implausible aarch32 instruction. So there may not be any aarch32
instruction that matches the uprobe BRK instruction.
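
For reference, here is how the encoding works out, as a quick standalone
sketch using the constants from this series (AARCH64_BREAK_MON is the
BRK #imm16 base encoding from asm/debug-monitors.h):

/* brk_opcode_check.c - sanity check of the uprobe BRK encoding (sketch) */
#include <stdio.h>
#include <stdint.h>

#define AARCH64_BREAK_MON	0xd4200000u	/* imm16 goes in bits [20:5] */
#define BRK64_ESR_UPROBES	0x0005u
#define BRK64_OPCODE_UPROBES	(AARCH64_BREAK_MON | (BRK64_ESR_UPROBES << 5))

int main(void)
{
	/* prints 0xd42000a0, the opcode discussed above */
	printf("BRK64_OPCODE_UPROBES = 0x%08x\n", (uint32_t)BRK64_OPCODE_UPROBES);
	return 0;
}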


Therefore, if I send a V3 that removes the arm64 is_trap_insn(), would
that be acceptable, or do you see any other issue with this patch
series? If there is anything else, I will address that in V3 as well.

Thanks for your review.

~Pratyush

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 6/6] arm64: Add uprobe support
  2016-10-31  8:40       ` Pratyush Anand
@ 2016-10-31 20:32         ` Catalin Marinas
  -1 siblings, 0 replies; 24+ messages in thread
From: Catalin Marinas @ 2016-10-31 20:32 UTC (permalink / raw)
  To: Pratyush Anand
  Cc: Russell King - ARM Linux, srikar, vijaya.kumar, Will Deacon,
	Oleg Nesterov, open list, David Long, Steve Capper,
	William Cohen, linux-arm-kernel

Hi Pratyush,

On Mon, Oct 31, 2016 at 02:10:43PM +0530, Pratyush Anand wrote:
> On Sun, Oct 30, 2016 at 7:39 PM, Catalin Marinas
> <catalin.marinas@arm.com> wrote:
> > On Tue, Sep 27, 2016 at 01:18:00PM +0530, Pratyush Anand wrote:
> >> --- /dev/null
> >> +++ b/arch/arm64/kernel/probes/uprobes.c
> >> @@ -0,0 +1,221 @@
> >> +/*
> >> + * Copyright (C) 2014-2016 Pratyush Anand <panand@redhat.com>
> >> + *
> >> + * This program is free software; you can redistribute it and/or modify
> >> + * it under the terms of the GNU General Public License version 2 as
> >> + * published by the Free Software Foundation.
> >> + */
> >> +#include <linux/highmem.h>
> >> +#include <linux/ptrace.h>
> >> +#include <linux/uprobes.h>
> >> +#include <asm/cacheflush.h>
> >> +
> >> +#include "decode-insn.h"
> >> +
> >> +#define UPROBE_INV_FAULT_CODE        UINT_MAX
> >> +
> >> +bool is_trap_insn(uprobe_opcode_t *insn)
> >> +{
> >> +     return false;
> >> +}
> >
> > On the previous series, I had a comment left unanswered with regards to
> > always returning false in is_trap_insn():
> >
> > Looking at handle_swbp(), if we hit a breakpoint for which we don't have
> > a valid uprobe, this function currently sends a SIGTRAP. But if
> > is_trap_insn() returns false always, is_trap_at_addr() would return 0 in
> > this case so the SIGTRAP is never issued.
> 
> A agreed on this that the older implementation i.e. the default one of
> is_trap_insn() is better for the time being. I sent out V2 before your
> last comment on it in V1 :(.

Thinking some more about this, the default is_trap_insn() still seems
better. It may return true occasionally for 32-bit tasks but we don't
care anyway because the subsequent arch_uprobe_analyze_insn() would
prevent the installation of the uprobe. However, always returning false
in is_trap_insn() would confuse handle_swbp() if you install uprobes in
an already debugged task.
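
The relevant part of handle_swbp() is roughly the following (paraphrased,
so details may differ):

	uprobe = find_active_uprobe(bp_vaddr, &is_swbp);
	if (!uprobe) {
		if (is_swbp > 0) {
			/* a breakpoint we did not install: forward SIGTRAP */
			send_sig(SIGTRAP, current, 0);
		} else {
			/*
			 * Raced with uprobe removal (or the insn could not
			 * be read): restart at the breakpoint address.
			 */
			instruction_pointer_set(regs, bp_vaddr);
		}
		return;
	}

With the default is_trap_insn(), is_swbp stays positive for a stale or
foreign breakpoint, so the SIGTRAP is delivered as expected.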

> probably 'strtle r0, [r0], #160' would have the closest matching
> aarch32 instruction wrt BRK64_OPCODE_UPROBES(0xd42000A0). But that too
> seems a bad aarch32 instruction. So, there might not be any aarch32
> instruction which will match to uprobe BRK instruction.

As I said above, even if it matches, we don't support uprobes for 32-bit
(caught by the subsequent test).

> Therefore, if I send a V3 by removing aacrh64 is_trap_insn(), would
> that be acceptable, or do you see any other issue with this patch
> series? If there is anything else, I would address that in V3 as well.

I think I have one minor comment on arch_uprobe_analyze_insn() and v3
should look ok.

-- 
Catalin

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH V2 6/6] arm64: Add uprobe support
  2016-09-27  7:48   ` Pratyush Anand
@ 2016-10-31 20:34     ` Catalin Marinas
  -1 siblings, 0 replies; 24+ messages in thread
From: Catalin Marinas @ 2016-10-31 20:34 UTC (permalink / raw)
  To: Pratyush Anand
  Cc: linux-arm-kernel, linux, srikar, vijaya.kumar, will.deacon,
	linux-kernel, oleg, dave.long, steve.capper, wcohen

On Tue, Sep 27, 2016 at 01:18:00PM +0530, Pratyush Anand wrote:
> +int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
> +		unsigned long addr)
> +{
> +	probe_opcode_t insn;
> +
> +	/* TODO: Currently we do not support AARCH32 instruction probing */
> +	if (test_bit(TIF_32BIT, &mm->context.flags))
> +		return -EINVAL;

This should be -ENOTSUPP.
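
That is, the check would then read something like this (sketch of the
requested change only):

	/* TODO: Currently we do not support AARCH32 instruction probing */
	if (test_bit(TIF_32BIT, &mm->context.flags))
		return -ENOTSUPP;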

-- 
Catalin

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2016-10-31 20:34 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-09-27  7:43 [PATCH V2 0/6] ARM64: Uprobe support added Pratyush Anand
2016-09-27  7:43 ` Pratyush Anand
2016-09-27  7:47 ` [PATCH V2 1/6] arm64: kprobe: protect/rename few definitions to be reused by uprobe Pratyush Anand
2016-09-27  7:47   ` Pratyush Anand
2016-09-27  7:47 ` [PATCH V2 2/6] arm64: kgdb_step_brk_fn: ignore other's exception Pratyush Anand
2016-09-27  7:47   ` Pratyush Anand
2016-09-27  7:47 ` [PATCH V2 3/6] arm64: Handle TRAP_TRACE for user mode as well Pratyush Anand
2016-09-27  7:47   ` Pratyush Anand
2016-09-27  7:47 ` [PATCH V2 4/6] arm64: Handle TRAP_BRKPT " Pratyush Anand
2016-09-27  7:47   ` Pratyush Anand
2016-09-27  7:47 ` [PATCH V2 5/6] arm64: introduce mm context flag to keep 32 bit task information Pratyush Anand
2016-09-27  7:47   ` Pratyush Anand
2016-09-27  7:48 ` [PATCH V2 6/6] arm64: Add uprobe support Pratyush Anand
2016-09-27  7:48   ` Pratyush Anand
2016-10-30 14:09   ` Catalin Marinas
2016-10-30 14:09     ` Catalin Marinas
2016-10-31  8:40     ` Pratyush Anand
2016-10-31  8:40       ` Pratyush Anand
2016-10-31 20:32       ` Catalin Marinas
2016-10-31 20:32         ` Catalin Marinas
2016-10-31 20:34   ` Catalin Marinas
2016-10-31 20:34     ` Catalin Marinas
2016-10-26  3:17 ` [PATCH V2 0/6] ARM64: Uprobe support added Pratyush Anand
2016-10-26  3:17   ` Pratyush Anand
