All of lore.kernel.org
* [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate()
@ 2017-09-13 21:20 Naveen N. Rao
  2017-09-13 21:20 ` [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed Naveen N. Rao
                   ` (5 more replies)
  0 siblings, 6 replies; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-13 21:20 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Kamalesh Babulal

1. This is only used in kprobes.c, so make it static.
2. Remove the unnecessary (ret == 0) comparison in the else clause.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/kprobes.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 367494dc67d9..c2a6ab38a67f 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -239,7 +239,7 @@ void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(arch_prepare_kretprobe);
 
-int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
+static int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
 {
 	int ret;
 	unsigned int insn = *p->ainsn.insn;
@@ -261,7 +261,7 @@ int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
 		 */
 		printk("Can't step on instruction %x\n", insn);
 		BUG();
-	} else if (ret == 0)
+	} else
 		/* This instruction can't be boosted */
 		p->ainsn.boostable = -1;
 
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed
  2017-09-13 21:20 [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Naveen N. Rao
@ 2017-09-13 21:20 ` Naveen N. Rao
  2017-09-13 23:53   ` Masami Hiramatsu
  2017-09-13 21:20 ` [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels Naveen N. Rao
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-13 21:20 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Kamalesh Babulal

Currently, we disable instruction emulation if emulate_step() fails for
any reason. However, such failures could be transient and specific to a
particular run. Instead, only disable instruction emulation if we have
never been able to emulate this instruction. If we had emulated it
successfully at least once, we single-step only this probe hit and
continue trying to emulate the instruction on subsequent probe hits.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/kprobes.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index c2a6ab38a67f..e848fe2c93fb 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -261,9 +261,19 @@ static int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
 		 */
 		printk("Can't step on instruction %x\n", insn);
 		BUG();
-	} else
-		/* This instruction can't be boosted */
-		p->ainsn.boostable = -1;
+	} else {
+		/*
+		 * If we haven't previously emulated this instruction, then it
+		 * can't be boosted. Note it down so we don't try to do so again.
+		 *
+		 * If, however, we had emulated this instruction in the past,
+		 * then this is just an error with the current run. We return
+		 * 0 so that this is now single-stepped, but continue to try
+		 * emulating it in subsequent probe hits.
+		 */
+		if (unlikely(p->ainsn.boostable != 1))
+			p->ainsn.boostable = -1;
+	}
 
 	return ret;
 }
-- 
2.14.1

* [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels
  2017-09-13 21:20 [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Naveen N. Rao
  2017-09-13 21:20 ` [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed Naveen N. Rao
@ 2017-09-13 21:20 ` Naveen N. Rao
  2017-09-14  0:36   ` Masami Hiramatsu
  2017-09-14  9:48   ` Kamalesh Babulal
  2017-09-13 21:20 ` [PATCH 4/5] powerpc/jprobes: Disable preemption when triggered through ftrace Naveen N. Rao
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-13 21:20 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Kamalesh Babulal

Kamalesh pointed out that we are getting the below call traces with
livepatched functions when we enable CONFIG_PREEMPT:

[  495.470721] BUG: using __this_cpu_read() in preemptible [00000000] code: cat/8394
[  495.471167] caller is is_current_kprobe_addr+0x30/0x90
[  495.471171] CPU: 4 PID: 8394 Comm: cat Tainted: G              K 4.13.0-rc7-nnr+ #95
[  495.471173] Call Trace:
[  495.471178] [c00000008fd9b960] [c0000000009f039c] dump_stack+0xec/0x160 (unreliable)
[  495.471184] [c00000008fd9b9a0] [c00000000059169c] check_preemption_disabled+0x15c/0x170
[  495.471187] [c00000008fd9ba30] [c000000000046460] is_current_kprobe_addr+0x30/0x90
[  495.471191] [c00000008fd9ba60] [c00000000004e9a0] ftrace_call+0x1c/0xb8
[  495.471195] [c00000008fd9bc30] [c000000000376fd8] seq_read+0x238/0x5c0
[  495.471199] [c00000008fd9bcd0] [c0000000003cfd78] proc_reg_read+0x88/0xd0
[  495.471203] [c00000008fd9bd00] [c00000000033e5d4] __vfs_read+0x44/0x1b0
[  495.471206] [c00000008fd9bd90] [c0000000003402ec] vfs_read+0xbc/0x1b0
[  495.471210] [c00000008fd9bde0] [c000000000342138] SyS_read+0x68/0x110
[  495.471214] [c00000008fd9be30] [c00000000000bc6c] system_call+0x58/0x6c

Commit c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for
jprobes") introduced a helper is_current_kprobe_addr() to help determine
if the current function has been livepatched or if it has a jprobe
installed, both of which modify the NIP.

In the case of a jprobe, kprobe_ftrace_handler() disables pre-emption
before calling into setjmp_pre_handler() which returns without disabling
pre-emption. This is done to ensure that the jprobe handler won't
disappear beneath us if the jprobe is unregistered between the
setjmp_pre_handler() and the subsequent longjmp_break_handler() called
from the jprobe handler. Due to this, we can use __this_cpu_read() in
is_current_kprobe_addr() with the pre-emption check as we know that
pre-emption will be disabled.

However, if this function has been livepatched, we are still doing this
check and when we do so, pre-emption won't necessarily be disabled. This
results in the call trace shown above.

Fix this by having is_current_kprobe_addr() consult current_kprobe only
when pre-emption is disabled. And since the access is now guarded by a
pre-emption check, we can instead use raw_cpu_read() to get the
current_kprobe value, skipping the check done by __this_cpu_read().

Fixes: c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for jprobes")
Reported-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/kprobes.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index e848fe2c93fb..db40b13fd3d1 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -45,8 +45,12 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
 
 int is_current_kprobe_addr(unsigned long addr)
 {
-	struct kprobe *p = kprobe_running();
-	return (p && (unsigned long)p->addr == addr) ? 1 : 0;
+	if (!preemptible()) {
+		struct kprobe *p = raw_cpu_read(current_kprobe);
+		return (p && (unsigned long)p->addr == addr) ? 1 : 0;
+	}
+
+	return 0;
 }
 
 bool arch_within_kprobe_blacklist(unsigned long addr)
-- 
2.14.1

* [PATCH 4/5] powerpc/jprobes: Disable preemption when triggered through ftrace
  2017-09-13 21:20 [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Naveen N. Rao
  2017-09-13 21:20 ` [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed Naveen N. Rao
  2017-09-13 21:20 ` [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels Naveen N. Rao
@ 2017-09-13 21:20 ` Naveen N. Rao
  2017-09-14  0:05   ` Masami Hiramatsu
  2017-09-13 21:20 ` [PATCH 5/5] powerpc/jprobes: Validate break handler invocation as being due to a jprobe_return() Naveen N. Rao
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-13 21:20 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Kamalesh Babulal

KPROBES_SANITY_TEST throws the below splat when CONFIG_PREEMPT is
enabled:

[    3.140410] Kprobe smoke test: started
[    3.149680] DEBUG_LOCKS_WARN_ON(val > preempt_count())
[    3.149684] ------------[ cut here ]------------
[    3.149695] WARNING: CPU: 19 PID: 1 at kernel/sched/core.c:3094 preempt_count_sub+0xcc/0x140
[    3.149699] Modules linked in:
[    3.149705] CPU: 19 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc7-nnr+ #97
[    3.149709] task: c0000000fea80000 task.stack: c0000000feb00000
[    3.149713] NIP:  c00000000011d3dc LR: c00000000011d3d8 CTR: c000000000a090d0
[    3.149718] REGS: c0000000feb03400 TRAP: 0700   Not tainted  (4.13.0-rc7-nnr+)
[    3.149722] MSR:  8000000000021033 <SF,ME,IR,DR,RI,LE>  CR: 28000282  XER: 00000000
[    3.149732] CFAR: c00000000015aa18 SOFTE: 0
<snip>
[    3.149786] NIP [c00000000011d3dc] preempt_count_sub+0xcc/0x140
[    3.149790] LR [c00000000011d3d8] preempt_count_sub+0xc8/0x140
[    3.149794] Call Trace:
[    3.149798] [c0000000feb03680] [c00000000011d3d8] preempt_count_sub+0xc8/0x140 (unreliable)
[    3.149804] [c0000000feb036e0] [c000000000046198] kprobe_handler+0x228/0x4b0
[    3.149810] [c0000000feb03750] [c0000000000269c8] program_check_exception+0x58/0x3b0
[    3.149816] [c0000000feb037c0] [c00000000000903c] program_check_common+0x16c/0x170
[    3.149822] --- interrupt: 0 at kprobe_target+0x8/0x20
                   LR = init_test_probes+0x248/0x7d0
[    3.149829] [c0000000feb03ab0] [c000000000e4f048] kp+0x0/0x80 (unreliable)
[    3.149835] [c0000000feb03b10] [c00000000004ea60] livepatch_handler+0x38/0x74
[    3.149841] [c0000000feb03ba0] [c000000000d0de54] init_kprobes+0x1d8/0x208
[    3.149846] [c0000000feb03c40] [c00000000000daa8] do_one_initcall+0x68/0x1d0
[    3.149852] [c0000000feb03d00] [c000000000ce44f0] kernel_init_freeable+0x298/0x374
[    3.149857] [c0000000feb03dc0] [c00000000000dd84] kernel_init+0x24/0x160
[    3.149863] [c0000000feb03e30] [c00000000000bfec] ret_from_kernel_thread+0x5c/0x70
[    3.149867] Instruction dump:
[    3.149871] 419effdc 3d22001b 39299240 81290000 2f890000 409effc8 3c82ffcb 3c62ffcb
[    3.149879] 3884bc68 3863bc18 4803d5fd 60000000 <0fe00000> 4bffffa8 60000000 60000000
[    3.149890] ---[ end trace 432dd46b4ce3d29f ]---
[    3.166003] Kprobe smoke test: passed successfully

The issue is that we aren't disabling preemption in
kprobe_ftrace_handler(). Disable it.

Fixes: ead514d5fb30a0 ("powerpc/kprobes: Add support for KPROBES_ON_FTRACE")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/kprobes-ftrace.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes-ftrace.c b/arch/powerpc/kernel/kprobes-ftrace.c
index 6c089d9757c9..2d81404f818c 100644
--- a/arch/powerpc/kernel/kprobes-ftrace.c
+++ b/arch/powerpc/kernel/kprobes-ftrace.c
@@ -65,6 +65,7 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
 	/* Disable irq for emulating a breakpoint and avoiding preempt */
 	local_irq_save(flags);
 	hard_irq_disable();
+	preempt_disable();
 
 	p = get_kprobe((kprobe_opcode_t *)nip);
 	if (unlikely(!p) || kprobe_disabled(p))
@@ -86,12 +87,18 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
 		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
 		if (!p->pre_handler || !p->pre_handler(p, regs))
 			__skip_singlestep(p, regs, kcb, orig_nip);
-		/*
-		 * If pre_handler returns !0, it sets regs->nip and
-		 * resets current kprobe.
-		 */
+		else {
+			/*
+			 * If pre_handler returns !0, it sets regs->nip and
+			 * resets current kprobe. In this case, we still need
+			 * to restore irq, but not preemption.
+			 */
+			local_irq_restore(flags);
+			return;
+		}
 	}
 end:
+	preempt_enable_no_resched();
 	local_irq_restore(flags);
 }
 NOKPROBE_SYMBOL(kprobe_ftrace_handler);
-- 
2.14.1

* [PATCH 5/5] powerpc/jprobes: Validate break handler invocation as being due to a jprobe_return()
  2017-09-13 21:20 [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Naveen N. Rao
                   ` (2 preceding siblings ...)
  2017-09-13 21:20 ` [PATCH 4/5] powerpc/jprobes: Disable preemption when triggered through ftrace Naveen N. Rao
@ 2017-09-13 21:20 ` Naveen N. Rao
  2017-09-14  0:38   ` Masami Hiramatsu
  2017-09-13 23:18 ` [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Masami Hiramatsu
  2017-09-14  6:16 ` Kamalesh Babulal
  5 siblings, 1 reply; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-13 21:20 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, Ananth N Mavinakayanahalli, Masami Hiramatsu,
	Kamalesh Babulal

Fix a circa-2005 FIXME by implementing a check to ensure that we
actually entered the jprobe break handler due to the trap in
jprobe_return().

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
This is v2 of the patch posted previously (*) to change the WARN() to a 
pr_debug() as suggested by Masami.
(*) https://patchwork.ozlabs.org/patch/762670/

- Naveen

 arch/powerpc/kernel/kprobes.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index db40b13fd3d1..5b26d5202273 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -648,24 +648,22 @@ NOKPROBE_SYMBOL(setjmp_pre_handler);
 
 void __used jprobe_return(void)
 {
-	asm volatile("trap" ::: "memory");
+	asm volatile("jprobe_return_trap:\n"
+		     "trap\n"
+		     ::: "memory");
 }
 NOKPROBE_SYMBOL(jprobe_return);
 
-static void __used jprobe_return_end(void)
-{
-}
-NOKPROBE_SYMBOL(jprobe_return_end);
-
 int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 
-	/*
-	 * FIXME - we should ideally be validating that we got here 'cos
-	 * of the "trap" in jprobe_return() above, before restoring the
-	 * saved regs...
-	 */
+	if (regs->nip != ppc_kallsyms_lookup_name("jprobe_return_trap")) {
+		pr_debug("longjmp_break_handler NIP (0x%lx) does not match jprobe_return_trap (0x%lx)\n",
+				regs->nip, ppc_kallsyms_lookup_name("jprobe_return_trap"));
+		return 0;
+	}
+
 	memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
 	/* It's OK to start function graph tracing again */
 	unpause_graph_tracing();
-- 
2.14.1

* Re: [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate()
  2017-09-13 21:20 [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Naveen N. Rao
                   ` (3 preceding siblings ...)
  2017-09-13 21:20 ` [PATCH 5/5] powerpc/jprobes: Validate break handler invocation as being due to a jprobe_return() Naveen N. Rao
@ 2017-09-13 23:18 ` Masami Hiramatsu
  2017-09-14  6:16 ` Kamalesh Babulal
  5 siblings, 0 replies; 20+ messages in thread
From: Masami Hiramatsu @ 2017-09-13 23:18 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Masami Hiramatsu, Kamalesh Babulal

On Thu, 14 Sep 2017 02:50:32 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> 1. This is only used in kprobes.c, so make it static.
> 2. Remove the un-necessary (ret == 0) comparison in the else clause.
> 
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

> ---
>  arch/powerpc/kernel/kprobes.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 367494dc67d9..c2a6ab38a67f 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -239,7 +239,7 @@ void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
>  }
>  NOKPROBE_SYMBOL(arch_prepare_kretprobe);
>  
> -int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
> +static int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
>  {
>  	int ret;
>  	unsigned int insn = *p->ainsn.insn;
> @@ -261,7 +261,7 @@ int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
>  		 */
>  		printk("Can't step on instruction %x\n", insn);
>  		BUG();
> -	} else if (ret == 0)
> +	} else
>  		/* This instruction can't be boosted */
>  		p->ainsn.boostable = -1;
>  
> -- 
> 2.14.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed
  2017-09-13 21:20 ` [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed Naveen N. Rao
@ 2017-09-13 23:53   ` Masami Hiramatsu
  2017-09-14  6:38     ` Naveen N. Rao
  0 siblings, 1 reply; 20+ messages in thread
From: Masami Hiramatsu @ 2017-09-13 23:53 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Masami Hiramatsu, Kamalesh Babulal

On Thu, 14 Sep 2017 02:50:33 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> Currently, we disable instruction emulation if emulate_step() fails for
> any reason. However, such failures could be transient and specific to a
> particular run. Instead, only disable instruction emulation if we have
> never been able to emulate this. If we had emulated this instruction
> successfully at least once, then we single step only this probe hit and
> continue to try emulating the instruction in subsequent probe hits.

Hmm, does this mean that whether the instruction is emulatable depends
on the context? What kind of situation do you have in mind?

Thank you,

> 
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  arch/powerpc/kernel/kprobes.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index c2a6ab38a67f..e848fe2c93fb 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -261,9 +261,19 @@ static int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
>  		 */
>  		printk("Can't step on instruction %x\n", insn);
>  		BUG();
> -	} else
> -		/* This instruction can't be boosted */
> -		p->ainsn.boostable = -1;
> +	} else {
> +		/*
> +		 * If we haven't previously emulated this instruction, then it
> +		 * can't be boosted. Note it down so we don't try to do so again.
> +		 *
> +		 * If, however, we had emulated this instruction in the past,
> +		 * then this is just an error with the current run. We return
> +		 * 0 so that this is now single-stepped, but continue to try
> +		 * emulating it in subsequent probe hits.
> +		 */
> +		if (unlikely(p->ainsn.boostable != 1))
> +			p->ainsn.boostable = -1;
> +	}
>  
>  	return ret;
>  }
> -- 
> 2.14.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH 4/5] powerpc/jprobes: Disable preemption when triggered through ftrace
  2017-09-13 21:20 ` [PATCH 4/5] powerpc/jprobes: Disable preemption when triggered through ftrace Naveen N. Rao
@ 2017-09-14  0:05   ` Masami Hiramatsu
  2017-09-14 10:25     ` Naveen N. Rao
  0 siblings, 1 reply; 20+ messages in thread
From: Masami Hiramatsu @ 2017-09-14  0:05 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Masami Hiramatsu, Kamalesh Babulal

On Thu, 14 Sep 2017 02:50:35 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> KPROBES_SANITY_TEST throws the below splat when CONFIG_PREEMPT is
> enabled:
> 
> [    3.140410] Kprobe smoke test: started
> [    3.149680] DEBUG_LOCKS_WARN_ON(val > preempt_count())
> [    3.149684] ------------[ cut here ]------------
> [    3.149695] WARNING: CPU: 19 PID: 1 at kernel/sched/core.c:3094 preempt_count_sub+0xcc/0x140
> [    3.149699] Modules linked in:
> [    3.149705] CPU: 19 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc7-nnr+ #97
> [    3.149709] task: c0000000fea80000 task.stack: c0000000feb00000
> [    3.149713] NIP:  c00000000011d3dc LR: c00000000011d3d8 CTR: c000000000a090d0
> [    3.149718] REGS: c0000000feb03400 TRAP: 0700   Not tainted  (4.13.0-rc7-nnr+)
> [    3.149722] MSR:  8000000000021033 <SF,ME,IR,DR,RI,LE>  CR: 28000282  XER: 00000000
> [    3.149732] CFAR: c00000000015aa18 SOFTE: 0
> <snip>
> [    3.149786] NIP [c00000000011d3dc] preempt_count_sub+0xcc/0x140
> [    3.149790] LR [c00000000011d3d8] preempt_count_sub+0xc8/0x140
> [    3.149794] Call Trace:
> [    3.149798] [c0000000feb03680] [c00000000011d3d8] preempt_count_sub+0xc8/0x140 (unreliable)
> [    3.149804] [c0000000feb036e0] [c000000000046198] kprobe_handler+0x228/0x4b0
> [    3.149810] [c0000000feb03750] [c0000000000269c8] program_check_exception+0x58/0x3b0
> [    3.149816] [c0000000feb037c0] [c00000000000903c] program_check_common+0x16c/0x170
> [    3.149822] --- interrupt: 0 at kprobe_target+0x8/0x20
>                    LR = init_test_probes+0x248/0x7d0
> [    3.149829] [c0000000feb03ab0] [c000000000e4f048] kp+0x0/0x80 (unreliable)
> [    3.149835] [c0000000feb03b10] [c00000000004ea60] livepatch_handler+0x38/0x74
> [    3.149841] [c0000000feb03ba0] [c000000000d0de54] init_kprobes+0x1d8/0x208
> [    3.149846] [c0000000feb03c40] [c00000000000daa8] do_one_initcall+0x68/0x1d0
> [    3.149852] [c0000000feb03d00] [c000000000ce44f0] kernel_init_freeable+0x298/0x374
> [    3.149857] [c0000000feb03dc0] [c00000000000dd84] kernel_init+0x24/0x160
> [    3.149863] [c0000000feb03e30] [c00000000000bfec] ret_from_kernel_thread+0x5c/0x70
> [    3.149867] Instruction dump:
> [    3.149871] 419effdc 3d22001b 39299240 81290000 2f890000 409effc8 3c82ffcb 3c62ffcb
> [    3.149879] 3884bc68 3863bc18 4803d5fd 60000000 <0fe00000> 4bffffa8 60000000 60000000
> [    3.149890] ---[ end trace 432dd46b4ce3d29f ]---
> [    3.166003] Kprobe smoke test: passed successfully
> 
> The issue is that we aren't disabling preemption in
> kprobe_ftrace_handler(). Disable it.

Oops, right! A similar patch may be needed for x86 too.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

> Fixes: ead514d5fb30a0 ("powerpc/kprobes: Add support for KPROBES_ON_FTRACE")
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  arch/powerpc/kernel/kprobes-ftrace.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes-ftrace.c b/arch/powerpc/kernel/kprobes-ftrace.c
> index 6c089d9757c9..2d81404f818c 100644
> --- a/arch/powerpc/kernel/kprobes-ftrace.c
> +++ b/arch/powerpc/kernel/kprobes-ftrace.c
> @@ -65,6 +65,7 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
>  	/* Disable irq for emulating a breakpoint and avoiding preempt */
>  	local_irq_save(flags);
>  	hard_irq_disable();
> +	preempt_disable();
>  
>  	p = get_kprobe((kprobe_opcode_t *)nip);
>  	if (unlikely(!p) || kprobe_disabled(p))
> @@ -86,12 +87,18 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
>  		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
>  		if (!p->pre_handler || !p->pre_handler(p, regs))
>  			__skip_singlestep(p, regs, kcb, orig_nip);
> -		/*
> -		 * If pre_handler returns !0, it sets regs->nip and
> -		 * resets current kprobe.
> -		 */
> +		else {
> +			/*
> +			 * If pre_handler returns !0, it sets regs->nip and
> +			 * resets current kprobe. In this case, we still need
> +			 * to restore irq, but not preemption.
> +			 */
> +			local_irq_restore(flags);
> +			return;
> +		}
>  	}
>  end:
> +	preempt_enable_no_resched();
>  	local_irq_restore(flags);
>  }
>  NOKPROBE_SYMBOL(kprobe_ftrace_handler);
> -- 
> 2.14.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels
  2017-09-13 21:20 ` [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels Naveen N. Rao
@ 2017-09-14  0:36   ` Masami Hiramatsu
  2017-09-14  6:47     ` Naveen N. Rao
  2017-09-14  9:48   ` Kamalesh Babulal
  1 sibling, 1 reply; 20+ messages in thread
From: Masami Hiramatsu @ 2017-09-14  0:36 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Masami Hiramatsu, Kamalesh Babulal

On Thu, 14 Sep 2017 02:50:34 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> Kamalesh pointed out that we are getting the below call traces with
> livepatched functions when we enable CONFIG_PREEMPT:
> 
> [  495.470721] BUG: using __this_cpu_read() in preemptible [00000000] code: cat/8394
> [  495.471167] caller is is_current_kprobe_addr+0x30/0x90
> [  495.471171] CPU: 4 PID: 8394 Comm: cat Tainted: G              K 4.13.0-rc7-nnr+ #95
> [  495.471173] Call Trace:
> [  495.471178] [c00000008fd9b960] [c0000000009f039c] dump_stack+0xec/0x160 (unreliable)
> [  495.471184] [c00000008fd9b9a0] [c00000000059169c] check_preemption_disabled+0x15c/0x170
> [  495.471187] [c00000008fd9ba30] [c000000000046460] is_current_kprobe_addr+0x30/0x90
> [  495.471191] [c00000008fd9ba60] [c00000000004e9a0] ftrace_call+0x1c/0xb8
> [  495.471195] [c00000008fd9bc30] [c000000000376fd8] seq_read+0x238/0x5c0
> [  495.471199] [c00000008fd9bcd0] [c0000000003cfd78] proc_reg_read+0x88/0xd0
> [  495.471203] [c00000008fd9bd00] [c00000000033e5d4] __vfs_read+0x44/0x1b0
> [  495.471206] [c00000008fd9bd90] [c0000000003402ec] vfs_read+0xbc/0x1b0
> [  495.471210] [c00000008fd9bde0] [c000000000342138] SyS_read+0x68/0x110
> [  495.471214] [c00000008fd9be30] [c00000000000bc6c] system_call+0x58/0x6c
> 
> Commit c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for
> jprobes") introduced a helper is_current_kprobe_addr() to help determine
> if the current function has been livepatched or if it has a jprobe
> installed, both of which modify the NIP.
> 
> In the case of a jprobe, kprobe_ftrace_handler() disables pre-emption
> before calling into setjmp_pre_handler() which returns without disabling
> pre-emption. This is done to ensure that the jprobe handler won't
> disappear beneath us if the jprobe is unregistered between the
> setjmp_pre_handler() and the subsequent longjmp_break_handler() called
> from the jprobe handler. Due to this, we can use __this_cpu_read() in
> is_current_kprobe_addr() with the pre-emption check as we know that
> pre-emption will be disabled.
> 
> However, if this function has been livepatched, we are still doing this
> check and when we do so, pre-emption won't necessarily be disabled. This
> results in the call trace shown above.
> 
> Fix this by only invoking is_current_kprobe_addr() when pre-emption is
> disabled. And since we now guard this within a pre-emption check, we can
> instead use raw_cpu_read() to get the current_kprobe value skipping the
> check done by __this_cpu_read().

Hmm, can you disable preemption temporarily in this function and read it?

Thank you,

> 
> Fixes: c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for jprobes")
> Reported-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  arch/powerpc/kernel/kprobes.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index e848fe2c93fb..db40b13fd3d1 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -45,8 +45,12 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
>  
>  int is_current_kprobe_addr(unsigned long addr)
>  {
> -	struct kprobe *p = kprobe_running();
> -	return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> +	if (!preemptible()) {
> +		struct kprobe *p = raw_cpu_read(current_kprobe);
> +		return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> +	}
> +
> +	return 0;
>  }
>  
>  bool arch_within_kprobe_blacklist(unsigned long addr)
> -- 
> 2.14.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH 5/5] powerpc/jprobes: Validate break handler invocation as being due to a jprobe_return()
  2017-09-13 21:20 ` [PATCH 5/5] powerpc/jprobes: Validate break handler invocation as being due to a jprobe_return() Naveen N. Rao
@ 2017-09-14  0:38   ` Masami Hiramatsu
  0 siblings, 0 replies; 20+ messages in thread
From: Masami Hiramatsu @ 2017-09-14  0:38 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Masami Hiramatsu, Kamalesh Babulal

On Thu, 14 Sep 2017 02:50:36 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> Fix a circa 2005 FIXME by implementing a check to ensure that we
> actually got into the jprobe break handler() due to the trap in
> jprobe_return().
> 

Looks good to me.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
> This is v2 of the patch posted previously (*) to change the WARN() to a 
> pr_debug() as suggested by Masami.
> (*) https://patchwork.ozlabs.org/patch/762670/
> 
> - Naveen
> 
>  arch/powerpc/kernel/kprobes.c | 20 +++++++++-----------
>  1 file changed, 9 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index db40b13fd3d1..5b26d5202273 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -648,24 +648,22 @@ NOKPROBE_SYMBOL(setjmp_pre_handler);
>  
>  void __used jprobe_return(void)
>  {
> -	asm volatile("trap" ::: "memory");
> +	asm volatile("jprobe_return_trap:\n"
> +		     "trap\n"
> +		     ::: "memory");
>  }
>  NOKPROBE_SYMBOL(jprobe_return);
>  
> -static void __used jprobe_return_end(void)
> -{
> -}
> -NOKPROBE_SYMBOL(jprobe_return_end);
> -
>  int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
>  {
>  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>  
> -	/*
> -	 * FIXME - we should ideally be validating that we got here 'cos
> -	 * of the "trap" in jprobe_return() above, before restoring the
> -	 * saved regs...
> -	 */
> +	if (regs->nip != ppc_kallsyms_lookup_name("jprobe_return_trap")) {
> +		pr_debug("longjmp_break_handler NIP (0x%lx) does not match jprobe_return_trap (0x%lx)\n",
> +				regs->nip, ppc_kallsyms_lookup_name("jprobe_return_trap"));
> +		return 0;
> +	}
> +
>  	memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
>  	/* It's OK to start function graph tracing again */
>  	unpause_graph_tracing();
> -- 
> 2.14.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate()
  2017-09-13 21:20 [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Naveen N. Rao
                   ` (4 preceding siblings ...)
  2017-09-13 23:18 ` [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Masami Hiramatsu
@ 2017-09-14  6:16 ` Kamalesh Babulal
  5 siblings, 0 replies; 20+ messages in thread
From: Kamalesh Babulal @ 2017-09-14  6:16 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Masami Hiramatsu

On Thursday 14 September 2017 02:50 AM, Naveen N. Rao wrote:
> 1. This is only used in kprobes.c, so make it static.
> 2. Remove the un-necessary (ret == 0) comparison in the else clause.
>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>

> ---
>  arch/powerpc/kernel/kprobes.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 367494dc67d9..c2a6ab38a67f 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -239,7 +239,7 @@ void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
>  }
>  NOKPROBE_SYMBOL(arch_prepare_kretprobe);
>
> -int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
> +static int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
>  {
>  	int ret;
>  	unsigned int insn = *p->ainsn.insn;
> @@ -261,7 +261,7 @@ int try_to_emulate(struct kprobe *p, struct pt_regs *regs)
>  		 */
>  		printk("Can't step on instruction %x\n", insn);
>  		BUG();
> -	} else if (ret == 0)
> +	} else
>  		/* This instruction can't be boosted */
>  		p->ainsn.boostable = -1;
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed
  2017-09-13 23:53   ` Masami Hiramatsu
@ 2017-09-14  6:38     ` Naveen N. Rao
  2017-09-14  9:45       ` Masami Hiramatsu
  0 siblings, 1 reply; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-14  6:38 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Kamalesh Babulal

On 2017/09/13 04:53PM, Masami Hiramatsu wrote:
> On Thu, 14 Sep 2017 02:50:33 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > Currently, we disable instruction emulation if emulate_step() fails for
> > any reason. However, such failures could be transient and specific to a
> > particular run. Instead, only disable instruction emulation if we have
> > never been able to emulate this. If we had emulated this instruction
> > successfully at least once, then we single step only this probe hit and
> > continue to try emulating the instruction in subsequent probe hits.
> 
> Hmm, would this mean that the instruction is emulatable or not depends
> on context? What kind of situation is considerable?

Yes, as an example, a load/store instruction can cause exceptions 
depending on the address. In some of those cases, we will have to single 
step the instruction, but we will be able to emulate in most scenarios.
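To make the retry policy concrete, here is a minimal userspace sketch of the logic the patch introduces. The names and the `boostable` encoding are illustrative stand-ins for `p->ainsn.boostable`, not the exact kernel code:

```c
/* Per-probe state mirroring p->ainsn.boostable (illustrative encoding):
 *   0  = not yet known whether the instruction can be emulated
 *   1  = emulation has succeeded at least once
 *  -1  = emulation has never succeeded; stop attempting it
 */
struct probe_state { int boostable; };

/* Handle one probe hit given this run's emulate_step() result
 * (emulated != 0 means the instruction was emulated successfully).
 * Returns 1 if the hit is fully handled, 0 if we must single-step. */
int handle_probe_hit(struct probe_state *p, int emulated)
{
	if (emulated) {
		p->boostable = 1;	/* remember that emulation has worked */
		return 1;
	}
	if (p->boostable == 0)
		p->boostable = -1;	/* never worked: give up on emulation */
	return 0;			/* single-step only this hit */
}

/* Whether the next hit should attempt emulation at all. */
int should_try_emulation(const struct probe_state *p)
{
	return p->boostable != -1;
}
```

So a transient failure (say, a load that faults this time) costs one single-step but does not permanently disable emulation for that probe.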

Thanks for the review!
- Naveen

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels
  2017-09-14  0:36   ` Masami Hiramatsu
@ 2017-09-14  6:47     ` Naveen N. Rao
  2017-09-14 10:10       ` Masami Hiramatsu
  0 siblings, 1 reply; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-14  6:47 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Kamalesh Babulal

On 2017/09/13 05:36PM, Masami Hiramatsu wrote:
> On Thu, 14 Sep 2017 02:50:34 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > Kamalesh pointed out that we are getting the below call traces with
> > livepatched functions when we enable CONFIG_PREEMPT:
> > 
> > [  495.470721] BUG: using __this_cpu_read() in preemptible [00000000] code: cat/8394
> > [  495.471167] caller is is_current_kprobe_addr+0x30/0x90
> > [  495.471171] CPU: 4 PID: 8394 Comm: cat Tainted: G              K 4.13.0-rc7-nnr+ #95
> > [  495.471173] Call Trace:
> > [  495.471178] [c00000008fd9b960] [c0000000009f039c] dump_stack+0xec/0x160 (unreliable)
> > [  495.471184] [c00000008fd9b9a0] [c00000000059169c] check_preemption_disabled+0x15c/0x170
> > [  495.471187] [c00000008fd9ba30] [c000000000046460] is_current_kprobe_addr+0x30/0x90
> > [  495.471191] [c00000008fd9ba60] [c00000000004e9a0] ftrace_call+0x1c/0xb8
> > [  495.471195] [c00000008fd9bc30] [c000000000376fd8] seq_read+0x238/0x5c0
> > [  495.471199] [c00000008fd9bcd0] [c0000000003cfd78] proc_reg_read+0x88/0xd0
> > [  495.471203] [c00000008fd9bd00] [c00000000033e5d4] __vfs_read+0x44/0x1b0
> > [  495.471206] [c00000008fd9bd90] [c0000000003402ec] vfs_read+0xbc/0x1b0
> > [  495.471210] [c00000008fd9bde0] [c000000000342138] SyS_read+0x68/0x110
> > [  495.471214] [c00000008fd9be30] [c00000000000bc6c] system_call+0x58/0x6c
> > 
> > Commit c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for
> > jprobes") introduced a helper is_current_kprobe_addr() to help determine
> > if the current function has been livepatched or if it has a jprobe
> > installed, both of which modify the NIP.
> > 
> > In the case of a jprobe, kprobe_ftrace_handler() disables pre-emption
> > before calling into setjmp_pre_handler() which returns without disabling
> > pre-emption. This is done to ensure that the jprobe handler won't
> > disappear beneath us if the jprobe is unregistered between the
> > setjmp_pre_handler() and the subsequent longjmp_break_handler() called
> > from the jprobe handler. Due to this, we can use __this_cpu_read() in
> > is_current_kprobe_addr() with the pre-emption check as we know that
> > pre-emption will be disabled.
> > 
> > However, if this function has been livepatched, we are still doing this
> > check and when we do so, pre-emption won't necessarily be disabled. This
> > results in the call trace shown above.
> > 
> > Fix this by only invoking is_current_kprobe_addr() when pre-emption is
> > disabled. And since we now guard this within a pre-emption check, we can
> > instead use raw_cpu_read() to get the current_kprobe value skipping the
> > check done by __this_cpu_read().
> 
> Hmm, can you disable preempt temporary at this function and read it?

Yes, but I felt this approach is more optimal specifically for live 
patching. We don't normally expect preemption to be disabled while 
handling a livepatched function, so the simple 'if (!preemptible())'
check helps us bypass looking at current_kprobe.

The check also ensures we can use raw_cpu_read() safely in other 
scenarios. Do you see any other concerns with this approach?
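For illustration, the guard can be sketched in userspace like this. The stubs stand in for the kernel's preempt count and per-CPU `current_kprobe`; this mirrors the shape of the patched function, not the kernel implementation itself:

```c
struct kprobe { unsigned long addr; };

/* Stand-ins for kernel state (illustrative only). */
static int preempt_count_stub;
static struct kprobe *current_kprobe_stub;

static int preemptible_stub(void)
{
	return preempt_count_stub == 0;
}

/* Only when preemption is known to be off is the per-CPU current_kprobe
 * trustworthy; in a preemptible context we simply report "no match"
 * instead of tripping the __this_cpu_read() preemption check. */
int is_current_kprobe_addr_sketch(unsigned long addr)
{
	if (!preemptible_stub()) {
		struct kprobe *p = current_kprobe_stub;	/* raw read, no check */
		return (p && p->addr == addr) ? 1 : 0;
	}
	return 0;
}
```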

Thanks,
Naveen

> 
> Thank you,
> 
> > 
> > Fixes: c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for jprobes")
> > Reported-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
> > Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> > ---
> >  arch/powerpc/kernel/kprobes.c | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> > index e848fe2c93fb..db40b13fd3d1 100644
> > --- a/arch/powerpc/kernel/kprobes.c
> > +++ b/arch/powerpc/kernel/kprobes.c
> > @@ -45,8 +45,12 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
> >  
> >  int is_current_kprobe_addr(unsigned long addr)
> >  {
> > -	struct kprobe *p = kprobe_running();
> > -	return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> > +	if (!preemptible()) {
> > +		struct kprobe *p = raw_cpu_read(current_kprobe);
> > +		return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> > +	}
> > +
> > +	return 0;
> >  }
> >  
> >  bool arch_within_kprobe_blacklist(unsigned long addr)
> > -- 
> > 2.14.1
> > 
> 
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed
  2017-09-14  6:38     ` Naveen N. Rao
@ 2017-09-14  9:45       ` Masami Hiramatsu
  2017-09-14 10:03         ` Naveen N. Rao
  0 siblings, 1 reply; 20+ messages in thread
From: Masami Hiramatsu @ 2017-09-14  9:45 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Kamalesh Babulal

On Thu, 14 Sep 2017 12:08:07 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> On 2017/09/13 04:53PM, Masami Hiramatsu wrote:
> > On Thu, 14 Sep 2017 02:50:33 +0530
> > "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> > 
> > > Currently, we disable instruction emulation if emulate_step() fails for
> > > any reason. However, such failures could be transient and specific to a
> > > particular run. Instead, only disable instruction emulation if we have
> > > never been able to emulate this. If we had emulated this instruction
> > > successfully at least once, then we single step only this probe hit and
> > > continue to try emulating the instruction in subsequent probe hits.
> > 
> > Hmm, would this mean that the instruction is emulatable or not depends
> > on context? What kind of situation is considerable?
> 
> Yes, as an example, a load/store instruction can cause exceptions 
> depending on the address. In some of those cases, we will have to single 
> step the instruction, but we will be able to emulate in most scenarios.

OK, I got it.
Could you add this example as comment in the code so that readers can
easily understand?

Thank you,

> 
> Thanks for the review!
> - Naveen
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels
  2017-09-13 21:20 ` [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels Naveen N. Rao
  2017-09-14  0:36   ` Masami Hiramatsu
@ 2017-09-14  9:48   ` Kamalesh Babulal
  1 sibling, 0 replies; 20+ messages in thread
From: Kamalesh Babulal @ 2017-09-14  9:48 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Masami Hiramatsu

On Thursday 14 September 2017 02:50 AM, Naveen N. Rao wrote:
> Kamalesh pointed out that we are getting the below call traces with
> livepatched functions when we enable CONFIG_PREEMPT:
>
> [  495.470721] BUG: using __this_cpu_read() in preemptible [00000000] code: cat/8394
> [  495.471167] caller is is_current_kprobe_addr+0x30/0x90
> [  495.471171] CPU: 4 PID: 8394 Comm: cat Tainted: G              K 4.13.0-rc7-nnr+ #95
> [  495.471173] Call Trace:
> [  495.471178] [c00000008fd9b960] [c0000000009f039c] dump_stack+0xec/0x160 (unreliable)
> [  495.471184] [c00000008fd9b9a0] [c00000000059169c] check_preemption_disabled+0x15c/0x170
> [  495.471187] [c00000008fd9ba30] [c000000000046460] is_current_kprobe_addr+0x30/0x90
> [  495.471191] [c00000008fd9ba60] [c00000000004e9a0] ftrace_call+0x1c/0xb8
> [  495.471195] [c00000008fd9bc30] [c000000000376fd8] seq_read+0x238/0x5c0
> [  495.471199] [c00000008fd9bcd0] [c0000000003cfd78] proc_reg_read+0x88/0xd0
> [  495.471203] [c00000008fd9bd00] [c00000000033e5d4] __vfs_read+0x44/0x1b0
> [  495.471206] [c00000008fd9bd90] [c0000000003402ec] vfs_read+0xbc/0x1b0
> [  495.471210] [c00000008fd9bde0] [c000000000342138] SyS_read+0x68/0x110
> [  495.471214] [c00000008fd9be30] [c00000000000bc6c] system_call+0x58/0x6c
>
> Commit c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for
> jprobes") introduced a helper is_current_kprobe_addr() to help determine
> if the current function has been livepatched or if it has a jprobe
> installed, both of which modify the NIP.
>
> In the case of a jprobe, kprobe_ftrace_handler() disables pre-emption
> before calling into setjmp_pre_handler() which returns without disabling
> pre-emption. This is done to ensure that the jprobe handler won't
> disappear beneath us if the jprobe is unregistered between the
> setjmp_pre_handler() and the subsequent longjmp_break_handler() called
> from the jprobe handler. Due to this, we can use __this_cpu_read() in
> is_current_kprobe_addr() with the pre-emption check as we know that
> pre-emption will be disabled.
>
> However, if this function has been livepatched, we are still doing this
> check and when we do so, pre-emption won't necessarily be disabled. This
> results in the call trace shown above.
>
> Fix this by only invoking is_current_kprobe_addr() when pre-emption is
> disabled. And since we now guard this within a pre-emption check, we can
> instead use raw_cpu_read() to get the current_kprobe value skipping the
> check done by __this_cpu_read().
>
> Fixes: c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for jprobes")
> Reported-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Thanks, the call traces are not seen anymore with the patch.

Tested-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>



> ---
>  arch/powerpc/kernel/kprobes.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index e848fe2c93fb..db40b13fd3d1 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -45,8 +45,12 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
>
>  int is_current_kprobe_addr(unsigned long addr)
>  {
> -	struct kprobe *p = kprobe_running();
> -	return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> +	if (!preemptible()) {
> +		struct kprobe *p = raw_cpu_read(current_kprobe);
> +		return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> +	}
> +
> +	return 0;
>  }
>
>  bool arch_within_kprobe_blacklist(unsigned long addr)
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed
  2017-09-14  9:45       ` Masami Hiramatsu
@ 2017-09-14 10:03         ` Naveen N. Rao
  0 siblings, 0 replies; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-14 10:03 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Kamalesh Babulal

On 2017/09/14 02:45AM, Masami Hiramatsu wrote:
> On Thu, 14 Sep 2017 12:08:07 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > On 2017/09/13 04:53PM, Masami Hiramatsu wrote:
> > > On Thu, 14 Sep 2017 02:50:33 +0530
> > > "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> > > 
> > > > Currently, we disable instruction emulation if emulate_step() fails for
> > > > any reason. However, such failures could be transient and specific to a
> > > > particular run. Instead, only disable instruction emulation if we have
> > > > never been able to emulate this. If we had emulated this instruction
> > > > successfully at least once, then we single step only this probe hit and
> > > > continue to try emulating the instruction in subsequent probe hits.
> > > 
> > > Hmm, would this mean that the instruction is emulatable or not depends
> > > on context? What kind of situation is considerable?
> > 
> > Yes, as an example, a load/store instruction can cause exceptions 
> > depending on the address. In some of those cases, we will have to single 
> > step the instruction, but we will be able to emulate in most scenarios.
> 
> OK, I got it.
> Could you add this example as comment in the code so that readers can
> easily understand?

Sure.

Thanks,
Naveen

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels
  2017-09-14  6:47     ` Naveen N. Rao
@ 2017-09-14 10:10       ` Masami Hiramatsu
  2017-09-16 11:25         ` Naveen N. Rao
  0 siblings, 1 reply; 20+ messages in thread
From: Masami Hiramatsu @ 2017-09-14 10:10 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Kamalesh Babulal

On Thu, 14 Sep 2017 12:17:20 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> On 2017/09/13 05:36PM, Masami Hiramatsu wrote:
> > On Thu, 14 Sep 2017 02:50:34 +0530
> > "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> > 
> > > Kamalesh pointed out that we are getting the below call traces with
> > > livepatched functions when we enable CONFIG_PREEMPT:
> > > 
> > > [  495.470721] BUG: using __this_cpu_read() in preemptible [00000000] code: cat/8394
> > > [  495.471167] caller is is_current_kprobe_addr+0x30/0x90
> > > [  495.471171] CPU: 4 PID: 8394 Comm: cat Tainted: G              K 4.13.0-rc7-nnr+ #95
> > > [  495.471173] Call Trace:
> > > [  495.471178] [c00000008fd9b960] [c0000000009f039c] dump_stack+0xec/0x160 (unreliable)
> > > [  495.471184] [c00000008fd9b9a0] [c00000000059169c] check_preemption_disabled+0x15c/0x170
> > > [  495.471187] [c00000008fd9ba30] [c000000000046460] is_current_kprobe_addr+0x30/0x90
> > > [  495.471191] [c00000008fd9ba60] [c00000000004e9a0] ftrace_call+0x1c/0xb8
> > > [  495.471195] [c00000008fd9bc30] [c000000000376fd8] seq_read+0x238/0x5c0
> > > [  495.471199] [c00000008fd9bcd0] [c0000000003cfd78] proc_reg_read+0x88/0xd0
> > > [  495.471203] [c00000008fd9bd00] [c00000000033e5d4] __vfs_read+0x44/0x1b0
> > > [  495.471206] [c00000008fd9bd90] [c0000000003402ec] vfs_read+0xbc/0x1b0
> > > [  495.471210] [c00000008fd9bde0] [c000000000342138] SyS_read+0x68/0x110
> > > [  495.471214] [c00000008fd9be30] [c00000000000bc6c] system_call+0x58/0x6c
> > > 
> > > Commit c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for
> > > jprobes") introduced a helper is_current_kprobe_addr() to help determine
> > > if the current function has been livepatched or if it has a jprobe
> > > installed, both of which modify the NIP.
> > > 
> > > In the case of a jprobe, kprobe_ftrace_handler() disables pre-emption
> > > before calling into setjmp_pre_handler() which returns without disabling
> > > pre-emption. This is done to ensure that the jprobe handler won't
> > > disappear beneath us if the jprobe is unregistered between the
> > > setjmp_pre_handler() and the subsequent longjmp_break_handler() called
> > > from the jprobe handler. Due to this, we can use __this_cpu_read() in
> > > is_current_kprobe_addr() with the pre-emption check as we know that
> > > pre-emption will be disabled.
> > > 
> > > However, if this function has been livepatched, we are still doing this
> > > check and when we do so, pre-emption won't necessarily be disabled. This
> > > results in the call trace shown above.
> > > 
> > > Fix this by only invoking is_current_kprobe_addr() when pre-emption is
> > > disabled. And since we now guard this within a pre-emption check, we can
> > > instead use raw_cpu_read() to get the current_kprobe value skipping the
> > > check done by __this_cpu_read().
> > 
> > Hmm, can you disable preempt temporary at this function and read it?
> 
> Yes, but I felt this approach is more optimal specifically for live 
> patching. We don't normally expect preemption to be disabled while 
> handling a livepatched function, so the simple 'if (!preemptible())'
> check helps us bypass looking at current_kprobe.

Ah, I see. This is for checking "jprobes", since kprobes/kretprobes
are usually already done there.

> 
> The check also ensures we can use raw_cpu_read() safely in other 
> scenarios. Do you see any other concerns with this approach?

Yes, there are 2 reasons.

First, if a user's custom kprobe pre-handler changes the NIP for their
own hand-written livepatching, such a handler will enable preemption
before returning. We don't prohibit that, and I think this function
cannot detect it.

Also, the function "name" is very confusing :)
Especially since this symbol is exported: if you are checking the
jprobes context, it should be renamed, or at least start with "__"
to mark it as being for local use.

Thank you,


> 
> Thanks,
> Naveen
> 
> > 
> > Thank you,
> > 
> > > 
> > > Fixes: c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for jprobes")
> > > Reported-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
> > > Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> > > ---
> > >  arch/powerpc/kernel/kprobes.c | 8 ++++++--
> > >  1 file changed, 6 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> > > index e848fe2c93fb..db40b13fd3d1 100644
> > > --- a/arch/powerpc/kernel/kprobes.c
> > > +++ b/arch/powerpc/kernel/kprobes.c
> > > @@ -45,8 +45,12 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
> > >  
> > >  int is_current_kprobe_addr(unsigned long addr)
> > >  {
> > > -	struct kprobe *p = kprobe_running();
> > > -	return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> > > +	if (!preemptible()) {
> > > +		struct kprobe *p = raw_cpu_read(current_kprobe);
> > > +		return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> > > +	}
> > > +
> > > +	return 0;
> > >  }
> > >  
> > >  bool arch_within_kprobe_blacklist(unsigned long addr)
> > > -- 
> > > 2.14.1
> > > 
> > 
> > 
> > -- 
> > Masami Hiramatsu <mhiramat@kernel.org>
> > 
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/5] powerpc/jprobes: Disable preemption when triggered through ftrace
  2017-09-14  0:05   ` Masami Hiramatsu
@ 2017-09-14 10:25     ` Naveen N. Rao
  2017-09-14 10:53       ` Masami Hiramatsu
  0 siblings, 1 reply; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-14 10:25 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Kamalesh Babulal

On 2017/09/13 05:05PM, Masami Hiramatsu wrote:
> On Thu, 14 Sep 2017 02:50:35 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > KPROBES_SANITY_TEST throws the below splat when CONFIG_PREEMPT is
> > enabled:
> > 
> > [    3.140410] Kprobe smoke test: started
> > [    3.149680] DEBUG_LOCKS_WARN_ON(val > preempt_count())
> > [    3.149684] ------------[ cut here ]------------
> > [    3.149695] WARNING: CPU: 19 PID: 1 at kernel/sched/core.c:3094 preempt_count_sub+0xcc/0x140
> > [    3.149699] Modules linked in:
> > [    3.149705] CPU: 19 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc7-nnr+ #97
> > [    3.149709] task: c0000000fea80000 task.stack: c0000000feb00000
> > [    3.149713] NIP:  c00000000011d3dc LR: c00000000011d3d8 CTR: c000000000a090d0
> > [    3.149718] REGS: c0000000feb03400 TRAP: 0700   Not tainted  (4.13.0-rc7-nnr+)
> > [    3.149722] MSR:  8000000000021033 <SF,ME,IR,DR,RI,LE>  CR: 28000282  XER: 00000000
> > [    3.149732] CFAR: c00000000015aa18 SOFTE: 0
> > <snip>
> > [    3.149786] NIP [c00000000011d3dc] preempt_count_sub+0xcc/0x140
> > [    3.149790] LR [c00000000011d3d8] preempt_count_sub+0xc8/0x140
> > [    3.149794] Call Trace:
> > [    3.149798] [c0000000feb03680] [c00000000011d3d8] preempt_count_sub+0xc8/0x140 (unreliable)
> > [    3.149804] [c0000000feb036e0] [c000000000046198] kprobe_handler+0x228/0x4b0
> > [    3.149810] [c0000000feb03750] [c0000000000269c8] program_check_exception+0x58/0x3b0
> > [    3.149816] [c0000000feb037c0] [c00000000000903c] program_check_common+0x16c/0x170
> > [    3.149822] --- interrupt: 0 at kprobe_target+0x8/0x20
> >                    LR = init_test_probes+0x248/0x7d0
> > [    3.149829] [c0000000feb03ab0] [c000000000e4f048] kp+0x0/0x80 (unreliable)
> > [    3.149835] [c0000000feb03b10] [c00000000004ea60] livepatch_handler+0x38/0x74
> > [    3.149841] [c0000000feb03ba0] [c000000000d0de54] init_kprobes+0x1d8/0x208
> > [    3.149846] [c0000000feb03c40] [c00000000000daa8] do_one_initcall+0x68/0x1d0
> > [    3.149852] [c0000000feb03d00] [c000000000ce44f0] kernel_init_freeable+0x298/0x374
> > [    3.149857] [c0000000feb03dc0] [c00000000000dd84] kernel_init+0x24/0x160
> > [    3.149863] [c0000000feb03e30] [c00000000000bfec] ret_from_kernel_thread+0x5c/0x70
> > [    3.149867] Instruction dump:
> > [    3.149871] 419effdc 3d22001b 39299240 81290000 2f890000 409effc8 3c82ffcb 3c62ffcb
> > [    3.149879] 3884bc68 3863bc18 4803d5fd 60000000 <0fe00000> 4bffffa8 60000000 60000000
> > [    3.149890] ---[ end trace 432dd46b4ce3d29f ]---
> > [    3.166003] Kprobe smoke test: passed successfully
> > 
> > The issue is that we aren't disabling preemption in
> > kprobe_ftrace_handler(). Disable it.
> 
> Oops, right! Similar patch may need for x86 too.

Indeed, I will send a patch for that.

On a related note, I've been looking into why we have !PREEMPT for 
CONFIG_OPTPROBES. It looks like the primary reason is x86 having to deal 
with replacing multiple instructions. However, that isn't true with arm 
and powerpc. So, does it make sense to move 'depends on !PREEMPT' to the 
x86 code? Are there other scenarios where it might cause issues for 
arm/powerpc?
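As a rough sketch of what such a move might look like, assuming the generic definition in arch/Kconfig keeps its current shape (illustrative only, not a proposed patch):

```kconfig
# arch/Kconfig today (approximately):
config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	depends on !PREEMPT

# One possible shape of the change: drop the generic restriction...
config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES

# ...and let x86 keep it locally, e.g. in arch/x86/Kconfig:
	select HAVE_OPTPROBES if !PREEMPT
```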

Thanks!
- Naveen

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/5] powerpc/jprobes: Disable preemption when triggered through ftrace
  2017-09-14 10:25     ` Naveen N. Rao
@ 2017-09-14 10:53       ` Masami Hiramatsu
  0 siblings, 0 replies; 20+ messages in thread
From: Masami Hiramatsu @ 2017-09-14 10:53 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Kamalesh Babulal

On Thu, 14 Sep 2017 15:55:39 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> On 2017/09/13 05:05PM, Masami Hiramatsu wrote:
> > On Thu, 14 Sep 2017 02:50:35 +0530
> > "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> > 
> > > KPROBES_SANITY_TEST throws the below splat when CONFIG_PREEMPT is
> > > enabled:
> > > 
> > > [    3.140410] Kprobe smoke test: started
> > > [    3.149680] DEBUG_LOCKS_WARN_ON(val > preempt_count())
> > > [    3.149684] ------------[ cut here ]------------
> > > [    3.149695] WARNING: CPU: 19 PID: 1 at kernel/sched/core.c:3094 preempt_count_sub+0xcc/0x140
> > > [    3.149699] Modules linked in:
> > > [    3.149705] CPU: 19 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc7-nnr+ #97
> > > [    3.149709] task: c0000000fea80000 task.stack: c0000000feb00000
> > > [    3.149713] NIP:  c00000000011d3dc LR: c00000000011d3d8 CTR: c000000000a090d0
> > > [    3.149718] REGS: c0000000feb03400 TRAP: 0700   Not tainted  (4.13.0-rc7-nnr+)
> > > [    3.149722] MSR:  8000000000021033 <SF,ME,IR,DR,RI,LE>  CR: 28000282  XER: 00000000
> > > [    3.149732] CFAR: c00000000015aa18 SOFTE: 0
> > > <snip>
> > > [    3.149786] NIP [c00000000011d3dc] preempt_count_sub+0xcc/0x140
> > > [    3.149790] LR [c00000000011d3d8] preempt_count_sub+0xc8/0x140
> > > [    3.149794] Call Trace:
> > > [    3.149798] [c0000000feb03680] [c00000000011d3d8] preempt_count_sub+0xc8/0x140 (unreliable)
> > > [    3.149804] [c0000000feb036e0] [c000000000046198] kprobe_handler+0x228/0x4b0
> > > [    3.149810] [c0000000feb03750] [c0000000000269c8] program_check_exception+0x58/0x3b0
> > > [    3.149816] [c0000000feb037c0] [c00000000000903c] program_check_common+0x16c/0x170
> > > [    3.149822] --- interrupt: 0 at kprobe_target+0x8/0x20
> > >                    LR = init_test_probes+0x248/0x7d0
> > > [    3.149829] [c0000000feb03ab0] [c000000000e4f048] kp+0x0/0x80 (unreliable)
> > > [    3.149835] [c0000000feb03b10] [c00000000004ea60] livepatch_handler+0x38/0x74
> > > [    3.149841] [c0000000feb03ba0] [c000000000d0de54] init_kprobes+0x1d8/0x208
> > > [    3.149846] [c0000000feb03c40] [c00000000000daa8] do_one_initcall+0x68/0x1d0
> > > [    3.149852] [c0000000feb03d00] [c000000000ce44f0] kernel_init_freeable+0x298/0x374
> > > [    3.149857] [c0000000feb03dc0] [c00000000000dd84] kernel_init+0x24/0x160
> > > [    3.149863] [c0000000feb03e30] [c00000000000bfec] ret_from_kernel_thread+0x5c/0x70
> > > [    3.149867] Instruction dump:
> > > [    3.149871] 419effdc 3d22001b 39299240 81290000 2f890000 409effc8 3c82ffcb 3c62ffcb
> > > [    3.149879] 3884bc68 3863bc18 4803d5fd 60000000 <0fe00000> 4bffffa8 60000000 60000000
> > > [    3.149890] ---[ end trace 432dd46b4ce3d29f ]---
> > > [    3.166003] Kprobe smoke test: passed successfully
> > > 
> > > The issue is that we aren't disabling preemption in
> > > kprobe_ftrace_handler(). Disable it.
> > 
> > Oops, right! Similar patch may need for x86 too.
> 
> Indeed, I will send a patch for that.
> 
> On a related note, I've been looking into why we have !PREEMPT for 
> CONFIG_OPTPROBES. It looks like the primary reason is x86 having to deal 
> with replacing multiple instructions. However, that isn't true with arm 
> and powerpc. So, does it make sense to move 'depends on !PREEMPT' to the 
> x86 code? Are there other scenarios where it might cause issues for 
> arm/powerpc?

Indeed! What I expected was to replace several instructions on RISC processors
for jumping to a far site (like load & jump), but you chose a different approach.
So there is no reason to prohibit that.

Thanks!


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels
  2017-09-14 10:10       ` Masami Hiramatsu
@ 2017-09-16 11:25         ` Naveen N. Rao
  0 siblings, 0 replies; 20+ messages in thread
From: Naveen N. Rao @ 2017-09-16 11:25 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Michael Ellerman, linuxppc-dev, Ananth N Mavinakayanahalli,
	Kamalesh Babulal

On 2017/09/14 03:10AM, Masami Hiramatsu wrote:
> On Thu, 14 Sep 2017 12:17:20 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > On 2017/09/13 05:36PM, Masami Hiramatsu wrote:
> > > On Thu, 14 Sep 2017 02:50:34 +0530
> > > "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> > > 
> > > > Kamalesh pointed out that we are getting the below call traces with
> > > > livepatched functions when we enable CONFIG_PREEMPT:
> > > > 
> > > > [  495.470721] BUG: using __this_cpu_read() in preemptible [00000000] code: cat/8394
> > > > [  495.471167] caller is is_current_kprobe_addr+0x30/0x90
> > > > [  495.471171] CPU: 4 PID: 8394 Comm: cat Tainted: G              K 4.13.0-rc7-nnr+ #95
> > > > [  495.471173] Call Trace:
> > > > [  495.471178] [c00000008fd9b960] [c0000000009f039c] dump_stack+0xec/0x160 (unreliable)
> > > > [  495.471184] [c00000008fd9b9a0] [c00000000059169c] check_preemption_disabled+0x15c/0x170
> > > > [  495.471187] [c00000008fd9ba30] [c000000000046460] is_current_kprobe_addr+0x30/0x90
> > > > [  495.471191] [c00000008fd9ba60] [c00000000004e9a0] ftrace_call+0x1c/0xb8
> > > > [  495.471195] [c00000008fd9bc30] [c000000000376fd8] seq_read+0x238/0x5c0
> > > > [  495.471199] [c00000008fd9bcd0] [c0000000003cfd78] proc_reg_read+0x88/0xd0
> > > > [  495.471203] [c00000008fd9bd00] [c00000000033e5d4] __vfs_read+0x44/0x1b0
> > > > [  495.471206] [c00000008fd9bd90] [c0000000003402ec] vfs_read+0xbc/0x1b0
> > > > [  495.471210] [c00000008fd9bde0] [c000000000342138] SyS_read+0x68/0x110
> > > > [  495.471214] [c00000008fd9be30] [c00000000000bc6c] system_call+0x58/0x6c
> > > > 
> > > > Commit c05b8c4474c030 ("powerpc/kprobes: Skip livepatch_handler() for
> > > > jprobes") introduced a helper is_current_kprobe_addr() to help determine
> > > > if the current function has been livepatched or if it has a jprobe
> > > > installed, both of which modify the NIP.
> > > > 
> > > > In the case of a jprobe, kprobe_ftrace_handler() disables pre-emption
> > > > before calling into setjmp_pre_handler() which returns without disabling
> > > > pre-emption. This is done to ensure that the jprobe handler won't
> > > > disappear beneath us if the jprobe is unregistered between the
> > > > setjmp_pre_handler() and the subsequent longjmp_break_handler() called
> > > > from the jprobe handler. Due to this, we can use __this_cpu_read() in
> > > > is_current_kprobe_addr() with the pre-emption check as we know that
> > > > pre-emption will be disabled.
> > > > 
> > > > However, if this function has been livepatched, we are still doing this
> > > > check and when we do so, pre-emption won't necessarily be disabled. This
> > > > results in the call trace shown above.
> > > > 
> > > > Fix this by only invoking is_current_kprobe_addr() when pre-emption is
> > > > disabled. And since we now guard this within a pre-emption check, we can
> > > > instead use raw_cpu_read() to get the current_kprobe value skipping the
> > > > check done by __this_cpu_read().
> > > 
> Hmm, can you disable preemption temporarily in this function and read it?
> > 
> > Yes, but I felt this approach is better suited specifically for live
> > patching. We don't normally expect preemption to be disabled while
> > handling a livepatched function, so the simple 'if (!preemptible())'
> > check helps us bypass looking at current_kprobe.
> 
> Ah, I see. This is for checking "jprobes", since kprobes/kretprobes are
> usually already handled by that point.

Correct.

> 
> > 
> > The check also ensures we can use raw_cpu_read() safely in other 
> > scenarios. Do you see any other concerns with this approach?
> 
> Yes, there are 2 reasons.
> 
> First, if a user's custom kprobe pre-handler changes the NIP for their
> own hand-written livepatching, such a handler will enable preemption
> before returning. We don't prohibit that, and I think this function
> cannot detect it.

I thought about this quite a bit since we won't have a way to identify 
if the ftrace handler was a kprobe or a livepatch in such cases.  
Subsequently, I've concluded that this will _not_ be a problem for us.  
Here's why:
- The primary reason is_current_kprobe_addr() was added was to detect 
  jprobes. Jprobes are special since the alternate function does not do a 
  normal return, but uses jprobe_return()/trap to return from the 
  function. In this case, if we don't detect this, we end up needlessly 
  consuming the livepatch stack (the stack frames won't ever be 
  "popped").
- If a kprobe handler were to modify the NIP, it would likely reset 
  current_kprobe and re-enable pre-emption before returning. We won't be 
  able to detect this scenario. We will end up going through 
  livepatch_handler(), allocating a stack frame, branching into the new 
  NIP, returning from there, and de-allocating the stack frame before 
  going back. This will be a bit inefficient, but will work properly.
- Finally, if a custom kprobe handler were to simulate jprobes, it 
  cannot re-enable pre-emption or reset current_kprobe, since both need 
  to stay in place for proper functioning.

As such, I think the current approach works fine for us.

> 
> And also, the function "name" is very confusing :)
> Especially since this symbol is exported: if you are checking the
> jprobes context, it should be renamed, or at the least prefixed with
> "__" to show that it is for local use.

Agreed. I think I will make suitable updates to specifically call out 
jprobes here.

Thanks for the review!
- Naveen


end of thread, other threads:[~2017-09-16 11:25 UTC | newest]

Thread overview: 20+ messages
2017-09-13 21:20 [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Naveen N. Rao
2017-09-13 21:20 ` [PATCH 2/5] powerpc/kprobes: Do not suppress instruction emulation if a single run failed Naveen N. Rao
2017-09-13 23:53   ` Masami Hiramatsu
2017-09-14  6:38     ` Naveen N. Rao
2017-09-14  9:45       ` Masami Hiramatsu
2017-09-14 10:03         ` Naveen N. Rao
2017-09-13 21:20 ` [PATCH 3/5] powerpc/kprobes: Fix warnings from __this_cpu_read() on preempt kernels Naveen N. Rao
2017-09-14  0:36   ` Masami Hiramatsu
2017-09-14  6:47     ` Naveen N. Rao
2017-09-14 10:10       ` Masami Hiramatsu
2017-09-16 11:25         ` Naveen N. Rao
2017-09-14  9:48   ` Kamalesh Babulal
2017-09-13 21:20 ` [PATCH 4/5] powerpc/jprobes: Disable preemption when triggered through ftrace Naveen N. Rao
2017-09-14  0:05   ` Masami Hiramatsu
2017-09-14 10:25     ` Naveen N. Rao
2017-09-14 10:53       ` Masami Hiramatsu
2017-09-13 21:20 ` [PATCH 5/5] powerpc/jprobes: Validate break handler invocation as being due to a jprobe_return() Naveen N. Rao
2017-09-14  0:38   ` Masami Hiramatsu
2017-09-13 23:18 ` [PATCH 1/5] powerpc/kprobes: Some cosmetic updates to try_to_emulate() Masami Hiramatsu
2017-09-14  6:16 ` Kamalesh Babulal
