* [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements
@ 2017-09-19  9:58 Masami Hiramatsu
  2017-09-19  9:59 ` [PATCH -tip v3 1/7] kprobes: Improve smoke test to check preemptible Masami Hiramatsu
                   ` (7 more replies)
  0 siblings, 8 replies; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-19  9:58 UTC (permalink / raw)
  To: Ingo Molnar, mingo
  Cc: x86, Steven Rostedt, Masami Hiramatsu, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov

Hi,

Here is the 3rd version of the series to improve preempt
related behavior in kprobes/x86. This actually includes
many enhancements/fixes from the 2nd version, which is

https://lkml.org/lkml/2017/9/11/482

With the previous version, lkp-bot reported an issue
( https://lkml.org/lkml/2017/9/14/3 ), but I couldn't
reproduce it. However, I found a suspicious bug and fixed
it ([2/7]).

Also, while I was checking the correct conditions for
*probe handlers in Documentation/kprobes.txt, I
found that the current implementations of the ftrace-based
kprobe and the optprobe were mis-reading the document.
From the document, handlers must be run with preemption
disabled, but interrupt disabling is not guaranteed.
So in the middle of this series, patches [4/7], [5/7] and
[6/7] add preempt-disabling and remove irq-disabling.

And at last, I placed the original patch (Enable optprobe
with CONFIG_PREEMPT).

The others are just for making sure this fix works well.
- [1/7] adds a preemptible() checker to the kprobe
  smoke tests so that we can easily find mistakes.
- [3/7] adds a warning if a user tries to change the
  execution path in an optprobe handler, which is
  prohibited by the document (the document also
  describes how to avoid it.)

Thank you,

---

Masami Hiramatsu (7):
      kprobes: Improve smoke test to check preemptible
      kprobes/x86: Move get_kprobe_ctlblk in irq-disabled block
      kprobes: Warn if optprobe handler tries to change execution path
      kprobes/x86: Disable preempt in optprobe
      kprobes/x86: Disable preempt ftrace-based jprobe
      kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe
      kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT


 arch/Kconfig                     |    2 +-
 arch/x86/kernel/kprobes/ftrace.c |   32 ++++++++++++++++----------------
 arch/x86/kernel/kprobes/opt.c    |    8 +++-----
 kernel/kprobes.c                 |   23 +++++++++++++++++------
 kernel/test_kprobes.c            |   20 ++++++++++++++++++++
 5 files changed, 57 insertions(+), 28 deletions(-)

--
Masami Hiramatsu

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH -tip v3 1/7] kprobes: Improve smoke test to check preemptible
  2017-09-19  9:58 [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Masami Hiramatsu
@ 2017-09-19  9:59 ` Masami Hiramatsu
  2017-09-28 10:52   ` [tip:perf/core] kprobes: Improve smoke test to check preemptibility tip-bot for Masami Hiramatsu
  2017-09-19  9:59 ` [PATCH -tip v3 2/7] kprobes/x86: Move get_kprobe_ctlblk in irq-disabled block Masami Hiramatsu
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-19  9:59 UTC (permalink / raw)
  To: Ingo Molnar, mingo
  Cc: x86, Steven Rostedt, Masami Hiramatsu, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov

Add a preemptible() check to each handler. Handlers are called
with preemption disabled, which is guaranteed by
Documentation/kprobes.txt.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 kernel/test_kprobes.c |   20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/kernel/test_kprobes.c b/kernel/test_kprobes.c
index 0dbab6d1acb4..47106a1e645a 100644
--- a/kernel/test_kprobes.c
+++ b/kernel/test_kprobes.c
@@ -34,6 +34,10 @@ static noinline u32 kprobe_target(u32 value)
 
 static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 {
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("pre-handler is preemptible\n");
+	}
 	preh_val = (rand1 / div_factor);
 	return 0;
 }
@@ -41,6 +45,10 @@ static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 static void kp_post_handler(struct kprobe *p, struct pt_regs *regs,
 		unsigned long flags)
 {
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("post-handler is preemptible\n");
+	}
 	if (preh_val != (rand1 / div_factor)) {
 		handler_errors++;
 		pr_err("incorrect value in post_handler\n");
@@ -156,6 +164,10 @@ static int test_kprobes(void)
 
 static u32 j_kprobe_target(u32 value)
 {
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("jprobe-handler is preemptible\n");
+	}
 	if (value != rand1) {
 		handler_errors++;
 		pr_err("incorrect value in jprobe handler\n");
@@ -232,6 +244,10 @@ static u32 krph_val;
 
 static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
 {
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("kretprobe entry handler is preemptible\n");
+	}
 	krph_val = (rand1 / div_factor);
 	return 0;
 }
@@ -240,6 +256,10 @@ static int return_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
 {
 	unsigned long ret = regs_return_value(regs);
 
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("kretprobe return handler is preemptible\n");
+	}
 	if (ret != (rand1 / div_factor)) {
 		handler_errors++;
 		pr_err("incorrect value in kretprobe handler\n");


* [PATCH -tip v3 2/7] kprobes/x86: Move get_kprobe_ctlblk in irq-disabled block
  2017-09-19  9:58 [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Masami Hiramatsu
  2017-09-19  9:59 ` [PATCH -tip v3 1/7] kprobes: Improve smoke test to check preemptible Masami Hiramatsu
@ 2017-09-19  9:59 ` Masami Hiramatsu
  2017-09-28 10:52   ` [tip:perf/core] kprobes/x86: Move the get_kprobe_ctlblk() into " tip-bot for Masami Hiramatsu
  2017-09-19 10:00 ` [PATCH -tip v3 3/7] kprobes: Warn if optprobe handler tries to change execution path Masami Hiramatsu
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-19  9:59 UTC (permalink / raw)
  To: Ingo Molnar, mingo
  Cc: x86, Steven Rostedt, Masami Hiramatsu, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov

Since get_kprobe_ctlblk() accesses a per-cpu variable
via smp_processor_id(), it must be called with
preemption or interrupts disabled.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/kernel/kprobes/opt.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 4f98aad38237..259b7e828b02 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -154,7 +154,6 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func);
 static void
 optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 {
-	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 	unsigned long flags;
 
 	/* This is possible if op is under delayed unoptimizing */
@@ -165,6 +164,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
+		struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 		/* Save skipped registers */
 #ifdef CONFIG_X86_64
 		regs->cs = __KERNEL_CS;


* [PATCH -tip v3 3/7] kprobes: Warn if optprobe handler tries to change execution path
  2017-09-19  9:58 [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Masami Hiramatsu
  2017-09-19  9:59 ` [PATCH -tip v3 1/7] kprobes: Improve smoke test to check preemptible Masami Hiramatsu
  2017-09-19  9:59 ` [PATCH -tip v3 2/7] kprobes/x86: Move get_kprobe_ctlblk in irq-disabled block Masami Hiramatsu
@ 2017-09-19 10:00 ` Masami Hiramatsu
  2017-09-28 10:53   ` [tip:perf/core] " tip-bot for Masami Hiramatsu
  2017-10-10 17:02   ` [PATCH -tip v3 3/7] " Naveen N. Rao
  2017-09-19 10:00 ` [PATCH -tip v3 4/7] kprobes/x86: Disable preempt in optprobe Masami Hiramatsu
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-19 10:00 UTC (permalink / raw)
  To: Ingo Molnar, mingo
  Cc: x86, Steven Rostedt, Masami Hiramatsu, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov

Warn if an optprobe handler tries to change the execution path.
As described in Documentation/kprobes.txt, an optprobe user
handler can not change the instruction pointer. If that is
required, the user must prevent the kprobe from being optimized
by setting a post_handler or break_handler.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 kernel/kprobes.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index a1606a4224e1..de73b843c623 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -387,7 +387,10 @@ void opt_pre_handler(struct kprobe *p, struct pt_regs *regs)
 	list_for_each_entry_rcu(kp, &p->list, list) {
 		if (kp->pre_handler && likely(!kprobe_disabled(kp))) {
 			set_kprobe_instance(kp);
-			kp->pre_handler(kp, regs);
+			if (kp->pre_handler(kp, regs)) {
+				if (WARN_ON_ONCE(1))
+					pr_err("Optprobe ignores instruction pointer changing.(%pF)\n", p->addr);
+			}
 		}
 		reset_kprobe_instance();
 	}


* [PATCH -tip v3 4/7] kprobes/x86: Disable preempt in optprobe
  2017-09-19  9:58 [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Masami Hiramatsu
                   ` (2 preceding siblings ...)
  2017-09-19 10:00 ` [PATCH -tip v3 3/7] kprobes: Warn if optprobe handler tries to change execution path Masami Hiramatsu
@ 2017-09-19 10:00 ` Masami Hiramatsu
  2017-09-28 10:53   ` [tip:perf/core] kprobes/x86: Disable preemption " tip-bot for Masami Hiramatsu
  2017-09-19 10:01 ` [PATCH -tip v3 5/7] kprobes/x86: Disable preempt ftrace-based jprobe Masami Hiramatsu
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-19 10:00 UTC (permalink / raw)
  To: Ingo Molnar, mingo
  Cc: x86, Steven Rostedt, Masami Hiramatsu, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov

Disable preemption in the optprobe handler, as described
in Documentation/kprobes.txt:

"Probe handlers are run with preemption disabled."

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/kernel/kprobes/opt.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 259b7e828b02..36e4f61c3eec 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -161,6 +161,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 		return;
 
 	local_irq_save(flags);
+	preempt_disable();
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
@@ -180,6 +181,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 		opt_pre_handler(&op->kp, regs);
 		__this_cpu_write(current_kprobe, NULL);
 	}
+	preempt_enable_no_resched();
 	local_irq_restore(flags);
 }
 NOKPROBE_SYMBOL(optimized_callback);


* [PATCH -tip v3 5/7] kprobes/x86: Disable preempt ftrace-based jprobe
  2017-09-19  9:58 [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Masami Hiramatsu
                   ` (3 preceding siblings ...)
  2017-09-19 10:00 ` [PATCH -tip v3 4/7] kprobes/x86: Disable preempt in optprobe Masami Hiramatsu
@ 2017-09-19 10:01 ` Masami Hiramatsu
  2017-09-28 10:54   ` [tip:perf/core] kprobes/x86: Disable preemption in ftrace-based jprobes tip-bot for Masami Hiramatsu
  2017-09-19 10:02 ` [PATCH -tip v3 6/7] kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe Masami Hiramatsu
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-19 10:01 UTC (permalink / raw)
  To: Ingo Molnar, mingo
  Cc: x86, Steven Rostedt, Masami Hiramatsu, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov

Disable preemption in the ftrace-based jprobe handler, as
described in Documentation/kprobes.txt:

"Probe handlers are run with preemption disabled."

This will fix jprobes behavior when CONFIG_PREEMPT=y.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/kernel/kprobes/ftrace.c |   23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index 041f7b6dfa0f..bcfee4f69b0e 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -26,7 +26,7 @@
 #include "common.h"
 
 static nokprobe_inline
-int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
+void __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 		      struct kprobe_ctlblk *kcb, unsigned long orig_ip)
 {
 	/*
@@ -41,20 +41,21 @@ int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 	__this_cpu_write(current_kprobe, NULL);
 	if (orig_ip)
 		regs->ip = orig_ip;
-	return 1;
 }
 
 int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 		    struct kprobe_ctlblk *kcb)
 {
-	if (kprobe_ftrace(p))
-		return __skip_singlestep(p, regs, kcb, 0);
-	else
-		return 0;
+	if (kprobe_ftrace(p)) {
+		__skip_singlestep(p, regs, kcb, 0);
+		preempt_enable_no_resched();
+		return 1;
+	}
+	return 0;
 }
 NOKPROBE_SYMBOL(skip_singlestep);
 
-/* Ftrace callback handler for kprobes */
+/* Ftrace callback handler for kprobes -- called with preemption disabled */
 void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 			   struct ftrace_ops *ops, struct pt_regs *regs)
 {
@@ -77,13 +78,17 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 		/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
 		regs->ip = ip + sizeof(kprobe_opcode_t);
 
+		/* To emulate trap based kprobes, preempt_disable here */
+		preempt_disable();
 		__this_cpu_write(current_kprobe, p);
 		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-		if (!p->pre_handler || !p->pre_handler(p, regs))
+		if (!p->pre_handler || !p->pre_handler(p, regs)) {
 			__skip_singlestep(p, regs, kcb, orig_ip);
+			preempt_enable_no_resched();
+		}
 		/*
 		 * If pre_handler returns !0, it sets regs->ip and
-		 * resets current kprobe.
+		 * resets current kprobe, and keep preempt count +1.
 		 */
 	}
 end:


* [PATCH -tip v3 6/7] kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe
  2017-09-19  9:58 [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Masami Hiramatsu
                   ` (4 preceding siblings ...)
  2017-09-19 10:01 ` [PATCH -tip v3 5/7] kprobes/x86: Disable preempt ftrace-based jprobe Masami Hiramatsu
@ 2017-09-19 10:02 ` Masami Hiramatsu
  2017-09-28  7:25   ` Ingo Molnar
  2017-09-28 10:54   ` [tip:perf/core] kprobes/x86: Remove IRQ disabling from ftrace-based/optimized kprobes tip-bot for Masami Hiramatsu
  2017-09-19 10:03 ` [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT Masami Hiramatsu
  2017-09-21 22:00 ` [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Alexei Starovoitov
  7 siblings, 2 replies; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-19 10:02 UTC (permalink / raw)
  To: Ingo Molnar, mingo
  Cc: x86, Steven Rostedt, Masami Hiramatsu, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov

Kprobes does not actually need to disable IRQs when it is
called from the ftrace/jump trampoline code, because
Documentation/kprobes.txt says:

-----
Probe handlers are run with preemption disabled.  Depending on the
architecture and optimization state, handlers may also run with
interrupts disabled (e.g., kretprobe handlers and optimized kprobe
handlers run without interrupt disabled on x86/x86-64).
-----

So let's remove irq disabling from those handlers.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/kernel/kprobes/ftrace.c |    9 ++-------
 arch/x86/kernel/kprobes/opt.c    |    4 ----
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index bcfee4f69b0e..8dc0161cec8f 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -61,14 +61,11 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 {
 	struct kprobe *p;
 	struct kprobe_ctlblk *kcb;
-	unsigned long flags;
-
-	/* Disable irq for emulating a breakpoint and avoiding preempt */
-	local_irq_save(flags);
 
+	/* Preempt is disabled by ftrace */
 	p = get_kprobe((kprobe_opcode_t *)ip);
 	if (unlikely(!p) || kprobe_disabled(p))
-		goto end;
+		return;
 
 	kcb = get_kprobe_ctlblk();
 	if (kprobe_running()) {
@@ -91,8 +88,6 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 		 * resets current kprobe, and keep preempt count +1.
 		 */
 	}
-end:
-	local_irq_restore(flags);
 }
 NOKPROBE_SYMBOL(kprobe_ftrace_handler);
 
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 36e4f61c3eec..511aad1990a0 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -154,13 +154,10 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func);
 static void
 optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 {
-	unsigned long flags;
-
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
-	local_irq_save(flags);
 	preempt_disable();
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
@@ -182,7 +179,6 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 		__this_cpu_write(current_kprobe, NULL);
 	}
 	preempt_enable_no_resched();
-	local_irq_restore(flags);
 }
 NOKPROBE_SYMBOL(optimized_callback);
 


* [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
  2017-09-19  9:58 [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Masami Hiramatsu
                   ` (5 preceding siblings ...)
  2017-09-19 10:02 ` [PATCH -tip v3 6/7] kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe Masami Hiramatsu
@ 2017-09-19 10:03 ` Masami Hiramatsu
  2017-09-28  7:22   ` Ingo Molnar
  2017-09-21 22:00 ` [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Alexei Starovoitov
  7 siblings, 1 reply; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-19 10:03 UTC (permalink / raw)
  To: Ingo Molnar, mingo
  Cc: x86, Steven Rostedt, Masami Hiramatsu, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov

To enable jump optimized probe with CONFIG_PREEMPT, use
synchronize_rcu_tasks() to wait for all tasks preempted
on trampoline code back on track.

Since the jump optimized kprobes can replace multiple
instructions, there can be tasks which are preempted
on the 2nd (or 3rd) instructions. If the kprobe
replaces those instructions by a jump instruction,
when those tasks back to the preempted place, it is
a middle of the jump instruction and causes a kernel
panic.
To avoid such tragedies in advance, kprobe optimizer
prepare a detour route using normal kprobe (e.g.
int3 breakpoint on x86), and wait for the tasks which
is interrrupted on such place by synchronize_sched()
when CONFIG_PREEMPT=n.
If CONFIG_PREEMPT=y, things be more complicated, because
such interrupted thread can be preempted (other thread
can be scheduled in interrupt handler.) So, kprobes
optimizer has to wait for those tasks scheduled normally.
In this case we can use synchronize_rcu_tasks() which
ensures that all preempted tasks back on track and
schedule it.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 arch/Kconfig     |    2 +-
 kernel/kprobes.c |   18 +++++++++++++-----
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 1aafb4efbb51..f75c8e8a229b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -90,7 +90,7 @@ config STATIC_KEYS_SELFTEST
 config OPTPROBES
 	def_bool y
 	depends on KPROBES && HAVE_OPTPROBES
-	depends on !PREEMPT
+	select TASKS_RCU if PREEMPT
 
 config KPROBES_ON_FTRACE
 	def_bool y
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index de73b843c623..21d42ed2aaa5 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -577,12 +577,20 @@ static void kprobe_optimizer(struct work_struct *work)
 
 	/*
 	 * Step 2: Wait for quiesence period to ensure all running interrupts
-	 * are done. Because optprobe may modify multiple instructions
-	 * there is a chance that Nth instruction is interrupted. In that
-	 * case, running interrupt can return to 2nd-Nth byte of jump
-	 * instruction. This wait is for avoiding it.
+	 * are done. Because optprobe may modify multiple instructions,
+	 * there is a chance that the Nth instruction is interrupted. In that
+	 * case, running interrupt can return to the Nth byte of jump
+	 * instruction. This can be avoided by waiting for returning of
+	 * such interrupts, since (until here) the first byte of the optimized
+	 * probe is already replaced with normal kprobe (sw breakpoint) and
+	 * all threads which reach to the probed address will hit it and
+	 * bypass the copied instructions (instead of executing the original.)
+	 * With CONFIG_PREEMPT, such interrupts can be preepmted. To wait
+	 * for such thread, we will use synchronize_rcu_tasks() which ensures
+	 * all preeempted tasks are scheduled normally (not preempted).
+	 * So we can ensure there is no threads preempted at probed address.
 	 */
-	synchronize_sched();
+	synchronize_rcu_tasks();
 
 	/* Step 3: Optimize kprobes after quiesence period */
 	do_optimize_kprobes();


* Re: [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements
  2017-09-19  9:58 [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Masami Hiramatsu
                   ` (6 preceding siblings ...)
  2017-09-19 10:03 ` [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT Masami Hiramatsu
@ 2017-09-21 22:00 ` Alexei Starovoitov
  7 siblings, 0 replies; 28+ messages in thread
From: Alexei Starovoitov @ 2017-09-21 22:00 UTC (permalink / raw)
  To: Masami Hiramatsu, Ingo Molnar, mingo
  Cc: x86, Steven Rostedt, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov

On 9/19/17 2:58 AM, Masami Hiramatsu wrote:
> Hi,
>
> Here is the 3rd version of the series to improve preempt
> related behavior in kprobes/x86. This actually includes
> many enhancements/fixes from the 2nd version, which is
>
> https://lkml.org/lkml/2017/9/11/482
>
> With the previous version, lkp-bot reported an issue
> ( https://lkml.org/lkml/2017/9/14/3 ), but I couldn't
> reproduce it. However, I found a suspicious bug and fixed
> it ([2/7]).
>
> Also, while I was checking the correct conditions for
> *probe handlers in Documentation/kprobes.txt, I
> found that the current implementations of the ftrace-based
> kprobe and the optprobe were mis-reading the document.
> From the document, handlers must be run with preemption
> disabled, but interrupt disabling is not guaranteed.
> So in the middle of this series, patches [4/7], [5/7] and
> [6/7] add preempt-disabling and remove irq-disabling.
>
> And at last, I placed the original patch (Enable optprobe
> with CONFIG_PREEMPT).
>
> The others are just for making sure this fix works well.
> - [1/7] adds a preemptible() checker to the kprobe
>   smoke tests so that we can easily find mistakes.
> - [3/7] adds a warning if a user tries to change the
>   execution path in an optprobe handler, which is
>   prohibited by the document (the document also
>   describes how to avoid it.)

all patches look great to me.
Acked-by: Alexei Starovoitov <ast@kernel.org>


* Re: [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
  2017-09-19 10:03 ` [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT Masami Hiramatsu
@ 2017-09-28  7:22   ` Ingo Molnar
  2017-09-29  7:29     ` Masami Hiramatsu
  0 siblings, 1 reply; 28+ messages in thread
From: Ingo Molnar @ 2017-09-28  7:22 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: mingo, x86, Steven Rostedt, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov


* Masami Hiramatsu <mhiramat@kernel.org> wrote:

> To enable jump optimized probe with CONFIG_PREEMPT, use
> synchronize_rcu_tasks() to wait for all tasks preempted
> on trampoline code back on track.

This sentence does not parse. It's missing a verb, but I'm not sure.

> Since the jump optimized kprobes can replace multiple
> instructions, there can be tasks which are preempted
> on the 2nd (or 3rd) instructions. If the kprobe
> replaces those instructions by a jump instruction,
> when those tasks back to the preempted place, it is
> a middle of the jump instruction and causes a kernel
> panic.


Again, sentence appears to be missing a verb and also an adjective I think.

> To avoid such tragedies in advance, kprobe optimizer
> prepare a detour route using normal kprobe (e.g.
> int3 breakpoint on x86), and wait for the tasks which
> is interrrupted on such place by synchronize_sched()
> when CONFIG_PREEMPT=n.

s/tragedies/mishaps

Part after the first comma does not parse.

Also the way to refer to kprobes is "kprobes" and "normal kprobes".
Use 'kprobe' only when talking about a specific kprobe instance or such.
You use this correctly later on in the changelog ...

> If CONFIG_PREEMPT=y, things be more complicated, because

s/be/are or s/be/get

> such interrupted thread can be preempted (other thread
> can be scheduled in interrupt handler.) So, kprobes

full stop in the wrong place.

> optimizer has to wait for those tasks scheduled normally.

missing verb.

> In this case we can use synchronize_rcu_tasks() which
> ensures that all preempted tasks back on track and
> schedule it.

More careful changelogs please.

> +	 * are done. Because optprobe may modify multiple instructions,
> +	 * there is a chance that the Nth instruction is interrupted. In that
> +	 * case, running interrupt can return to the Nth byte of jump
> +	 * instruction. This can be avoided by waiting for returning of
> +	 * such interrupts, since (until here) the first byte of the optimized
> +	 * probe is already replaced with normal kprobe (sw breakpoint) and
> +	 * all threads which reach to the probed address will hit it and
> +	 * bypass the copied instructions (instead of executing the original.)
> +	 * With CONFIG_PREEMPT, such interrupts can be preepmted. To wait
> +	 * for such thread, we will use synchronize_rcu_tasks() which ensures
> +	 * all preeempted tasks are scheduled normally (not preempted).
> +	 * So we can ensure there is no threads preempted at probed address.

What? Interrupts cannot be preempted.

Also, "To wait for such threads", or "To wait for such a thread".

Thanks,

	Ingo


* Re: [PATCH -tip v3 6/7] kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe
  2017-09-19 10:02 ` [PATCH -tip v3 6/7] kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe Masami Hiramatsu
@ 2017-09-28  7:25   ` Ingo Molnar
  2017-09-29  6:48     ` Masami Hiramatsu
  2017-09-28 10:54   ` [tip:perf/core] kprobes/x86: Remove IRQ disabling from ftrace-based/optimized kprobes tip-bot for Masami Hiramatsu
  1 sibling, 1 reply; 28+ messages in thread
From: Ingo Molnar @ 2017-09-28  7:25 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: mingo, x86, Steven Rostedt, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov


* Masami Hiramatsu <mhiramat@kernel.org> wrote:

> Kprobes does not actually need to disable IRQs when it is
> called from the ftrace/jump trampoline code, because
> Documentation/kprobes.txt says:
> 
> -----
> Probe handlers are run with preemption disabled.  Depending on the
> architecture and optimization state, handlers may also run with
> interrupts disabled (e.g., kretprobe handlers and optimized kprobe
> handlers run without interrupt disabled on x86/x86-64).
> -----
> 
> So let's remove irq disabling from those handlers.

> -	local_irq_save(flags);

The title is talking about disable_irq():

  kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe

... but the patch is actually using local_irq_save(), which is an entirely 
different thing! You probably wanted to say:

  kprobes/x86: Remove irq disabling from ftrace-based/optimized kprobes

Also note the plural of 'kprobes' when we refer to them as a generic thing.

I fixed the title, but _please_ read changelogs more carefully before sending 
them.

Thanks,

	Ingo


* [tip:perf/core] kprobes: Improve smoke test to check preemptibility
  2017-09-19  9:59 ` [PATCH -tip v3 1/7] kprobes: Improve smoke test to check preemptible Masami Hiramatsu
@ 2017-09-28 10:52   ` tip-bot for Masami Hiramatsu
  0 siblings, 0 replies; 28+ messages in thread
From: tip-bot for Masami Hiramatsu @ 2017-09-28 10:52 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: ananth, torvalds, ast, linux-kernel, tglx, rostedt, paulmck,
	mhiramat, ast, peterz, hpa, mingo

Commit-ID:  3539d09154e11336c31a900a9cd49e386ba6d9b2
Gitweb:     https://git.kernel.org/tip/3539d09154e11336c31a900a9cd49e386ba6d9b2
Author:     Masami Hiramatsu <mhiramat@kernel.org>
AuthorDate: Tue, 19 Sep 2017 18:59:00 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 28 Sep 2017 09:23:03 +0200

kprobes: Improve smoke test to check preemptibility

Add a preemptible() check to each handler. Handlers are called
with preemption disabled, which is guaranteed by
Documentation/kprobes.txt.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E . McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/150581513991.32348.7956810394499654272.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/test_kprobes.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/kernel/test_kprobes.c b/kernel/test_kprobes.c
index 0dbab6d..47106a1 100644
--- a/kernel/test_kprobes.c
+++ b/kernel/test_kprobes.c
@@ -34,6 +34,10 @@ static noinline u32 kprobe_target(u32 value)
 
 static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 {
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("pre-handler is preemptible\n");
+	}
 	preh_val = (rand1 / div_factor);
 	return 0;
 }
@@ -41,6 +45,10 @@ static int kp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 static void kp_post_handler(struct kprobe *p, struct pt_regs *regs,
 		unsigned long flags)
 {
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("post-handler is preemptible\n");
+	}
 	if (preh_val != (rand1 / div_factor)) {
 		handler_errors++;
 		pr_err("incorrect value in post_handler\n");
@@ -156,6 +164,10 @@ static int test_kprobes(void)
 
 static u32 j_kprobe_target(u32 value)
 {
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("jprobe-handler is preemptible\n");
+	}
 	if (value != rand1) {
 		handler_errors++;
 		pr_err("incorrect value in jprobe handler\n");
@@ -232,6 +244,10 @@ static u32 krph_val;
 
 static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
 {
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("kretprobe entry handler is preemptible\n");
+	}
 	krph_val = (rand1 / div_factor);
 	return 0;
 }
@@ -240,6 +256,10 @@ static int return_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
 {
 	unsigned long ret = regs_return_value(regs);
 
+	if (preemptible()) {
+		handler_errors++;
+		pr_err("kretprobe return handler is preemptible\n");
+	}
 	if (ret != (rand1 / div_factor)) {
 		handler_errors++;
 		pr_err("incorrect value in kretprobe handler\n");

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [tip:perf/core] kprobes/x86: Move the get_kprobe_ctlblk() into irq-disabled block
  2017-09-19  9:59 ` [PATCH -tip v3 2/7] kprobes/x86: Move get_kprobe_ctlblk in irq-disabled block Masami Hiramatsu
@ 2017-09-28 10:52   ` tip-bot for Masami Hiramatsu
  0 siblings, 0 replies; 28+ messages in thread
From: tip-bot for Masami Hiramatsu @ 2017-09-28 10:52 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, torvalds, paulmck, peterz, ananth, ast, ast, mingo,
	rostedt, hpa, mhiramat, tglx

Commit-ID:  cd52edad55fbcd8064877a77d31445b2fb4b85c3
Gitweb:     https://git.kernel.org/tip/cd52edad55fbcd8064877a77d31445b2fb4b85c3
Author:     Masami Hiramatsu <mhiramat@kernel.org>
AuthorDate: Tue, 19 Sep 2017 18:59:39 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 28 Sep 2017 09:23:03 +0200

kprobes/x86: Move the get_kprobe_ctlblk() into irq-disabled block

Since get_kprobe_ctlblk() accesses per-CPU variables
via smp_processor_id(), it must be called with preemption
or interrupts disabled.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E . McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/150581517952.32348.2655896843219158446.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/kprobes/opt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 0cae7c0..f558103 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -154,7 +154,6 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func);
 static void
 optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 {
-	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 	unsigned long flags;
 
 	/* This is possible if op is under delayed unoptimizing */
@@ -165,6 +164,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
+		struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 		/* Save skipped registers */
 #ifdef CONFIG_X86_64
 		regs->cs = __KERNEL_CS;

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [tip:perf/core] kprobes: Warn if optprobe handler tries to change execution path
  2017-09-19 10:00 ` [PATCH -tip v3 3/7] kprobes: Warn if optprobe handler tries to change execution path Masami Hiramatsu
@ 2017-09-28 10:53   ` tip-bot for Masami Hiramatsu
  2017-10-10 17:02   ` [PATCH -tip v3 3/7] " Naveen N. Rao
  1 sibling, 0 replies; 28+ messages in thread
From: tip-bot for Masami Hiramatsu @ 2017-09-28 10:53 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: ast, ast, mhiramat, ananth, mingo, peterz, rostedt, linux-kernel,
	torvalds, hpa, paulmck, tglx

Commit-ID:  e863d5396146411b615231cae0c518cb2a23371c
Gitweb:     https://git.kernel.org/tip/e863d5396146411b615231cae0c518cb2a23371c
Author:     Masami Hiramatsu <mhiramat@kernel.org>
AuthorDate: Tue, 19 Sep 2017 19:00:19 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 28 Sep 2017 09:23:04 +0200

kprobes: Warn if optprobe handler tries to change execution path

Warn if an optprobe handler tries to change the execution path.
As described in Documentation/kprobes.txt, a user handler of an
optimized probe cannot change the instruction pointer. Users who
need to do so must prevent the kprobe from being optimized, e.g.
by setting a post_handler or break_handler.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E . McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/150581521955.32348.3615624715034787365.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/kprobes.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 15fba7f..2d28377 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -387,7 +387,10 @@ void opt_pre_handler(struct kprobe *p, struct pt_regs *regs)
 	list_for_each_entry_rcu(kp, &p->list, list) {
 		if (kp->pre_handler && likely(!kprobe_disabled(kp))) {
 			set_kprobe_instance(kp);
-			kp->pre_handler(kp, regs);
+			if (kp->pre_handler(kp, regs)) {
+				if (WARN_ON_ONCE(1))
+					pr_err("Optprobe ignores instruction pointer changing.(%pF)\n", p->addr);
+			}
 		}
 		reset_kprobe_instance();
 	}

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [tip:perf/core] kprobes/x86: Disable preemption in optprobe
  2017-09-19 10:00 ` [PATCH -tip v3 4/7] kprobes/x86: Disable preempt in optprobe Masami Hiramatsu
@ 2017-09-28 10:53   ` tip-bot for Masami Hiramatsu
  0 siblings, 0 replies; 28+ messages in thread
From: tip-bot for Masami Hiramatsu @ 2017-09-28 10:53 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, ast, mhiramat, linux-kernel, paulmck, rostedt, torvalds,
	ast, peterz, hpa, tglx, ananth

Commit-ID:  9a09f261a4fa52de916b0db34a36956c95f78fdc
Gitweb:     https://git.kernel.org/tip/9a09f261a4fa52de916b0db34a36956c95f78fdc
Author:     Masami Hiramatsu <mhiramat@kernel.org>
AuthorDate: Tue, 19 Sep 2017 19:00:59 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 28 Sep 2017 09:23:04 +0200

kprobes/x86: Disable preemption in optprobe

Disable preemption in optprobe handler as described
in Documentation/kprobes.txt, which says:

  "Probe handlers are run with preemption disabled."

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E . McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/150581525942.32348.6359217983269060829.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/kprobes/opt.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index f558103..32c35cb 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -161,6 +161,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 		return;
 
 	local_irq_save(flags);
+	preempt_disable();
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
@@ -180,6 +181,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 		opt_pre_handler(&op->kp, regs);
 		__this_cpu_write(current_kprobe, NULL);
 	}
+	preempt_enable_no_resched();
 	local_irq_restore(flags);
 }
 NOKPROBE_SYMBOL(optimized_callback);

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [tip:perf/core] kprobes/x86: Disable preemption in ftrace-based jprobes
  2017-09-19 10:01 ` [PATCH -tip v3 5/7] kprobes/x86: Disable preempt ftrace-based jprobe Masami Hiramatsu
@ 2017-09-28 10:54   ` tip-bot for Masami Hiramatsu
  0 siblings, 0 replies; 28+ messages in thread
From: tip-bot for Masami Hiramatsu @ 2017-09-28 10:54 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, tglx, paulmck, mhiramat, ast, linux-kernel, mingo, rostedt,
	peterz, ast, torvalds, ananth

Commit-ID:  5bb4fc2d8641219732eb2bb654206775a4219aca
Gitweb:     https://git.kernel.org/tip/5bb4fc2d8641219732eb2bb654206775a4219aca
Author:     Masami Hiramatsu <mhiramat@kernel.org>
AuthorDate: Tue, 19 Sep 2017 19:01:40 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 28 Sep 2017 09:23:04 +0200

kprobes/x86: Disable preemption in ftrace-based jprobes

Disable preemption in ftrace-based jprobe handlers as
described in Documentation/kprobes.txt:

  "Probe handlers are run with preemption disabled."

This will fix jprobes behavior when CONFIG_PREEMPT=y.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E . McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/150581530024.32348.9863783558598926771.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/kprobes/ftrace.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index 041f7b6..bcfee4f 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -26,7 +26,7 @@
 #include "common.h"
 
 static nokprobe_inline
-int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
+void __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 		      struct kprobe_ctlblk *kcb, unsigned long orig_ip)
 {
 	/*
@@ -41,20 +41,21 @@ int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 	__this_cpu_write(current_kprobe, NULL);
 	if (orig_ip)
 		regs->ip = orig_ip;
-	return 1;
 }
 
 int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 		    struct kprobe_ctlblk *kcb)
 {
-	if (kprobe_ftrace(p))
-		return __skip_singlestep(p, regs, kcb, 0);
-	else
-		return 0;
+	if (kprobe_ftrace(p)) {
+		__skip_singlestep(p, regs, kcb, 0);
+		preempt_enable_no_resched();
+		return 1;
+	}
+	return 0;
 }
 NOKPROBE_SYMBOL(skip_singlestep);
 
-/* Ftrace callback handler for kprobes */
+/* Ftrace callback handler for kprobes -- called with preempt disabled */
 void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 			   struct ftrace_ops *ops, struct pt_regs *regs)
 {
@@ -77,13 +78,17 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 		/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
 		regs->ip = ip + sizeof(kprobe_opcode_t);
 
+		/* To emulate trap-based kprobes, preempt_disable here */
+		preempt_disable();
 		__this_cpu_write(current_kprobe, p);
 		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-		if (!p->pre_handler || !p->pre_handler(p, regs))
+		if (!p->pre_handler || !p->pre_handler(p, regs)) {
 			__skip_singlestep(p, regs, kcb, orig_ip);
+			preempt_enable_no_resched();
+		}
 		/*
 		 * If pre_handler returns !0, it sets regs->ip and
-		 * resets current kprobe.
+		 * resets current kprobe, and keeps the preempt count +1.
 		 */
 	}
 end:

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [tip:perf/core] kprobes/x86: Remove IRQ disabling from ftrace-based/optimized kprobes
  2017-09-19 10:02 ` [PATCH -tip v3 6/7] kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe Masami Hiramatsu
  2017-09-28  7:25   ` Ingo Molnar
@ 2017-09-28 10:54   ` tip-bot for Masami Hiramatsu
  1 sibling, 0 replies; 28+ messages in thread
From: tip-bot for Masami Hiramatsu @ 2017-09-28 10:54 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: ast, mhiramat, linux-kernel, rostedt, peterz, ast, ananth, hpa,
	mingo, paulmck, torvalds, tglx

Commit-ID:  a19b2e3d783964d48d2b494439648e929bcdc976
Gitweb:     https://git.kernel.org/tip/a19b2e3d783964d48d2b494439648e929bcdc976
Author:     Masami Hiramatsu <mhiramat@kernel.org>
AuthorDate: Tue, 19 Sep 2017 19:02:20 +0900
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 28 Sep 2017 09:25:50 +0200

kprobes/x86: Remove IRQ disabling from ftrace-based/optimized kprobes

Kprobes don't need to disable IRQs if they are called from the
ftrace/jump trampoline code, because Documentation/kprobes.txt says:

  -----
  Probe handlers are run with preemption disabled.  Depending on the
  architecture and optimization state, handlers may also run with
  interrupts disabled (e.g., kretprobe handlers and optimized kprobe
  handlers run without interrupt disabled on x86/x86-64).
  -----

So let's remove IRQ disabling from those handlers.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E . McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/150581534039.32348.11331736206004264553.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/kprobes/ftrace.c | 9 ++-------
 arch/x86/kernel/kprobes/opt.c    | 4 ----
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index bcfee4f..8dc0161 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -61,14 +61,11 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 {
 	struct kprobe *p;
 	struct kprobe_ctlblk *kcb;
-	unsigned long flags;
-
-	/* Disable irq for emulating a breakpoint and avoiding preempt */
-	local_irq_save(flags);
 
+	/* Preempt is disabled by ftrace */
 	p = get_kprobe((kprobe_opcode_t *)ip);
 	if (unlikely(!p) || kprobe_disabled(p))
-		goto end;
+		return;
 
 	kcb = get_kprobe_ctlblk();
 	if (kprobe_running()) {
@@ -91,8 +88,6 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 		 * resets current kprobe, and keeps the preempt count +1.
 		 */
 	}
-end:
-	local_irq_restore(flags);
 }
 NOKPROBE_SYMBOL(kprobe_ftrace_handler);
 
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 32c35cb..e941136 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -154,13 +154,10 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func);
 static void
 optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 {
-	unsigned long flags;
-
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
-	local_irq_save(flags);
 	preempt_disable();
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
@@ -182,7 +179,6 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 		__this_cpu_write(current_kprobe, NULL);
 	}
 	preempt_enable_no_resched();
-	local_irq_restore(flags);
 }
 NOKPROBE_SYMBOL(optimized_callback);
 

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH -tip v3 6/7] kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe
  2017-09-28  7:25   ` Ingo Molnar
@ 2017-09-29  6:48     ` Masami Hiramatsu
  0 siblings, 0 replies; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-29  6:48 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: mingo, x86, Steven Rostedt, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov

On Thu, 28 Sep 2017 09:25:41 +0200
Ingo Molnar <mingo@kernel.org> wrote:

> 
> * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> 
> > Actually kprobes doesn't need to disable irq if it is
> > called from ftrace/jump trampoline code because
> > Documentation/kprobes.txt says
> > 
> > -----
> > Probe handlers are run with preemption disabled.  Depending on the
> > architecture and optimization state, handlers may also run with
> > interrupts disabled (e.g., kretprobe handlers and optimized kprobe
> > handlers run without interrupt disabled on x86/x86-64).
> > -----
> > 
> > So let's remove irq disabling from those handlers.
> 
> > -	local_irq_save(flags);
> 
> The title is talking about disable_irq():
> 
>   kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe
> 
> ... but the patch is actually using local_irq_save(), which is an entirely 
> different thing! You probably wanted to say:
> 
>   kprobes/x86: Remove irq disabling from ftrace-based/optimized kprobes

Correct! That's my mistake. thanks!

> 
> Also note the plural of 'kprobes' when we refer to them as a generic thing.
> 
> I fixed the title, but _please_ read changelogs more carefully before sending 
> them.

Thank you again,

> 
> Thanks,
> 
> 	Ingo


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
  2017-09-28  7:22   ` Ingo Molnar
@ 2017-09-29  7:29     ` Masami Hiramatsu
  2017-09-29  7:37       ` Ingo Molnar
  2017-10-03 23:57       ` Steven Rostedt
  0 siblings, 2 replies; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-29  7:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: mingo, x86, Steven Rostedt, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov

On Thu, 28 Sep 2017 09:22:20 +0200
Ingo Molnar <mingo@kernel.org> wrote:

> 
> * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> 
> > To enable jump optimized probe with CONFIG_PREEMPT, use
> > synchronize_rcu_tasks() to wait for all tasks preempted
> > on trampoline code back on track.
> 
> This sentence does not parse. It's missing a verb, but I'm not sure.

Hmm, how about this?

Use synchthnize_rcu_tasks() to wait for all tasks preempted
on trampoline code back on track so that jump optimized probe
can be enabled with CONFIG_PREEMPT.

> 
> > Since the jump optimized kprobes can replace multiple
> > instructions, there can be tasks which are preempted
> > on the 2nd (or 3rd) instructions. If the kprobe
> > replaces those instructions by a jump instruction,
> > when those tasks back to the preempted place, it is
> > a middle of the jump instruction and causes a kernel
> > panic.
> 
> 
> Again, sentence appears to be missing a verb and also an adjective I think.
> 

Hmm, I couldn't understand; I think you are pointing at the
sentence below,
----
If the kprobe replaces those instructions by a jump instruction,
when those tasks back to the preempted place, it is a middle of
the jump instruction and causes a kernel panic.
----

Of course "If" and "when" look ugly, but both clauses have a verb...

> > To avoid such tragedies in advance, kprobe optimizer
> > prepare a detour route using normal kprobe (e.g.
> > int3 breakpoint on x86), and wait for the tasks which
> > is interrrupted on such place by synchronize_sched()
> > when CONFIG_PREEMPT=n.
> 
> s/tragedies/mishaps

I got it.

> 
> Part after the first comma does not parse.

Yeah, some typos, but

kprobe optimizer prepares a detour route using a normal kprobe ()
and waits, via synchronize_sched(), for the tasks which are
interrupted at such a place, when CONFIG_PREEMPT=n.

should be parseable. (At least Google Translate can...)

> 
> Also the way to refer to kprobes is "kprobes" and "normal kprobes".
> Use 'kprobe' only when talking about a specific kprobe instance or such.
> You use this correctly later on in the changelog ...
> 
> > If CONFIG_PREEMPT=y, things be more complicated, because
> 
> s/be/are or s/be/get

Thanks, "get" is preferred :)

> 
> > such interrupted thread can be preempted (other thread
> > can be scheduled in interrupt handler.) So, kprobes
> 
> full stop in the wrong place.
> 
> > optimizer has to wait for those tasks scheduled normally.
> 
> missing verb.

kprobe optimizer must wait for those ... 

will it work?


> 
> > In this case we can use synchronize_rcu_tasks() which
> > ensures that all preempted tasks back on track and
> > schedule it.
> 
> More careful changelogs please.
> 
> > +	 * are done. Because optprobe may modify multiple instructions,
> > +	 * there is a chance that the Nth instruction is interrupted. In that
> > +	 * case, running interrupt can return to the Nth byte of jump
> > +	 * instruction. This can be avoided by waiting for returning of
> > +	 * such interrupts, since (until here) the first byte of the optimized
> > +	 * probe is already replaced with normal kprobe (sw breakpoint) and
> > +	 * all threads which reach to the probed address will hit it and
> > +	 * bypass the copied instructions (instead of executing the original.)
> > +	 * With CONFIG_PREEMPT, such interrupts can be preepmted. To wait
> > +	 * for such thread, we will use synchronize_rcu_tasks() which ensures
> > +	 * all preeempted tasks are scheduled normally (not preempted).
> > +	 * So we can ensure there is no threads preempted at probed address.
> 
> What? Interrupts cannot be preempted.

Steve, could you correct me if I'm wrong. I thought if the kernel is
compiled with CONFIG_PREEMPT=y, even in the kernel, it can be preempted
suddenly. It means timer interrupt occurs at kernel path and it yield
to new task (=preempt.) Do I miss something?

> 
> Also, "To wait for such threads", or "To wait for such a thread".

OK,

Thank you,

> 
> Thanks,
> 
> 	Ingo


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
  2017-09-29  7:29     ` Masami Hiramatsu
@ 2017-09-29  7:37       ` Ingo Molnar
  2017-09-29 14:44         ` Masami Hiramatsu
  2017-10-03 23:57       ` Steven Rostedt
  1 sibling, 1 reply; 28+ messages in thread
From: Ingo Molnar @ 2017-09-29  7:37 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: mingo, x86, Steven Rostedt, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov


* Masami Hiramatsu <mhiramat@kernel.org> wrote:

> On Thu, 28 Sep 2017 09:22:20 +0200
> Ingo Molnar <mingo@kernel.org> wrote:
> 
> > 
> > * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > 
> > > To enable jump optimized probe with CONFIG_PREEMPT, use
> > > synchronize_rcu_tasks() to wait for all tasks preempted
> > > on trampoline code back on track.
> > 
> > This sentence does not parse. It's missing a verb, but I'm not sure.
> 
> Hmm, how about this?
> 
> Use synchthnize_rcu_tasks() to wait for all tasks preempted
> on trampoline code back on track so that jump optimized probe
> can be enabled with CONFIG_PREEMPT.

What's "synchthnize"? ...

More seriously, I still don't understand it. What is 'back on track'?

Do you mean to say:

   We want to wait for all potentially preempted kprobes trampoline execution to 
   have completed. This guarantees that any freed trampoline memory is not in use
   by any task in the system anymore. synchronize_rcu_tasks() gives such a
   guarantee, so use it.

or something else?

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
  2017-09-29  7:37       ` Ingo Molnar
@ 2017-09-29 14:44         ` Masami Hiramatsu
  2017-09-29 17:45           ` Ingo Molnar
  0 siblings, 1 reply; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-29 14:44 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: mingo, x86, Steven Rostedt, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov

On Fri, 29 Sep 2017 09:37:55 +0200
Ingo Molnar <mingo@kernel.org> wrote:

> 
> * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> 
> > On Thu, 28 Sep 2017 09:22:20 +0200
> > Ingo Molnar <mingo@kernel.org> wrote:
> > 
> > > 
> > > * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > > 
> > > > To enable jump optimized probe with CONFIG_PREEMPT, use
> > > > synchronize_rcu_tasks() to wait for all tasks preempted
> > > > on trampoline code back on track.
> > > 
> > > This sentence does not parse. It's missing a verb, but I'm not sure.
> > 
> > Hmm, how about this?
> > 
> > Use synchthnize_rcu_tasks() to wait for all tasks preempted
> > on trampoline code back on track so that jump optimized probe
> > can be enabled with CONFIG_PREEMPT.
> 
> What's "synchthnize"? ...

Oops, it's my typo. my XPS touch pad is really unstable...

> 
> More seriously, I still don't understand it. What is 'back on track'?
> 
> Do you mean to say:
> 
>    We want to wait for all potentially preempted kprobes trampoline execution to 
>    have completed. This guarantees that any freed trampoline memory is not in use
>    by any task in the system anymore. synchronize_rcu_tasks() gives such a
>    guarantee, so use it.

Exactly, this is correct!

Thank you,

> 
> or something else?
> 
> Thanks,
> 
> 	Ingo


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
  2017-09-29 14:44         ` Masami Hiramatsu
@ 2017-09-29 17:45           ` Ingo Molnar
  2017-09-30  5:12             ` Masami Hiramatsu
  0 siblings, 1 reply; 28+ messages in thread
From: Ingo Molnar @ 2017-09-29 17:45 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: mingo, x86, Steven Rostedt, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov


* Masami Hiramatsu <mhiramat@kernel.org> wrote:

> On Fri, 29 Sep 2017 09:37:55 +0200
> Ingo Molnar <mingo@kernel.org> wrote:
> 
> > 
> > * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > 
> > > On Thu, 28 Sep 2017 09:22:20 +0200
> > > Ingo Molnar <mingo@kernel.org> wrote:
> > > 
> > > > 
> > > > * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > > > 
> > > > > To enable jump optimized probe with CONFIG_PREEMPT, use
> > > > > synchronize_rcu_tasks() to wait for all tasks preempted
> > > > > on trampoline code back on track.
> > > > 
> > > > This sentence does not parse. It's missing a verb, but I'm not sure.
> > > 
> > > Hmm, how about this?
> > > 
> > > Use synchthnize_rcu_tasks() to wait for all tasks preempted
> > > on trampoline code back on track so that jump optimized probe
> > > can be enabled with CONFIG_PREEMPT.
> > 
> > What's "synchthnize"? ...
> 
> Oops, it's my typo. my XPS touch pad is really unstable...
> 
> > 
> > More seriously, I still don't understand it. What is 'back on track'?
> > 
> > Do you mean to say:
> > 
> >    We want to wait for all potentially preempted kprobes trampoline execution to 
> >    have completed. This guarantees that any freed trampoline memory is not in use
> >    by any task in the system anymore. synchronize_rcu_tasks() gives such a
> >    guarantee, so use it.
> 
> Exactly, this is correct!

Ok, great - please re-send the remaining kprobes patches that I have not applied 
yet - I'll read through the changelogs and fix any bits that might still be 
unclear.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
  2017-09-29 17:45           ` Ingo Molnar
@ 2017-09-30  5:12             ` Masami Hiramatsu
  0 siblings, 0 replies; 28+ messages in thread
From: Masami Hiramatsu @ 2017-09-30  5:12 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: mingo, x86, Steven Rostedt, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov

On Fri, 29 Sep 2017 19:45:28 +0200
Ingo Molnar <mingo@kernel.org> wrote:

> 
> * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> 
> > On Fri, 29 Sep 2017 09:37:55 +0200
> > Ingo Molnar <mingo@kernel.org> wrote:
> > 
> > > 
> > > * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > > 
> > > > On Thu, 28 Sep 2017 09:22:20 +0200
> > > > Ingo Molnar <mingo@kernel.org> wrote:
> > > > 
> > > > > 
> > > > > * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > > > > 
> > > > > > To enable jump optimized probe with CONFIG_PREEMPT, use
> > > > > > synchronize_rcu_tasks() to wait for all tasks preempted
> > > > > > on trampoline code back on track.
> > > > > 
> > > > > This sentence does not parse. It's missing a verb, but I'm not sure.
> > > > 
> > > > Hmm, how about this?
> > > > 
> > > > Use synchthnize_rcu_tasks() to wait for all tasks preempted
> > > > on trampoline code back on track so that jump optimized probe
> > > > can be enabled with CONFIG_PREEMPT.
> > > 
> > > What's "synchthnize"? ...
> > 
> > Oops, it's my typo. my XPS touch pad is really unstable...
> > 
> > > 
> > > More seriously, I still don't understand it. What is 'back on track'?
> > > 
> > > Do you mean to say:
> > > 
> > >    We want to wait for all potentially preempted kprobes trampoline execution to 
> > >    have completed. This guarantees that any freed trampoline memory is not in use
> > >    by any task in the system anymore. synchronize_rcu_tasks() gives such a
> > >    guarantee, so use it.
> > 
> > Exactly, this is correct!
> 
> Ok, great - please re-send the remaining kprobes patches that I have not applied 
> yet - I'll read through the changelogs and fix any bits that might still be 
> unclear.

OK, I got it. I'll check the remaining patches!

Thank you!

> 
> Thanks,
> 
> 	Ingo


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
  2017-09-29  7:29     ` Masami Hiramatsu
  2017-09-29  7:37       ` Ingo Molnar
@ 2017-10-03 23:57       ` Steven Rostedt
  2017-10-04 14:01         ` Masami Hiramatsu
  1 sibling, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2017-10-03 23:57 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, mingo, x86, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov


Sorry for the late reply. Coming back from Kernel Recipes, I fell way
behind in email.

On Fri, 29 Sep 2017 00:29:38 -0700
Masami Hiramatsu <mhiramat@kernel.org> wrote:

> > > +	 * are done. Because optprobe may modify multiple instructions,
> > > +	 * there is a chance that the Nth instruction is interrupted. In that
> > > +	 * case, running interrupt can return to the Nth byte of jump
> > > +	 * instruction. This can be avoided by waiting for returning of
> > > +	 * such interrupts, since (until here) the first byte of the optimized
> > > +	 * probe is already replaced with normal kprobe (sw breakpoint) and
> > > +	 * all threads which reach to the probed address will hit it and
> > > +	 * bypass the copied instructions (instead of executing the original.)
> > > +	 * With CONFIG_PREEMPT, such interrupts can be preepmted. To wait
> > > +	 * for such thread, we will use synchronize_rcu_tasks() which ensures
> > > +	 * all preeempted tasks are scheduled normally (not preempted).
> > > +	 * So we can ensure there is no threads preempted at probed address.  
> > 
> > What? Interrupts cannot be preempted.  
> 
> Steve, could you correct me if I'm wrong? I thought that if the kernel
> is compiled with CONFIG_PREEMPT=y, it can be preempted suddenly even
> while running kernel code. That is, a timer interrupt occurs on a
> kernel path and yields to a new task (= preemption). Am I missing
> something?

The above sounds correct. I believe Ingo was pointing out the line that
states "With CONFIG_PREEMPT, such interrupts can be preempted", which
is not true. I think you meant that interrupts can preempt the kernel
and cause it to schedule out. The line above sounds like you meant the
interrupt was preempted, which can't happen.

-- Steve


* Re: [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
  2017-10-03 23:57       ` Steven Rostedt
@ 2017-10-04 14:01         ` Masami Hiramatsu
  0 siblings, 0 replies; 28+ messages in thread
From: Masami Hiramatsu @ 2017-10-04 14:01 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, mingo, x86, linux-kernel, Peter Zijlstra,
	Ananth N Mavinakayanahalli, Thomas Gleixner, H . Peter Anvin,
	Paul E . McKenney, Alexei Starovoitov, Alexei Starovoitov

On Tue, 3 Oct 2017 19:57:22 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> 
> Sorry for the late reply. Coming back from Kernel Recipes, I fell way
> behind in email.
> 
> On Fri, 29 Sep 2017 00:29:38 -0700
> Masami Hiramatsu <mhiramat@kernel.org> wrote:
> 
> > > > +	 * are done. Because optprobe may modify multiple instructions,
> > > > +	 * there is a chance that the Nth instruction is interrupted. In that
> > > > +	 * case, running interrupt can return to the Nth byte of jump
> > > > +	 * instruction. This can be avoided by waiting for returning of
> > > > +	 * such interrupts, since (until here) the first byte of the optimized
> > > > +	 * probe is already replaced with normal kprobe (sw breakpoint) and
> > > > +	 * all threads which reach to the probed address will hit it and
> > > > +	 * bypass the copied instructions (instead of executing the original.)
> > > > +	 * With CONFIG_PREEMPT, such interrupts can be preepmted. To wait
> > > > +	 * for such thread, we will use synchronize_rcu_tasks() which ensures
> > > > +	 * all preeempted tasks are scheduled normally (not preempted).
> > > > +	 * So we can ensure there is no threads preempted at probed address.  
> > > 
> > > What? Interrupts cannot be preempted.  
> > 
> > Steve, could you correct me if I'm wrong? I thought that if the kernel
> > is compiled with CONFIG_PREEMPT=y, it can be preempted suddenly even
> > while running kernel code. That is, a timer interrupt occurs on a
> > kernel path and yields to a new task (= preemption). Am I missing
> > something?
> 
> The above sounds correct. I believe Ingo was pointing out the line that
> states "With CONFIG_PREEMPT, such interrupts can be preempted", which
> is not true. I think you meant that interrupts can preempt the kernel
> and cause it to schedule out. The line above sounds like you meant the
> interrupt was preempted, which can't happen.

Ah, now I got it. Yes, the interrupt itself is not preempted...

Thank you!

> 
> -- Steve


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH -tip v3 3/7] kprobes: Warn if optprobe handler tries to change execution path
  2017-09-19 10:00 ` [PATCH -tip v3 3/7] kprobes: Warn if optprobe handler tries to change execution path Masami Hiramatsu
  2017-09-28 10:53   ` [tip:perf/core] " tip-bot for Masami Hiramatsu
@ 2017-10-10 17:02   ` Naveen N. Rao
  2017-10-12  5:04     ` Masami Hiramatsu
  1 sibling, 1 reply; 28+ messages in thread
From: Naveen N. Rao @ 2017-10-10 17:02 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, mingo, x86, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov, Michael Ellerman

On 2017/09/19 10:00AM, Masami Hiramatsu wrote:
> Warn if optprobe handler tries to change execution path.
> As described in Documentation/kprobes.txt, with optprobe
> user handler can not change instruction pointer. In that
> case user must avoid optimizing the kprobes by setting
> post_handler or break_handler.

But, if the pre handler returns !0, does that necessarily mean that the 
[n]ip has been modified?

In Documentation/kprobes.txt, under API Reference for register_kprobe, 
we have:
  User's pre-handler (kp->pre_handler)::

	  #include <linux/kprobes.h>
	  #include <linux/ptrace.h>
	  int pre_handler(struct kprobe *p, struct pt_regs *regs);

  Called with p pointing to the kprobe associated with the breakpoint,
  and regs pointing to the struct containing the registers saved when
  the breakpoint was hit.  Return 0 here unless you're a Kprobes geek.
			   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

So, we don't seem to _require_ users to return !0 if the handler changes 
[n]ip? Or to always change [n]ip if returning !0.

The implicit assumption seems to be that the handler returns !0 if it 
wants to suppress executing the probed instruction since the handler has 
already taken care of that. So, at the least, I think the message should 
change. However...

In powerpc, we place a probe on kretprobe_trampoline and optimize it.  
This works for us (even though optprobes doesn't "honour" changes to 
[n]ip). See commit 762df10bad6954 ("powerpc/kprobes: Optimize kprobe in 
kretprobe_trampoline()"). With this patch, we are now seeing a warning 
(thanks to mpe for the report):

[  520.144449] ------------[ cut here ]------------
[  520.144676] WARNING: CPU: 2 PID: 6355 at kernel/kprobes.c:391 opt_pre_handler+0xe8/0x110
...
[  520.151806] CPU: 2 PID: 6355 Comm: ftracetest Not tainted 4.14.0-rc4-gcc6-next-20171009-g49827b9 #1
[  520.152097] task: c0000000e9ddfb80 task.stack: c0000000f881c000
[  520.152291] NIP:  c0000000001f3b78 LR: c0000000001f3b2c CTR: c0000000002436a0
[  520.152527] REGS: c0000000f881f7f0 TRAP: 0700   Not tainted  (4.14.0-rc4-gcc6-next-20171009-g49827b9)
[  520.152818] MSR:  8000000100021033 <SF,ME,IR,DR,RI,LE,TM[E]>  CR: 24002824  XER: 20000000
[  520.153080] CFAR: c0000000001f3b34 SOFTE: 0
...
[  520.155113] NIP [c0000000001f3b78] opt_pre_handler+0xe8/0x110
[  520.155320] LR [c0000000001f3b2c] opt_pre_handler+0x9c/0x110
[  520.155510] Call Trace:
[  520.155590] [c0000000f881fa70] [c0000000001f3b2c] opt_pre_handler+0x9c/0x110 (unreliable)
[  520.155825] [c0000000f881fb00] [c000000000047de8] optimized_callback+0xc8/0xe0
[  520.156047] [c0000000f881fb40] [c000000000048764] optinsn_slot+0xec/0x10000
[  520.156238] [c0000000f881fe30] [c000000000046cb0] kretprobe_trampoline+0x0/0x10
[  520.156452] Instruction dump:
[  520.156570] 7fbef840 409effa4 38210090 e8010010 eb41ffd0 eb61ffd8 eb81ffe0 eba1ffe8
[  520.156792] ebc1fff0 ebe1fff8 7c0803a6 4e800020 <0fe00000> e89e0028 3c62ffce 386362b0
[  520.157016] ---[ end trace d8cda029528a560d ]---
[  520.157172] Optprobe ignores instruction pointer changing.(kretprobe_trampoline+0x0/0x10)


So, should this patch be reverted?


- Naveen


* Re: [PATCH -tip v3 3/7] kprobes: Warn if optprobe handler tries to change execution path
  2017-10-10 17:02   ` [PATCH -tip v3 3/7] " Naveen N. Rao
@ 2017-10-12  5:04     ` Masami Hiramatsu
  2017-10-17  8:05       ` Naveen N. Rao
  0 siblings, 1 reply; 28+ messages in thread
From: Masami Hiramatsu @ 2017-10-12  5:04 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Ingo Molnar, mingo, x86, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov, Michael Ellerman

On Tue, 10 Oct 2017 22:32:31 +0530
"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:

> On 2017/09/19 10:00AM, Masami Hiramatsu wrote:
> > Warn if optprobe handler tries to change execution path.
> > As described in Documentation/kprobes.txt, with optprobe
> > user handler can not change instruction pointer. In that
> > case user must avoid optimizing the kprobes by setting
> > post_handler or break_handler.
> 
> But, if the pre handler returns !0, does that necessarily mean that the 
> [n]ip has been modified?

Such a kprobe should be prohibited from being jump-optimized.

> 
> In Documentation/kprobes.txt, under API Reference for register_kprobe, 
> we have:
>   User's pre-handler (kp->pre_handler)::
> 
> 	  #include <linux/kprobes.h>
> 	  #include <linux/ptrace.h>
> 	  int pre_handler(struct kprobe *p, struct pt_regs *regs);
> 
>   Called with p pointing to the kprobe associated with the breakpoint,
>   and regs pointing to the struct containing the registers saved when
>   the breakpoint was hit.  Return 0 here unless you're a Kprobes geek.
> 			   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Yeah, this part should be updated to make that clear.
Actually, you can also find the NOTE below in Documentation/kprobes.txt,
at the end of "How Does Jump Optimization Work?":

=========
NOTE for geeks:
The jump optimization changes the kprobe's pre_handler behavior.
Without optimization, the pre_handler can change the kernel's execution
path by changing regs->ip and returning 1.  However, when the probe
is optimized, that modification is ignored.  Thus, if you want to
tweak the kernel's execution path, you need to suppress optimization,
using one of the following techniques:

- Specify an empty function for the kprobe's post_handler or break_handler.

or

- Execute 'sysctl -w debug.kprobes_optimization=n'
=========

> So, we don't seem to _require_ users to return !0 if the handler changes 
> [n]ip? Or to always change [n]ip if returning !0.
> 
> The implicit assumption seems to be that the handler returns !0 if it 
> wants to suppress executing the probed instruction since the handler has 
> already taken care of that. So, at the least, I think the message should 
> change. However...
> 
> In powerpc, we place a probe on kretprobe_trampoline and optimize it. 

Oh, what did you do?? I think kretprobe_trampoline just calls
its handler to get the correct return address and then returns to it.

> This works for us (even though optprobes doesn't "honour" changes to 
> [n]ip). See commit 762df10bad6954 ("powerpc/kprobes: Optimize kprobe in 
> kretprobe_trampoline()"). With this patch, we are now seeing a warning 
> (thanks to mpe for the report):
> 
> [  520.144449] ------------[ cut here ]------------
> [  520.144676] WARNING: CPU: 2 PID: 6355 at kernel/kprobes.c:391 opt_pre_handler+0xe8/0x110
> ...
> [  520.151806] CPU: 2 PID: 6355 Comm: ftracetest Not tainted 4.14.0-rc4-gcc6-next-20171009-g49827b9 #1
> [  520.152097] task: c0000000e9ddfb80 task.stack: c0000000f881c000
> [  520.152291] NIP:  c0000000001f3b78 LR: c0000000001f3b2c CTR: c0000000002436a0
> [  520.152527] REGS: c0000000f881f7f0 TRAP: 0700   Not tainted  (4.14.0-rc4-gcc6-next-20171009-g49827b9)
> [  520.152818] MSR:  8000000100021033 <SF,ME,IR,DR,RI,LE,TM[E]>  CR: 24002824  XER: 20000000
> [  520.153080] CFAR: c0000000001f3b34 SOFTE: 0
> ...
> [  520.155113] NIP [c0000000001f3b78] opt_pre_handler+0xe8/0x110
> [  520.155320] LR [c0000000001f3b2c] opt_pre_handler+0x9c/0x110
> [  520.155510] Call Trace:
> [  520.155590] [c0000000f881fa70] [c0000000001f3b2c] opt_pre_handler+0x9c/0x110 (unreliable)
> [  520.155825] [c0000000f881fb00] [c000000000047de8] optimized_callback+0xc8/0xe0
> [  520.156047] [c0000000f881fb40] [c000000000048764] optinsn_slot+0xec/0x10000
> [  520.156238] [c0000000f881fe30] [c000000000046cb0] kretprobe_trampoline+0x0/0x10
> [  520.156452] Instruction dump:
> [  520.156570] 7fbef840 409effa4 38210090 e8010010 eb41ffd0 eb61ffd8 eb81ffe0 eba1ffe8
> [  520.156792] ebc1fff0 ebe1fff8 7c0803a6 4e800020 <0fe00000> e89e0028 3c62ffce 386362b0
> [  520.157016] ---[ end trace d8cda029528a560d ]---
> [  520.157172] Optprobe ignores instruction pointer changing.(kretprobe_trampoline+0x0/0x10)
> 
> 
> So, should this patch be reverted?

Hmm, I got it. It seems to depend on the arch implementation.
Anyway, this patch just adds a warning, so we can safely revert it.
And the documentation should be updated.

Ingo, could you revert this change?

Thank you,

> 
> 
> - Naveen
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH -tip v3 3/7] kprobes: Warn if optprobe handler tries to change execution path
  2017-10-12  5:04     ` Masami Hiramatsu
@ 2017-10-17  8:05       ` Naveen N. Rao
  0 siblings, 0 replies; 28+ messages in thread
From: Naveen N. Rao @ 2017-10-17  8:05 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, mingo, x86, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	H . Peter Anvin, Paul E . McKenney, Alexei Starovoitov,
	Alexei Starovoitov, Michael Ellerman

On 2017/10/12 05:04AM, Masami Hiramatsu wrote:
> On Tue, 10 Oct 2017 22:32:31 +0530
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> wrote:
> 
> > On 2017/09/19 10:00AM, Masami Hiramatsu wrote:
> > So, we don't seem to _require_ users to return !0 if the handler 
> > changes [n]ip? Or to always change [n]ip if returning !0.
> > 
> > The implicit assumption seems to be that the handler returns !0 if it 
> > wants to suppress executing the probed instruction since the handler has 
> > already taken care of that. So, at the least, I think the message should 
> > change. However...
> > 
> > In powerpc, we place a probe on kretprobe_trampoline and optimize it. 
> 
> Oh, what did you do?? I think kretprobe_trampoline just calls
> its handler to get the correct return address and then returns to it.

For x86, yes; but on powerpc, we use the original implementation of
placing a probe at kretprobe_trampoline to catch the function
return.

> 
> > This works for us (even though optprobes doesn't "honour" changes to 
> > [n]ip). See commit 762df10bad6954 ("powerpc/kprobes: Optimize kprobe in 
> > kretprobe_trampoline()"). With this patch, we are now seeing a warning 
> > (thanks to mpe for the report):
> > 
> > [  520.144449] ------------[ cut here ]------------
> > [  520.144676] WARNING: CPU: 2 PID: 6355 at kernel/kprobes.c:391 opt_pre_handler+0xe8/0x110
> > ...
> > [  520.151806] CPU: 2 PID: 6355 Comm: ftracetest Not tainted 4.14.0-rc4-gcc6-next-20171009-g49827b9 #1
> > [  520.152097] task: c0000000e9ddfb80 task.stack: c0000000f881c000
> > [  520.152291] NIP:  c0000000001f3b78 LR: c0000000001f3b2c CTR: 
> > c0000000002436a0
> > [  520.152527] REGS: c0000000f881f7f0 TRAP: 0700   Not tainted  (4.14.0-rc4-gcc6-next-20171009-g49827b9)
> > [  520.152818] MSR:  8000000100021033 <SF,ME,IR,DR,RI,LE,TM[E]>  CR: 24002824  XER: 20000000
> > [  520.153080] CFAR: c0000000001f3b34 SOFTE: 0
> > ...
> > [  520.155113] NIP [c0000000001f3b78] opt_pre_handler+0xe8/0x110
> > [  520.155320] LR [c0000000001f3b2c] opt_pre_handler+0x9c/0x110
> > [  520.155510] Call Trace:
> > [  520.155590] [c0000000f881fa70] [c0000000001f3b2c] opt_pre_handler+0x9c/0x110 (unreliable)
> > [  520.155825] [c0000000f881fb00] [c000000000047de8] optimized_callback+0xc8/0xe0
> > [  520.156047] [c0000000f881fb40] [c000000000048764] optinsn_slot+0xec/0x10000
> > [  520.156238] [c0000000f881fe30] [c000000000046cb0] kretprobe_trampoline+0x0/0x10
> > [  520.156452] Instruction dump:
> > [  520.156570] 7fbef840 409effa4 38210090 e8010010 eb41ffd0 eb61ffd8 eb81ffe0 eba1ffe8
> > [  520.156792] ebc1fff0 ebe1fff8 7c0803a6 4e800020 <0fe00000> e89e0028 3c62ffce 386362b0
> > [  520.157016] ---[ end trace d8cda029528a560d ]---
> > [  520.157172] Optprobe ignores instruction pointer changing.(kretprobe_trampoline+0x0/0x10)
> > 
> > 
> > So, should this patch be reverted?
> 
> Hmm, I got it. It seems to depend on the arch implementation.

Yes, we're optimizing the probe at kretprobe_trampoline, so we need 
this.

> Anyway, this patch just adds a warning, so we can safely revert it.
> And the documentation should be updated.
> 
> Ingo, could you revert this change?

Thanks!
I will send a patch to revert this change.


- Naveen


Thread overview: 28+ messages
2017-09-19  9:58 [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Masami Hiramatsu
2017-09-19  9:59 ` [PATCH -tip v3 1/7] kprobes: Improve smoke test to check preemptible Masami Hiramatsu
2017-09-28 10:52   ` [tip:perf/core] kprobes: Improve smoke test to check preemptibility tip-bot for Masami Hiramatsu
2017-09-19  9:59 ` [PATCH -tip v3 2/7] kprobes/x86: Move get_kprobe_ctlblk in irq-disabled block Masami Hiramatsu
2017-09-28 10:52   ` [tip:perf/core] kprobes/x86: Move the get_kprobe_ctlblk() into " tip-bot for Masami Hiramatsu
2017-09-19 10:00 ` [PATCH -tip v3 3/7] kprobes: Warn if optprobe handler tries to change execution path Masami Hiramatsu
2017-09-28 10:53   ` [tip:perf/core] " tip-bot for Masami Hiramatsu
2017-10-10 17:02   ` [PATCH -tip v3 3/7] " Naveen N. Rao
2017-10-12  5:04     ` Masami Hiramatsu
2017-10-17  8:05       ` Naveen N. Rao
2017-09-19 10:00 ` [PATCH -tip v3 4/7] kprobes/x86: Disable preempt in optprobe Masami Hiramatsu
2017-09-28 10:53   ` [tip:perf/core] kprobes/x86: Disable preemption " tip-bot for Masami Hiramatsu
2017-09-19 10:01 ` [PATCH -tip v3 5/7] kprobes/x86: Disable preempt ftrace-based jprobe Masami Hiramatsu
2017-09-28 10:54   ` [tip:perf/core] kprobes/x86: Disable preemption in ftrace-based jprobes tip-bot for Masami Hiramatsu
2017-09-19 10:02 ` [PATCH -tip v3 6/7] kprobes/x86: Remove disable_irq from ftrace-based/optimized kprobe Masami Hiramatsu
2017-09-28  7:25   ` Ingo Molnar
2017-09-29  6:48     ` Masami Hiramatsu
2017-09-28 10:54   ` [tip:perf/core] kprobes/x86: Remove IRQ disabling from ftrace-based/optimized kprobes tip-bot for Masami Hiramatsu
2017-09-19 10:03 ` [PATCH -tip v3 7/7] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT Masami Hiramatsu
2017-09-28  7:22   ` Ingo Molnar
2017-09-29  7:29     ` Masami Hiramatsu
2017-09-29  7:37       ` Ingo Molnar
2017-09-29 14:44         ` Masami Hiramatsu
2017-09-29 17:45           ` Ingo Molnar
2017-09-30  5:12             ` Masami Hiramatsu
2017-10-03 23:57       ` Steven Rostedt
2017-10-04 14:01         ` Masami Hiramatsu
2017-09-21 22:00 ` [PATCH -tip v3 0/7] kprobes/x86: Preempt related enhancements Alexei Starovoitov
