linux-kernel.vger.kernel.org archive mirror
* [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86
@ 2018-03-12 14:40 Masami Hiramatsu
  2018-03-12 14:41 ` [PATCH -tip v2 1/5] kprobes: Remove jprobe API implementation Masami Hiramatsu
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2018-03-12 14:40 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar
  Cc: x86, Masami Hiramatsu, Yang Bo, Ingo Molnar, H . Peter Anvin,
	linux-kernel, Ananth N Mavinakayanahalli, Andrew Morton,
	Steven Rostedt, Laura Abbott, Josef Bacik, Alexei Starovoitov

Hello,

Since we decided to remove jprobes from the kernel last year,
its APIs have been disabled and we have been moving in-kernel
jprobe users to kprobes or trace events. There are now no
jprobe users left in the tree.

This series removes the jprobe implementation from x86 and
the generic code. I will send separate series for the other
architectures. After all of those patches are merged, I will
remove the jprobe APIs and data structures, since changing
those definitions now would break the build on other archs.

This series also shows how to do that. I have dropped the
arch-independent header changes from this version and merged
the kprobe-user code changes into the last patch, since that
patch changes the behavior kprobes users must expect when a
probe modifies the execution path (there is no need to
re-enable preemption anymore).

 - Remove jprobe functions (register/unregister,
   setjump/longjump) from generic/arch-dependent code.
   [1/5][2/5]
 - Remove break_handler related code.
   [3/5][4/5]
 - Do not disable preemption on exception handler
   [5/5]

As noted above, the last patch [5/5] changes the expected
behavior on x86; other archs will eventually need the same
change. However, the in-tree users that modify the execution
path are currently very limited and work only on x86, so we
can safely make this change.

Thank you,

---

Masami Hiramatsu (5):
      kprobes: Remove jprobe API implementation
      x86: kprobes: Remove jprobe implementation
      kprobes: Ignore break_handler
      x86: kprobes: Ignore break_handler
      x86: kprobes: Do not disable preempt on int3 path


 Documentation/kprobes.txt        |   13 ++--
 arch/x86/include/asm/kprobes.h   |    3 -
 arch/x86/kernel/kprobes/common.h |   10 ---
 arch/x86/kernel/kprobes/core.c   |  114 ++------------------------------------
 arch/x86/kernel/kprobes/ftrace.c |   21 +------
 arch/x86/kernel/kprobes/opt.c    |    1 
 include/linux/kprobes.h          |    3 -
 kernel/fail_function.c           |    1 
 kernel/kprobes.c                 |  115 ++------------------------------------
 kernel/trace/trace_kprobe.c      |    3 -
 10 files changed, 20 insertions(+), 264 deletions(-)

--
Masami Hiramatsu (Linaro)

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH -tip v2 1/5] kprobes: Remove jprobe API implementation
  2018-03-12 14:40 [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86 Masami Hiramatsu
@ 2018-03-12 14:41 ` Masami Hiramatsu
  2018-03-12 14:41 ` [PATCH -tip v2 2/5] x86: kprobes: Remove jprobe implementation Masami Hiramatsu
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2018-03-12 14:41 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar
  Cc: x86, Masami Hiramatsu, Yang Bo, Ingo Molnar, H . Peter Anvin,
	linux-kernel, Ananth N Mavinakayanahalli, Andrew Morton,
	Steven Rostedt, Laura Abbott, Josef Bacik, Alexei Starovoitov

Remove the jprobe API implementations, which are no longer used.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 include/linux/kprobes.h |    3 --
 kernel/kprobes.c        |   78 +----------------------------------------------
 2 files changed, 1 insertion(+), 80 deletions(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 9440a2fc8893..b520baa65682 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -389,9 +389,6 @@ int register_kprobe(struct kprobe *p);
 void unregister_kprobe(struct kprobe *p);
 int register_kprobes(struct kprobe **kps, int num);
 void unregister_kprobes(struct kprobe **kps, int num);
-int setjmp_pre_handler(struct kprobe *, struct pt_regs *);
-int longjmp_break_handler(struct kprobe *, struct pt_regs *);
-void jprobe_return(void);
 unsigned long arch_deref_entry_point(void *);
 
 int register_kretprobe(struct kretprobe *rp);
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 102160ff5c66..6d5a7c29cf99 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1272,7 +1272,7 @@ NOKPROBE_SYMBOL(cleanup_rp_inst);
 
 /*
 * Add the new probe to ap->list. Fail if this is the
-* second jprobe at the address - two jprobes can't coexist
+* second break_handler at the address
 */
 static int add_new_kprobe(struct kprobe *ap, struct kprobe *p)
 {
@@ -1812,77 +1812,6 @@ unsigned long __weak arch_deref_entry_point(void *entry)
 	return (unsigned long)entry;
 }
 
-#if 0
-int register_jprobes(struct jprobe **jps, int num)
-{
-	int ret = 0, i;
-
-	if (num <= 0)
-		return -EINVAL;
-
-	for (i = 0; i < num; i++) {
-		ret = register_jprobe(jps[i]);
-
-		if (ret < 0) {
-			if (i > 0)
-				unregister_jprobes(jps, i);
-			break;
-		}
-	}
-
-	return ret;
-}
-EXPORT_SYMBOL_GPL(register_jprobes);
-
-int register_jprobe(struct jprobe *jp)
-{
-	unsigned long addr, offset;
-	struct kprobe *kp = &jp->kp;
-
-	/*
-	 * Verify probepoint as well as the jprobe handler are
-	 * valid function entry points.
-	 */
-	addr = arch_deref_entry_point(jp->entry);
-
-	if (kallsyms_lookup_size_offset(addr, NULL, &offset) && offset == 0 &&
-	    kprobe_on_func_entry(kp->addr, kp->symbol_name, kp->offset)) {
-		kp->pre_handler = setjmp_pre_handler;
-		kp->break_handler = longjmp_break_handler;
-		return register_kprobe(kp);
-	}
-
-	return -EINVAL;
-}
-EXPORT_SYMBOL_GPL(register_jprobe);
-
-void unregister_jprobe(struct jprobe *jp)
-{
-	unregister_jprobes(&jp, 1);
-}
-EXPORT_SYMBOL_GPL(unregister_jprobe);
-
-void unregister_jprobes(struct jprobe **jps, int num)
-{
-	int i;
-
-	if (num <= 0)
-		return;
-	mutex_lock(&kprobe_mutex);
-	for (i = 0; i < num; i++)
-		if (__unregister_kprobe_top(&jps[i]->kp) < 0)
-			jps[i]->kp.addr = NULL;
-	mutex_unlock(&kprobe_mutex);
-
-	synchronize_sched();
-	for (i = 0; i < num; i++) {
-		if (jps[i]->kp.addr)
-			__unregister_kprobe_bottom(&jps[i]->kp);
-	}
-}
-EXPORT_SYMBOL_GPL(unregister_jprobes);
-#endif
-
 #ifdef CONFIG_KRETPROBES
 /*
  * This kprobe pre_handler is registered with every kretprobe. When probe
@@ -2329,8 +2258,6 @@ static void report_probe(struct seq_file *pi, struct kprobe *p,
 
 	if (p->pre_handler == pre_handler_kretprobe)
 		kprobe_type = "r";
-	else if (p->pre_handler == setjmp_pre_handler)
-		kprobe_type = "j";
 	else
 		kprobe_type = "k";
 
@@ -2637,6 +2564,3 @@ late_initcall(debugfs_kprobe_init);
 #endif /* CONFIG_DEBUG_FS */
 
 module_init(init_kprobes);
-
-/* defined in arch/.../kernel/kprobes.c */
-EXPORT_SYMBOL_GPL(jprobe_return);


* [PATCH -tip v2 2/5] x86: kprobes: Remove jprobe implementation
  2018-03-12 14:40 [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86 Masami Hiramatsu
  2018-03-12 14:41 ` [PATCH -tip v2 1/5] kprobes: Remove jprobe API implementation Masami Hiramatsu
@ 2018-03-12 14:41 ` Masami Hiramatsu
  2018-03-12 14:42 ` [PATCH -tip v2 3/5] kprobes: Ignore break_handler Masami Hiramatsu
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2018-03-12 14:41 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar
  Cc: x86, Masami Hiramatsu, Yang Bo, Ingo Molnar, H . Peter Anvin,
	linux-kernel, Ananth N Mavinakayanahalli, Andrew Morton,
	Steven Rostedt, Laura Abbott, Josef Bacik, Alexei Starovoitov

Remove the arch-dependent setjmp/longjmp functions and the
now-unused jprobe fields in kprobe_ctlblk.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/include/asm/kprobes.h |    3 -
 arch/x86/kernel/kprobes/core.c |   87 ----------------------------------------
 2 files changed, 90 deletions(-)

diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 367d99cff426..06782c2efa04 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -111,9 +111,6 @@ struct kprobe_ctlblk {
 	unsigned long kprobe_status;
 	unsigned long kprobe_old_flags;
 	unsigned long kprobe_saved_flags;
-	unsigned long *jprobe_saved_sp;
-	struct pt_regs jprobe_saved_regs;
-	kprobe_opcode_t jprobes_stack[MAX_STACK_SIZE];
 	struct prev_kprobe prev_kprobe;
 };
 
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 0715f827607c..35b01e4f8d64 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1079,93 +1079,6 @@ int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val,
 }
 NOKPROBE_SYMBOL(kprobe_exceptions_notify);
 
-int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
-{
-	struct jprobe *jp = container_of(p, struct jprobe, kp);
-	unsigned long addr;
-	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
-
-	kcb->jprobe_saved_regs = *regs;
-	kcb->jprobe_saved_sp = stack_addr(regs);
-	addr = (unsigned long)(kcb->jprobe_saved_sp);
-
-	/*
-	 * As Linus pointed out, gcc assumes that the callee
-	 * owns the argument space and could overwrite it, e.g.
-	 * tailcall optimization. So, to be absolutely safe
-	 * we also save and restore enough stack bytes to cover
-	 * the argument area.
-	 * Use __memcpy() to avoid KASAN stack out-of-bounds reports as we copy
-	 * raw stack chunk with redzones:
-	 */
-	__memcpy(kcb->jprobes_stack, (kprobe_opcode_t *)addr, MIN_STACK_SIZE(addr));
-	regs->ip = (unsigned long)(jp->entry);
-
-	/*
-	 * jprobes use jprobe_return() which skips the normal return
-	 * path of the function, and this messes up the accounting of the
-	 * function graph tracer to get messed up.
-	 *
-	 * Pause function graph tracing while performing the jprobe function.
-	 */
-	pause_graph_tracing();
-	return 1;
-}
-NOKPROBE_SYMBOL(setjmp_pre_handler);
-
-void jprobe_return(void)
-{
-	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
-
-	/* Unpoison stack redzones in the frames we are going to jump over. */
-	kasan_unpoison_stack_above_sp_to(kcb->jprobe_saved_sp);
-
-	asm volatile (
-#ifdef CONFIG_X86_64
-			"       xchg   %%rbx,%%rsp	\n"
-#else
-			"       xchgl   %%ebx,%%esp	\n"
-#endif
-			"       int3			\n"
-			"       .globl jprobe_return_end\n"
-			"       jprobe_return_end:	\n"
-			"       nop			\n"::"b"
-			(kcb->jprobe_saved_sp):"memory");
-}
-NOKPROBE_SYMBOL(jprobe_return);
-NOKPROBE_SYMBOL(jprobe_return_end);
-
-int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
-{
-	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
-	u8 *addr = (u8 *) (regs->ip - 1);
-	struct jprobe *jp = container_of(p, struct jprobe, kp);
-	void *saved_sp = kcb->jprobe_saved_sp;
-
-	if ((addr > (u8 *) jprobe_return) &&
-	    (addr < (u8 *) jprobe_return_end)) {
-		if (stack_addr(regs) != saved_sp) {
-			struct pt_regs *saved_regs = &kcb->jprobe_saved_regs;
-			printk(KERN_ERR
-			       "current sp %p does not match saved sp %p\n",
-			       stack_addr(regs), saved_sp);
-			printk(KERN_ERR "Saved registers for jprobe %p\n", jp);
-			show_regs(saved_regs);
-			printk(KERN_ERR "Current registers\n");
-			show_regs(regs);
-			BUG();
-		}
-		/* It's OK to start function graph tracing again */
-		unpause_graph_tracing();
-		*regs = kcb->jprobe_saved_regs;
-		__memcpy(saved_sp, kcb->jprobes_stack, MIN_STACK_SIZE(saved_sp));
-		preempt_enable_no_resched();
-		return 1;
-	}
-	return 0;
-}
-NOKPROBE_SYMBOL(longjmp_break_handler);
-
 bool arch_within_kprobe_blacklist(unsigned long addr)
 {
 	bool is_in_entry_trampoline_section = false;


* [PATCH -tip v2 3/5] kprobes: Ignore break_handler
  2018-03-12 14:40 [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86 Masami Hiramatsu
  2018-03-12 14:41 ` [PATCH -tip v2 1/5] kprobes: Remove jprobe API implementation Masami Hiramatsu
  2018-03-12 14:41 ` [PATCH -tip v2 2/5] x86: kprobes: Remove jprobe implementation Masami Hiramatsu
@ 2018-03-12 14:42 ` Masami Hiramatsu
  2018-03-12 14:42 ` [PATCH -tip v2 4/5] x86: " Masami Hiramatsu
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2018-03-12 14:42 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar
  Cc: x86, Masami Hiramatsu, Yang Bo, Ingo Molnar, H . Peter Anvin,
	linux-kernel, Ananth N Mavinakayanahalli, Andrew Morton,
	Steven Rostedt, Laura Abbott, Josef Bacik, Alexei Starovoitov

Ignore the break_handler related code, since it was only used
by jprobes and jprobes have been removed.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 Documentation/kprobes.txt |    2 +-
 kernel/kprobes.c          |   39 +++++----------------------------------
 2 files changed, 6 insertions(+), 35 deletions(-)

diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt
index 22208bf2386d..94df43224c6c 100644
--- a/Documentation/kprobes.txt
+++ b/Documentation/kprobes.txt
@@ -262,7 +262,7 @@ is optimized, that modification is ignored.  Thus, if you want to
 tweak the kernel's execution path, you need to suppress optimization,
 using one of the following techniques:
 
-- Specify an empty function for the kprobe's post_handler or break_handler.
+- Specify an empty function for the kprobe's post_handler.
 
 or
 
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 6d5a7c29cf99..889986ee3d06 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -627,8 +627,8 @@ static void optimize_kprobe(struct kprobe *p)
 	    (kprobe_disabled(p) || kprobes_all_disarmed))
 		return;
 
-	/* Both of break_handler and post_handler are not supported. */
-	if (p->break_handler || p->post_handler)
+	/* kprobes with post_handler can not be optimized */
+	if (p->post_handler)
 		return;
 
 	op = container_of(p, struct optimized_kprobe, kp);
@@ -1116,20 +1116,6 @@ static int aggr_fault_handler(struct kprobe *p, struct pt_regs *regs,
 }
 NOKPROBE_SYMBOL(aggr_fault_handler);
 
-static int aggr_break_handler(struct kprobe *p, struct pt_regs *regs)
-{
-	struct kprobe *cur = __this_cpu_read(kprobe_instance);
-	int ret = 0;
-
-	if (cur && cur->break_handler) {
-		if (cur->break_handler(cur, regs))
-			ret = 1;
-	}
-	reset_kprobe_instance();
-	return ret;
-}
-NOKPROBE_SYMBOL(aggr_break_handler);
-
 /* Walks the list and increments nmissed count for multiprobe case */
 void kprobes_inc_nmissed_count(struct kprobe *p)
 {
@@ -1270,24 +1256,15 @@ static void cleanup_rp_inst(struct kretprobe *rp)
 }
 NOKPROBE_SYMBOL(cleanup_rp_inst);
 
-/*
-* Add the new probe to ap->list. Fail if this is the
-* second break_handler at the address
-*/
+/* Add the new probe to ap->list */
 static int add_new_kprobe(struct kprobe *ap, struct kprobe *p)
 {
 	BUG_ON(kprobe_gone(ap) || kprobe_gone(p));
 
-	if (p->break_handler || p->post_handler)
+	if (p->post_handler)
 		unoptimize_kprobe(ap, true);	/* Fall back to normal kprobe */
 
-	if (p->break_handler) {
-		if (ap->break_handler)
-			return -EEXIST;
-		list_add_tail_rcu(&p->list, &ap->list);
-		ap->break_handler = aggr_break_handler;
-	} else
-		list_add_rcu(&p->list, &ap->list);
+	list_add_rcu(&p->list, &ap->list);
 	if (p->post_handler && !ap->post_handler)
 		ap->post_handler = aggr_post_handler;
 
@@ -1310,8 +1287,6 @@ static void init_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
 	/* We don't care the kprobe which has gone. */
 	if (p->post_handler && !kprobe_gone(p))
 		ap->post_handler = aggr_post_handler;
-	if (p->break_handler && !kprobe_gone(p))
-		ap->break_handler = aggr_break_handler;
 
 	INIT_LIST_HEAD(&ap->list);
 	INIT_HLIST_NODE(&ap->hlist);
@@ -1706,8 +1681,6 @@ static int __unregister_kprobe_top(struct kprobe *p)
 		goto disarmed;
 	else {
 		/* If disabling probe has special handlers, update aggrprobe */
-		if (p->break_handler && !kprobe_gone(p))
-			ap->break_handler = NULL;
 		if (p->post_handler && !kprobe_gone(p)) {
 			list_for_each_entry_rcu(list_p, &ap->list, list) {
 				if ((list_p != p) && (list_p->post_handler))
@@ -1911,7 +1884,6 @@ int register_kretprobe(struct kretprobe *rp)
 	rp->kp.pre_handler = pre_handler_kretprobe;
 	rp->kp.post_handler = NULL;
 	rp->kp.fault_handler = NULL;
-	rp->kp.break_handler = NULL;
 
 	/* Pre-allocate memory for max kretprobe instances */
 	if (rp->maxactive <= 0) {
@@ -2034,7 +2006,6 @@ static void kill_kprobe(struct kprobe *p)
 		list_for_each_entry_rcu(kp, &p->list, list)
 			kp->flags |= KPROBE_FLAG_GONE;
 		p->post_handler = NULL;
-		p->break_handler = NULL;
 		kill_optimized_kprobe(p);
 	}
 	/*


* [PATCH -tip v2 4/5] x86: kprobes: Ignore break_handler
  2018-03-12 14:40 [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86 Masami Hiramatsu
                   ` (2 preceding siblings ...)
  2018-03-12 14:42 ` [PATCH -tip v2 3/5] kprobes: Ignore break_handler Masami Hiramatsu
@ 2018-03-12 14:42 ` Masami Hiramatsu
  2018-03-12 14:43 ` [PATCH -tip v2 5/5] x86: kprobes: Do not disable preempt on int3 path Masami Hiramatsu
  2018-05-16  7:00 ` [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86 Masami Hiramatsu
  5 siblings, 0 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2018-03-12 14:42 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar
  Cc: x86, Masami Hiramatsu, Yang Bo, Ingo Molnar, H . Peter Anvin,
	linux-kernel, Ananth N Mavinakayanahalli, Andrew Morton,
	Steven Rostedt, Laura Abbott, Josef Bacik, Alexei Starovoitov

Remove the break_handler related code, since it was used only
by jprobes and jprobes have now been removed.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/x86/kernel/kprobes/common.h |   10 ----------
 arch/x86/kernel/kprobes/core.c   |   13 ++-----------
 arch/x86/kernel/kprobes/ftrace.c |   16 ++--------------
 3 files changed, 4 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h
index ae38dccf0c8f..2b949f4fd4d8 100644
--- a/arch/x86/kernel/kprobes/common.h
+++ b/arch/x86/kernel/kprobes/common.h
@@ -105,14 +105,4 @@ static inline unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsig
 }
 #endif
 
-#ifdef CONFIG_KPROBES_ON_FTRACE
-extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
-			   struct kprobe_ctlblk *kcb);
-#else
-static inline int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
-				  struct kprobe_ctlblk *kcb)
-{
-	return 0;
-}
-#endif
 #endif
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 35b01e4f8d64..e7adc2fd8a16 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -686,10 +686,8 @@ int kprobe_int3_handler(struct pt_regs *regs)
 			/*
 			 * If we have no pre-handler or it returned 0, we
 			 * continue with normal processing.  If we have a
-			 * pre-handler and it returned non-zero, it prepped
-			 * for calling the break_handler below on re-entry
-			 * for jprobe processing, so get out doing nothing
-			 * more here.
+			 * pre-handler and it returned non-zero, it modified
+			 * regs->ip so no singlestep is needed.
 			 */
 			if (!p->pre_handler || !p->pre_handler(p, regs))
 				setup_singlestep(p, regs, kcb, 0);
@@ -708,13 +706,6 @@ int kprobe_int3_handler(struct pt_regs *regs)
 		regs->ip = (unsigned long)addr;
 		preempt_enable_no_resched();
 		return 1;
-	} else if (kprobe_running()) {
-		p = __this_cpu_read(current_kprobe);
-		if (p->break_handler && p->break_handler(p, regs)) {
-			if (!skip_singlestep(p, regs, kcb))
-				setup_singlestep(p, regs, kcb, 0);
-			return 1;
-		}
 	} /* else: not a kprobe fault; let the kernel handle it */
 
 	preempt_enable_no_resched();
diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index 8dc0161cec8f..c8696f2a583f 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -26,7 +26,7 @@
 #include "common.h"
 
 static nokprobe_inline
-void __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
+void skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 		      struct kprobe_ctlblk *kcb, unsigned long orig_ip)
 {
 	/*
@@ -43,18 +43,6 @@ void __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 		regs->ip = orig_ip;
 }
 
-int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
-		    struct kprobe_ctlblk *kcb)
-{
-	if (kprobe_ftrace(p)) {
-		__skip_singlestep(p, regs, kcb, 0);
-		preempt_enable_no_resched();
-		return 1;
-	}
-	return 0;
-}
-NOKPROBE_SYMBOL(skip_singlestep);
-
 /* Ftrace callback handler for kprobes -- called under preepmt disabed */
 void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 			   struct ftrace_ops *ops, struct pt_regs *regs)
@@ -80,7 +68,7 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 		__this_cpu_write(current_kprobe, p);
 		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
 		if (!p->pre_handler || !p->pre_handler(p, regs)) {
-			__skip_singlestep(p, regs, kcb, orig_ip);
+			skip_singlestep(p, regs, kcb, orig_ip);
 			preempt_enable_no_resched();
 		}
 		/*


* [PATCH -tip v2 5/5] x86: kprobes: Do not disable preempt on int3 path
  2018-03-12 14:40 [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86 Masami Hiramatsu
                   ` (3 preceding siblings ...)
  2018-03-12 14:42 ` [PATCH -tip v2 4/5] x86: " Masami Hiramatsu
@ 2018-03-12 14:43 ` Masami Hiramatsu
  2018-05-16  7:00 ` [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86 Masami Hiramatsu
  5 siblings, 0 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2018-03-12 14:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar
  Cc: x86, Masami Hiramatsu, Yang Bo, Ingo Molnar, H . Peter Anvin,
	linux-kernel, Ananth N Mavinakayanahalli, Andrew Morton,
	Steven Rostedt, Laura Abbott, Josef Bacik, Alexei Starovoitov

Since the int3 and debug exceptions (used for single-stepping)
run with IRQs disabled, and we clear IF in regs->flags while
single-stepping, that path can never be preempted. So we can
remove the preempt disable/enable calls from that path.

Note that this changes the convention for overriding the
execution path, which is done by modifying regs->ip in a
pre_handler(). Previously the handler had to call
reset_current_kprobe(), re-enable preemption, and return
non-zero. With this change the preempt count is untouched on
the int3 path, so the handler no longer needs to re-enable
preemption. The ftrace-based kprobe path is modified to match.
In short, pre_handler() does not need to fix up the preempt
count even if it changes regs->ip.

For this reason, this patch also updates the two in-tree
kprobes users that change regs->ip. Both depend on
CONFIG_FUNCTION_ERROR_INJECTION, which is currently
implemented only on x86, so it is enough to change the x86
kprobes implementation for these users.

Of course, since this changes the kprobes behavior for
execution-path modification, kprobes on each other arch must
also be updated before that arch implements
CONFIG_FUNCTION_ERROR_INJECTION.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Ingo Molnar <mingo@kernel.org>
---
 Changes in v2:
  - Include user-side changes.
---
 Documentation/kprobes.txt        |   11 +++++------
 arch/x86/kernel/kprobes/core.c   |   14 +++-----------
 arch/x86/kernel/kprobes/ftrace.c |    5 ++---
 arch/x86/kernel/kprobes/opt.c    |    1 -
 kernel/fail_function.c           |    1 -
 kernel/trace/trace_kprobe.c      |    3 ---
 6 files changed, 10 insertions(+), 25 deletions(-)

diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt
index 94df43224c6c..679cba3380e4 100644
--- a/Documentation/kprobes.txt
+++ b/Documentation/kprobes.txt
@@ -566,12 +566,11 @@ the same handler) may run concurrently on different CPUs.
 Kprobes does not use mutexes or allocate memory except during
 registration and unregistration.
 
-Probe handlers are run with preemption disabled.  Depending on the
-architecture and optimization state, handlers may also run with
-interrupts disabled (e.g., kretprobe handlers and optimized kprobe
-handlers run without interrupt disabled on x86/x86-64).  In any case,
-your handler should not yield the CPU (e.g., by attempting to acquire
-a semaphore).
+Probe handlers are run with preemption disabled or interrupt disabled,
+which depends on the architecture and optimization state.  (e.g.,
+kretprobe handlers and optimized kprobe handlers run without interrupt
+disabled on x86/x86-64).  In any case, your handler should not yield
+the CPU (e.g., by attempting to acquire a semaphore, or waiting I/O).
 
 Since a return probe is implemented by replacing the return
 address with the trampoline's address, stack backtraces and calls
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index e7adc2fd8a16..985e7eb839e6 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -592,7 +592,6 @@ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
 		 * stepping.
 		 */
 		regs->ip = (unsigned long)p->ainsn.insn;
-		preempt_enable_no_resched();
 		return;
 	}
 #endif
@@ -665,12 +664,10 @@ int kprobe_int3_handler(struct pt_regs *regs)
 
 	addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
 	/*
-	 * We don't want to be preempted for the entire
-	 * duration of kprobe processing. We conditionally
-	 * re-enable preemption at the end of this function,
-	 * and also in reenter_kprobe() and setup_singlestep().
+	 * We don't want to be preempted for the entire duration of kprobe
+	 * processing. Since int3 and debug trap disables irqs and we clear
+	 * IF while singlestepping, it must be no preemptible.
 	 */
-	preempt_disable();
 
 	kcb = get_kprobe_ctlblk();
 	p = get_kprobe(addr);
@@ -704,11 +701,9 @@ int kprobe_int3_handler(struct pt_regs *regs)
 		 * the original instruction.
 		 */
 		regs->ip = (unsigned long)addr;
-		preempt_enable_no_resched();
 		return 1;
 	} /* else: not a kprobe fault; let the kernel handle it */
 
-	preempt_enable_no_resched();
 	return 0;
 }
 NOKPROBE_SYMBOL(kprobe_int3_handler);
@@ -959,8 +954,6 @@ int kprobe_debug_handler(struct pt_regs *regs)
 	}
 	reset_current_kprobe();
 out:
-	preempt_enable_no_resched();
-
 	/*
 	 * if somebody else is singlestepping across a probe point, flags
 	 * will have TF set, in which case, continue the remaining processing
@@ -1007,7 +1000,6 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 			restore_previous_kprobe(kcb);
 		else
 			reset_current_kprobe();
-		preempt_enable_no_resched();
 	} else if (kcb->kprobe_status == KPROBE_HIT_ACTIVE ||
 		   kcb->kprobe_status == KPROBE_HIT_SSDONE) {
 		/*
diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index c8696f2a583f..f1ac65d6b352 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -67,10 +67,9 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 		preempt_disable();
 		__this_cpu_write(current_kprobe, p);
 		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-		if (!p->pre_handler || !p->pre_handler(p, regs)) {
+		if (!p->pre_handler || !p->pre_handler(p, regs))
 			skip_singlestep(p, regs, kcb, orig_ip);
-			preempt_enable_no_resched();
-		}
+		preempt_enable_no_resched();
 		/*
 		 * If pre_handler returns !0, it sets regs->ip and
 		 * resets current kprobe, and keep preempt count +1.
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 203d398802a3..eaf02f2e7300 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -491,7 +491,6 @@ int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter)
 		regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
 		if (!reenter)
 			reset_current_kprobe();
-		preempt_enable_no_resched();
 		return 1;
 	}
 	return 0;
diff --git a/kernel/fail_function.c b/kernel/fail_function.c
index 21b0122cb39c..b1713521f096 100644
--- a/kernel/fail_function.c
+++ b/kernel/fail_function.c
@@ -176,7 +176,6 @@ static int fei_kprobe_handler(struct kprobe *kp, struct pt_regs *regs)
 		override_function_with_return(regs);
 		/* Kprobe specific fixup */
 		reset_current_kprobe();
-		preempt_enable_no_resched();
 		return 1;
 	}
 
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 5ce9b8cf7be3..6c894e2be8d6 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1221,12 +1221,9 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
 		 * We need to check and see if we modified the pc of the
 		 * pt_regs, and if so clear the kprobe and return 1 so that we
 		 * don't do the single stepping.
-		 * The ftrace kprobe handler leaves it up to us to re-enable
-		 * preemption here before returning if we've modified the ip.
 		 */
 		if (orig_ip != instruction_pointer(regs)) {
 			reset_current_kprobe();
-			preempt_enable_no_resched();
 			return 1;
 		}
 		if (!ret)


* Re: [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86
  2018-03-12 14:40 [PATCH -tip v2 0/5] kprobes: x86: Cleanup jprobe implementation on x86 Masami Hiramatsu
                   ` (4 preceding siblings ...)
  2018-03-12 14:43 ` [PATCH -tip v2 5/5] x86: kprobes: Do not disable preempt on int3 path Masami Hiramatsu
@ 2018-05-16  7:00 ` Masami Hiramatsu
  5 siblings, 0 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2018-05-16  7:00 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Thomas Gleixner, Ingo Molnar, x86, Yang Bo, Ingo Molnar,
	H . Peter Anvin, linux-kernel, Ananth N Mavinakayanahalli,
	Andrew Morton, Steven Rostedt, Laura Abbott, Josef Bacik,
	Alexei Starovoitov, Ravi Bangoria

On Mon, 12 Mar 2018 23:40:55 +0900
Masami Hiramatsu <mhiramat@kernel.org> wrote:

> Hello,
> 
> Since we decided to remove jprobes from the kernel last year,
> its APIs have been disabled and we have been moving in-kernel
> jprobe users to kprobes or trace events. There are now no
> jprobe users left in the tree.
> 
> This series removes the jprobe implementation from x86 and
> the generic code. I will send separate series for the other
> architectures. After all of those patches are merged, I will
> remove the jprobe APIs and data structures, since changing
> those definitions now would break the build on other archs.
> 
> This series also shows how to do that. I have dropped the
> arch-independent header changes from this version and merged
> the kprobe-user code changes into the last patch, since that
> patch changes the behavior kprobes users must expect when a
> probe modifies the execution path (there is no need to
> re-enable preemption anymore).
> 
>  - Remove jprobe functions (register/unregister,
>    setjump/longjump) from generic/arch-dependent code.
>    [1/5][2/5]
>  - Remove break_handler related code.
>    [3/5][4/5]
>  - Do not disable preemption on exception handler
>    [5/5]
> 
> As noted above, the last patch [5/5] changes the expected
> behavior on x86; other archs will eventually need the same
> change. However, the in-tree users that modify the execution
> path are currently very limited and work only on x86, so we
> can safely make this change.

Ah, I forgot I had sent this in March...
Basically, this is the same as what I sent last week, but it
removes all of the jprobes-related code, including
break_handler, more aggressively. I would like to merge both
series and then start sending a new series, which will also
remove the jprobe test code.

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

