* [PATCH v3 0/2] kprobes/x86: Fix up interaction between kprobes code recovery and ftrace
@ 2015-02-20 14:07 Petr Mladek
  2015-02-20 14:07 ` [PATCH v3 1/2] kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace Petr Mladek
  2015-02-20 14:07 ` [PATCH v3 2/2] kprobes/x86: Check for invalid ftrace location in __recover_probed_insn() Petr Mladek
  0 siblings, 2 replies; 5+ messages in thread
From: Petr Mladek @ 2015-02-20 14:07 UTC (permalink / raw)
  To: Ingo Molnar, Masami Hiramatsu
  Cc: David S. Miller, Anil S Keshavamurthy, Ananth NMavinakayanahalli,
	Frederic Weisbecker, Steven Rostedt, Jiri Kosina, linux-kernel,
	Petr Mladek

Kprobes did not properly recover code that had been modified by ftrace.
In addition, the address returned by ftrace_location() can be used for
a consistency check.

This version is based on the feedback for the separate patches, see
https://lkml.org/lkml/2015/2/20/91 and
https://lkml.org/lkml/2015/2/20/90


Changes against v2:

  + avoid using MCOUNT_INSN_SIZE that is available only with
    CONFIG_FUNCTION_TRACER enabled

  + use WARN_ON() instead of BUG_ON() and correctly handle the
    situation when Kprobe is not able to recover the code


Changes against v1:

  + always use 5-byte NOP for ftrace location
  + fix indentation of the touched comment

Petr Mladek (2):
  kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace
  kprobes/x86: Check for invalid ftrace location in
    __recover_probed_insn()

 arch/x86/kernel/kprobes/core.c | 54 +++++++++++++++++++++++++++++++-----------
 arch/x86/kernel/kprobes/opt.c  |  2 ++
 2 files changed, 42 insertions(+), 14 deletions(-)

-- 
1.8.5.6



* [PATCH v3 1/2] kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace
  2015-02-20 14:07 [PATCH v3 0/2] kprobes/x86: Fix up interaction between kprobes code recovery and ftrace Petr Mladek
@ 2015-02-20 14:07 ` Petr Mladek
  2015-02-21 17:46   ` [tip:perf/urgent] " tip-bot for Petr Mladek
  2015-02-20 14:07 ` [PATCH v3 2/2] kprobes/x86: Check for invalid ftrace location in __recover_probed_insn() Petr Mladek
  1 sibling, 1 reply; 5+ messages in thread
From: Petr Mladek @ 2015-02-20 14:07 UTC (permalink / raw)
  To: Ingo Molnar, Masami Hiramatsu
  Cc: David S. Miller, Anil S Keshavamurthy, Ananth NMavinakayanahalli,
	Frederic Weisbecker, Steven Rostedt, Jiri Kosina, linux-kernel,
	Petr Mladek

can_probe() checks whether the given address points to the beginning of
an instruction. It analyzes all the instructions from the beginning
of the function up to the given address. The code might already be
modified by another Kprobe. In that case, the current code is read into
a buffer, the int3 breakpoint is replaced by the saved opcode in the
buffer, and can_probe() analyzes the buffer instead.

There is a bug: __recover_probed_insn() tries to restore
the original code even for Kprobes that use the ftrace framework.
But in that case, the opcode is not stored. See the difference
between arch_prepare_kprobe() and arch_prepare_kprobe_ftrace():
the opcode is stored by arch_copy_kprobe() only from
arch_prepare_kprobe().

This patch makes Kprobes use the ideal 5-byte NOP when the code
can be modified by ftrace. The NOP is the original instruction
there; see ftrace_make_nop() and ftrace_nop_replace().

Note that we always need to use the NOP for ftrace locations. Kprobes
do not block ftrace, and the instruction might get modified at any
time. It might even be in an inconsistent state because it is modified
step by step using the int3 breakpoint.

The patch also fixes indentation of the touched comment.

Note that I found this problem when playing with Kprobes. I did it
on x86_64 with gcc-4.8.3, which supports -mfentry. I modified
samples/kprobes/kprobe_example.c and added offset 5 to put
the probe right after the fentry area:

 static struct kprobe kp = {
 	.symbol_name	= "do_fork",
+	.offset = 5,
 };

Then I was able to load kprobe_example before jprobe_example
but not the other way around:

$> modprobe jprobe_example
$> modprobe kprobe_example
modprobe: ERROR: could not insert 'kprobe_example': Invalid or incomplete multibyte or wide character

The error message did not make much sense, and debugging pointed to
the bug described above.

Signed-off-by: Petr Mladek <pmladek@suse.cz>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
---
 arch/x86/kernel/kprobes/core.c | 42 ++++++++++++++++++++++++++++--------------
 1 file changed, 28 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 98f654d466e5..70b9b0c12682 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -223,27 +223,41 @@ static unsigned long
 __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
 {
 	struct kprobe *kp;
+	unsigned long faddr;
 
 	kp = get_kprobe((void *)addr);
-	/* There is no probe, return original address */
-	if (!kp)
+	faddr = ftrace_location(addr);
+	/*
+	 * Use the current code if it is not modified by Kprobe
+	 * and it cannot be modified by ftrace.
+	 */
+	if (!kp && !faddr)
 		return addr;
 
 	/*
-	 *  Basically, kp->ainsn.insn has an original instruction.
-	 *  However, RIP-relative instruction can not do single-stepping
-	 *  at different place, __copy_instruction() tweaks the displacement of
-	 *  that instruction. In that case, we can't recover the instruction
-	 *  from the kp->ainsn.insn.
+	 * Basically, kp->ainsn.insn has an original instruction.
+	 * However, RIP-relative instruction can not do single-stepping
+	 * at different place, __copy_instruction() tweaks the displacement of
+	 * that instruction. In that case, we can't recover the instruction
+	 * from the kp->ainsn.insn.
 	 *
-	 *  On the other hand, kp->opcode has a copy of the first byte of
-	 *  the probed instruction, which is overwritten by int3. And
-	 *  the instruction at kp->addr is not modified by kprobes except
-	 *  for the first byte, we can recover the original instruction
-	 *  from it and kp->opcode.
+	 * On the other hand, in the case of a normal Kprobe, kp->opcode has
+	 * a copy of the first byte of the probed instruction, which is
+	 * overwritten by int3. And since the instruction at kp->addr is not
+	 * modified by kprobes except for the first byte, we can recover the
+	 * original instruction from it and kp->opcode.
+	 *
+	 * In the case of Kprobes using ftrace, we do not have a copy of
+	 * the original instruction. In fact, the ftrace location might
+	 * be modified at any time and could even be in an inconsistent
+	 * state. Fortunately, we know that the original code is the ideal
+	 * 5-byte long NOP.
 	 */
-	memcpy(buf, kp->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
-	buf[0] = kp->opcode;
+	memcpy(buf, (void *)addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+	if (faddr)
+		memcpy(buf, ideal_nops[NOP_ATOMIC5], 5);
+	else
+		buf[0] = kp->opcode;
 	return (unsigned long)buf;
 }
 
-- 
1.8.5.6



* [PATCH v3 2/2] kprobes/x86: Check for invalid ftrace location in __recover_probed_insn()
  2015-02-20 14:07 [PATCH v3 0/2] kprobes/x86: Fix up interaction between kprobes code recovery and ftrace Petr Mladek
  2015-02-20 14:07 ` [PATCH v3 1/2] kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace Petr Mladek
@ 2015-02-20 14:07 ` Petr Mladek
  2015-02-21 17:46   ` [tip:perf/urgent] " tip-bot for Petr Mladek
  1 sibling, 1 reply; 5+ messages in thread
From: Petr Mladek @ 2015-02-20 14:07 UTC (permalink / raw)
  To: Ingo Molnar, Masami Hiramatsu
  Cc: David S. Miller, Anil S Keshavamurthy, Ananth NMavinakayanahalli,
	Frederic Weisbecker, Steven Rostedt, Jiri Kosina, linux-kernel,
	Petr Mladek

__recover_probed_insn() should always be called from an address where
an instruction starts. The check for ftrace_location() can help
discover a potential inconsistency.

This patch adds a WARN_ON() for when the inconsistency is detected.
It also adds handling of the situation when the original code cannot
be recovered.

Suggested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Petr Mladek <pmladek@suse.cz>
---
 arch/x86/kernel/kprobes/core.c | 12 ++++++++++++
 arch/x86/kernel/kprobes/opt.c  |  2 ++
 2 files changed, 14 insertions(+)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 70b9b0c12682..a8e8520b953d 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -228,6 +228,13 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
 	kp = get_kprobe((void *)addr);
 	faddr = ftrace_location(addr);
 	/*
+	 * Addresses inside the ftrace location are refused by
+	 * arch_check_ftrace_location(). Something went terribly wrong
+	 * if such an address is checked here.
+	 */
+	if (WARN_ON(faddr && faddr != addr))
+		return 0UL;
+	/*
 	 * Use the current code if it is not modified by Kprobe
 	 * and it cannot be modified by ftrace.
 	 */
@@ -265,6 +272,7 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
  * Recover the probed instruction at addr for further analysis.
  * Caller must lock kprobes by kprobe_mutex, or disable preemption
  * for preventing to release referencing kprobes.
+ * Returns zero if the instruction cannot be recovered.
  */
 unsigned long recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
 {
@@ -299,6 +307,8 @@ static int can_probe(unsigned long paddr)
 		 * normally used, we just go through if there is no kprobe.
 		 */
 		__addr = recover_probed_instruction(buf, addr);
+		if (!__addr)
+			return 0;
 		kernel_insn_init(&insn, (void *)__addr, MAX_INSN_SIZE);
 		insn_get_length(&insn);
 
@@ -347,6 +357,8 @@ int __copy_instruction(u8 *dest, u8 *src)
 	unsigned long recovered_insn =
 		recover_probed_instruction(buf, (unsigned long)src);
 
+	if (!recovered_insn)
+		return 0;
 	kernel_insn_init(&insn, (void *)recovered_insn, MAX_INSN_SIZE);
 	insn_get_length(&insn);
 	/* Another subsystem puts a breakpoint, failed to recover */
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 0dd8d089c315..7b3b9d15c47a 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -259,6 +259,8 @@ static int can_optimize(unsigned long paddr)
 			 */
 			return 0;
 		recovered_insn = recover_probed_instruction(buf, addr);
+		if (!recovered_insn)
+			return 0;
 		kernel_insn_init(&insn, (void *)recovered_insn, MAX_INSN_SIZE);
 		insn_get_length(&insn);
 		/* Another subsystem puts a breakpoint */
-- 
1.8.5.6



* [tip:perf/urgent] kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace
  2015-02-20 14:07 ` [PATCH v3 1/2] kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace Petr Mladek
@ 2015-02-21 17:46   ` tip-bot for Petr Mladek
  0 siblings, 0 replies; 5+ messages in thread
From: tip-bot for Petr Mladek @ 2015-02-21 17:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: rostedt, linux-kernel, fweisbec, ananth, tglx, mingo, jkosina,
	masami.hiramatsu.pt, pmladek, davem, anil.s.keshavamurthy, hpa

Commit-ID:  650b7b23cb1e32d77daeefbac1ceb1329abf3b23
Gitweb:     http://git.kernel.org/tip/650b7b23cb1e32d77daeefbac1ceb1329abf3b23
Author:     Petr Mladek <pmladek@suse.cz>
AuthorDate: Fri, 20 Feb 2015 15:07:29 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 21 Feb 2015 10:33:30 +0100

kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace

can_probe() checks whether the given address points to the beginning
of an instruction. It analyzes all the instructions from the
beginning of the function up to the given address. The code
might already be modified by another Kprobe. In that case, the
current code is read into a buffer, the int3 breakpoint is replaced
by the saved opcode in the buffer, and can_probe() analyzes the
buffer instead.

There is a bug: __recover_probed_insn() tries to restore
the original code even for Kprobes that use the ftrace framework.
But in that case, the opcode is not stored. See the difference
between arch_prepare_kprobe() and arch_prepare_kprobe_ftrace():
the opcode is stored by arch_copy_kprobe() only from
arch_prepare_kprobe().

This patch makes Kprobes use the ideal 5-byte NOP when the
code can be modified by ftrace. The NOP is the original
instruction there; see ftrace_make_nop() and ftrace_nop_replace().

Note that we always need to use the NOP for ftrace locations.
Kprobes do not block ftrace, and the instruction might get
modified at any time. It might even be in an inconsistent state
because it is modified step by step using the int3 breakpoint.

The patch also fixes indentation of the touched comment.

Note that I found this problem when playing with Kprobes. I did
it on x86_64 with gcc-4.8.3, which supports -mfentry. I modified
samples/kprobes/kprobe_example.c and added offset 5 to put
the probe right after the fentry area:

 static struct kprobe kp = {
 	.symbol_name	= "do_fork",
+	.offset = 5,
 };

Then I was able to load kprobe_example before jprobe_example
but not the other way around:

  $> modprobe jprobe_example
  $> modprobe kprobe_example
  modprobe: ERROR: could not insert 'kprobe_example': Invalid or incomplete multibyte or wide character

The error message did not make much sense, and debugging pointed
to the bug described above.

Signed-off-by: Petr Mladek <pmladek@suse.cz>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Ananth NMavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1424441250-27146-2-git-send-email-pmladek@suse.cz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/kprobes/core.c | 42 ++++++++++++++++++++++++++++--------------
 1 file changed, 28 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 6a1146e..c3b4b46 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -223,27 +223,41 @@ static unsigned long
 __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
 {
 	struct kprobe *kp;
+	unsigned long faddr;
 
 	kp = get_kprobe((void *)addr);
-	/* There is no probe, return original address */
-	if (!kp)
+	faddr = ftrace_location(addr);
+	/*
+	 * Use the current code if it is not modified by Kprobe
+	 * and it cannot be modified by ftrace.
+	 */
+	if (!kp && !faddr)
 		return addr;
 
 	/*
-	 *  Basically, kp->ainsn.insn has an original instruction.
-	 *  However, RIP-relative instruction can not do single-stepping
-	 *  at different place, __copy_instruction() tweaks the displacement of
-	 *  that instruction. In that case, we can't recover the instruction
-	 *  from the kp->ainsn.insn.
+	 * Basically, kp->ainsn.insn has an original instruction.
+	 * However, RIP-relative instruction can not do single-stepping
+	 * at different place, __copy_instruction() tweaks the displacement of
+	 * that instruction. In that case, we can't recover the instruction
+	 * from the kp->ainsn.insn.
 	 *
-	 *  On the other hand, kp->opcode has a copy of the first byte of
-	 *  the probed instruction, which is overwritten by int3. And
-	 *  the instruction at kp->addr is not modified by kprobes except
-	 *  for the first byte, we can recover the original instruction
-	 *  from it and kp->opcode.
+	 * On the other hand, in the case of a normal Kprobe, kp->opcode has
+	 * a copy of the first byte of the probed instruction, which is
+	 * overwritten by int3. And since the instruction at kp->addr is not
+	 * modified by kprobes except for the first byte, we can recover the
+	 * original instruction from it and kp->opcode.
+	 *
+	 * In the case of Kprobes using ftrace, we do not have a copy of
+	 * the original instruction. In fact, the ftrace location might
+	 * be modified at any time and could even be in an inconsistent
+	 * state. Fortunately, we know that the original code is the ideal
+	 * 5-byte long NOP.
 	 */
-	memcpy(buf, kp->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
-	buf[0] = kp->opcode;
+	memcpy(buf, (void *)addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+	if (faddr)
+		memcpy(buf, ideal_nops[NOP_ATOMIC5], 5);
+	else
+		buf[0] = kp->opcode;
 	return (unsigned long)buf;
 }
 


* [tip:perf/urgent] kprobes/x86: Check for invalid ftrace location in __recover_probed_insn()
  2015-02-20 14:07 ` [PATCH v3 2/2] kprobes/x86: Check for invalid ftrace location in __recover_probed_insn() Petr Mladek
@ 2015-02-21 17:46   ` tip-bot for Petr Mladek
  0 siblings, 0 replies; 5+ messages in thread
From: tip-bot for Petr Mladek @ 2015-02-21 17:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: rostedt, linux-kernel, hpa, mingo, anil.s.keshavamurthy,
	masami.hiramatsu.pt, fweisbec, pmladek, jkosina, ananth, tglx,
	davem

Commit-ID:  2a6730c8b6e075adf826a89a3e2caa705807afdb
Gitweb:     http://git.kernel.org/tip/2a6730c8b6e075adf826a89a3e2caa705807afdb
Author:     Petr Mladek <pmladek@suse.cz>
AuthorDate: Fri, 20 Feb 2015 15:07:30 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 21 Feb 2015 10:33:31 +0100

kprobes/x86: Check for invalid ftrace location in __recover_probed_insn()

__recover_probed_insn() should always be called from an address
where an instruction starts. The check for ftrace_location()
can help discover a potential inconsistency.

This patch adds a WARN_ON() for when the inconsistency is detected.
It also adds handling of the situation when the original code
cannot be recovered.

Suggested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Petr Mladek <pmladek@suse.cz>
Cc: Ananth NMavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1424441250-27146-3-git-send-email-pmladek@suse.cz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/kprobes/core.c | 12 ++++++++++++
 arch/x86/kernel/kprobes/opt.c  |  2 ++
 2 files changed, 14 insertions(+)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index c3b4b46..4e3d5a9 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -228,6 +228,13 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
 	kp = get_kprobe((void *)addr);
 	faddr = ftrace_location(addr);
 	/*
+	 * Addresses inside the ftrace location are refused by
+	 * arch_check_ftrace_location(). Something went terribly wrong
+	 * if such an address is checked here.
+	 */
+	if (WARN_ON(faddr && faddr != addr))
+		return 0UL;
+	/*
 	 * Use the current code if it is not modified by Kprobe
 	 * and it cannot be modified by ftrace.
 	 */
@@ -265,6 +272,7 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
  * Recover the probed instruction at addr for further analysis.
  * Caller must lock kprobes by kprobe_mutex, or disable preemption
  * for preventing to release referencing kprobes.
+ * Returns zero if the instruction cannot be recovered.
  */
 unsigned long recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
 {
@@ -299,6 +307,8 @@ static int can_probe(unsigned long paddr)
 		 * normally used, we just go through if there is no kprobe.
 		 */
 		__addr = recover_probed_instruction(buf, addr);
+		if (!__addr)
+			return 0;
 		kernel_insn_init(&insn, (void *)__addr, MAX_INSN_SIZE);
 		insn_get_length(&insn);
 
@@ -347,6 +357,8 @@ int __copy_instruction(u8 *dest, u8 *src)
 	unsigned long recovered_insn =
 		recover_probed_instruction(buf, (unsigned long)src);
 
+	if (!recovered_insn)
+		return 0;
 	kernel_insn_init(&insn, (void *)recovered_insn, MAX_INSN_SIZE);
 	insn_get_length(&insn);
 	/* Another subsystem puts a breakpoint, failed to recover */
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 7c523bb..3aef248 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -259,6 +259,8 @@ static int can_optimize(unsigned long paddr)
 			 */
 			return 0;
 		recovered_insn = recover_probed_instruction(buf, addr);
+		if (!recovered_insn)
+			return 0;
 		kernel_insn_init(&insn, (void *)recovered_insn, MAX_INSN_SIZE);
 		insn_get_length(&insn);
 		/* Another subsystem puts a breakpoint */

