* [RFC PATCH v2 00/26] Early kprobe: enable kprobes at very early booting stage.
From: Wang Nan @ 2015-02-12 12:17 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

This is version 2 of my early kprobe patch series. V1 can be
found at:

https://lkml.org/lkml/2015/1/7/76

Development of early kprobes is not finished yet. The user interface and
data collection are still very weak, so the weak points pointed out by
Steven Rostedt (https://lkml.org/lkml/2015/1/16/430) still exist. I post
this series because it has already grown larger than I expected, and I
want to get some early review. In the future I'd like to drop patch
26/26 entirely and redesign the user interface.

The main change in this version is to allow early probing at ftrace
entries (making early kprobes support KPROBE_ON_FTRACE). With this
series, on x86 we are able to probe at function entries if
CONFIG_FTRACE is on.

The basic idea is to introduce a notify chain in ftrace and make ftrace
notify kprobes when it fails to modify an instruction.
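
As a very rough sketch of that idea (hypothetical names using the
standard kernel notifier API; the actual interface added by patches
9/26 - 10/26 may differ):

	#include <linux/notifier.h>

	/* Illustrative chain; not the name the series actually uses. */
	static ATOMIC_NOTIFIER_HEAD(ftrace_fail_chain);

	/* A subsystem such as kprobes registers a fixup handler... */
	static int kprobe_fix_ftrace(struct notifier_block *nb,
				     unsigned long action, void *data)
	{
		/* try to repair the failed text modification here */
		return NOTIFY_STOP;	/* handled; skip ftrace_bug() */
	}

	static struct notifier_block kprobe_fix_nb = {
		.notifier_call	= kprobe_fix_ftrace,
	};

	static int __init kprobe_register_fixup(void)
	{
		return atomic_notifier_chain_register(&ftrace_fail_chain,
						      &kprobe_fix_nb);
	}

	/*
	 * ...and on a failed modification, ftrace would run
	 * atomic_notifier_call_chain(&ftrace_fail_chain, err, rec)
	 * instead of calling ftrace_bug() directly.
	 */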

This patch series is based on linux-next commit df9f91e.

Patches 1/26 - 3/26 are already accepted, but they are not yet in the
linux-next repository. I resend them here only for convenience, in case
someone wants to test my code.

Patches 4/26 - 8/26 are small ftrace improvements. Patches 4 - 5 keep
rec->flags unchanged on failure, so that later code is able to redo the
failed operation. Patches 6 - 7 make ftrace_location() usable at an
early stage by sorting mcount_loc earlier. Patch 8 enables early
kprobes to do ftrace_make_nop() before ftrace_init(), which is
important on x86 because there we are unable to boost the 'call'
instruction.

Patches 9/26 - 10/26 introduce a notify chain to ftrace and use it to
notify registered subsystems, so they can try to fix the problem before
ftrace_bug() is issued.

Patches 11/26 - 21/26 are the core early kprobe code. Patch 11/26
introduces a kprobes_is_early() function in response to Masami
Hiramatsu's comment in

https://lkml.org/lkml/2015/1/13/389

that comparing kprobes_initialized directly is hacky. There are not
many other changes in these patches.

Patches 22/26 - 25/26 utilize the notify chain to support probing on
ftrace. Patch 22 is for x86: ideal_nops may change in setup_arch(), so
we fix the probed nop by catching the ftrace failure in
ftrace_code_disable(). Patch 23/26 makes kprobes able to temporarily
restore the probed instruction so that ftrace can convert it.

Patch 24/26 is the core logic which enables early kprobes on ftrace,
including converting an early kprobe on ftrace to a normal kprobe on
ftrace.

Patch 25/26 is the corresponding kconfig update.

Patch 26/26 is rough kernel cmdline support. The usage is similar to my
V1 patch. Since I'd like to drop it and design a new interface, I have
left it unchanged.

With my v2 series, it is possible to probe at function entries on x86:

 ... ekprobe=__alloc_pages_nodemask ...

and the ekprobe= option can coexist with the ftrace= and ftrace_filter=
options:

   ... ekprobe=__alloc_pages_nodemask ftrace=function \
     ftrace_filter=__alloc_pages_nodemask ...

In that case, events occurring between ftrace being enabled and the
normal kprobe being fully initialized are missed.

Thank you!

Wang Nan (26):
  kprobes: set kprobes_all_disarmed earlier to enable re-optimization.
  kprobes: make kprobes/enabled work correctly for optimized kprobes.
  kprobes: x86: mark 2-byte NOPs as boostable.
  ftrace: don't update record flags if code modification fails.
  ftrace/x86: Ensure rec->flags does not change when a failure occurs.
  ftrace: sort ftrace entries earlier.
  ftrace: allow searching ftrace addresses before ftrace is fully initialized.
  ftrace: enable other subsystems to make ftrace nops before ftrace_init()
  ftrace: callchain and ftrace_bug_tryfix
  ftrace: x86: try to fix ftrace failures in ftrace_replace_code().
  early kprobes: introduce kprobes_is_early for further early kprobe use.
  early kprobes: Add a KPROBE_FLAG_EARLY for early kprobes.
  early kprobes: ARM: directly modify code.
  early kprobes: ARM: introduce early kprobes related code area.
  early kprobes: x86: directly modify code.
  early kprobes: x86: introduce early kprobes related code area.
  early kprobes: introduce macros for allocating early kprobe resources.
  early kprobes: allow __alloc_insn_slot() from early kprobes slots.
  early kprobes: prohibit probing at early kprobe reserved area.
  early kprobes: core logic of early kprobes.
  early kprobes: add CONFIG_EARLY_KPROBES option.
  early kprobes: introduce arch_fix_ftrace_early_kprobe().
  early kprobes: x86: arch_restore_optimized_kprobe().
  early kprobes: core logic to support early kprobe on ftrace.
  early kprobes: introduce kconfig option to support early kprobe on
    ftrace.
  kprobes: enable 'ekprobe=' cmdline option for early kprobes.

 arch/Kconfig                      |  12 +
 arch/arm/include/asm/kprobes.h    |  31 ++-
 arch/arm/kernel/vmlinux.lds.S     |   2 +
 arch/arm/probes/kprobes/opt-arm.c |  12 +-
 arch/x86/include/asm/insn.h       |   7 +-
 arch/x86/include/asm/kprobes.h    |  47 +++-
 arch/x86/kernel/ftrace.c          |  23 +-
 arch/x86/kernel/kprobes/core.c    |   2 +-
 arch/x86/kernel/kprobes/opt.c     |  69 +++++-
 arch/x86/kernel/vmlinux.lds.S     |   2 +
 include/linux/ftrace.h            |  37 ++++
 include/linux/kprobes.h           | 131 +++++++++++
 init/main.c                       |   1 +
 kernel/kprobes.c                  | 451 +++++++++++++++++++++++++++++++++++++-
 kernel/trace/ftrace.c             | 145 ++++++++++--
 15 files changed, 928 insertions(+), 44 deletions(-)

-- 
1.8.4


* [RFC PATCH v2 01/26] kprobes: set kprobes_all_disarmed earlier to enable re-optimization.
From: Wang Nan @ 2015-02-12 12:19 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

In the original code, the probed instruction doesn't get optimized again after

echo 0 > /sys/kernel/debug/kprobes/enabled
echo 1 > /sys/kernel/debug/kprobes/enabled

This is because the original code checks kprobes_all_disarmed in
optimize_kprobe(), but that flag is turned off only after the function
has been called. Therefore, optimize_kprobe() sees
kprobes_all_disarmed == true and doesn't do the optimization.

This patch simply turns off kprobes_all_disarmed earlier to enable
re-optimization.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 kernel/kprobes.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 2ca272f..c397900 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -2320,6 +2320,12 @@ static void arm_all_kprobes(void)
 	if (!kprobes_all_disarmed)
 		goto already_enabled;
 
+	/*
+	 * optimize_kprobe() called by arm_kprobe() checks
+	 * kprobes_all_disarmed, so set kprobes_all_disarmed before
+	 * arm_kprobe.
+	 */
+	kprobes_all_disarmed = false;
 	/* Arming kprobes doesn't optimize kprobe itself */
 	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
 		head = &kprobe_table[i];
@@ -2328,7 +2334,6 @@ static void arm_all_kprobes(void)
 				arm_kprobe(p);
 	}
 
-	kprobes_all_disarmed = false;
 	printk(KERN_INFO "Kprobes globally enabled\n");
 
 already_enabled:
-- 
1.8.4


* [RFC PATCH v2 02/26] kprobes: make kprobes/enabled work correctly for optimized kprobes.
From: Wang Nan @ 2015-02-12 12:19 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

debugfs/kprobes/enabled doesn't work correctly on optimized kprobes.
Masami Hiramatsu has a test report on the x86_64 platform:

https://lkml.org/lkml/2015/1/19/274

This patch forces the kprobe to be unoptimized if kprobes_all_disarmed
is set. It also checks the flag in the unregistering path, to skip the
unneeded disarming process when kprobes are globally disarmed.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 kernel/kprobes.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index c397900..c90e417 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -869,7 +869,8 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
 {
 	struct kprobe *_p;
 
-	unoptimize_kprobe(p, false);	/* Try to unoptimize */
+	/* Try to unoptimize */
+	unoptimize_kprobe(p, kprobes_all_disarmed);
 
 	if (!kprobe_queued(p)) {
 		arch_disarm_kprobe(p);
@@ -1571,7 +1572,13 @@ static struct kprobe *__disable_kprobe(struct kprobe *p)
 
 		/* Try to disarm and disable this/parent probe */
 		if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
-			disarm_kprobe(orig_p, true);
+			/*
+			 * If kprobes_all_disarmed is set, orig_p
+			 * should have already been disarmed, so
+			 * skip the unneeded disarming process.
+			 */
+			if (!kprobes_all_disarmed)
+				disarm_kprobe(orig_p, true);
 			orig_p->flags |= KPROBE_FLAG_DISABLED;
 		}
 	}
-- 
1.8.4


* [RFC PATCH v2 03/26] kprobes: x86: mark 2-byte NOPs as boostable.
From: Wang Nan @ 2015-02-12 12:19 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Currently, x86 kprobes is unable to boost 2-byte nops like:

nopl 0x0(%rax,%rax,1)

which is 0x0f 0x1f 0x44 0x00 0x00.

Such nops are exactly 5 bytes long, which is enough to hold a relative
jmp instruction, so boosting them is obviously safe.

This patch enables boosting such nops by simply updating the
twobyte_is_boostable[] array.
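
For reference, a rough sketch of how the table is consulted
(paraphrasing can_boost() in arch/x86/kernel/kprobes/core.c): the new
1 in row 0x10, column 0xf corresponds to the second opcode byte 0x1f
of the nop above.

	/* sketch: two-byte opcodes start with 0x0f */
	if (opcode == 0x0f)
		/* index by the 2nd opcode byte, 0x1f for this nopl */
		return test_bit(*opcodes,
				(unsigned long *)twobyte_is_boostable);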

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
---
 arch/x86/kernel/kprobes/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 98f654d..6a1146e 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -84,7 +84,7 @@ static volatile u32 twobyte_is_boostable[256 / 32] = {
 	/*      0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f          */
 	/*      ----------------------------------------------          */
 	W(0x00, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0) | /* 00 */
-	W(0x10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 10 */
+	W(0x10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1) , /* 10 */
 	W(0x20, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) | /* 20 */
 	W(0x30, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , /* 30 */
 	W(0x40, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) | /* 40 */
-- 
1.8.4


* [RFC PATCH v2 04/26] ftrace: don't update record flags if code modification fails.
From: Wang Nan @ 2015-02-12 12:19 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

x86 and the common ftrace_replace_code() behave differently.

On x86, rec->flags gets updated only when (almost) all work is done. In
the common code, rec->flags is updated before the code modification and
never gets restored when the code modification fails.

This patch ensures rec->flags keeps its original value if
ftrace_replace_code() fails. A later patch will correct that function
for x86.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 kernel/trace/ftrace.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 45e5cb1..6c6cbb1 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2254,23 +2254,30 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
 	/* This needs to be done before we call ftrace_update_record */
 	ftrace_old_addr = ftrace_get_addr_curr(rec);
 
-	ret = ftrace_update_record(rec, enable);
+	ret = ftrace_test_record(rec, enable);
 
 	switch (ret) {
 	case FTRACE_UPDATE_IGNORE:
 		return 0;
 
 	case FTRACE_UPDATE_MAKE_CALL:
-		return ftrace_make_call(rec, ftrace_addr);
+		ret = ftrace_make_call(rec, ftrace_addr);
+		break;
 
 	case FTRACE_UPDATE_MAKE_NOP:
-		return ftrace_make_nop(NULL, rec, ftrace_old_addr);
+		ret = ftrace_make_nop(NULL, rec, ftrace_old_addr);
+		break;
 
 	case FTRACE_UPDATE_MODIFY_CALL:
-		return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
+		ret = ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
+		break;
 	}
 
-	return -1; /* unknow ftrace bug */
+	if (ret)
+		return -1; /* unknown ftrace bug */
+
+	ftrace_update_record(rec, enable);
+	return 0;
 }
 
 void __weak ftrace_replace_code(int enable)
-- 
1.8.4


* [RFC PATCH v2 05/26] ftrace/x86: Ensure rec->flags does not change when a failure occurs.
From: Wang Nan @ 2015-02-12 12:19 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Don't change rec->flags if code modification fails.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/x86/kernel/ftrace.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 8b7b0a5..7bdba65 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -497,6 +497,7 @@ static int finish_update(struct dyn_ftrace *rec, int enable)
 {
 	unsigned long ftrace_addr;
 	int ret;
+	unsigned long old_flags = rec->flags;
 
 	ret = ftrace_update_record(rec, enable);
 
@@ -509,14 +510,18 @@ static int finish_update(struct dyn_ftrace *rec, int enable)
 	case FTRACE_UPDATE_MODIFY_CALL:
 	case FTRACE_UPDATE_MAKE_CALL:
 		/* converting nop to call */
-		return finish_update_call(rec, ftrace_addr);
+		ret = finish_update_call(rec, ftrace_addr);
+		break;
 
 	case FTRACE_UPDATE_MAKE_NOP:
 		/* converting a call to a nop */
-		return finish_update_nop(rec);
+		ret = finish_update_nop(rec);
+		break;
 	}
 
-	return 0;
+	if (ret)
+		rec->flags = old_flags;
+	return ret;
 }
 
 static void do_sync_core(void *data)
-- 
1.8.4


* [RFC PATCH v2 06/26] ftrace: sort ftrace entries earlier.
From: Wang Nan @ 2015-02-12 12:19 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan
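
[Cover letter summary: sort the mcount_loc table once at a very early
stage, via a new ftrace_init_early() hook called from start_kernel()
before setup_arch(), so that ftrace entries can be searched before
ftrace is fully initialized; ftrace_process_locs() then reuses the
early sort.]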

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/ftrace.h |  2 ++
 init/main.c            |  1 +
 kernel/trace/ftrace.c  | 29 +++++++++++++++++++++++++++--
 3 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 1da6029..8db315a 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -701,8 +701,10 @@ static inline void __ftrace_enabled_restore(int enabled)
 
 #ifdef CONFIG_FTRACE_MCOUNT_RECORD
 extern void ftrace_init(void);
+extern void ftrace_init_early(void);
 #else
 static inline void ftrace_init(void) { }
+static inline void ftrace_init_early(void) { }
 #endif
 
 /*
diff --git a/init/main.c b/init/main.c
index 6f0f1c5f..eaafc3e 100644
--- a/init/main.c
+++ b/init/main.c
@@ -517,6 +517,7 @@ asmlinkage __visible void __init start_kernel(void)
 	boot_cpu_init();
 	page_address_init();
 	pr_notice("%s", linux_banner);
+	ftrace_init_early();
 	setup_arch(&command_line);
 	mm_init_cpumask(&init_mm);
 	setup_command_line(command_line);
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 6c6cbb1..a6a6b09 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1169,6 +1169,7 @@ struct ftrace_page {
 
 static struct ftrace_page	*ftrace_pages_start;
 static struct ftrace_page	*ftrace_pages;
+static bool mcount_sorted = false;
 
 static bool __always_inline ftrace_hash_empty(struct ftrace_hash *hash)
 {
@@ -4743,6 +4744,26 @@ static void ftrace_swap_ips(void *a, void *b, int size)
 	*ipb = t;
 }
 
+static void ftrace_sort_mcount_area(void)
+{
+	extern unsigned long __start_mcount_loc[];
+	extern unsigned long __stop_mcount_loc[];
+
+	unsigned long *start = __start_mcount_loc;
+	unsigned long *end = __stop_mcount_loc;
+	unsigned long count;
+
+	count = end - start;
+	if (!count)
+		return;
+
+	if (!mcount_sorted) {
+		sort(start, count, sizeof(*start),
+		     ftrace_cmp_ips, ftrace_swap_ips);
+		mcount_sorted = true;
+	}
+}
+
 static int ftrace_process_locs(struct module *mod,
 			       unsigned long *start,
 			       unsigned long *end)
@@ -4761,8 +4782,7 @@ static int ftrace_process_locs(struct module *mod,
 	if (!count)
 		return 0;
 
-	sort(start, count, sizeof(*start),
-	     ftrace_cmp_ips, ftrace_swap_ips);
+	ftrace_sort_mcount_area();
 
 	start_pg = ftrace_allocate_pages(count);
 	if (!start_pg)
@@ -4965,6 +4985,11 @@ void __init ftrace_init(void)
 	ftrace_disabled = 1;
 }
 
+void __init ftrace_init_early(void)
+{
+	ftrace_sort_mcount_area();
+}
+
 /* Do nothing if arch does not support this */
 void __weak arch_ftrace_update_trampoline(struct ftrace_ops *ops)
 {
-- 
1.8.4


* [RFC PATCH v2 07/26] ftrace: allow searching ftrace addresses before ftrace is fully initialized.
From: Wang Nan @ 2015-02-12 12:19 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan
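
[Summary from the cover letter and the diff below: when
ftrace_pages_start is not yet set up, ftrace_location() falls back to a
binary search of the already-sorted mcount_loc table, so lookups also
work before ftrace_init().]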

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 kernel/trace/ftrace.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index a6a6b09..79b3e88 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1539,6 +1539,8 @@ static unsigned long ftrace_location_range(unsigned long start, unsigned long en
 	return 0;
 }
 
+static unsigned long ftrace_search_mcount_ip(unsigned long ip);
+
 /**
  * ftrace_location - return true if the ip giving is a traced location
  * @ip: the instruction pointer to check
@@ -1550,6 +1552,9 @@ static unsigned long ftrace_location_range(unsigned long start, unsigned long en
  */
 unsigned long ftrace_location(unsigned long ip)
 {
+	if (unlikely(!ftrace_pages_start))
+		return ftrace_search_mcount_ip(ip);
+
 	return ftrace_location_range(ip, ip);
 }
 
@@ -4733,6 +4738,18 @@ static int ftrace_cmp_ips(const void *a, const void *b)
 	return 0;
 }
 
+static int ftrace_cmp_ips_insn(const void *a, const void *b)
+{
+	const unsigned long *ipa = a;
+	const unsigned long *ipb = b;
+
+	if (*ipa >= *ipb + MCOUNT_INSN_SIZE)
+		return 1;
+	if (*ipa < *ipb)
+		return -1;
+	return 0;
+}
+
 static void ftrace_swap_ips(void *a, void *b, int size)
 {
 	unsigned long *ipa = a;
@@ -4764,6 +4781,27 @@ static void ftrace_sort_mcount_area(void)
 	}
 }
 
+static unsigned long ftrace_search_mcount_ip(unsigned long ip)
+{
+	extern unsigned long __start_mcount_loc[];
+	extern unsigned long __stop_mcount_loc[];
+
+	unsigned long *mcount_start = __start_mcount_loc;
+	unsigned long *mcount_end = __stop_mcount_loc;
+	unsigned long count = mcount_end - mcount_start;
+	unsigned long *retval;
+
+	if (!mcount_sorted)
+		return 0;
+
+	retval = bsearch(&ip, mcount_start, count,
+			sizeof(unsigned long), ftrace_cmp_ips_insn);
+	if (!retval)
+		return 0;
+
+	return ftrace_call_adjust(ip);
+}
+
 static int ftrace_process_locs(struct module *mod,
 			       unsigned long *start,
 			       unsigned long *end)
-- 
1.8.4


* [RFC PATCH v2 08/26] ftrace: enable other subsystems to make ftrace nops before ftrace_init()
From: Wang Nan @ 2015-02-12 12:19 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan
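
[Summary from the cover letter and the diff below: introduce
ftrace_process_loc_early(), which turns the mcount call at a given
location into a nop before ftrace_init() by passing a fake dyn_ftrace
record to ftrace_make_nop(). Early kprobes need this because x86 cannot
boost the 'call' instruction.]

A minimal usage sketch (the caller shown here is hypothetical; the real
user is the early kprobe code added later in this series):

	/* before ftrace_init(): turn the mcount call at addr into a nop */
	err = ftrace_process_loc_early(addr);
	if (err)
		return err;	/* addr is not an ftrace location */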

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/ftrace.h |  5 +++++
 kernel/trace/ftrace.c  | 15 +++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 8db315a..d37ccd8a 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -702,9 +702,14 @@ static inline void __ftrace_enabled_restore(int enabled)
 #ifdef CONFIG_FTRACE_MCOUNT_RECORD
 extern void ftrace_init(void);
 extern void ftrace_init_early(void);
+extern int ftrace_process_loc_early(unsigned long ip);
 #else
 static inline void ftrace_init(void) { }
 static inline void ftrace_init_early(void) { }
+static inline int ftrace_process_loc_early(unsigned long __unused)
+{
+	return 0;
+}
 #endif
 
 /*
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 79b3e88..150762a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5028,6 +5028,21 @@ void __init ftrace_init_early(void)
 	ftrace_sort_mcount_area();
 }
 
+int __init ftrace_process_loc_early(unsigned long addr)
+{
+	unsigned long ip = ftrace_location(addr);
+	struct dyn_ftrace fake_rec;
+	int ret;
+
+	if (ip != addr)
+		return -EINVAL;
+
+	memset(&fake_rec, '\0', sizeof(fake_rec));
+	fake_rec.ip = ip;
+	ret = ftrace_make_nop(NULL, &fake_rec, MCOUNT_ADDR);
+	return ret;
+}
+
 /* Do nothing if arch does not support this */
 void __weak arch_ftrace_update_trampoline(struct ftrace_ops *ops)
 {
-- 
1.8.4


* [RFC PATCH v2 10/26] ftrace: x86: try to fix ftrace failures in ftrace_replace_code().
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan
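
[Summary from the cover letter and the diff below: when
add_breakpoints() fails, call __ftrace_tryfix_bug() (introduced by the
previous patch) so that registered subsystems get a chance to fix the
failed record before ftrace_replace_code() gives up and removes the
breakpoints.]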

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/x86/kernel/ftrace.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 7bdba65..c869138 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -553,8 +553,16 @@ void ftrace_replace_code(int enable)
 		rec = ftrace_rec_iter_record(iter);
 
 		ret = add_breakpoints(rec, enable);
-		if (ret)
-			goto remove_breakpoints;
+		if (ret) {
+			/*
+			 * Don't trigger ftrace_bug here. Let it be done
+			 * by the remove_breakpoints procedure.
+			 */
+			ret = __ftrace_tryfix_bug(ret, enable, rec,
+					add_breakpoints(rec, enable), false);
+			if (ret)
+				goto remove_breakpoints;
+		}
 		count++;
 	}
 
-- 
1.8.4


* [RFC PATCH v2 11/26] early kprobes: introduce kprobes_is_early for further early kprobe use.
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

The following early kprobe patches will enable registering kprobes very
early, even before the kprobe system is initialized. kprobes_is_early()
can be used to check whether we are at that early stage.
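
For example, patch 13 in this series uses it in ARM's
arch_optimize_kprobes() like this:

	if (unlikely(kprobes_is_early())) {
		BUG_ON(!(op->kp.flags & KPROBE_FLAG_EARLY));
		__patch_text(op->kp.addr, insn);
	}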

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h | 2 ++
 kernel/kprobes.c        | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 1ab5475..e1c8307 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -50,6 +50,8 @@
 #define KPROBE_REENTER		0x00000004
 #define KPROBE_HIT_SSDONE	0x00000008
 
+extern int kprobes_is_early(void);
+
 #else /* CONFIG_KPROBES */
 typedef int kprobe_opcode_t;
 struct arch_specific_insn {
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index c90e417..647c95a 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -68,6 +68,12 @@
 #endif
 
 static int kprobes_initialized;
+
+int kprobes_is_early(void)
+{
+	return !kprobes_initialized;
+}
+
 static struct hlist_head kprobe_table[KPROBE_TABLE_SIZE];
 static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
 
-- 
1.8.4


* [RFC PATCH v2 12/26] early kprobes: Add a KPROBE_FLAG_EARLY for early kprobes.
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Introduce KPROBE_FLAG_EARLY for further expansion. KPROBE_FLAG_EARLY
indicates that a kprobe is installed at a very early stage and that its
resources should be allocated statically.
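
A registration sketch (the target symbol is hypothetical; the flag
simply goes into the flags field of struct kprobe):

	static struct kprobe early_kp = {
		.symbol_name	= "some_function",	/* hypothetical */
		.flags		= KPROBE_FLAG_EARLY,
	};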

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index e1c8307..8d2e754 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -130,6 +130,7 @@ struct kprobe {
 				   * this flag is only for optimized_kprobe.
 				   */
 #define KPROBE_FLAG_FTRACE	8 /* probe is using ftrace */
+#define KPROBE_FLAG_EARLY	16 /* early kprobe */
 
 /* Has this kprobe gone ? */
 static inline int kprobe_gone(struct kprobe *p)
-- 
1.8.4


* [RFC PATCH v2 13/26] early kprobes: ARM: directly modify code.
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

For early kprobes, we can simply patch the text because we are in a
relatively simple environment.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/arm/probes/kprobes/opt-arm.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm/probes/kprobes/opt-arm.c b/arch/arm/probes/kprobes/opt-arm.c
index bcdecc2..43446df 100644
--- a/arch/arm/probes/kprobes/opt-arm.c
+++ b/arch/arm/probes/kprobes/opt-arm.c
@@ -330,8 +330,18 @@ void __kprobes arch_optimize_kprobes(struct list_head *oplist)
 		 * Similar to __arch_disarm_kprobe, operations which
 		 * removing breakpoints must be wrapped by stop_machine
 		 * to avoid racing.
+		 *
+		 * If this function is called before kprobes is initialized,
+		 * the kprobe should be an early kprobe and the instruction
+		 * is not armed with a breakpoint. There should be only one
+		 * core running now, so calling __patch_text directly is enough.
 		 */
-		kprobes_remove_breakpoint(op->kp.addr, insn);
+		if (unlikely(kprobes_is_early())) {
+			BUG_ON(!(op->kp.flags & KPROBE_FLAG_EARLY));
+			__patch_text(op->kp.addr, insn);
+		} else {
+			kprobes_remove_breakpoint(op->kp.addr, insn);
+		}
 
 		list_del_init(&op->list);
 	}
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 13/26] early kprobes: ARM: directly modify code.
@ 2015-02-12 12:20   ` Wang Nan
  0 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux-arm-kernel

For early kprobe, we can simply patch text because we are in a relative
simple environment.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/arm/probes/kprobes/opt-arm.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm/probes/kprobes/opt-arm.c b/arch/arm/probes/kprobes/opt-arm.c
index bcdecc2..43446df 100644
--- a/arch/arm/probes/kprobes/opt-arm.c
+++ b/arch/arm/probes/kprobes/opt-arm.c
@@ -330,8 +330,18 @@ void __kprobes arch_optimize_kprobes(struct list_head *oplist)
 		 * Similar to __arch_disarm_kprobe, operations which
 		 * removing breakpoints must be wrapped by stop_machine
 		 * to avoid racing.
+		 *
+		 * If this function is called before kprobes is initialized,
+		 * the kprobe must be an early kprobe and the instruction
+		 * is not armed with a breakpoint. Only one core runs at
+		 * this point, so calling __patch_text() directly is enough.
 		 */
-		kprobes_remove_breakpoint(op->kp.addr, insn);
+		if (unlikely(kprobes_is_early())) {
+			BUG_ON(!(op->kp.flags & KPROBE_FLAG_EARLY));
+			__patch_text(op->kp.addr, insn);
+		} else {
+			kprobes_remove_breakpoint(op->kp.addr, insn);
+		}
 
 		list_del_init(&op->list);
 	}
-- 
1.8.4

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 14/26] early kprobes: ARM: introduce early kprobes related code area.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:20   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

In ARM's vmlinux.lds, introduce a code area inside the text section.
The executable area used by early kprobes will be allocated from
there.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/arm/include/asm/kprobes.h | 31 +++++++++++++++++++++++++++++--
 arch/arm/kernel/vmlinux.lds.S  |  2 ++
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kprobes.h b/arch/arm/include/asm/kprobes.h
index 3ea9be5..0a4421e 100644
--- a/arch/arm/include/asm/kprobes.h
+++ b/arch/arm/include/asm/kprobes.h
@@ -17,16 +17,42 @@
 #define _ARM_KPROBES_H
 
 #include <linux/types.h>
-#include <linux/ptrace.h>
-#include <linux/notifier.h>
 
 #define __ARCH_WANT_KPROBES_INSN_SLOT
 #define MAX_INSN_SIZE			2
 
+#ifdef __ASSEMBLY__
+
+#define KPROBE_OPCODE_SIZE	4
+#define MAX_OPTINSN_SIZE (optprobe_template_end - optprobe_template_entry)
+
+#ifdef CONFIG_EARLY_KPROBES
+#define EARLY_KPROBES_CODES_AREA					\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_start) = .;			\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_start) = .;		\
+	. = . + MAX_OPTINSN_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;	\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_end) = .;		\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_start) = .;		\
+	. = . + MAX_INSN_SIZE * KPROBE_OPCODE_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_end) = .;		\
+	VMLINUX_SYMBOL(__early_kprobes_end) = .;
+
+#else
+#define EARLY_KPROBES_CODES_AREA
+#endif
+
+#else
+
+#include <linux/ptrace.h>
+#include <linux/notifier.h>
+
 #define flush_insn_slot(p)		do { } while (0)
 #define kretprobe_blacklist_size	0
 
 typedef u32 kprobe_opcode_t;
+#define KPROBE_OPCODE_SIZE	sizeof(kprobe_opcode_t)
 struct kprobe;
 #include <asm/probes.h>
 
@@ -83,4 +109,5 @@ struct arch_optimized_insn {
 	 */
 };
 
+#endif /* __ASSEMBLY__ */
 #endif /* _ARM_KPROBES_H */
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 9351f7f..6fa2b85 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -11,6 +11,7 @@
 #ifdef CONFIG_ARM_KERNMEM_PERMS
 #include <asm/pgtable.h>
 #endif
+#include <asm/kprobes.h>
 	
 #define PROC_INFO							\
 	. = ALIGN(4);							\
@@ -108,6 +109,7 @@ SECTIONS
 			SCHED_TEXT
 			LOCK_TEXT
 			KPROBES_TEXT
+			EARLY_KPROBES_CODES_AREA
 			IDMAP_TEXT
 #ifdef CONFIG_MMU
 			*(.fixup)
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread
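
For reference, the symbols reserved here are consumed from C by patch
18; a rough sketch, where the size helper is illustrative only:

/*
 * Sketch: the reserved area as seen from C. The externs mirror the
 * declarations patch 18 adds; early_code_area_bytes() is illustrative.
 */
extern kprobe_opcode_t __early_kprobes_code_area_start[];
extern kprobe_opcode_t __early_kprobes_code_area_end[];

static inline unsigned long early_code_area_bytes(void)
{
	return (unsigned long)__early_kprobes_code_area_end -
	       (unsigned long)__early_kprobes_code_area_start;
}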

* [RFC PATCH v2 14/26] early kprobes: ARM: introduce early kprobes related code area.
@ 2015-02-12 12:20   ` Wang Nan
  0 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux-arm-kernel

In ARM's vmlinux.lds, introduce a code area inside the text section.
The executable area used by early kprobes will be allocated from
there.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/arm/include/asm/kprobes.h | 31 +++++++++++++++++++++++++++++--
 arch/arm/kernel/vmlinux.lds.S  |  2 ++
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kprobes.h b/arch/arm/include/asm/kprobes.h
index 3ea9be5..0a4421e 100644
--- a/arch/arm/include/asm/kprobes.h
+++ b/arch/arm/include/asm/kprobes.h
@@ -17,16 +17,42 @@
 #define _ARM_KPROBES_H
 
 #include <linux/types.h>
-#include <linux/ptrace.h>
-#include <linux/notifier.h>
 
 #define __ARCH_WANT_KPROBES_INSN_SLOT
 #define MAX_INSN_SIZE			2
 
+#ifdef __ASSEMBLY__
+
+#define KPROBE_OPCODE_SIZE	4
+#define MAX_OPTINSN_SIZE (optprobe_template_end - optprobe_template_entry)
+
+#ifdef CONFIG_EARLY_KPROBES
+#define EARLY_KPROBES_CODES_AREA					\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_start) = .;			\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_start) = .;		\
+	. = . + MAX_OPTINSN_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;	\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_end) = .;		\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_start) = .;		\
+	. = . + MAX_INSN_SIZE * KPROBE_OPCODE_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_end) = .;		\
+	VMLINUX_SYMBOL(__early_kprobes_end) = .;
+
+#else
+#define EARLY_KPROBES_CODES_AREA
+#endif
+
+#else
+
+#include <linux/ptrace.h>
+#include <linux/notifier.h>
+
 #define flush_insn_slot(p)		do { } while (0)
 #define kretprobe_blacklist_size	0
 
 typedef u32 kprobe_opcode_t;
+#define KPROBE_OPCODE_SIZE	sizeof(kprobe_opcode_t)
 struct kprobe;
 #include <asm/probes.h>
 
@@ -83,4 +109,5 @@ struct arch_optimized_insn {
 	 */
 };
 
+#endif /* __ASSEMBLY__ */
 #endif /* _ARM_KPROBES_H */
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 9351f7f..6fa2b85 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -11,6 +11,7 @@
 #ifdef CONFIG_ARM_KERNMEM_PERMS
 #include <asm/pgtable.h>
 #endif
+#include <asm/kprobes.h>
 	
 #define PROC_INFO							\
 	. = ALIGN(4);							\
@@ -108,6 +109,7 @@ SECTIONS
 			SCHED_TEXT
 			LOCK_TEXT
 			KPROBES_TEXT
+			EARLY_KPROBES_CODES_AREA
 			IDMAP_TEXT
 #ifdef CONFIG_MMU
 			*(.fixup)
-- 
1.8.4

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 15/26] early kprobes: x86: directly modify code.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:20   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

When early kprobes are registered, SMP has not been enabled yet, so
the synchronization in text_poke_bp() is not required; a simple
memcpy() is enough.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/x86/kernel/kprobes/opt.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 0dd8d08..21847ab 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -36,6 +36,7 @@
 #include <asm/alternative.h>
 #include <asm/insn.h>
 #include <asm/debugreg.h>
+#include <asm/tlbflush.h>
 
 #include "common.h"
 
@@ -397,8 +398,15 @@ void arch_optimize_kprobes(struct list_head *oplist)
 		insn_buf[0] = RELATIVEJUMP_OPCODE;
 		*(s32 *)(&insn_buf[1]) = rel;
 
-		text_poke_bp(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE,
-			     op->optinsn.insn);
+		if (unlikely(kprobes_is_early())) {
+			BUG_ON(!(op->kp.flags & KPROBE_FLAG_EARLY));
+			memcpy(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE);
+			local_flush_tlb();
+			sync_core();
+		} else {
+			text_poke_bp(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE,
+				     op->optinsn.insn);
+		}
 
 		list_del_init(&op->list);
 	}
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread
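
For context, insn_buf here is the usual 5-byte near jump; a sketch of
how it is assembled before the early-path memcpy(), with rel computed
as in the surrounding function:

/*
 * Sketch: the 5-byte rel32 jump that the early path copies over the
 * probed instruction. Displacement is destination - (source + 5).
 */
u8 insn_buf[RELATIVEJUMP_SIZE];
s32 rel = (s32)((long)op->optinsn.insn -
		((long)op->kp.addr + RELATIVEJUMP_SIZE));

insn_buf[0] = RELATIVEJUMP_OPCODE;	/* 0xe9 */
*(s32 *)(&insn_buf[1]) = rel;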

* [RFC PATCH v2 15/26] early kprobes: x86: directly modify code.
@ 2015-02-12 12:20   ` Wang Nan
  0 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux-arm-kernel

When early kprobes are registered, SMP has not been enabled yet, so
the synchronization in text_poke_bp() is not required; a simple
memcpy() is enough.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/x86/kernel/kprobes/opt.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 0dd8d08..21847ab 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -36,6 +36,7 @@
 #include <asm/alternative.h>
 #include <asm/insn.h>
 #include <asm/debugreg.h>
+#include <asm/tlbflush.h>
 
 #include "common.h"
 
@@ -397,8 +398,15 @@ void arch_optimize_kprobes(struct list_head *oplist)
 		insn_buf[0] = RELATIVEJUMP_OPCODE;
 		*(s32 *)(&insn_buf[1]) = rel;
 
-		text_poke_bp(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE,
-			     op->optinsn.insn);
+		if (unlikely(kprobes_is_early())) {
+			BUG_ON(!(op->kp.flags & KPROBE_FLAG_EARLY));
+			memcpy(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE);
+			local_flush_tlb();
+			sync_core();
+		} else {
+			text_poke_bp(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE,
+				     op->optinsn.insn);
+		}
 
 		list_del_init(&op->list);
 	}
-- 
1.8.4

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 16/26] early kprobes: x86: introduce early kprobes related code area.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:20   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

This patch introduces EARLY_KPROBES_CODES_AREA into x86 vmlinux for
early kprobes.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/x86/include/asm/insn.h    |  7 ++++---
 arch/x86/include/asm/kprobes.h | 47 +++++++++++++++++++++++++++++++++++-------
 arch/x86/kernel/vmlinux.lds.S  |  2 ++
 3 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h
index 47f29b1..ea6f318 100644
--- a/arch/x86/include/asm/insn.h
+++ b/arch/x86/include/asm/insn.h
@@ -20,6 +20,9 @@
  * Copyright (C) IBM Corporation, 2009
  */
 
+#define MAX_INSN_SIZE	16
+
+#ifndef __ASSEMBLY__
 /* insn_attr_t is defined in inat.h */
 #include <asm/inat.h>
 
@@ -69,8 +72,6 @@ struct insn {
 	const insn_byte_t *next_byte;
 };
 
-#define MAX_INSN_SIZE	16
-
 #define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6)
 #define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3)
 #define X86_MODRM_RM(modrm) ((modrm) & 0x07)
@@ -197,5 +198,5 @@ static inline int insn_offset_immediate(struct insn *insn)
 {
 	return insn_offset_displacement(insn) + insn->displacement.nbytes;
 }
-
+#endif /* __ASSEMBLY__ */
 #endif /* _ASM_X86_INSN_H */
diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 4421b5d..6a6066a 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -21,23 +21,54 @@
  *
  * See arch/x86/kernel/kprobes.c for x86 kprobes history.
  */
-#include <linux/types.h>
-#include <linux/ptrace.h>
-#include <linux/percpu.h>
-#include <asm/insn.h>
 
 #define  __ARCH_WANT_KPROBES_INSN_SLOT
 
-struct pt_regs;
-struct kprobe;
+#include <linux/types.h>
+#include <asm/insn.h>
 
-typedef u8 kprobe_opcode_t;
 #define BREAKPOINT_INSTRUCTION	0xcc
 #define RELATIVEJUMP_OPCODE 0xe9
 #define RELATIVEJUMP_SIZE 5
 #define RELATIVECALL_OPCODE 0xe8
 #define RELATIVE_ADDR_SIZE 4
 #define MAX_STACK_SIZE 64
+#define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
+
+#ifdef __ASSEMBLY__
+
+#define KPROBE_OPCODE_SIZE     1
+#define MAX_OPTINSN_SIZE ((optprobe_template_end - optprobe_template_entry) + \
+	MAX_OPTIMIZED_LENGTH + RELATIVEJUMP_SIZE)
+
+#ifdef CONFIG_EARLY_KPROBES
+# define EARLY_KPROBES_CODES_AREA					\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_start) = .;			\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_start) = .;		\
+	. = . + MAX_OPTINSN_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;	\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_end) = .;		\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_start) = .;		\
+	. = . + MAX_INSN_SIZE * KPROBE_OPCODE_SIZE *			\
+		CONFIG_NR_EARLY_KPROBES_SLOTS;				\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_end) = .;		\
+	VMLINUX_SYMBOL(__early_kprobes_end) = .;
+#else
+# define EARLY_KPROBES_CODES_AREA
+#endif
+
+#else
+
+#include <linux/ptrace.h>
+#include <linux/percpu.h>
+
+
+struct pt_regs;
+struct kprobe;
+
+typedef u8 kprobe_opcode_t;
+#define KPROBE_OPCODE_SIZE     sizeof(kprobe_opcode_t)
 #define MIN_STACK_SIZE(ADDR)					       \
 	(((MAX_STACK_SIZE) < (((unsigned long)current_thread_info()) + \
 			      THREAD_SIZE - (unsigned long)(ADDR)))    \
@@ -52,7 +83,6 @@ extern __visible kprobe_opcode_t optprobe_template_entry;
 extern __visible kprobe_opcode_t optprobe_template_val;
 extern __visible kprobe_opcode_t optprobe_template_call;
 extern __visible kprobe_opcode_t optprobe_template_end;
-#define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
 #define MAX_OPTINSN_SIZE 				\
 	(((unsigned long)&optprobe_template_end -	\
 	  (unsigned long)&optprobe_template_entry) +	\
@@ -117,4 +147,5 @@ extern int kprobe_exceptions_notify(struct notifier_block *self,
 				    unsigned long val, void *data);
 extern int kprobe_int3_handler(struct pt_regs *regs);
 extern int kprobe_debug_handler(struct pt_regs *regs);
+#endif /* __ASSEMBLY__ */
 #endif /* _ASM_X86_KPROBES_H */
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 00bf300..69f3f0e 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -26,6 +26,7 @@
 #include <asm/page_types.h>
 #include <asm/cache.h>
 #include <asm/boot.h>
+#include <asm/kprobes.h>
 
 #undef i386     /* in case the preprocessor is a 32bit one */
 
@@ -100,6 +101,7 @@ SECTIONS
 		SCHED_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
+		EARLY_KPROBES_CODES_AREA
 		ENTRY_TEXT
 		IRQENTRY_TEXT
 		*(.fixup)
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread
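
A sanity check one could add on top of this (illustrative only, not
part of the patch) to make sure the linker reservation matches the
C-side sizing:

/*
 * Illustrative check: the reserved insn-slot area must hold
 * CONFIG_NR_EARLY_KPROBES_SLOTS slots of MAX_INSN_SIZE opcodes.
 */
static void __init early_kprobes_check_area(void)
{
	unsigned long sz = (unsigned long)__early_kprobes_insn_slot_end -
			   (unsigned long)__early_kprobes_insn_slot_start;

	BUG_ON(sz < MAX_INSN_SIZE * sizeof(kprobe_opcode_t) *
		    CONFIG_NR_EARLY_KPROBES_SLOTS);
}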

* [RFC PATCH v2 16/26] early kprobes: x86: introduce early kprobes related code area.
@ 2015-02-12 12:20   ` Wang Nan
  0 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux-arm-kernel

This patch introduces EARLY_KPROBES_CODES_AREA into x86 vmlinux for
early kprobes.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/x86/include/asm/insn.h    |  7 ++++---
 arch/x86/include/asm/kprobes.h | 47 +++++++++++++++++++++++++++++++++++-------
 arch/x86/kernel/vmlinux.lds.S  |  2 ++
 3 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h
index 47f29b1..ea6f318 100644
--- a/arch/x86/include/asm/insn.h
+++ b/arch/x86/include/asm/insn.h
@@ -20,6 +20,9 @@
  * Copyright (C) IBM Corporation, 2009
  */
 
+#define MAX_INSN_SIZE	16
+
+#ifndef __ASSEMBLY__
 /* insn_attr_t is defined in inat.h */
 #include <asm/inat.h>
 
@@ -69,8 +72,6 @@ struct insn {
 	const insn_byte_t *next_byte;
 };
 
-#define MAX_INSN_SIZE	16
-
 #define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6)
 #define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3)
 #define X86_MODRM_RM(modrm) ((modrm) & 0x07)
@@ -197,5 +198,5 @@ static inline int insn_offset_immediate(struct insn *insn)
 {
 	return insn_offset_displacement(insn) + insn->displacement.nbytes;
 }
-
+#endif /* __ASSEMBLY__ */
 #endif /* _ASM_X86_INSN_H */
diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 4421b5d..6a6066a 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -21,23 +21,54 @@
  *
  * See arch/x86/kernel/kprobes.c for x86 kprobes history.
  */
-#include <linux/types.h>
-#include <linux/ptrace.h>
-#include <linux/percpu.h>
-#include <asm/insn.h>
 
 #define  __ARCH_WANT_KPROBES_INSN_SLOT
 
-struct pt_regs;
-struct kprobe;
+#include <linux/types.h>
+#include <asm/insn.h>
 
-typedef u8 kprobe_opcode_t;
 #define BREAKPOINT_INSTRUCTION	0xcc
 #define RELATIVEJUMP_OPCODE 0xe9
 #define RELATIVEJUMP_SIZE 5
 #define RELATIVECALL_OPCODE 0xe8
 #define RELATIVE_ADDR_SIZE 4
 #define MAX_STACK_SIZE 64
+#define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
+
+#ifdef __ASSEMBLY__
+
+#define KPROBE_OPCODE_SIZE     1
+#define MAX_OPTINSN_SIZE ((optprobe_template_end - optprobe_template_entry) + \
+	MAX_OPTIMIZED_LENGTH + RELATIVEJUMP_SIZE)
+
+#ifdef CONFIG_EARLY_KPROBES
+# define EARLY_KPROBES_CODES_AREA					\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_start) = .;			\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_start) = .;		\
+	. = . + MAX_OPTINSN_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;	\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_end) = .;		\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_start) = .;		\
+	. = . + MAX_INSN_SIZE * KPROBE_OPCODE_SIZE *			\
+		CONFIG_NR_EARLY_KPROBES_SLOTS;				\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_end) = .;		\
+	VMLINUX_SYMBOL(__early_kprobes_end) = .;
+#else
+# define EARLY_KPROBES_CODES_AREA
+#endif
+
+#else
+
+#include <linux/ptrace.h>
+#include <linux/percpu.h>
+
+
+struct pt_regs;
+struct kprobe;
+
+typedef u8 kprobe_opcode_t;
+#define KPROBE_OPCODE_SIZE     sizeof(kprobe_opcode_t)
 #define MIN_STACK_SIZE(ADDR)					       \
 	(((MAX_STACK_SIZE) < (((unsigned long)current_thread_info()) + \
 			      THREAD_SIZE - (unsigned long)(ADDR)))    \
@@ -52,7 +83,6 @@ extern __visible kprobe_opcode_t optprobe_template_entry;
 extern __visible kprobe_opcode_t optprobe_template_val;
 extern __visible kprobe_opcode_t optprobe_template_call;
 extern __visible kprobe_opcode_t optprobe_template_end;
-#define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
 #define MAX_OPTINSN_SIZE 				\
 	(((unsigned long)&optprobe_template_end -	\
 	  (unsigned long)&optprobe_template_entry) +	\
@@ -117,4 +147,5 @@ extern int kprobe_exceptions_notify(struct notifier_block *self,
 				    unsigned long val, void *data);
 extern int kprobe_int3_handler(struct pt_regs *regs);
 extern int kprobe_debug_handler(struct pt_regs *regs);
+#endif /* __ASSEMBLY__ */
 #endif /* _ASM_X86_KPROBES_H */
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 00bf300..69f3f0e 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -26,6 +26,7 @@
 #include <asm/page_types.h>
 #include <asm/cache.h>
 #include <asm/boot.h>
+#include <asm/kprobes.h>
 
 #undef i386     /* in case the preprocessor is a 32bit one */
 
@@ -100,6 +101,7 @@ SECTIONS
 		SCHED_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
+		EARLY_KPROBES_CODES_AREA
 		ENTRY_TEXT
 		IRQENTRY_TEXT
 		*(.fixup)
-- 
1.8.4

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 17/26] early kprobes: introduce macros for allocating early kprobe resources.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:20   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Introduce macros to generate common early-kprobe resource
allocators.

All early-kprobe resources are statically allocated at link time, one
set per early kprobe slot. For each type of resource, a bitmap is used
to track allocation. __DEFINE_EKPROBE_ALLOC_OPS defines the alloc and
free handlers for them; the range of the resource and the bitmap must
be provided when allocating and freeing. DEFINE_EKPROBE_ALLOC_OPS also
defines the bitmap and the array it operates on.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h | 78 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 8d2e754..cd7a2a5 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -270,6 +270,84 @@ extern void show_registers(struct pt_regs *regs);
 extern void kprobes_inc_nmissed_count(struct kprobe *p);
 extern bool arch_within_kprobe_blacklist(unsigned long addr);
 
+#ifdef CONFIG_EARLY_KPROBES
+
+#define NR_EARLY_KPROBES_SLOTS	CONFIG_NR_EARLY_KPROBES_SLOTS
+#define ALIGN_UP(v, a)	(((v) + ((a) - 1)) & ~((a) - 1))
+#define EARLY_KPROBES_BITMAP_SZ	ALIGN_UP(NR_EARLY_KPROBES_SLOTS, BITS_PER_LONG)
+
+#define __ek_in_range(v, s, e)	(((v) >= (s)) && ((v) < (e)))
+#define __ek_buf_sz(s, e)	((void *)(e) - (void *)(s))
+#define __ek_elem_sz_b(s, e)	(__ek_buf_sz(s, e) / NR_EARLY_KPROBES_SLOTS)
+#define __ek_elem_sz(s, e)	(__ek_elem_sz_b(s, e) / sizeof(s[0]))
+#define __ek_elem_idx(v, s, e)	(__ek_buf_sz(s, v) / __ek_elem_sz_b(s, e))
+#define __ek_get_elem(i, s, e)	(&((s)[__ek_elem_sz(s, e) * (i)]))
+#define __DEFINE_EKPROBE_ALLOC_OPS(__t, __name)				\
+static inline __t *__ek_alloc_##__name(__t *__s, __t *__e, unsigned long *__b)\
+{									\
+	int __i = find_next_zero_bit(__b, NR_EARLY_KPROBES_SLOTS, 0);	\
+	if (__i >= NR_EARLY_KPROBES_SLOTS)				\
+		return NULL;						\
+	set_bit(__i, __b);						\
+	return __ek_get_elem(__i, __s, __e);				\
+}									\
+static inline int __ek_free_##__name(__t *__v, __t *__s, __t *__e, unsigned long *__b)	\
+{									\
+	if (!__ek_in_range(__v, __s, __e))				\
+		return 0;						\
+	clear_bit(__ek_elem_idx(__v, __s, __e), __b);			\
+	return 1;							\
+}
+
+#define __DEFINE_EKPROBE_AREA(__t, __name, __static)			\
+__static __t __ek_##__name##_slots[NR_EARLY_KPROBES_SLOTS];		\
+__static unsigned long __ek_##__name##_bitmap[EARLY_KPROBES_BITMAP_SZ];
+
+#define DEFINE_EKPROBE_ALLOC_OPS(__t, __name, __static)			\
+__DEFINE_EKPROBE_AREA(__t, __name, __static)				\
+__DEFINE_EKPROBE_ALLOC_OPS(__t, __name)					\
+static inline __t *ek_alloc_##__name(void)				\
+{									\
+	return __ek_alloc_##__name(&((__ek_##__name##_slots)[0]),	\
+			&((__ek_##__name##_slots)[NR_EARLY_KPROBES_SLOTS]),\
+			(__ek_##__name##_bitmap));			\
+}									\
+static inline int ek_free_##__name(__t *__s)				\
+{									\
+	return __ek_free_##__name(__s, &((__ek_##__name##_slots)[0]),	\
+			&((__ek_##__name##_slots)[NR_EARLY_KPROBES_SLOTS]),\
+			(__ek_##__name##_bitmap));			\
+}
+
+
+#else
+#define __DEFINE_EKPROBE_ALLOC_OPS(__t, __name)				\
+static inline __t *__ek_alloc_##__name(__t *__s, __t *__e, unsigned long *__b)\
+{									\
+	return NULL;							\
+}									\
+static inline int __ek_free_##__name(__t *__v, __t *__s, __t *__e, unsigned long *__b)\
+{									\
+	return 0;							\
+}
+
+#define __DEFINE_EKPROBE_AREA(__t, __name, __static)			\
+__static __t __ek_##__name##_slots[0];					\
+__static unsigned long __ek_##__name##_bitmap[0];
+
+#define DEFINE_EKPROBE_ALLOC_OPS(__t, __name, __static)			\
+__DEFINE_EKPROBE_ALLOC_OPS(__t, __name)					\
+static inline __t *ek_alloc_##__name(void)				\
+{									\
+	return NULL;							\
+}									\
+static inline int ek_free_##__name(__t *__s)				\
+{									\
+	return 0;							\
+}
+
+#endif
+
 struct kprobe_insn_cache {
 	struct mutex mutex;
 	void *(*alloc)(void);	/* allocate insn page */
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread
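
Patch 20 instantiates these macros for the early kprobe slots
themselves; a usage sketch (demo() is hypothetical):

/* As used by patch 20: a static allocator for early kprobe slots. */
DEFINE_EKPROBE_ALLOC_OPS(struct early_kprobe_slot, early_kprobe, static)

static void demo(void)
{
	/* scans the bitmap for a free slot, returns NULL when exhausted */
	struct early_kprobe_slot *slot = ek_alloc_early_kprobe();

	if (slot)
		ek_free_early_kprobe(slot);	/* clears the slot's bitmap bit */
}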

* [RFC PATCH v2 17/26] early kprobes: introduce macros for allocating early kprobe resources.
@ 2015-02-12 12:20   ` Wang Nan
  0 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux-arm-kernel

Introduce macros to generate common early-kprobe resource
allocators.

All early-kprobe resources are statically allocated at link time, one
set per early kprobe slot. For each type of resource, a bitmap is used
to track allocation. __DEFINE_EKPROBE_ALLOC_OPS defines the alloc and
free handlers for them; the range of the resource and the bitmap must
be provided when allocating and freeing. DEFINE_EKPROBE_ALLOC_OPS also
defines the bitmap and the array it operates on.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h | 78 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 8d2e754..cd7a2a5 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -270,6 +270,84 @@ extern void show_registers(struct pt_regs *regs);
 extern void kprobes_inc_nmissed_count(struct kprobe *p);
 extern bool arch_within_kprobe_blacklist(unsigned long addr);
 
+#ifdef CONFIG_EARLY_KPROBES
+
+#define NR_EARLY_KPROBES_SLOTS	CONFIG_NR_EARLY_KPROBES_SLOTS
+#define ALIGN_UP(v, a)	(((v) + ((a) - 1)) & ~((a) - 1))
+#define EARLY_KPROBES_BITMAP_SZ	ALIGN_UP(NR_EARLY_KPROBES_SLOTS, BITS_PER_LONG)
+
+#define __ek_in_range(v, s, e)	(((v) >= (s)) && ((v) < (e)))
+#define __ek_buf_sz(s, e)	((void *)(e) - (void *)(s))
+#define __ek_elem_sz_b(s, e)	(__ek_buf_sz(s, e) / NR_EARLY_KPROBES_SLOTS)
+#define __ek_elem_sz(s, e)	(__ek_elem_sz_b(s, e) / sizeof(s[0]))
+#define __ek_elem_idx(v, s, e)	(__ek_buf_sz(s, v) / __ek_elem_sz_b(s, e))
+#define __ek_get_elem(i, s, e)	(&((s)[__ek_elem_sz(s, e) * (i)]))
+#define __DEFINE_EKPROBE_ALLOC_OPS(__t, __name)				\
+static inline __t *__ek_alloc_##__name(__t *__s, __t *__e, unsigned long *__b)\
+{									\
+	int __i = find_next_zero_bit(__b, NR_EARLY_KPROBES_SLOTS, 0);	\
+	if (__i >= NR_EARLY_KPROBES_SLOTS)				\
+		return NULL;						\
+	set_bit(__i, __b);						\
+	return __ek_get_elem(__i, __s, __e);				\
+}									\
+static inline int __ek_free_##__name(__t *__v, __t *__s, __t *__e, unsigned long *__b)	\
+{									\
+	if (!__ek_in_range(__v, __s, __e))				\
+		return 0;						\
+	clear_bit(__ek_elem_idx(__v, __s, __e), __b);			\
+	return 1;							\
+}
+
+#define __DEFINE_EKPROBE_AREA(__t, __name, __static)			\
+__static __t __ek_##__name##_slots[NR_EARLY_KPROBES_SLOTS];		\
+__static unsigned long __ek_##__name##_bitmap[EARLY_KPROBES_BITMAP_SZ];
+
+#define DEFINE_EKPROBE_ALLOC_OPS(__t, __name, __static)			\
+__DEFINE_EKPROBE_AREA(__t, __name, __static)				\
+__DEFINE_EKPROBE_ALLOC_OPS(__t, __name)					\
+static inline __t *ek_alloc_##__name(void)				\
+{									\
+	return __ek_alloc_##__name(&((__ek_##__name##_slots)[0]),	\
+			&((__ek_##__name##_slots)[NR_EARLY_KPROBES_SLOTS]),\
+			(__ek_##__name##_bitmap));			\
+}									\
+static inline int ek_free_##__name(__t *__s)				\
+{									\
+	return __ek_free_##__name(__s, &((__ek_##__name##_slots)[0]),	\
+			&((__ek_##__name##_slots)[NR_EARLY_KPROBES_SLOTS]),\
+			(__ek_##__name##_bitmap));			\
+}
+
+
+#else
+#define __DEFINE_EKPROBE_ALLOC_OPS(__t, __name)				\
+static inline __t *__ek_alloc_##__name(__t *__s, __t *__e, unsigned long *__b)\
+{									\
+	return NULL;							\
+}									\
+static inline int __ek_free_##__name(__t *__v, __t *__s, __t *__e, unsigned long *__b)\
+{									\
+	return 0;							\
+}
+
+#define __DEFINE_EKPROBE_AREA(__t, __name, __static)			\
+__static __t __ek_##__name##_slots[0];					\
+__static unsigned long __ek_##__name##_bitmap[0];
+
+#define DEFINE_EKPROBE_ALLOC_OPS(__t, __name, __static)			\
+__DEFINE_EKPROBE_ALLOC_OPS(__t, __name)					\
+static inline __t *ek_alloc_##__name(void)				\
+{									\
+	return NULL;							\
+}									\
+static inline int ek_free_##__name(__t *__s)				\
+{									\
+	return 0;							\
+}
+
+#endif
+
 struct kprobe_insn_cache {
 	struct mutex mutex;
 	void *(*alloc)(void);	/* allocate insn page */
-- 
1.8.4

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 18/26] early kprobes: allow __alloc_insn_slot() from early kprobe slots.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:20   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Introduce early_slots_start/end and a bitmap for struct
kprobe_insn_cache, then use the previously introduced macros to
generate the allocator. This patch makes get/free_insn_slot() and
get/free_optinsn_slot() transparent to early kprobes.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h | 40 ++++++++++++++++++++++++++++++++++++++++
 kernel/kprobes.c        | 14 ++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index cd7a2a5..6100678 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -319,6 +319,17 @@ static inline int ek_free_##__name(__t *__s)				\
 			(__ek_##__name##_bitmap));			\
 }
 
+/*
+ * Start and end of early kprobes area, including code area and
+ * insn_slot area.
+ */
+extern char __early_kprobes_start[];
+extern char __early_kprobes_end[];
+
+extern kprobe_opcode_t __early_kprobes_code_area_start[];
+extern kprobe_opcode_t __early_kprobes_code_area_end[];
+extern kprobe_opcode_t __early_kprobes_insn_slot_start[];
+extern kprobe_opcode_t __early_kprobes_insn_slot_end[];
 
 #else
 #define __DEFINE_EKPROBE_ALLOC_OPS(__t, __name)				\
@@ -348,6 +359,8 @@ static inline int ek_free_##__name(__t *__s)				\
 
 #endif
 
+__DEFINE_EKPROBE_ALLOC_OPS(kprobe_opcode_t, opcode)
+
 struct kprobe_insn_cache {
 	struct mutex mutex;
 	void *(*alloc)(void);	/* allocate insn page */
@@ -355,8 +368,35 @@ struct kprobe_insn_cache {
 	struct list_head pages; /* list of kprobe_insn_page */
 	size_t insn_size;	/* size of instruction slot */
 	int nr_garbage;
+#ifdef CONFIG_EARLY_KPROBES
+# define slots_start(c)	((c)->early_slots_start)
+# define slots_end(c)	((c)->early_slots_end)
+# define slots_bitmap(c)	((c)->early_slots_bitmap)
+	kprobe_opcode_t *early_slots_start;
+	kprobe_opcode_t *early_slots_end;
+	unsigned long early_slots_bitmap[EARLY_KPROBES_BITMAP_SZ];
+#else
+# define slots_start(c)	NULL
+# define slots_end(c)	NULL
+# define slots_bitmap(c)	NULL
+#endif
 };
 
+static inline kprobe_opcode_t *
+__get_insn_slot_early(struct kprobe_insn_cache *c)
+{
+	return __ek_alloc_opcode(slots_start(c),
+			slots_end(c), slots_bitmap(c));
+}
+
+static inline int
+__free_insn_slot_early(struct kprobe_insn_cache *c,
+		kprobe_opcode_t *slot)
+{
+	return __ek_free_opcode(slot, slots_start(c),
+			slots_end(c), slots_bitmap(c));
+}
+
 extern kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c);
 extern void __free_insn_slot(struct kprobe_insn_cache *c,
 			     kprobe_opcode_t *slot, int dirty);
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 647c95a..fa1e422 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -143,6 +143,10 @@ struct kprobe_insn_cache kprobe_insn_slots = {
 	.pages = LIST_HEAD_INIT(kprobe_insn_slots.pages),
 	.insn_size = MAX_INSN_SIZE,
 	.nr_garbage = 0,
+#ifdef CONFIG_EARLY_KPROBES
+	.early_slots_start = __early_kprobes_insn_slot_start,
+	.early_slots_end = __early_kprobes_insn_slot_end,
+#endif
 };
 static int collect_garbage_slots(struct kprobe_insn_cache *c);
 
@@ -155,6 +159,9 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 	struct kprobe_insn_page *kip;
 	kprobe_opcode_t *slot = NULL;
 
+	if (kprobes_is_early())
+		return __get_insn_slot_early(c);
+
 	mutex_lock(&c->mutex);
  retry:
 	list_for_each_entry(kip, &c->pages, list) {
@@ -255,6 +262,9 @@ void __free_insn_slot(struct kprobe_insn_cache *c,
 {
 	struct kprobe_insn_page *kip;
 
+	if (unlikely(__free_insn_slot_early(c, slot)))
+		return;
+
 	mutex_lock(&c->mutex);
 	list_for_each_entry(kip, &c->pages, list) {
 		long idx = ((long)slot - (long)kip->insns) /
@@ -286,6 +296,10 @@ struct kprobe_insn_cache kprobe_optinsn_slots = {
 	.pages = LIST_HEAD_INIT(kprobe_optinsn_slots.pages),
 	/* .insn_size is initialized later */
 	.nr_garbage = 0,
+#ifdef CONFIG_EARLY_KPROBES
+	.early_slots_start = __early_kprobes_code_area_start,
+	.early_slots_end = __early_kprobes_code_area_end,
+#endif
 };
 #endif
 #endif
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread
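
The point of the change is that callers stay oblivious to the boot
stage; a sketch of the resulting behaviour (demo() is hypothetical):

/*
 * Sketch: unchanged caller. Before kprobes_initialized the slot comes
 * from the linker-reserved bitmap allocator; afterwards from the
 * page-backed cache. __free_insn_slot() routes by address range.
 */
static void demo(void)
{
	kprobe_opcode_t *slot = get_insn_slot();

	if (slot)
		free_insn_slot(slot, 0);
}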

* [RFC PATCH v2 18/26] early kprobes: allow __alloc_insn_slot() from early kprobe slots.
@ 2015-02-12 12:20   ` Wang Nan
  0 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:20 UTC (permalink / raw)
  To: linux-arm-kernel

Introduce early_slots_start/end and a bitmap for struct
kprobe_insn_cache, then use the previously introduced macros to
generate the allocator. This patch makes get/free_insn_slot() and
get/free_optinsn_slot() transparent to early kprobes.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h | 40 ++++++++++++++++++++++++++++++++++++++++
 kernel/kprobes.c        | 14 ++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index cd7a2a5..6100678 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -319,6 +319,17 @@ static inline int ek_free_##__name(__t *__s)				\
 			(__ek_##__name##_bitmap));			\
 }
 
+/*
+ * Start and end of early kprobes area, including code area and
+ * insn_slot area.
+ */
+extern char __early_kprobes_start[];
+extern char __early_kprobes_end[];
+
+extern kprobe_opcode_t __early_kprobes_code_area_start[];
+extern kprobe_opcode_t __early_kprobes_code_area_end[];
+extern kprobe_opcode_t __early_kprobes_insn_slot_start[];
+extern kprobe_opcode_t __early_kprobes_insn_slot_end[];
 
 #else
 #define __DEFINE_EKPROBE_ALLOC_OPS(__t, __name)				\
@@ -348,6 +359,8 @@ static inline int ek_free_##__name(__t *__s)				\
 
 #endif
 
+__DEFINE_EKPROBE_ALLOC_OPS(kprobe_opcode_t, opcode)
+
 struct kprobe_insn_cache {
 	struct mutex mutex;
 	void *(*alloc)(void);	/* allocate insn page */
@@ -355,8 +368,35 @@ struct kprobe_insn_cache {
 	struct list_head pages; /* list of kprobe_insn_page */
 	size_t insn_size;	/* size of instruction slot */
 	int nr_garbage;
+#ifdef CONFIG_EARLY_KPROBES
+# define slots_start(c)	((c)->early_slots_start)
+# define slots_end(c)	((c)->early_slots_end)
+# define slots_bitmap(c)	((c)->early_slots_bitmap)
+	kprobe_opcode_t *early_slots_start;
+	kprobe_opcode_t *early_slots_end;
+	unsigned long early_slots_bitmap[EARLY_KPROBES_BITMAP_SZ];
+#else
+# define slots_start(c)	NULL
+# define slots_end(c)	NULL
+# define slots_bitmap(c)	NULL
+#endif
 };
 
+static inline kprobe_opcode_t *
+__get_insn_slot_early(struct kprobe_insn_cache *c)
+{
+	return __ek_alloc_opcode(slots_start(c),
+			slots_end(c), slots_bitmap(c));
+}
+
+static inline int
+__free_insn_slot_early(struct kprobe_insn_cache *c,
+		kprobe_opcode_t *slot)
+{
+	return __ek_free_opcode(slot, slots_start(c),
+			slots_end(c), slots_bitmap(c));
+}
+
 extern kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c);
 extern void __free_insn_slot(struct kprobe_insn_cache *c,
 			     kprobe_opcode_t *slot, int dirty);
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 647c95a..fa1e422 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -143,6 +143,10 @@ struct kprobe_insn_cache kprobe_insn_slots = {
 	.pages = LIST_HEAD_INIT(kprobe_insn_slots.pages),
 	.insn_size = MAX_INSN_SIZE,
 	.nr_garbage = 0,
+#ifdef CONFIG_EARLY_KPROBES
+	.early_slots_start = __early_kprobes_insn_slot_start,
+	.early_slots_end = __early_kprobes_insn_slot_end,
+#endif
 };
 static int collect_garbage_slots(struct kprobe_insn_cache *c);
 
@@ -155,6 +159,9 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 	struct kprobe_insn_page *kip;
 	kprobe_opcode_t *slot = NULL;
 
+	if (kprobes_is_early())
+		return __get_insn_slot_early(c);
+
 	mutex_lock(&c->mutex);
  retry:
 	list_for_each_entry(kip, &c->pages, list) {
@@ -255,6 +262,9 @@ void __free_insn_slot(struct kprobe_insn_cache *c,
 {
 	struct kprobe_insn_page *kip;
 
+	if (unlikely(__free_insn_slot_early(c, slot)))
+		return;
+
 	mutex_lock(&c->mutex);
 	list_for_each_entry(kip, &c->pages, list) {
 		long idx = ((long)slot - (long)kip->insns) /
@@ -286,6 +296,10 @@ struct kprobe_insn_cache kprobe_optinsn_slots = {
 	.pages = LIST_HEAD_INIT(kprobe_optinsn_slots.pages),
 	/* .insn_size is initialized later */
 	.nr_garbage = 0,
+#ifdef CONFIG_EARLY_KPROBES
+	.early_slots_start = __early_kprobes_code_area_start,
+	.early_slots_end = __early_kprobes_code_area_end,
+#endif
 };
 #endif
 #endif
-- 
1.8.4

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 19/26] early kprobes: prohibit probing at early kprobe reserved area.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:21   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 kernel/kprobes.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index fa1e422..b83c406 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1358,6 +1358,13 @@ static bool within_kprobe_blacklist(unsigned long addr)
 
 	if (arch_within_kprobe_blacklist(addr))
 		return true;
+
+#ifdef CONFIG_EARLY_KPROBES
+	if (addr >= (unsigned long)__early_kprobes_start &&
+			addr < (unsigned long)__early_kprobes_end)
+		return true;
+#endif
+
 	/*
 	 * If there exists a kprobe_blacklist, verify and
 	 * fail any probe registration in the prohibited area
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread
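
The observable effect once kprobes are initialized, sketched below
(the demo function is hypothetical; the -EINVAL comes from the
blacklist check in check_kprobe_address_safe()):

/*
 * Sketch: probing inside the reserved area is now rejected at
 * registration time.
 */
static int __init demo_probe_reserved(void)
{
	static struct kprobe kp;

	kp.addr = (kprobe_opcode_t *)__early_kprobes_start;
	return register_kprobe(&kp);	/* expected: -EINVAL */
}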

* [RFC PATCH v2 19/26] early kprobes: prohibit probing at early kprobe reserved area.
@ 2015-02-12 12:21   ` Wang Nan
  0 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux-arm-kernel

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 kernel/kprobes.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index fa1e422..b83c406 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1358,6 +1358,13 @@ static bool within_kprobe_blacklist(unsigned long addr)
 
 	if (arch_within_kprobe_blacklist(addr))
 		return true;
+
+#ifdef CONFIG_EARLY_KPROBES
+	if (addr >= (unsigned long)__early_kprobes_start &&
+			addr < (unsigned long)__early_kprobes_end)
+		return true;
+#endif
+
 	/*
 	 * If there exists a kprobe_blacklist, verify and
 	 * fail any probe registration in the prohibited area
-- 
1.8.4

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 20/26] early kprobes: core logic of early kprobes.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:21   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

This patch contains the main logic of early kprobes.

If register_kprobe() is called before kprobes_initialized, an early
kprobe is allocated. It tries to utilize the existing OPTPROBE
mechanism to replace the target instruction with a branch instead of
a breakpoint, because interrupt handlers may not have been
initialized yet.

All resources required by early kprobes are allocated statically.
CONFIG_NR_EARLY_KPROBES_SLOTS is used to control number of possible
early kprobes.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h |   4 ++
 kernel/kprobes.c        | 150 ++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 148 insertions(+), 6 deletions(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 6100678..0c64df8 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -450,6 +450,10 @@ extern int proc_kprobes_optimization_handler(struct ctl_table *table,
 					     size_t *length, loff_t *ppos);
 #endif
 
+struct early_kprobe_slot {
+	struct optimized_kprobe op;
+};
+
 #endif /* CONFIG_OPTPROBES */
 #ifdef CONFIG_KPROBES_ON_FTRACE
 extern void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index b83c406..131a71a 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -77,6 +77,10 @@ int kprobes_is_early(void)
 static struct hlist_head kprobe_table[KPROBE_TABLE_SIZE];
 static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
 
+#ifdef CONFIG_EARLY_KPROBES
+static HLIST_HEAD(early_kprobe_hlist);
+#endif
+
 /* NOTE: change this value only with kprobe_mutex held */
 static bool kprobes_all_disarmed;
 
@@ -87,6 +91,8 @@ static struct {
 	raw_spinlock_t lock ____cacheline_aligned_in_smp;
 } kretprobe_table_locks[KPROBE_TABLE_SIZE];
 
+DEFINE_EKPROBE_ALLOC_OPS(struct early_kprobe_slot, early_kprobe, static)
+
 static raw_spinlock_t *kretprobe_table_lock_ptr(unsigned long hash)
 {
 	return &(kretprobe_table_locks[hash].lock);
@@ -326,7 +332,12 @@ struct kprobe *get_kprobe(void *addr)
 	struct hlist_head *head;
 	struct kprobe *p;
 
-	head = &kprobe_table[hash_ptr(addr, KPROBE_HASH_BITS)];
+#ifdef CONFIG_EARLY_KPROBES
+	if (kprobes_is_early())
+		head = &early_kprobe_hlist;
+	else
+#endif
+		head = &kprobe_table[hash_ptr(addr, KPROBE_HASH_BITS)];
 	hlist_for_each_entry_rcu(p, head, hlist) {
 		if (p->addr == addr)
 			return p;
@@ -386,11 +397,14 @@ NOKPROBE_SYMBOL(opt_pre_handler);
 static void free_aggr_kprobe(struct kprobe *p)
 {
 	struct optimized_kprobe *op;
+	struct early_kprobe_slot *ep;
 
 	op = container_of(p, struct optimized_kprobe, kp);
 	arch_remove_optimized_kprobe(op);
 	arch_remove_kprobe(p);
-	kfree(op);
+	ep = container_of(op, struct early_kprobe_slot, op);
+	if (likely(!ek_free_early_kprobe(ep)))
+		kfree(op);
 }
 
 /* Return true(!0) if the kprobe is ready for optimization. */
@@ -607,9 +621,15 @@ static void optimize_kprobe(struct kprobe *p)
 	struct optimized_kprobe *op;
 
 	/* Check if the kprobe is disabled or not ready for optimization. */
-	if (!kprobe_optready(p) || !kprobes_allow_optimization ||
-	    (kprobe_disabled(p) || kprobes_all_disarmed))
-		return;
+	if (unlikely(kprobes_is_early())) {
+		BUG_ON(!(p->flags & KPROBE_FLAG_EARLY));
+		if (!kprobe_optready(p) || kprobe_disabled(p))
+			return;
+	} else {
+		if (!kprobe_optready(p) || !kprobes_allow_optimization ||
+		    (kprobe_disabled(p) || kprobes_all_disarmed))
+			return;
+	}
 
 	/* Both of break_handler and post_handler are not supported. */
 	if (p->break_handler || p->post_handler)
@@ -631,7 +651,10 @@ static void optimize_kprobe(struct kprobe *p)
 		list_del_init(&op->list);
 	else {
 		list_add(&op->list, &optimizing_list);
-		kick_kprobe_optimizer();
+		if (kprobes_is_early())
+			arch_optimize_kprobes(&optimizing_list);
+		else
+			kick_kprobe_optimizer();
 	}
 }
 
@@ -1505,6 +1528,8 @@ out:
 	return ret;
 }
 
+static int register_early_kprobe(struct kprobe *p);
+
 int register_kprobe(struct kprobe *p)
 {
 	int ret;
@@ -1518,6 +1543,14 @@ int register_kprobe(struct kprobe *p)
 		return PTR_ERR(addr);
 	p->addr = addr;
 
+	if (unlikely(kprobes_is_early())) {
+		p->flags |= KPROBE_FLAG_EARLY;
+		return register_early_kprobe(p);
+	}
+
+	WARN(p->flags & KPROBE_FLAG_EARLY,
+		"register early kprobe after kprobes initialized\n");
+
 	ret = check_kprobe_rereg(p);
 	if (ret)
 		return ret;
@@ -2156,6 +2189,8 @@ static struct notifier_block kprobe_module_nb = {
 extern unsigned long __start_kprobe_blacklist[];
 extern unsigned long __stop_kprobe_blacklist[];
 
+static void convert_early_kprobes(void);
+
 static int __init init_kprobes(void)
 {
 	int i, err = 0;
@@ -2204,6 +2239,7 @@ static int __init init_kprobes(void)
 	if (!err)
 		err = register_module_notifier(&kprobe_module_nb);
 
+	convert_early_kprobes();
 	kprobes_initialized = (err == 0);
 
 	if (!err)
@@ -2497,3 +2533,105 @@ module_init(init_kprobes);
 
 /* defined in arch/.../kernel/kprobes.c */
 EXPORT_SYMBOL_GPL(jprobe_return);
+
+#ifdef CONFIG_EARLY_KPROBES
+
+static int register_early_kprobe(struct kprobe *p)
+{
+	struct early_kprobe_slot *slot;
+	int err;
+
+	if (p->break_handler || p->post_handler)
+		return -EINVAL;
+	if (p->flags & KPROBE_FLAG_DISABLED)
+		return -EINVAL;
+
+	slot = ek_alloc_early_kprobe();
+	if (!slot) {
+		pr_err("Not enough early kprobe slots.\n");
+		return -ENOMEM;
+	}
+
+	p->flags &= ~KPROBE_FLAG_DISABLED;
+	p->flags |= KPROBE_FLAG_EARLY;
+	p->nmissed = 0;
+
+	err = arch_prepare_kprobe(p);
+	if (err) {
+		pr_err("arch_prepare_kprobe failed\n");
+		goto free_slot;
+	}
+
+	INIT_LIST_HEAD(&p->list);
+	INIT_HLIST_NODE(&p->hlist);
+	INIT_LIST_HEAD(&slot->op.list);
+	slot->op.kp.addr = p->addr;
+	slot->op.kp.flags = p->flags | KPROBE_FLAG_EARLY;
+
+	err = arch_prepare_optimized_kprobe(&slot->op, p);
+	if (err) {
+		pr_err("Failed to prepare optimized kprobe.\n");
+		goto remove_optimized;
+	}
+
+	if (!arch_prepared_optinsn(&slot->op.optinsn)) {
+		pr_err("Failed to prepare optinsn.\n");
+		err = -ENOMEM;
+		goto remove_optimized;
+	}
+
+	hlist_add_head_rcu(&p->hlist, &early_kprobe_hlist);
+	init_aggr_kprobe(&slot->op.kp, p);
+	optimize_kprobe(&slot->op.kp);
+	return 0;
+
+remove_optimized:
+	arch_remove_optimized_kprobe(&slot->op);
+free_slot:
+	ek_free_early_kprobe(slot);
+	return err;
+}
+
+static void
+convert_early_kprobe(struct kprobe *kp)
+{
+	struct module *probed_mod;
+	int err;
+
+	BUG_ON(!kprobe_aggrprobe(kp));
+
+	err = check_kprobe_address_safe(kp, &probed_mod);
+	if (err)
+		panic("Inserting kprobe at %p is not safe!", kp->addr);
+
+	/*
+	 * FIXME:
+	 * convert kprobe to ftrace if CONFIG_KPROBES_ON_FTRACE is on
+	 * and kp is on ftrace location.
+	 */
+
+	mutex_lock(&kprobe_mutex);
+	hlist_del_rcu(&kp->hlist);
+
+	INIT_HLIST_NODE(&kp->hlist);
+	hlist_add_head_rcu(&kp->hlist,
+		       &kprobe_table[hash_ptr(kp->addr, KPROBE_HASH_BITS)]);
+	mutex_unlock(&kprobe_mutex);
+
+	if (probed_mod)
+		module_put(probed_mod);
+}
+
+static void
+convert_early_kprobes(void)
+{
+	struct kprobe *p;
+	struct hlist_node *tmp;
+
+	hlist_for_each_entry_safe(p, tmp, &early_kprobe_hlist, hlist)
+		convert_early_kprobe(p);
+}
+#else
+static int register_early_kprobe(struct kprobe *p) { return -ENOSYS; }
+static void convert_early_kprobes(void) {}
+#endif
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread
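
A minimal consumer honoring the constraints register_early_kprobe()
enforces (pre_handler only, probe not disabled); the handler and the
probed symbol are illustrative:

/*
 * Sketch: registering a probe from early boot code. Only a
 * pre_handler is accepted; init_kprobes() later migrates the probe
 * into the regular hash table via convert_early_kprobes().
 */
static int early_pre(struct kprobe *p, struct pt_regs *regs)
{
	return 0;	/* e.g. bump a counter */
}

static struct kprobe early_kp = {
	.symbol_name	= "__alloc_pages_nodemask",
	.pre_handler	= early_pre,
};

static void __init demo_register_early(void)
{
	register_kprobe(&early_kp);	/* takes the early path above */
}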

* [RFC PATCH v2 20/26] early kprobes: core logic of early kprobes.
@ 2015-02-12 12:21   ` Wang Nan
  0 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux-arm-kernel

This patch contains the main logic of early kprobes.

If register_kprobe() is called before kprobes_initialized, an early
kprobe is allocated. It tries to utilize the existing OPTPROBE
mechanism to replace the target instruction with a branch instead of
a breakpoint, because interrupt handlers may not have been
initialized yet.

All resources required by early kprobes are allocated statically.
CONFIG_NR_EARLY_KPROBES_SLOTS is used to control number of possible
early kprobes.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h |   4 ++
 kernel/kprobes.c        | 150 ++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 148 insertions(+), 6 deletions(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 6100678..0c64df8 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -450,6 +450,10 @@ extern int proc_kprobes_optimization_handler(struct ctl_table *table,
 					     size_t *length, loff_t *ppos);
 #endif
 
+struct early_kprobe_slot {
+	struct optimized_kprobe op;
+};
+
 #endif /* CONFIG_OPTPROBES */
 #ifdef CONFIG_KPROBES_ON_FTRACE
 extern void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index b83c406..131a71a 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -77,6 +77,10 @@ int kprobes_is_early(void)
 static struct hlist_head kprobe_table[KPROBE_TABLE_SIZE];
 static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
 
+#ifdef CONFIG_EARLY_KPROBES
+static HLIST_HEAD(early_kprobe_hlist);
+#endif
+
 /* NOTE: change this value only with kprobe_mutex held */
 static bool kprobes_all_disarmed;
 
@@ -87,6 +91,8 @@ static struct {
 	raw_spinlock_t lock ____cacheline_aligned_in_smp;
 } kretprobe_table_locks[KPROBE_TABLE_SIZE];
 
+DEFINE_EKPROBE_ALLOC_OPS(struct early_kprobe_slot, early_kprobe, static)
+
 static raw_spinlock_t *kretprobe_table_lock_ptr(unsigned long hash)
 {
 	return &(kretprobe_table_locks[hash].lock);
@@ -326,7 +332,12 @@ struct kprobe *get_kprobe(void *addr)
 	struct hlist_head *head;
 	struct kprobe *p;
 
-	head = &kprobe_table[hash_ptr(addr, KPROBE_HASH_BITS)];
+#ifdef CONFIG_EARLY_KPROBES
+	if (kprobes_is_early())
+		head = &early_kprobe_hlist;
+	else
+#endif
+		head = &kprobe_table[hash_ptr(addr, KPROBE_HASH_BITS)];
 	hlist_for_each_entry_rcu(p, head, hlist) {
 		if (p->addr == addr)
 			return p;
@@ -386,11 +397,14 @@ NOKPROBE_SYMBOL(opt_pre_handler);
 static void free_aggr_kprobe(struct kprobe *p)
 {
 	struct optimized_kprobe *op;
+	struct early_kprobe_slot *ep;
 
 	op = container_of(p, struct optimized_kprobe, kp);
 	arch_remove_optimized_kprobe(op);
 	arch_remove_kprobe(p);
-	kfree(op);
+	ep = container_of(op, struct early_kprobe_slot, op);
+	if (likely(!ek_free_early_kprobe(ep)))
+		kfree(op);
 }
 
 /* Return true(!0) if the kprobe is ready for optimization. */
@@ -607,9 +621,15 @@ static void optimize_kprobe(struct kprobe *p)
 	struct optimized_kprobe *op;
 
 	/* Check if the kprobe is disabled or not ready for optimization. */
-	if (!kprobe_optready(p) || !kprobes_allow_optimization ||
-	    (kprobe_disabled(p) || kprobes_all_disarmed))
-		return;
+	if (unlikely(kprobes_is_early())) {
+		BUG_ON(!(p->flags & KPROBE_FLAG_EARLY));
+		if (!kprobe_optready(p) || kprobe_disabled(p))
+			return;
+	} else {
+		if (!kprobe_optready(p) || !kprobes_allow_optimization ||
+		    (kprobe_disabled(p) || kprobes_all_disarmed))
+			return;
+	}
 
 	/* Both of break_handler and post_handler are not supported. */
 	if (p->break_handler || p->post_handler)
@@ -631,7 +651,10 @@ static void optimize_kprobe(struct kprobe *p)
 		list_del_init(&op->list);
 	else {
 		list_add(&op->list, &optimizing_list);
-		kick_kprobe_optimizer();
+		if (kprobes_is_early())
+			arch_optimize_kprobes(&optimizing_list);
+		else
+			kick_kprobe_optimizer();
 	}
 }
 
@@ -1505,6 +1528,8 @@ out:
 	return ret;
 }
 
+static int register_early_kprobe(struct kprobe *p);
+
 int register_kprobe(struct kprobe *p)
 {
 	int ret;
@@ -1518,6 +1543,14 @@ int register_kprobe(struct kprobe *p)
 		return PTR_ERR(addr);
 	p->addr = addr;
 
+	if (unlikely(kprobes_is_early())) {
+		p->flags |= KPROBE_FLAG_EARLY;
+		return register_early_kprobe(p);
+	}
+
+	WARN(p->flags & KPROBE_FLAG_EARLY,
+		"register early kprobe after kprobes initialized\n");
+
 	ret = check_kprobe_rereg(p);
 	if (ret)
 		return ret;
@@ -2156,6 +2189,8 @@ static struct notifier_block kprobe_module_nb = {
 extern unsigned long __start_kprobe_blacklist[];
 extern unsigned long __stop_kprobe_blacklist[];
 
+static void convert_early_kprobes(void);
+
 static int __init init_kprobes(void)
 {
 	int i, err = 0;
@@ -2204,6 +2239,7 @@ static int __init init_kprobes(void)
 	if (!err)
 		err = register_module_notifier(&kprobe_module_nb);
 
+	convert_early_kprobes();
 	kprobes_initialized = (err == 0);
 
 	if (!err)
@@ -2497,3 +2533,105 @@ module_init(init_kprobes);
 
 /* defined in arch/.../kernel/kprobes.c */
 EXPORT_SYMBOL_GPL(jprobe_return);
+
+#ifdef CONFIG_EARLY_KPROBES
+
+static int register_early_kprobe(struct kprobe *p)
+{
+	struct early_kprobe_slot *slot;
+	int err;
+
+	if (p->break_handler || p->post_handler)
+		return -EINVAL;
+	if (p->flags & KPROBE_FLAG_DISABLED)
+		return -EINVAL;
+
+	slot = ek_alloc_early_kprobe();
+	if (!slot) {
+		pr_err("Not enough early kprobe slots.\n");
+		return -ENOMEM;
+	}
+
+	p->flags &= ~KPROBE_FLAG_DISABLED;
+	p->flags |= KPROBE_FLAG_EARLY;
+	p->nmissed = 0;
+
+	err = arch_prepare_kprobe(p);
+	if (err) {
+		pr_err("arch_prepare_kprobe failed\n");
+		goto free_slot;
+	}
+
+	INIT_LIST_HEAD(&p->list);
+	INIT_HLIST_NODE(&p->hlist);
+	INIT_LIST_HEAD(&slot->op.list);
+	slot->op.kp.addr = p->addr;
+	slot->op.kp.flags = p->flags | KPROBE_FLAG_EARLY;
+
+	err = arch_prepare_optimized_kprobe(&slot->op, p);
+	if (err) {
+		pr_err("Failed to prepare optimized kprobe.\n");
+		goto remove_optimized;
+	}
+
+	if (!arch_prepared_optinsn(&slot->op.optinsn)) {
+		pr_err("Failed to prepare optinsn.\n");
+		err = -ENOMEM;
+		goto remove_optimized;
+	}
+
+	hlist_add_head_rcu(&p->hlist, &early_kprobe_hlist);
+	init_aggr_kprobe(&slot->op.kp, p);
+	optimize_kprobe(&slot->op.kp);
+	return 0;
+
+remove_optimized:
+	arch_remove_optimized_kprobe(&slot->op);
+free_slot:
+	ek_free_early_kprobe(slot);
+	return err;
+}
+
+static void
+convert_early_kprobe(struct kprobe *kp)
+{
+	struct module *probed_mod;
+	int err;
+
+	BUG_ON(!kprobe_aggrprobe(kp));
+
+	err = check_kprobe_address_safe(kp, &probed_mod);
+	if (err)
+		panic("Inserting kprobe at %p is not safe!", kp->addr);
+
+	/*
+	 * FIXME:
+	 * convert kprobe to ftrace if CONFIG_KPROBES_ON_FTRACE is on
+	 * and kp is on ftrace location.
+	 */
+
+	mutex_lock(&kprobe_mutex);
+	hlist_del_rcu(&kp->hlist);
+
+	INIT_HLIST_NODE(&kp->hlist);
+	hlist_add_head_rcu(&kp->hlist,
+		       &kprobe_table[hash_ptr(kp->addr, KPROBE_HASH_BITS)]);
+	mutex_unlock(&kprobe_mutex);
+
+	if (probed_mod)
+		module_put(probed_mod);
+}
+
+static void
+convert_early_kprobes(void)
+{
+	struct kprobe *p;
+	struct hlist_node *tmp;
+
+	hlist_for_each_entry_safe(p, tmp, &early_kprobe_hlist, hlist)
+		convert_early_kprobe(p);
+}
+#else
+static int register_early_kprobe(struct kprobe *p) { return -ENOSYS; }
+static void convert_early_kprobes(void) {}
+#endif
-- 
1.8.4

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 21/26] early kprobes: add CONFIG_EARLY_KPROBES option.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:21   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Enable early kprobes in Kconfig.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/Kconfig | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 05d7a8a..06dff4b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -46,6 +46,18 @@ config KPROBES
 	  for kernel debugging, non-intrusive instrumentation and testing.
 	  If in doubt, say "N".
 
+config EARLY_KPROBES
+	depends on KPROBES && OPTPROBES
+	def_bool y
+
+config NR_EARLY_KPROBES_SLOTS
+	int "Number of possible early kprobes"
+	range 1 64
+	default 16
+	depends on EARLY_KPROBES
+	help
+	  Number of statically reserved early kprobe slots.
+
 config JUMP_LABEL
        bool "Optimize very unlikely/likely branches"
        depends on HAVE_ARCH_JUMP_LABEL
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread
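
Since EARLY_KPROBES carries no prompt, it simply follows its
dependencies; an illustrative .config fragment:

CONFIG_KPROBES=y
CONFIG_OPTPROBES=y
CONFIG_EARLY_KPROBES=y
CONFIG_NR_EARLY_KPROBES_SLOTS=16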

* [RFC PATCH v2 22/26] early kprobes: introduce arch_fix_ftrace_early_kprobe().
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:21   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Give arch code a chance to fix up the probed ftrace entry.
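
The fixup hook uses the kernel's weak-symbol pattern: kernel/kprobes.c
provides an empty __weak default, and an architecture that needs a fixup
(x86, below) overrides it with a strong definition at link time. A minimal
sketch of the generic side, using only names from this patch:

#include <linux/kprobes.h>

/* Default: most architectures need no fixup, so do nothing. A strong
 * arch definition, such as the x86 one in this patch, replaces this
 * weak one at link time. */
void __weak arch_fix_ftrace_early_kprobe(struct optimized_kprobe *p)
{
}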

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/x86/kernel/kprobes/opt.c | 31 +++++++++++++++++++++++++++++++
 include/linux/kprobes.h       |  6 +++++-
 kernel/kprobes.c              |  6 ++++++
 3 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 21847ab..f3ea954 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -456,3 +456,34 @@ int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter)
 	return 0;
 }
 NOKPROBE_SYMBOL(setup_detour_execution);
+
+#ifdef CONFIG_EARLY_KPROBES
+void arch_fix_ftrace_early_kprobe(struct optimized_kprobe *op)
+{
+	const unsigned char *correct_nop5 = ideal_nops[NOP_ATOMIC5];
+	struct kprobe *list_p;
+
+	u32 mask = KPROBE_FLAG_EARLY |
+		KPROBE_FLAG_OPTIMIZED |
+		KPROBE_FLAG_FTRACE;
+
+	if ((op->kp.flags & mask) != mask)
+		return;
+
+	/*
+	 * For an early kprobe on ftrace, use the right nop instruction.
+	 * See x86 ftrace_make_nop and ftrace_nop_replace. Note that
+	 * ideal_nops, which ftrace_nop_replace relies on, is set up
+	 * after early kprobe registration.
+	 */
+
+	memcpy(&op->kp.opcode, correct_nop5, sizeof(kprobe_opcode_t));
+	memcpy(op->optinsn.copied_insn, correct_nop5 + INT3_SIZE,
+			RELATIVE_ADDR_SIZE);
+
+	/* Fix all kprobes connected to it */
+	list_for_each_entry_rcu(list_p, &op->kp.list, list)
+		memcpy(&list_p->opcode, correct_nop5, sizeof(kprobe_opcode_t));
+
+}
+#endif
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 0c64df8..990d04b 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -330,7 +330,6 @@ extern kprobe_opcode_t __early_kprobes_code_area_start[];
 extern kprobe_opcode_t __early_kprobes_code_area_end[];
 extern kprobe_opcode_t __early_kprobes_insn_slot_start[];
 extern kprobe_opcode_t __early_kprobes_insn_slot_end[];
-
 #else
 #define __DEFINE_EKPROBE_ALLOC_OPS(__t, __name)				\
 static inline __t *__ek_alloc_##__name(__t *__s, __t *__e, unsigned long *__b)\
@@ -459,6 +458,11 @@ struct early_kprobe_slot {
 extern void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 				  struct ftrace_ops *ops, struct pt_regs *regs);
 extern int arch_prepare_kprobe_ftrace(struct kprobe *p);
+
+#ifdef CONFIG_EARLY_KPROBES
+extern void arch_fix_ftrace_early_kprobe(struct optimized_kprobe *p);
+#endif
+
 #endif
 
 int arch_check_ftrace_location(struct kprobe *p);
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 131a71a..0bbb510 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -2536,6 +2536,12 @@ EXPORT_SYMBOL_GPL(jprobe_return);
 
 #ifdef CONFIG_EARLY_KPROBES
 
+#ifdef CONFIG_KPROBES_ON_FTRACE
+void __weak arch_fix_ftrace_early_kprobe(struct optimized_kprobe *p)
+{
+}
+#endif
+
 static int register_early_kprobe(struct kprobe *p)
 {
 	struct early_kprobe_slot *slot;
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 23/26] early kprobes: x86: arch_restore_optimized_kprobe().
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:21   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

arch_restore_optimized_kprobe() can be used to temporarily restore the
probed instruction. Doing so effectively disables the optimized kprobe.
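
The restore rewrites live kernel text, so it must not race with other CPUs
executing around the probed address; the patch therefore runs the
text_poke() under stop_machine(). A minimal sketch of that pattern,
assuming the stop_machine() signature of this kernel (do_poke and
poke_safely are placeholder names):

#include <linux/stop_machine.h>

static int do_poke(void *data)
{
	/* Runs while every other CPU spins with interrupts disabled,
	 * so the text write cannot race with execution. */
	return 0;
}

static void poke_safely(void *data)
{
	/* NULL cpumask: run on one CPU, halt all the others. */
	stop_machine(do_poke, data, NULL);
}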

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/x86/kernel/kprobes/opt.c | 26 ++++++++++++++++++++++++++
 include/linux/kprobes.h       |  1 +
 2 files changed, 27 insertions(+)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index f3ea954..12332c2 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -28,6 +28,7 @@
 #include <linux/kdebug.h>
 #include <linux/kallsyms.h>
 #include <linux/ftrace.h>
+#include <linux/stop_machine.h>
 
 #include <asm/cacheflush.h>
 #include <asm/desc.h>
@@ -486,4 +487,29 @@ void arch_fix_ftrace_early_kprobe(struct optimized_kprobe *op)
 		memcpy(&list_p->opcode, correct_nop5, sizeof(kprobe_opcode_t));
 
 }
+
+static int do_restore_kprobe(void *p)
+{
+	struct optimized_kprobe *op = p;
+	u8 insn_buf[RELATIVEJUMP_SIZE];
+
+	memcpy(insn_buf, &op->kp.opcode, sizeof(kprobe_opcode_t));
+	memcpy(insn_buf + INT3_SIZE,
+			op->optinsn.copied_insn,
+			RELATIVE_ADDR_SIZE);
+	text_poke(op->kp.addr, insn_buf, RELATIVEJUMP_SIZE);
+	return 0;
+}
+
+void arch_restore_optimized_kprobe(struct optimized_kprobe *op)
+{
+	u32 mask = KPROBE_FLAG_EARLY |
+		KPROBE_FLAG_OPTIMIZED |
+		KPROBE_FLAG_FTRACE;
+
+	if ((op->kp.flags & mask) != mask)
+		return;
+
+	stop_machine(do_restore_kprobe, op, NULL);
+}
 #endif
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 990d04b..92aafa7 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -461,6 +461,7 @@ extern int arch_prepare_kprobe_ftrace(struct kprobe *p);
 
 #ifdef CONFIG_EARLY_KPROBES
 extern void arch_fix_ftrace_early_kprobe(struct optimized_kprobe *p);
+extern void arch_restore_optimized_kprobe(struct optimized_kprobe *p);
 #endif
 
 #endif
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 24/26] early kprobes: core logic to support early kprobe on ftrace.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:21   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Utilize the previously introduced ftrace update notifier chain to support
early kprobes on ftrace.
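
A subscriber attaches to the chain with an ordinary notifier_block and
receives the struct ftrace_update_notifier_info introduced in patch 09.
A minimal sketch of a subscriber, using only the API from this series
(my_fixup and my_nb are placeholder names): returning NOTIFY_STOP claims
the failed update, and setting info->retry asks ftrace to redo it:

#include <linux/notifier.h>
#include <linux/ftrace.h>

static int my_fixup(struct notifier_block *nb, unsigned long val,
		    void *param)
{
	struct ftrace_update_notifier_info *info = param;

	if (!info || !info->rec)
		return NOTIFY_DONE;	/* not ours; ftrace_bug() fires */

	/* ... repair whatever made the update of info->rec fail ... */

	info->retry = true;		/* ask ftrace to retry the update */
	return NOTIFY_STOP;		/* handled; suppress ftrace_bug() */
}

static struct notifier_block my_nb = {
	.notifier_call	= my_fixup,
};

/* During init: register_ftrace_update_notifier(&my_nb); */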

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h |   1 +
 kernel/kprobes.c        | 213 ++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 197 insertions(+), 17 deletions(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 92aafa7..1c211e8 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -131,6 +131,7 @@ struct kprobe {
 				   */
 #define KPROBE_FLAG_FTRACE	8 /* probe is using ftrace */
 #define KPROBE_FLAG_EARLY	16 /* early kprobe */
+#define KPROBE_FLAG_RESTORED	32 /* temporarily restored to its original insn */
 
 /* Has this kprobe gone ? */
 static inline int kprobe_gone(struct kprobe *p)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 0bbb510..c9cd46f 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -48,6 +48,7 @@
 #include <linux/ftrace.h>
 #include <linux/cpu.h>
 #include <linux/jump_label.h>
+#include <linux/stop_machine.h>
 
 #include <asm-generic/sections.h>
 #include <asm/cacheflush.h>
@@ -2540,11 +2541,127 @@ EXPORT_SYMBOL_GPL(jprobe_return);
 void __weak arch_fix_ftrace_early_kprobe(struct optimized_kprobe *p)
 {
 }
+
+static int restore_optimized_kprobe(struct optimized_kprobe *op)
+{
+	/* If it has already been restored, let someone else handle it. */
+	if (op->kp.flags & KPROBE_FLAG_RESTORED)
+		return NOTIFY_DONE;
+
+	get_online_cpus();
+	mutex_lock(&text_mutex);
+	arch_restore_optimized_kprobe(op);
+	mutex_unlock(&text_mutex);
+	put_online_cpus();
+
+	op->kp.flags |= KPROBE_FLAG_RESTORED;
+	return NOTIFY_STOP;
+}
+
+static int ftrace_notifier_call(struct notifier_block *nb,
+		unsigned long val, void *param)
+{
+	struct ftrace_update_notifier_info *info = param;
+	struct optimized_kprobe *op;
+	struct dyn_ftrace *rec;
+	struct kprobe *kp;
+	int enable;
+	void *addr;
+	int ret = NOTIFY_DONE;
+
+	if (!info || !info->rec || !info->rec->ip)
+		return NOTIFY_DONE;
+
+	rec = info->rec;
+	enable = info->enable;
+	addr = (void *)rec->ip;
+
+	mutex_lock(&kprobe_mutex);
+	kp = get_kprobe(addr);
+	mutex_unlock(&kprobe_mutex);
+
+	if (!kp || !kprobe_aggrprobe(kp))
+		return NOTIFY_DONE;
+
+	op = container_of(kp, struct optimized_kprobe, kp);
+	/*
+	 * Ftrace is trying to convert ftrace entries to nop
+	 * instructions. This conversion should already have been done
+	 * at register_early_kprobe(). x86 needs a fixup here.
+	 */
+	if (!(rec->flags & FTRACE_FL_ENABLED) && (!enable)) {
+		arch_fix_ftrace_early_kprobe(op);
+		return NOTIFY_STOP;
+	}
+
+	/*
+	 * Ftrace is trying to enable a trace entry. We temporarily
+	 * restore the probed instruction.
+	 * We can continue using this kprobe as an ftrace-based kprobe,
+	 * but events between this restoration and the early kprobe
+	 * conversion will be lost.
+	 */
+	if (!(rec->flags & FTRACE_FL_ENABLED) && enable) {
+		ret = restore_optimized_kprobe(op);
+
+		/* Let ftrace retry if restore is successful. */
+		if (ret == NOTIFY_STOP)
+			info->retry = true;
+		return ret;
+	}
+
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block ftrace_notifier_block = {
+	.notifier_call = ftrace_notifier_call,
+};
+static bool ftrace_notifier_registered = false;
+
+static int enable_early_kprobe_on_ftrace(struct kprobe *p)
+{
+	int err;
+
+	if (!ftrace_notifier_registered) {
+		err = register_ftrace_update_notifier(&ftrace_notifier_block);
+		if (err) {
+			pr_err("Failed to register ftrace update notifier\n");
+			return err;
+		}
+		ftrace_notifier_registered = true;
+	}
+
+	err = ftrace_process_loc_early((unsigned long)p->addr);
+	if (err)
+		pr_err("Failed to process ftrace entry at %p\n", p->addr);
+	return err;
+}
+
+/* Caller must ensure kprobe_aggrprobe(kp). */
+static void convert_early_ftrace_kprobe_top(struct optimized_kprobe *op)
+{
+	restore_optimized_kprobe(op);
+	arm_kprobe_ftrace(&op->kp);
+}
+
+#else
+static inline int enable_early_kprobe_on_ftrace(struct kprobe *__unused)
+{ return 0; }
+
+/*
+ * If CONFIG_KPROBES_ON_FTRACE is off this function should never get called,
+ * so let it trigger a warning.
+ */
+static inline void convert_early_ftrace_kprobe_top(struct optimized_kprobe *__unused)
+{
+	WARN_ON(1);
+}
 #endif
 
 static int register_early_kprobe(struct kprobe *p)
 {
 	struct early_kprobe_slot *slot;
+	struct module *probed_mod;
 	int err;
 
 	if (p->break_handler || p->post_handler)
@@ -2552,13 +2669,25 @@ static int register_early_kprobe(struct kprobe *p)
 	if (p->flags & KPROBE_FLAG_DISABLED)
 		return -EINVAL;
 
+	err = check_kprobe_address_safe(p, &probed_mod);
+	if (err)
+		return err;
+
+	BUG_ON(probed_mod);
+
+	if (kprobe_ftrace(p)) {
+		err = enable_early_kprobe_on_ftrace(p);
+		if (err)
+			return err;
+	}
+
 	slot = ek_alloc_early_kprobe();
 	if (!slot) {
 		pr_err("No enough early kprobe slots.\n");
 		return -ENOMEM;
 	}
 
-	p->flags &= KPROBE_FLAG_DISABLED;
+	p->flags &= KPROBE_FLAG_DISABLED | KPROBE_FLAG_FTRACE;
 	p->flags |= KPROBE_FLAG_EARLY;
 	p->nmissed = 0;
 
@@ -2599,43 +2728,93 @@ free_slot:
 }
 
 static void
-convert_early_kprobe(struct kprobe *kp)
+convert_early_kprobe_top(struct kprobe *kp)
 {
 	struct module *probed_mod;
+	struct optimized_kprobe *op;
 	int err;
 
 	BUG_ON(!kprobe_aggrprobe(kp));
+	op = container_of(kp, struct optimized_kprobe, kp);
 
 	err = check_kprobe_address_safe(kp, &probed_mod);
 	if (err)
 		panic("Insert kprobe at %p is not safe!", kp->addr);
+	BUG_ON(probed_mod);
 
-	/*
-	 * FIXME:
-	 * convert kprobe to ftrace if CONFIG_KPROBES_ON_FTRACE is on
-	 * and kp is on ftrace location.
-	 */
+	if (kprobe_ftrace(kp))
+		convert_early_ftrace_kprobe_top(op);
+}
 
-	mutex_lock(&kprobe_mutex);
-	hlist_del_rcu(&kp->hlist);
+static void
+convert_early_kprobes_top(void)
+{
+	struct kprobe *p;
+
+	hlist_for_each_entry(p, &early_kprobe_hlist, hlist)
+		convert_early_kprobe_top(p);
+}
+
+static LIST_HEAD(early_freeing_list);
+
+static void
+convert_early_kprobe_stop_machine(struct kprobe *kp)
+{
+	struct optimized_kprobe *op;
+
+	BUG_ON(!kprobe_aggrprobe(kp));
+	op = container_of(kp, struct optimized_kprobe, kp);
+
+	if ((kprobe_ftrace(kp)) && (list_is_singular(&op->kp.list))) {
+		/* Update kp */
+		kp = list_entry(op->kp.list.next, struct kprobe, list);
+
+		hlist_replace_rcu(&op->kp.hlist, &kp->hlist);
+		list_del_init(&kp->list);
+
+		op->kp.flags |= KPROBE_FLAG_DISABLED;
+		list_add(&op->list, &early_freeing_list);
+	}
 
+	hlist_del_rcu(&kp->hlist);
 	INIT_HLIST_NODE(&kp->hlist);
 	hlist_add_head_rcu(&kp->hlist,
-		       &kprobe_table[hash_ptr(kp->addr, KPROBE_HASH_BITS)]);
-	mutex_unlock(&kprobe_mutex);
-
-	if (probed_mod)
-		module_put(probed_mod);
+			&kprobe_table[hash_ptr(kp->addr, KPROBE_HASH_BITS)]);
 }
 
-static void
-convert_early_kprobes(void)
+static int
+convert_early_kprobes_stop_machine(void *__unused)
 {
 	struct kprobe *p;
 	struct hlist_node *tmp;
 
 	hlist_for_each_entry_safe(p, tmp, &early_kprobe_hlist, hlist)
-		convert_early_kprobe(p);
+		convert_early_kprobe_stop_machine(p);
+	return 0;
+}
+
+static void
+convert_early_kprobes(void)
+{
+	struct optimized_kprobe *op, *tmp;
+
+	mutex_lock(&kprobe_mutex);
+
+	convert_early_kprobes_top();
+
+	get_online_cpus();
+	mutex_lock(&text_mutex);
+
+	stop_machine(convert_early_kprobes_stop_machine, NULL, NULL);
+
+	mutex_unlock(&text_mutex);
+	put_online_cpus();
+	mutex_unlock(&kprobe_mutex);
+
+	list_for_each_entry_safe(op, tmp, &early_freeing_list, list) {
+		list_del_init(&op->list);
+		free_aggr_kprobe(&op->kp);
+	}
 };
 #else
 static int register_early_kprobe(struct kprobe *p) { return -ENOSYS; }
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 25/26] early kprobes: introduce kconfig option to support early kprobe on ftrace.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:21   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

On platforms (like x86) that support CONFIG_KPROBES_ON_FTRACE, make early
kprobes depend on it so we are able to probe function entries. The new
dependency reads: either the architecture cannot do kprobes on ftrace at
all, or the feature must actually be enabled.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 06dff4b..7225386 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -47,7 +47,7 @@ config KPROBES
 	  If in doubt, say "N".
 
 config EARLY_KPROBES
-	depends on KPROBES && OPTPROBES
+	depends on KPROBES && OPTPROBES && (KPROBES_ON_FTRACE || !HAVE_KPROBES_ON_FTRACE)
 	def_bool y
 
 config NR_EARLY_KPROBES_SLOTS
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 26/26] kprobes: enable 'ekprobe=' cmdline option for early kprobes.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:21   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

This patch shows the basic usage of early kprobes. By adding a kernel
cmdline option such as 'ekprobe=__alloc_pages_nodemask' or
'ekprobe=0xc00f3c2c', early kprobes are installed. When a probed
instruction gets hit, a message is printed.
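
For example, booting with the first option above installs one early kprobe
on that function entry; when it fires, the pre-handler below prints a line
built from kallsyms. A sketch of the expected log line (the offset and any
module suffix depend on what kallsyms resolves):

  ekprobe=__alloc_pages_nodemask

  Hit early kprobe at __alloc_pages_nodemask+0x0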

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 kernel/kprobes.c | 71 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index c9cd46f..79c815b 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -2816,7 +2816,78 @@ convert_early_kprobes(void)
 		free_aggr_kprobe(&op->kp);
 	}
 };
+
+static int early_kprobe_pre_handler(struct kprobe *p, struct pt_regs *regs)
+{
+	const char *sym = NULL;
+	char *modname, namebuf[KSYM_NAME_LEN];
+	unsigned long offset = 0;
+
+	sym = kallsyms_lookup((unsigned long)p->addr, NULL,
+			&offset, &modname, namebuf);
+	if (sym)
+		pr_info("Hit early kprobe at %s+0x%lx%s%s\n",
+				sym, offset,
+				(modname ? " " : ""),
+				(modname ? modname : ""));
+	else
+		pr_info("Hit early kprobe at %p\n", p->addr);
+	return 0;
+}
+
+DEFINE_EKPROBE_ALLOC_OPS(struct kprobe, early_kprobe_setup, static);
+static int __init early_kprobe_setup(char *p)
+{
+	unsigned long long addr;
+	struct kprobe *kp;
+	int len = strlen(p);
+	int err;
+
+	if (len <= 0) {
+		pr_err("early kprobe: wrong param: %s\n", p);
+		return 0;
+	}
+
+	if ((p[0] == '0') && (p[1] == 'x')) {
+		err = kstrtoull(p, 16, &addr);
+		if (err) {
+			pr_err("early kprobe: wrong address: %p\n", p);
+			return 0;
+		}
+	} else {
+		addr = kallsyms_lookup_name(p);
+		if (!addr) {
+			pr_err("early kprobe: wrong symbol: %s\n", p);
+			return 0;
+		}
+	}
+
+	if ((addr < (unsigned long)_text) ||
+			(addr >= (unsigned long)_etext))
+		pr_err("early kprobe: address of %p out of range\n", p);
+
+	kp = ek_alloc_early_kprobe_setup();
+	if (kp == NULL) {
+		pr_err("early kprobe: no enough early kprobe slot\n");
+		return 0;
+	}
+	kp->addr = (void *)(unsigned long)(addr);
+	kp->pre_handler = early_kprobe_pre_handler;
+	err = register_kprobe(kp);
+	if (err) {
+		pr_err("early kprobe: register early kprobe %s failed\n", p);
+		ek_free_early_kprobe_setup(kp);
+	}
+	return 0;
+}
 #else
 static int register_early_kprobe(struct kprobe *p) { return -ENOSYS; }
 static void convert_early_kprobes(void) {};
+
+static int __init early_kprobe_setup(char *p)
+{
+	return 0;
+}
 #endif
+
+early_param("ekprobe", early_kprobe_setup);
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 09/26] ftrace: callchain and ftrace_bug_tryfix
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-12 12:21   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-12 12:21 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/ftrace.h | 30 ++++++++++++++++++++++++++++++
 kernel/trace/ftrace.c  | 46 ++++++++++++++++++++++++++++++++++++++++------
 2 files changed, 70 insertions(+), 6 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index d37ccd8a..98da86d 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -283,6 +283,21 @@ int ftrace_arch_code_modify_post_process(void);
 struct dyn_ftrace;
 
 void ftrace_bug(int err, struct dyn_ftrace *rec);
+int ftrace_tryfix(int failed, int enable, struct dyn_ftrace *rec);
+
+#define __ftrace_tryfix_bug(__failed, __enable, __rec, __retry, __trigger)\
+	({								\
+		int __fix_ret = ftrace_tryfix((__failed), (__enable), (__rec));\
+		__fix_ret = (__fix_ret == -EAGAIN) ?			\
+			({ __retry; }) :				\
+			__fix_ret;					\
+		if (__fix_ret && (__trigger))					\
+			ftrace_bug(__failed, __rec);			\
+		__fix_ret;						\
+	})
+
+#define ftrace_tryfix_bug(__failed, __enable, __rec, __retry)	\
+	__ftrace_tryfix_bug(__failed, __enable, __rec, __retry, true)
 
 struct seq_file;
 
@@ -699,10 +714,20 @@ static inline void __ftrace_enabled_restore(int enabled)
 # define trace_preempt_off(a0, a1) do { } while (0)
 #endif
 
+struct ftrace_update_notifier_info {
+	struct dyn_ftrace *rec;
+	int errno;
+	int enable;
+
+	/* Filled by subscriber */
+	bool retry;
+};
+
 #ifdef CONFIG_FTRACE_MCOUNT_RECORD
 extern void ftrace_init(void);
 extern void ftrace_init_early(void);
 extern int ftrace_process_loc_early(unsigned long ip);
+extern int register_ftrace_update_notifier(struct notifier_block *nb);
 #else
 static inline void ftrace_init(void) { }
 static inline void ftrace_init_early(void) { }
@@ -710,6 +735,11 @@ static inline int ftrace_process_loc_early(unsigned long __unused)
 {
 	return 0;
 }
+
+static inline int register_ftrace_update_notifier(struct notifier_block *__unused)
+{
+	return 0;
+}
 #endif
 
 /*
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 150762a..e4c2176 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -112,6 +112,7 @@ ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
 ftrace_func_t ftrace_pid_function __read_mostly = ftrace_stub;
 static struct ftrace_ops global_ops;
 static struct ftrace_ops control_ops;
+static ATOMIC_NOTIFIER_HEAD(ftrace_update_notifier_list);
 
 static void ftrace_ops_recurs_func(unsigned long ip, unsigned long parent_ip,
 				   struct ftrace_ops *op, struct pt_regs *regs);
@@ -1971,6 +1972,28 @@ void ftrace_bug(int failed, struct dyn_ftrace *rec)
 	}
 }
 
+int ftrace_tryfix(int failed, int enable, struct dyn_ftrace *rec)
+{
+	int notify_result = NOTIFY_DONE;
+	struct ftrace_update_notifier_info info = {
+		.rec = rec,
+		.errno = failed,
+		.enable = enable,
+		.retry = false,
+	};
+
+	notify_result = atomic_notifier_call_chain(
+			&ftrace_update_notifier_list,
+			0, &info);
+
+	if (notify_result != NOTIFY_STOP)
+		return failed;
+
+	if (info.retry)
+		return -EAGAIN;
+	return 0;
+}
+
 static int ftrace_check_record(struct dyn_ftrace *rec, int enable, int update)
 {
 	unsigned long flag = 0UL;
@@ -2298,9 +2321,12 @@ void __weak ftrace_replace_code(int enable)
 	do_for_each_ftrace_rec(pg, rec) {
 		failed = __ftrace_replace_code(rec, enable);
 		if (failed) {
-			ftrace_bug(failed, rec);
-			/* Stop processing */
-			return;
+			failed = ftrace_tryfix_bug(failed, enable, rec,
+					__ftrace_replace_code(rec, enable));
+
+			/* Stop processing if it still fails */
+			if (failed)
+				return;
 		}
 	} while_for_each_ftrace_rec();
 }
@@ -2387,8 +2413,10 @@ ftrace_code_disable(struct module *mod, struct dyn_ftrace *rec)
 
 	ret = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
 	if (ret) {
-		ftrace_bug(ret, rec);
-		return 0;
+		ret = ftrace_tryfix_bug(ret, 0, rec,
+				ftrace_make_nop(mod, rec, MCOUNT_ADDR));
+		if (ret)
+			return 0;
 	}
 	return 1;
 }
@@ -2844,7 +2872,8 @@ static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)
 			if (ftrace_start_up && cnt) {
 				int failed = __ftrace_replace_code(p, 1);
 				if (failed)
-					ftrace_bug(failed, p);
+					failed = ftrace_tryfix_bug(failed, 1, p,
+							__ftrace_replace_code(p, 1));
 			}
 		}
 	}
@@ -5661,6 +5690,11 @@ ftrace_enable_sysctl(struct ctl_table *table, int write,
 	return ret;
 }
 
+int register_ftrace_update_notifier(struct notifier_block *nb)
+{
+	return atomic_notifier_chain_register(&ftrace_update_notifier_list, nb);
+}
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 
 static struct ftrace_ops graph_ops = {
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH v2 06/26] ftrace: sort ftrace entries earlier.
  2015-02-12 12:19   ` Wang Nan
@ 2015-02-12 17:35     ` Steven Rostedt
  -1 siblings, 0 replies; 76+ messages in thread
From: Steven Rostedt @ 2015-02-12 17:35 UTC (permalink / raw)
  To: Wang Nan
  Cc: linux, tglx, mingo, hpa, ananth, anil.s.keshavamurthy, davem,
	masami.hiramatsu.pt, luto, keescook, oleg, dave.long, tixy, nico,
	yalin.wang2010, catalin.marinas, Yalin.Wang, mark.rutland,
	dave.hansen, jkenisto, anton, stefani, JBeulich, akpm, rusty,
	peterz, prarit, fabf, hannes, x86, linux-kernel,
	linux-arm-kernel, lizefan

On Thu, 12 Feb 2015 20:19:41 +0800
Wang Nan <wangnan0@huawei.com> wrote:


The header is not enough for a change log. You need to tell us why this
patch is needed.

BTW, the previous two patches look fine, and I'm willing to pull them
into my 3.21 queue as clean ups.

As for this one...

> Signed-off-by: Wang Nan <wangnan0@huawei.com>
> ---
>  include/linux/ftrace.h |  2 ++
>  init/main.c            |  1 +
>  kernel/trace/ftrace.c  | 29 +++++++++++++++++++++++++++--
>  3 files changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index 1da6029..8db315a 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -701,8 +701,10 @@ static inline void __ftrace_enabled_restore(int enabled)
>  
>  #ifdef CONFIG_FTRACE_MCOUNT_RECORD
>  extern void ftrace_init(void);
> +extern void ftrace_init_early(void);
>  #else
>  static inline void ftrace_init(void) { }
> +static inline void ftrace_init_early(void) { }
>  #endif
>  
>  /*
> diff --git a/init/main.c b/init/main.c
> index 6f0f1c5f..eaafc3e 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -517,6 +517,7 @@ asmlinkage __visible void __init start_kernel(void)
>  	boot_cpu_init();
>  	page_address_init();
>  	pr_notice("%s", linux_banner);
> +	ftrace_init_early();
>  	setup_arch(&command_line);
>  	mm_init_cpumask(&init_mm);
>  	setup_command_line(command_line);
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 6c6cbb1..a6a6b09 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1169,6 +1169,7 @@ struct ftrace_page {
>  
>  static struct ftrace_page	*ftrace_pages_start;
>  static struct ftrace_page	*ftrace_pages;
> +static bool mcount_sorted = false;
>  
>  static bool __always_inline ftrace_hash_empty(struct ftrace_hash *hash)
>  {
> @@ -4743,6 +4744,26 @@ static void ftrace_swap_ips(void *a, void *b, int size)
>  	*ipb = t;
>  }
>  
> +static void ftrace_sort_mcount_area(void)
> +{
> +	extern unsigned long __start_mcount_loc[];
> +	extern unsigned long __stop_mcount_loc[];
> +
> +	unsigned long *start = __start_mcount_loc;
> +	unsigned long *end = __stop_mcount_loc;
> +	unsigned long count;
> +
> +	count = end - start;
> +	if (!count)
> +		return;
> +
> +	if (!mcount_sorted) {
> +		sort(start, count, sizeof(*start),
> +		     ftrace_cmp_ips, ftrace_swap_ips);
> +		mcount_sorted = true;
> +	}
> +}
> +
>  static int ftrace_process_locs(struct module *mod,
>  			       unsigned long *start,
>  			       unsigned long *end)
> @@ -4761,8 +4782,7 @@ static int ftrace_process_locs(struct module *mod,
>  	if (!count)
>  		return 0;
>  
> -	sort(start, count, sizeof(*start),
> -	     ftrace_cmp_ips, ftrace_swap_ips);
> +	ftrace_sort_mcount_area();

Notice a problem with the above? You just lost start and count. They
are not always the same. You can not just hard code __start_mcount_loc.
In fact, I'm surprised this didn't crash, because the section that
holds __start_mcount_loc is freed after boot.

Modules use this code to pass in where they hold the mcount locations.
Your change ignores that and uses the stale __start_mcount_loc that no
longer exists at the point modules are loaded.

The sort routine needs to have start and end passed to it, then it
could calculate count from end - start.
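
A minimal sketch of that interface, reusing the patch's ftrace_cmp_ips()
and ftrace_swap_ips() helpers (untested):

static void ftrace_sort_mcount_area(unsigned long *start,
				    unsigned long *end)
{
	unsigned long count = end - start;

	if (!count)
		return;

	sort(start, count, sizeof(*start),
	     ftrace_cmp_ips, ftrace_swap_ips);
}

ftrace_init_early() would pass __start_mcount_loc/__stop_mcount_loc while
that section is still alive, and ftrace_process_locs() would pass whatever
range it was given, so module ranges keep working.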

-- Steve


>  
>  	start_pg = ftrace_allocate_pages(count);
>  	if (!start_pg)
> @@ -4965,6 +4985,11 @@ void __init ftrace_init(void)
>  	ftrace_disabled = 1;
>  }
>  
> +void __init ftrace_init_early(void)
> +{
> +	ftrace_sort_mcount_area();
> +}
> +
>  /* Do nothing if arch does not support this */
>  void __weak arch_ftrace_update_trampoline(struct ftrace_ops *ops)
>  {


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH v2 07/26] ftrace: allow search ftrace addr before ftrace fully inited.
  2015-02-12 12:19   ` Wang Nan
@ 2015-02-12 17:38     ` Steven Rostedt
  -1 siblings, 0 replies; 76+ messages in thread
From: Steven Rostedt @ 2015-02-12 17:38 UTC (permalink / raw)
  To: Wang Nan
  Cc: linux, tglx, mingo, hpa, ananth, anil.s.keshavamurthy, davem,
	masami.hiramatsu.pt, luto, keescook, oleg, dave.long, tixy, nico,
	yalin.wang2010, catalin.marinas, Yalin.Wang, mark.rutland,
	dave.hansen, jkenisto, anton, stefani, JBeulich, akpm, rusty,
	peterz, prarit, fabf, hannes, x86, linux-kernel,
	linux-arm-kernel, lizefan

On Thu, 12 Feb 2015 20:19:46 +0800
Wang Nan <wangnan0@huawei.com> wrote:

-ENOCHANGELOG

I'm not even going to bother reviewing this patch because I have no
idea why it's needed, or what it is actually trying to accomplish.

-- Steve

> Signed-off-by: Wang Nan <wangnan0@huawei.com>
> ---

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH v2 08/26] ftrace: enable other subsystems make ftrace nop before ftrace_init()
  2015-02-12 12:19   ` Wang Nan
@ 2015-02-12 17:39     ` Steven Rostedt
  -1 siblings, 0 replies; 76+ messages in thread
From: Steven Rostedt @ 2015-02-12 17:39 UTC (permalink / raw)
  To: Wang Nan
  Cc: linux, tglx, mingo, hpa, ananth, anil.s.keshavamurthy, davem,
	masami.hiramatsu.pt, luto, keescook, oleg, dave.long, tixy, nico,
	yalin.wang2010, catalin.marinas, Yalin.Wang, mark.rutland,
	dave.hansen, jkenisto, anton, stefani, JBeulich, akpm, rusty,
	peterz, prarit, fabf, hannes, x86, linux-kernel,
	linux-arm-kernel, lizefan

On Thu, 12 Feb 2015 20:19:51 +0800
Wang Nan <wangnan0@huawei.com> wrote:

The rest of the ftrace patches have no change logs, so I stopped my
review here.

-- Steve

> Signed-off-by: Wang Nan <wangnan0@huawei.com>

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [RFC PATCH v2 08/26] ftrace: enable other subsystems make ftrace nop before ftrace_init()
@ 2015-02-12 17:39     ` Steven Rostedt
  0 siblings, 0 replies; 76+ messages in thread
From: Steven Rostedt @ 2015-02-12 17:39 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, 12 Feb 2015 20:19:51 +0800
Wang Nan <wangnan0@huawei.com> wrote:

The rest of the ftrace patches have no change logs, so I stopped my
review here.

-- Steve

> Signed-off-by: Wang Nan <wangnan0@huawei.com>

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH v2 08/26] ftrace: enable other subsystems make ftrace nop before ftrace_init()
  2015-02-12 17:39     ` Steven Rostedt
@ 2015-02-13  1:29       ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-13  1:29 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux, tglx, mingo, hpa, ananth, anil.s.keshavamurthy, davem,
	masami.hiramatsu.pt, luto, keescook, oleg, dave.long, tixy, nico,
	yalin.wang2010, catalin.marinas, Yalin.Wang, mark.rutland,
	dave.hansen, jkenisto, anton, stefani, JBeulich, akpm, rusty,
	peterz, prarit, fabf, hannes, x86, linux-kernel,
	linux-arm-kernel, lizefan

On 2015/2/13 1:39, Steven Rostedt wrote:
> On Thu, 12 Feb 2015 20:19:51 +0800
> Wang Nan <wangnan0@huawei.com> wrote:
> 
> The rest of the ftrace patches have no change logs, so I stopped my
> review here.
> 
> -- Steve
> 

I'm very sorry for the mistake. I must have been in a very bad state
when sending these patches.

I'll fix the commit logs immediately.

>> Signed-off-by: Wang Nan <wangnan0@huawei.com>



^ permalink raw reply	[flat|nested] 76+ messages in thread

* [RFC PATCH v3 00/26] Early kprobe: enable kprobes at very early booting stage.
  2015-02-12 12:17 ` Wang Nan
@ 2015-02-13  5:38   ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-13  5:38 UTC (permalink / raw)
  To: linux, tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy,
	davem, masami.hiramatsu.pt, luto, keescook, oleg, wangnan0,
	dave.long, tixy, nico, yalin.wang2010, catalin.marinas,
	Yalin.Wang, mark.rutland, dave.hansen, jkenisto, anton, stefani,
	JBeulich, akpm, rusty, peterz, prarit, fabf, hannes
  Cc: x86, linux-kernel, linux-arm-kernel, lizefan

I feel very sorry for the people who reviewed my v2 patch series
yesterday at https://lkml.org/lkml/2015/2/12/234, because I didn't
provide enough information in the commit logs. This v3 patch series
adds those missing commit messages. There are also two small fixes
based on v2:

 1. Fixes ftrace_sort_mcount_area(). The original patch doesn't work
    for modules.
 2. Wraps the setting of kprobes_initialized in a stop_machine()
    context (sketched below).
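
A minimal sketch of what fix 2 means, assuming the write simply moves
into a stop_machine() callback (the callback name is made up for
illustration):

	#include <linux/stop_machine.h>

	static int set_kprobes_initialized(void *unused)
	{
		/* Runs with every other CPU parked in its stopper
		 * thread, so no CPU can observe a half-done switch
		 * from early to normal kprobes. */
		kprobes_initialized = 1;
		return 0;
	}

	/* instead of a bare "kprobes_initialized = 1;": */
	stop_machine(set_kprobes_initialized, NULL, NULL);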

Wang Nan (26):
  kprobes: set kprobes_all_disarmed earlier to enable re-optimization.
  kprobes: makes kprobes/enabled work correctly for optimized kprobes.
  kprobes: x86: mark 2 bytes NOP as boostable.
  ftrace: don't update record flags if code modification fails.
  ftrace/x86: Ensure rec->flags no change when failure occurs.
  ftrace: sort ftrace entries earlier.
  ftrace: allow search ftrace addr before ftrace fully inited.
  ftrace: enable make ftrace nop before ftrace_init().
  ftrace: allow fixing code update failure by notifier chain.
  ftrace: x86: try to fix ftrace when ftrace_replace_code.
  early kprobes: introduce kprobe_is_early for further early kprobe use.
  early kprobes: Add a KPROBE_FLAG_EARLY for early kprobe.
  early kprobes: ARM: directly modify code.
  early kprobes: ARM: introduce early kprobes related code area.
  early kprobes: x86: directly modify code.
  early kprobes: x86: introduce early kprobes related code area.
  early kprobes: introduces macros for allocating early kprobe resources.
  early kprobes: allows __alloc_insn_slot() from early kprobes slots.
  early kprobes: prohibit probing at early kprobe reserved area.
  early kprobes: core logic of early kprobes.
  early kprobes: add CONFIG_EARLY_KPROBES option.
  early kprobes: introduce arch_fix_ftrace_early_kprobe().
  early kprobes: x86: arch_restore_optimized_kprobe().
  early kprobes: core logic to support early kprobe on ftrace.
  early kprobes: introduce kconfig option to support early kprobe on
    ftrace.
  kprobes: enable 'ekprobe=' cmdline option for early kprobes.

 arch/Kconfig                      |  15 ++
 arch/arm/include/asm/kprobes.h    |  31 ++-
 arch/arm/kernel/vmlinux.lds.S     |   2 +
 arch/arm/probes/kprobes/opt-arm.c |  12 +-
 arch/x86/include/asm/insn.h       |   7 +-
 arch/x86/include/asm/kprobes.h    |  47 +++-
 arch/x86/kernel/ftrace.c          |  23 +-
 arch/x86/kernel/kprobes/core.c    |   2 +-
 arch/x86/kernel/kprobes/opt.c     |  69 +++++-
 arch/x86/kernel/vmlinux.lds.S     |   2 +
 include/linux/ftrace.h            |  37 +++
 include/linux/kprobes.h           | 132 +++++++++++
 init/main.c                       |   1 +
 kernel/kprobes.c                  | 479 +++++++++++++++++++++++++++++++++++++-
 kernel/trace/ftrace.c             | 157 +++++++++++--
 15 files changed, 969 insertions(+), 47 deletions(-)

-- 
1.8.4


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH v3 00/26] Early kprobe: enable kprobes at very early booting stage.
  2015-02-13  5:38   ` Wang Nan
@ 2015-02-13 17:15     ` Steven Rostedt
  -1 siblings, 0 replies; 76+ messages in thread
From: Steven Rostedt @ 2015-02-13 17:15 UTC (permalink / raw)
  To: Wang Nan
  Cc: linux, tglx, mingo, hpa, ananth, anil.s.keshavamurthy, davem,
	masami.hiramatsu.pt, luto, keescook, oleg, dave.long, tixy, nico,
	yalin.wang2010, catalin.marinas, Yalin.Wang, mark.rutland,
	dave.hansen, jkenisto, anton, stefani, JBeulich, akpm, rusty,
	peterz, prarit, fabf, hannes, x86, linux-kernel,
	linux-arm-kernel, lizefan

On Fri, 13 Feb 2015 13:38:27 +0800
Wang Nan <wangnan0@huawei.com> wrote:

> I feel very sorry for the people who reviewed my v2 patch series
> yesterday at https://lkml.org/lkml/2015/2/12/234, because I didn't
> provide enough information in the commit logs. This v3 patch series
> adds those missing commit messages. There are also two small fixes
> based on v2:

Note the 0/26 patch should contain the summary of what the entire
series is trying to accomplish, and how it is trying to accomplish it.

> 
>  1. Fixes ftrace_sort_mcount_area(). The original patch doesn't work
>     for modules.
>  2. Wraps the setting of kprobes_initialized in a stop_machine()
>     context.
> 

I'll be attending the Linux Collaboration Summit next week and there
are a lot of things I need to finish before I leave, and I won't be
able to look at these while at the conference. That means I cannot
take an in-depth look at the patches until I get back, and even then
I'll be catching up on other things. Feel free to ping me about this
after Feb 23rd.

From what I can gather from skimming the patches, you intend to have a
way to pass kprobes via the kernel command line, or some other way that
kprobes are pre-allocated (set up before memory management is running),
and if they happen to be at an ftrace location, you have hooks to have
ftrace notify kprobes to fix it up.

Honestly, I hate the notifiers. Get rid of them. kprobes and ftrace are
coupled, as kprobes must know about ftrace, and ftrace knows about
kprobes. This is a very specific case. Notifiers represent a "general"
use case, and I don't want something else hooking into these notifiers.
This should be hard-coded, and fixed up at ftrace_init(), so that after
ftrace_init() everything acts as it does today.

That is, the early kprobes add hooks to the ftrace nop locations. When
ftrace tries to convert them to nops it will notice that they do not
match the call to mcount. In this case, ftrace should call a kprobes
function asking if this is a call to a kprobe, and if so, it will
convert this location into a normal call to the ftrace trampoline that
calls the early kprobe function. This will only be done during
ftrace_init() when it tries to convert the calls to mcount or
__fentry__ (not ftrace_caller) to nops. It will then convert it to
ftrace_caller, if need be, or whatever.
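
In code, that check would have roughly this shape (a sketch under the
assumption that the hook sits in ftrace_code_disable(), which is what
ftrace_init()'s pass calls per record; kprobe_fixup_early_ftrace() is
a hypothetical helper, not an existing API):

	static int ftrace_code_disable(struct module *mod,
				       struct dyn_ftrace *rec)
	{
		int ret;

		ret = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
		if (!ret)
			return 1;

		/*
		 * The site no longer holds the expected call to
		 * mcount/__fentry__.  If an early kprobe owns it,
		 * point it at the ftrace trampoline instead of
		 * reporting a bug.
		 */
		if (IS_ENABLED(CONFIG_EARLY_KPROBES) &&
		    kprobe_fixup_early_ftrace(rec->ip) &&
		    !ftrace_make_call(rec,
				      (unsigned long)ftrace_regs_caller))
			return 1;

		ftrace_bug(ret, rec);
		return 0;
	}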

Perhaps that would be easier. Before doing the modifications, it could
do a special ftrace registration to have the ftrace_regs_caller point
to the early kprobe function, and when it's doing the modifications, it
will be aware that there might be some locations that call the early
kprobe function.
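
Again only a sketch (early_kprobe_ftrace_handler and the exact flags
are assumptions), that special registration could look like:

	static void early_kprobe_ftrace_handler(unsigned long ip,
						unsigned long parent_ip,
						struct ftrace_ops *ops,
						struct pt_regs *regs)
	{
		/* dispatch to the early kprobe registered at ip */
	}

	static struct ftrace_ops early_kprobe_ops = {
		.func  = early_kprobe_ftrace_handler,
		.flags = FTRACE_OPS_FL_SAVE_REGS,
	};

	/* claim one early-kprobed site, then register the ops */
	ftrace_set_filter_ip(&early_kprobe_ops, probed_ip, 0, 0);
	register_ftrace_function(&early_kprobe_ops);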

Basically what I'm saying is that this is a very special case. Don't
try to over-engineer this into something that can be expanded by other
use cases. I'd rather not make this easy for other use cases to connect
to the ftrace locations at early boot up. That's just opening a can of
worms that are spoiled, and taste like bad sushi from a restaurant with
lots of neon lights.

I'm not against the idea of having early kprobes, but I'm not thrilled
with the current implementation.

-- Steve

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [RFC PATCH v2 14/26] early kprobes: ARM: introduce early kprobes related code area.
  2015-02-12 12:20   ` Wang Nan
@ 2015-02-13 17:32     ` Russell King - ARM Linux
  -1 siblings, 0 replies; 76+ messages in thread
From: Russell King - ARM Linux @ 2015-02-13 17:32 UTC (permalink / raw)
  To: Wang Nan
  Cc: tglx, mingo, hpa, rostedt, ananth, anil.s.keshavamurthy, davem,
	masami.hiramatsu.pt, luto, keescook, oleg, dave.long, tixy, nico,
	yalin.wang2010, catalin.marinas, Yalin.Wang, mark.rutland,
	dave.hansen, jkenisto, anton, stefani, JBeulich, akpm, rusty,
	peterz, prarit, fabf, hannes, x86, linux-kernel,
	linux-arm-kernel, lizefan

On Thu, Feb 12, 2015 at 08:20:35PM +0800, Wang Nan wrote:
> In arm's vmlinux.lds, introduces code area inside text section.
> Executable area used by early kprobes will be allocated from there.
> 
> Signed-off-by: Wang Nan <wangnan0@huawei.com>
> ---
>  arch/arm/include/asm/kprobes.h | 31 +++++++++++++++++++++++++++++--
>  arch/arm/kernel/vmlinux.lds.S  |  2 ++
>  2 files changed, 31 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kprobes.h b/arch/arm/include/asm/kprobes.h
> index 3ea9be5..0a4421e 100644
> --- a/arch/arm/include/asm/kprobes.h
> +++ b/arch/arm/include/asm/kprobes.h
> @@ -17,16 +17,42 @@
>  #define _ARM_KPROBES_H
>  
>  #include <linux/types.h>
> -#include <linux/ptrace.h>
> -#include <linux/notifier.h>
>  
>  #define __ARCH_WANT_KPROBES_INSN_SLOT
>  #define MAX_INSN_SIZE			2
>  
> +#ifdef __ASSEMBLY__
> +
> +#define KPROBE_OPCODE_SIZE	4
> +#define MAX_OPTINSN_SIZE (optprobe_template_end - optprobe_template_entry)
> +
> +#ifdef CONFIG_EARLY_KPROBES
> +#define EARLY_KPROBES_CODES_AREA					\
> +	. = ALIGN(8);							\
> +	VMLINUX_SYMBOL(__early_kprobes_start) = .;			\
> +	VMLINUX_SYMBOL(__early_kprobes_code_area_start) = .;		\
> +	. = . + MAX_OPTINSN_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;	\
> +	VMLINUX_SYMBOL(__early_kprobes_code_area_end) = .;		\
> +	. = ALIGN(8);							\
> +	VMLINUX_SYMBOL(__early_kprobes_insn_slot_start) = .;		\
> +	. = . + MAX_INSN_SIZE * KPROBE_OPCODE_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;\
> +	VMLINUX_SYMBOL(__early_kprobes_insn_slot_end) = .;		\
> +	VMLINUX_SYMBOL(__early_kprobes_end) = .;
> +
> +#else
> +#define EARLY_KPROBES_CODES_AREA
> +#endif

Please don't spread vmlinux-specific stuff around the kernel include
files. Let's try to keep it contained to a minimal set of files.

-- 
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [RFC PATCH 0/3] early kprobes: rearrange vmlinux.lds related code.
  2015-02-13 17:32     ` Russell King - ARM Linux
@ 2015-02-15  8:26       ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-15  8:26 UTC (permalink / raw)
  To: linux; +Cc: rostedt, lizefan, linux-arm-kernel, linux-kernel, x86

This is part of the early kprobes patch series update. The full series
can be found at [1].

Early kprobes need some statically allocated slots, whose size and
placement are determined during linking by vmlinux.lds.S. Russell King
suggested that I not spread vmlinux stuff around the kernel include
files. This series extracts the common code into
include/asm-generic/vmlinux.lds.h and lets arch-dependent code define
the macros the common code requires.

[1]: https://lkml.org/lkml/2015/2/13/24

Wang Nan (3):
  early kprobes: ARM: add definition for vmlinux.lds use.
  early kprobes: x86: add definition for vmlinux.lds use.
  early kprobes: introduce early kprobes related code area.

 arch/arm/kernel/vmlinux.lds.S     | 10 ++++++++++
 arch/x86/kernel/vmlinux.lds.S     | 10 ++++++++++
 include/asm-generic/vmlinux.lds.h | 19 ++++++++++++++++++-
 3 files changed, 38 insertions(+), 1 deletion(-)

-- 
1.8.4


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [RFC PATCH 1/3] early kprobes: ARM: add definition for vmlinux.lds use.
  2015-02-15  8:26       ` Wang Nan
@ 2015-02-15  8:27         ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-15  8:27 UTC (permalink / raw)
  To: linux; +Cc: rostedt, lizefan, linux-arm-kernel, linux-kernel, x86

This patch defines MAX_OPTINSN_SIZE, MAX_INSN_SIZE and
KPROBE_OPCODE_SIZE for ARM, for vmlinux.lds.S use. These macros were
originally defined in kprobes.h, where they cannot be used by
vmlinux.lds.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/arm/kernel/vmlinux.lds.S | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index b31aa73..38ba4fd 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -45,6 +45,16 @@
 #define ARM_EXIT_DISCARD(x)	x
 #endif
 
+#ifdef CONFIG_EARLY_KPROBES
+# ifdef CONFIG_THUMB2_KERNEL
+#  error "Thumb2 kernel does not support early kprobes now"
+# else
+#  define MAX_OPTINSN_SIZE (optprobe_template_end - optprobe_template_entry)
+#  define MAX_INSN_SIZE 2
+#  define KPROBE_OPCODE_SIZE 4
+# endif
+#endif
+
 OUTPUT_ARCH(arm)
 ENTRY(stext)
 
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH 2/3] early kprobes: x86: add definition for vmlinux.lds use.
  2015-02-15  8:26       ` Wang Nan
@ 2015-02-15  8:27         ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-15  8:27 UTC (permalink / raw)
  To: linux; +Cc: rostedt, lizefan, linux-arm-kernel, linux-kernel, x86

This patch defines MAX_OPTINSN_SIZE, MAX_INSN_SIZE and
KPROBE_OPCODE_SIZE for x86, for vmlinux.lds.S use.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 arch/x86/kernel/vmlinux.lds.S | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 00bf300..e46d877 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -29,6 +29,16 @@
 
 #undef i386     /* in case the preprocessor is a 32bit one */
 
+#ifdef CONFIG_EARLY_KPROBES
+# define MAX_INSN_SIZE 16
+# define RELATIVE_ADDR_SIZE 4
+# define RELATIVEJUMP_SIZE 5
+# define KPROBE_OPCODE_SIZE 1
+# define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
+# define MAX_OPTINSN_SIZE ((optprobe_template_end - optprobe_template_entry) + \
+	MAX_OPTIMIZED_LENGTH + RELATIVEJUMP_SIZE)
+#endif
+
 OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT, CONFIG_OUTPUT_FORMAT, CONFIG_OUTPUT_FORMAT)
 
 #ifdef CONFIG_X86_32
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [RFC PATCH 3/3] early kprobes: introduce early kprobes related code area.
  2015-02-15  8:26       ` Wang Nan
@ 2015-02-15  8:27         ` Wang Nan
  -1 siblings, 0 replies; 76+ messages in thread
From: Wang Nan @ 2015-02-15  8:27 UTC (permalink / raw)
  To: linux; +Cc: rostedt, lizefan, linux-arm-kernel, linux-kernel, x86

Append early kprobe related slots to KPROBES_TEXT. This is the
arch-independent part. Arch code should define MAX_OPTINSN_SIZE,
KPROBE_OPCODE_SIZE and MAX_INSN_SIZE for it.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/asm-generic/vmlinux.lds.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index ac78910..7cd1d21 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -424,11 +424,28 @@
 		*(.spinlock.text)					\
 		VMLINUX_SYMBOL(__lock_text_end) = .;
 
+#ifndef CONFIG_EARLY_KPROBES
+# define EARLY_KPROBES_TEXT
+#else
+# define EARLY_KPROBES_TEXT						\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_start) = .;			\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_start) = .;		\
+	. = . + MAX_OPTINSN_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;	\
+	VMLINUX_SYMBOL(__early_kprobes_code_area_end) = .;		\
+	. = ALIGN(8);							\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_start) = .;		\
+	. = . + MAX_INSN_SIZE * KPROBE_OPCODE_SIZE * CONFIG_NR_EARLY_KPROBES_SLOTS;\
+	VMLINUX_SYMBOL(__early_kprobes_insn_slot_end) = .;		\
+	VMLINUX_SYMBOL(__early_kprobes_end) = .;
+#endif
+
 #define KPROBES_TEXT							\
 		ALIGN_FUNCTION();					\
 		VMLINUX_SYMBOL(__kprobes_text_start) = .;		\
 		*(.kprobes.text)					\
-		VMLINUX_SYMBOL(__kprobes_text_end) = .;
+		VMLINUX_SYMBOL(__kprobes_text_end) = .;			\
+		EARLY_KPROBES_TEXT
 
 #define ENTRY_TEXT							\
 		ALIGN_FUNCTION();					\
-- 
1.8.4


^ permalink raw reply related	[flat|nested] 76+ messages in thread

Thread overview: 76+ messages
2015-02-12 12:17 [RFC PATCH v2 00/26] Early kprobe: enable kprobes at very early booting stage Wang Nan
2015-02-12 12:19 ` [RFC PATCH v2 01/26] kprobes: set kprobes_all_disarmed earlier to enable re-optimization Wang Nan
2015-02-12 12:19 ` [RFC PATCH v2 02/26] kprobes: makes kprobes/enabled work correctly for optimized kprobes Wang Nan
2015-02-12 12:19 ` [RFC PATCH v2 03/26] kprobes: x86: mark 2 bytes NOP as boostable Wang Nan
2015-02-12 12:19 ` [RFC PATCH v2 04/26] ftrace: don't update record flags if code modification fails Wang Nan
2015-02-12 12:19 ` [RFC PATCH v2 05/26] ftrace/x86: Ensure rec->flags no change when failure occurs Wang Nan
2015-02-12 12:19 ` [RFC PATCH v2 06/26] ftrace: sort ftrace entries earlier Wang Nan
2015-02-12 17:35   ` Steven Rostedt
2015-02-12 12:19 ` [RFC PATCH v2 07/26] ftrace: allow search ftrace addr before ftrace fully inited Wang Nan
2015-02-12 17:38   ` Steven Rostedt
2015-02-12 12:19 ` [RFC PATCH v2 08/26] ftrace: enable other subsystems make ftrace nop before ftrace_init() Wang Nan
2015-02-12 17:39   ` Steven Rostedt
2015-02-13  1:29     ` Wang Nan
2015-02-12 12:20 ` [RFC PATCH v2 10/26] ftrace: x86: try to fix ftrace when ftrace_replace_code Wang Nan
2015-02-12 12:20 ` [RFC PATCH v2 11/26] early kprobes: introduce kprobe_is_early for further early kprobe use Wang Nan
2015-02-12 12:20 ` [RFC PATCH v2 12/26] early kprobes: Add a KPROBE_FLAG_EARLY for early kprobe Wang Nan
2015-02-12 12:20 ` [RFC PATCH v2 13/26] early kprobes: ARM: directly modify code Wang Nan
2015-02-12 12:20 ` [RFC PATCH v2 14/26] early kprobes: ARM: introduce early kprobes related code area Wang Nan
2015-02-13 17:32   ` Russell King - ARM Linux
2015-02-15  8:26     ` [RFC PATCH 0/3] early kprobes: rearrange vmlinux.lds related code Wang Nan
2015-02-15  8:27       ` [RFC PATCH 1/3] early kprobes: ARM: add definition for vmlinux.lds use Wang Nan
2015-02-15  8:27       ` [RFC PATCH 2/3] early kprobes: x86: " Wang Nan
2015-02-15  8:27       ` [RFC PATCH 3/3] early kprobes: introduce early kprobes related code area Wang Nan
2015-02-12 12:20 ` [RFC PATCH v2 15/26] early kprobes: x86: directly modify code Wang Nan
2015-02-12 12:20 ` [RFC PATCH v2 16/26] early kprobes: x86: introduce early kprobes related code area Wang Nan
2015-02-12 12:20 ` [RFC PATCH v2 17/26] early kprobes: introduces macros for allocating early kprobe resources Wang Nan
2015-02-12 12:20 ` [RFC PATCH v2 18/26] early kprobes: allows __alloc_insn_slot() from early kprobes slots Wang Nan
2015-02-12 12:21 ` [RFC PATCH v2 19/26] early kprobes: prohibit probing at early kprobe reserved area Wang Nan
2015-02-12 12:21 ` [RFC PATCH v2 20/26] early kprobes: core logic of early kprobes Wang Nan
2015-02-12 12:21 ` [RFC PATCH v2 21/26] early kprobes: add CONFIG_EARLY_KPROBES option Wang Nan
2015-02-12 12:21 ` [RFC PATCH v2 22/26] early kprobes: introduce arch_fix_ftrace_early_kprobe() Wang Nan
2015-02-12 12:21 ` [RFC PATCH v2 23/26] early kprobes: x86: arch_restore_optimized_kprobe() Wang Nan
2015-02-12 12:21 ` [RFC PATCH v2 24/26] early kprobes: core logic to support early kprobe on ftrace Wang Nan
2015-02-12 12:21 ` [RFC PATCH v2 25/26] early kprobes: introduce kconfig option " Wang Nan
2015-02-12 12:21 ` [RFC PATCH v2 26/26] kprobes: enable 'ekprobe=' cmdline option for early kprobes Wang Nan
2015-02-12 12:21 ` [RFC PATCH v2 09/26] ftrace: callchain and ftrace_bug_tryfix Wang Nan
2015-02-13  5:38 ` [RFC PATCH v3 00/26] Early kprobe: enable kprobes at very early booting stage Wang Nan
2015-02-13 17:15   ` Steven Rostedt
