linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
@ 2009-06-22 21:22 Masami Hiramatsu
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 1/7] kprobes: use list instead of hlist for insn_pages Masami Hiramatsu
                   ` (8 more replies)
  0 siblings, 9 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-22 21:22 UTC (permalink / raw)
  To: Ingo Molnar, Ananth N Mavinakayanahalli, lkml
  Cc: H. Peter Anvin, Frederic Weisbecker, Ananth N Mavinakayanahalli,
	Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Steven Rostedt, Anders Kaseorg, Tim Abbott, systemtap, DLE

Hi,

Here are the RFC patchset of the kprobes jump optimization v2
(a.k.a. Djprobe). This version is including some bugfixes and disabling
crossjumping patch.

The gcc's crossjumping unifies equivalent code by inserting indirect
jumps which jump into other function body. It is hard to know to where
these jumps jump, so I decided to disable it when setting
CONFIG_OPTPROBES=y.

I also decided not to optimize probes when it is in functions which
will cause exceptions, because the exception in the kernel will jump
to a fixup code and the fixup code jumps back to the middle of the
same function body.

These patches can be applied on -tip tree + x86 instruction decoder
which I re-sent right now. This is another example of x86 instruction
decoder.


Jump Optimized Kprobes
======================
o Concept
 Kprobes uses the int3 breakpoint instruction on x86 for instrumenting
probes into running kernel. Jump optimization allows kprobes to replace
breakpoint with a jump instruction for reducing probing overhead drastically.

o Performance
 An optimized kprobe 5 times faster than a kprobe.

 Optimizing probes gains its performance. Usually, a kprobe hit takes
0.5 to 1.0 microseconds to process. On the other hand, a jump optimized
probe hit takes less than 0.1 microseconds (actual number depends on the
processor). Here is a sample overheads.

Intel(R) Xeon(R) CPU E5410  @ 2.33GHz (without debugging options)

                     x86-32  x86-64
kprobe:              0.68us  0.91us
kprobe+booster:	     0.27us  0.40us
kprobe+optimized:    0.06us  0.06us

kretprobe :          0.95us  1.21us
kretprobe+booster:   0.53us  0.71us
kretprobe+optimized: 0.30us  0.35us

(booster skips single-stepping)

 Note that jump optimization also consumes more memory, but not so much.
It just uses ~200 bytes, so, even if you use ~10,000 probes, it just 
consumes a few MB.


o Usage
 Set CONFIG_OPTPROBES=y when building a kernel, then all *probes will be
optimized if possible.

 Kprobes decodes probed function and checks whether the target instructions
can be optimized(replaced with a jump) safely. If it can't be, Kprobes just
doesn't optimize it.


o Optimization
  Before preparing optimization, Kprobes inserts original(user-defined)
 kprobe on the specified address. So, even if the kprobe is not
 possible to be optimized, it just uses a normal kprobe.

 - Safety check
  First, Kprobes gets the address of probed function and checks whether the
 optimized region, which will be replaced by a jump instruction, does NOT
 straddle the function boundary, because if the optimized region reaches the
 next function, its caller causes unexpected results.
  Next, Kprobes decodes whole body of probed function and checks there is
 NO indirect jump, NO instruction which will cause exception by checking
 exception_tables (this will jump to fixup code and fixup code jumps into
 same function body) and NO near jump which jumps into the optimized region
 (except the 1st byte of jump), because if some jump instruction jumps
 into the middle of another instruction, it causes unexpected results too.
  Kprobes also measures the length of instructions which will be replaced
 by a jump instruction, because a jump instruction is longer than 1 byte,
 it may replaces multiple instructions, and it checks whether those
 instructions can be executed out-of-line.

 - Preparing detour code
  Then, Kprobes prepares "detour" buffer, which contains exception emulating
 code (push/pop registers, call handler), copied instructions(Kprobes copies
 instructions which will be replaced by a jump, to the detour buffer), and
 a jump which jumps back to the original execution path.

 - Pre-optimization
  After preparing detour code, Kprobes enqueues the kprobe to optimizing list
 and kicks kprobe-optimizer workqueue to optimize it. To wait other optimized
 probes, kprobe-optimizer will delay to work.
  When the optimized-kprobe is hit before optimization, its handler
 changes IP(instruction pointer) to copied code and exits. So, the
 instructions which were copied to detour buffer are executed on the detour
 buffer.

 - Optimization
  Kprobe-optimizer doesn't start instruction-replacing soon, it waits
 synchronize_sched for safety, because some processors are possible to be
 interrupted on the instructions which will be replaced by a jump instruction.
 As you know, synchronize_sched() can ensure that all interruptions which were
 executed when synchronize_sched() was called are done, only if
 CONFIG_PREEMPT=n. So, this version supports only the kernel with
 CONFIG_PREEMPT=n.(*)
  After that, kprobe-optimizer replaces the 4 bytes right after int3 breakpoint
 with relative-jump destination, and synchronize caches on all processors. Next,
 it replaces int3 with relative-jump opcode, and synchronize caches again.

 - Unoptimization
  When unregistering, disabling kprobe or being blocked by other kprobe,
 an optimized-kprobe will be unoptimized. Before kprobe-optimizer runs,
 the kprobe just be dequeued from the optimized list. When the optimization
 has been done, it replaces a jump with int3 breakpoint and original code.
  First it puts int3 at the first byte of the jump, synchronize caches
 on all processors, and replaces the 4 bytes right after int3 with the
 original code.

(*)This optimization-safety checking may be replaced with stop-machine method
 which ksplice is done for supporting CONFIG_PREEMPT=y kernel.


Thank you,

---

Masami Hiramatsu (7):
      kprobes: add documents of jump optimization
      kprobes: x86: support kprobes jump optimization on x86
      kprobes: x86: cleanup save/restore registers
      kprobes: kprobes jump optimization core
      Kbuild: disable gcc crossjumping
      kprobes: introducing generic insn_slot framework
      kprobes: use list instead of hlist for insn_pages


 Documentation/kprobes.txt      |  174 ++++++++++++-
 Makefile                       |    4 
 arch/Kconfig                   |   13 +
 arch/x86/Kconfig               |    1 
 arch/x86/include/asm/kprobes.h |   31 ++
 arch/x86/kernel/kprobes.c      |  534 ++++++++++++++++++++++++++++++++++------
 include/linux/kprobes.h        |   38 +++
 kernel/kprobes.c               |  521 ++++++++++++++++++++++++++++++++-------
 lib/Kconfig.debug              |    7 +
 9 files changed, 1128 insertions(+), 195 deletions(-)

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [RFC][ PATCH -tip v2 1/7] kprobes: use list instead of hlist for insn_pages
  2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
@ 2009-06-22 21:23 ` Masami Hiramatsu
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 2/7] kprobes: introducing generic insn_slot framework Masami Hiramatsu
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-22 21:23 UTC (permalink / raw)
  To: Ingo Molnar, Ananth N Mavinakayanahalli, lkml
  Cc: systemtap, DLE, Masami Hiramatsu, Ananth N Mavinakayanahalli,
	Ingo Molnar, Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Steven Rostedt, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

Use struct list instead of struct hlist for managing insn_pages, because
insn_pages doesn't use hash table.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Anders Kaseorg <andersk@ksplice.com>
Cc: Tim Abbott <tabbott@ksplice.com>
---

 kernel/kprobes.c |   30 +++++++++++-------------------
 1 files changed, 11 insertions(+), 19 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index c0fa54b..0b68fdc 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -103,7 +103,7 @@ static struct kprobe_blackpoint kprobe_blacklist[] = {
 #define INSNS_PER_PAGE	(PAGE_SIZE/(MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))
 
 struct kprobe_insn_page {
-	struct hlist_node hlist;
+	struct list_head list;
 	kprobe_opcode_t *insns;		/* Page of instruction slots */
 	char slot_used[INSNS_PER_PAGE];
 	int nused;
@@ -117,7 +117,7 @@ enum kprobe_slot_state {
 };
 
 static DEFINE_MUTEX(kprobe_insn_mutex);	/* Protects kprobe_insn_pages */
-static struct hlist_head kprobe_insn_pages;
+static LIST_HEAD(kprobe_insn_pages);
 static int kprobe_garbage_slots;
 static int collect_garbage_slots(void);
 
@@ -152,10 +152,9 @@ loop_end:
 static kprobe_opcode_t __kprobes *__get_insn_slot(void)
 {
 	struct kprobe_insn_page *kip;
-	struct hlist_node *pos;
 
  retry:
-	hlist_for_each_entry(kip, pos, &kprobe_insn_pages, hlist) {
+	list_for_each_entry(kip, &kprobe_insn_pages, list) {
 		if (kip->nused < INSNS_PER_PAGE) {
 			int i;
 			for (i = 0; i < INSNS_PER_PAGE; i++) {
@@ -189,8 +188,8 @@ static kprobe_opcode_t __kprobes *__get_insn_slot(void)
 		kfree(kip);
 		return NULL;
 	}
-	INIT_HLIST_NODE(&kip->hlist);
-	hlist_add_head(&kip->hlist, &kprobe_insn_pages);
+	INIT_LIST_HEAD(&kip->list);
+	list_add(&kip->list, &kprobe_insn_pages);
 	memset(kip->slot_used, SLOT_CLEAN, INSNS_PER_PAGE);
 	kip->slot_used[0] = SLOT_USED;
 	kip->nused = 1;
@@ -219,12 +218,8 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
 		 * so as not to have to set it up again the
 		 * next time somebody inserts a probe.
 		 */
-		hlist_del(&kip->hlist);
-		if (hlist_empty(&kprobe_insn_pages)) {
-			INIT_HLIST_NODE(&kip->hlist);
-			hlist_add_head(&kip->hlist,
-				       &kprobe_insn_pages);
-		} else {
+		if (!list_is_singular(&kprobe_insn_pages)) {
+			list_del(&kip->list);
 			module_free(NULL, kip->insns);
 			kfree(kip);
 		}
@@ -235,8 +230,7 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
 
 static int __kprobes collect_garbage_slots(void)
 {
-	struct kprobe_insn_page *kip;
-	struct hlist_node *pos, *next;
+	struct kprobe_insn_page *kip, *next;
 	int safety;
 
 	/* Ensure no-one is preepmted on the garbages */
@@ -246,7 +240,7 @@ static int __kprobes collect_garbage_slots(void)
 	if (safety != 0)
 		return -EAGAIN;
 
-	hlist_for_each_entry_safe(kip, pos, next, &kprobe_insn_pages, hlist) {
+	list_for_each_entry_safe(kip, next, &kprobe_insn_pages, list) {
 		int i;
 		if (kip->ngarbage == 0)
 			continue;
@@ -264,19 +258,17 @@ static int __kprobes collect_garbage_slots(void)
 void __kprobes free_insn_slot(kprobe_opcode_t * slot, int dirty)
 {
 	struct kprobe_insn_page *kip;
-	struct hlist_node *pos;
 
 	mutex_lock(&kprobe_insn_mutex);
-	hlist_for_each_entry(kip, pos, &kprobe_insn_pages, hlist) {
+	list_for_each_entry(kip, &kprobe_insn_pages, list) {
 		if (kip->insns <= slot &&
 		    slot < kip->insns + (INSNS_PER_PAGE * MAX_INSN_SIZE)) {
 			int i = (slot - kip->insns) / MAX_INSN_SIZE;
 			if (dirty) {
 				kip->slot_used[i] = SLOT_DIRTY;
 				kip->ngarbage++;
-			} else {
+			} else
 				collect_one_slot(kip, i);
-			}
 			break;
 		}
 	}


-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC][ PATCH -tip v2 2/7] kprobes: introducing generic insn_slot framework
  2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 1/7] kprobes: use list instead of hlist for insn_pages Masami Hiramatsu
@ 2009-06-22 21:23 ` Masami Hiramatsu
  2009-06-27 20:09   ` Steven Rostedt
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping Masami Hiramatsu
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-22 21:23 UTC (permalink / raw)
  To: Ingo Molnar, Ananth N Mavinakayanahalli, lkml
  Cc: systemtap, DLE, Masami Hiramatsu, Ananth N Mavinakayanahalli,
	Ingo Molnar, Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Steven Rostedt, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

Make insn_slot framework support various size slots.
Current insn_slot just supports one-size instruction buffer slot. However,
kprobes jump optimization needs larger size buffers.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Anders Kaseorg <andersk@ksplice.com>
Cc: Tim Abbott <tabbott@ksplice.com>
---

 kernel/kprobes.c |   96 +++++++++++++++++++++++++++++++++---------------------
 1 files changed, 58 insertions(+), 38 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 0b68fdc..bc9cfd0 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -100,26 +100,38 @@ static struct kprobe_blackpoint kprobe_blacklist[] = {
  * stepping on the instruction on a vmalloced/kmalloced/data page
  * is a recipe for disaster
  */
-#define INSNS_PER_PAGE	(PAGE_SIZE/(MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))
-
 struct kprobe_insn_page {
 	struct list_head list;
 	kprobe_opcode_t *insns;		/* Page of instruction slots */
-	char slot_used[INSNS_PER_PAGE];
 	int nused;
 	int ngarbage;
+	char slot_used[1];
+};
+
+struct kprobe_insn_cache {
+	struct list_head pages;	/* list of kprobe_insn_page */
+	size_t insn_size;	/* size of instruction slot */
+	int nr_garbage;
 };
 
+static int slots_per_page(struct kprobe_insn_cache *c)
+{
+	return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
+}
+
 enum kprobe_slot_state {
 	SLOT_CLEAN = 0,
 	SLOT_DIRTY = 1,
 	SLOT_USED = 2,
 };
 
-static DEFINE_MUTEX(kprobe_insn_mutex);	/* Protects kprobe_insn_pages */
-static LIST_HEAD(kprobe_insn_pages);
-static int kprobe_garbage_slots;
-static int collect_garbage_slots(void);
+static DEFINE_MUTEX(kprobe_insn_mutex);	/* Protects kprobe_insn_slots */
+static struct kprobe_insn_cache kprobe_insn_slots = {
+	.pages = LIST_HEAD_INIT(kprobe_insn_slots.pages),
+	.insn_size = MAX_INSN_SIZE,
+	.nr_garbage = 0,
+};
+static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c);
 
 static int __kprobes check_safety(void)
 {
@@ -149,32 +161,33 @@ loop_end:
  * __get_insn_slot() - Find a slot on an executable page for an instruction.
  * We allocate an executable page if there's no room on existing ones.
  */
-static kprobe_opcode_t __kprobes *__get_insn_slot(void)
+static kprobe_opcode_t __kprobes *__get_insn_slot(struct kprobe_insn_cache *c)
 {
 	struct kprobe_insn_page *kip;
 
  retry:
-	list_for_each_entry(kip, &kprobe_insn_pages, list) {
-		if (kip->nused < INSNS_PER_PAGE) {
+	list_for_each_entry(kip, &c->pages, list) {
+		if (kip->nused < slots_per_page(c)) {
 			int i;
-			for (i = 0; i < INSNS_PER_PAGE; i++) {
+			for (i = 0; i < slots_per_page(c); i++) {
 				if (kip->slot_used[i] == SLOT_CLEAN) {
 					kip->slot_used[i] = SLOT_USED;
 					kip->nused++;
-					return kip->insns + (i * MAX_INSN_SIZE);
+					return kip->insns + (i * c->insn_size);
 				}
 			}
-			/* Surprise!  No unused slots.  Fix kip->nused. */
-			kip->nused = INSNS_PER_PAGE;
+			/* kip->nused is broken. */
+			BUG();
 		}
 	}
 
 	/* If there are any garbage slots, collect it and try again. */
-	if (kprobe_garbage_slots && collect_garbage_slots() == 0) {
+	if (c->nr_garbage && collect_garbage_slots(c) == 0)
 		goto retry;
-	}
+
 	/* All out of space.  Need to allocate a new page. Use slot 0. */
-	kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_KERNEL);
+	kip = kmalloc(sizeof(struct kprobe_insn_page) + slots_per_page(c) - 1,
+		      GFP_KERNEL);
 	if (!kip)
 		return NULL;
 
@@ -189,19 +202,20 @@ static kprobe_opcode_t __kprobes *__get_insn_slot(void)
 		return NULL;
 	}
 	INIT_LIST_HEAD(&kip->list);
-	list_add(&kip->list, &kprobe_insn_pages);
-	memset(kip->slot_used, SLOT_CLEAN, INSNS_PER_PAGE);
+	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
 	kip->slot_used[0] = SLOT_USED;
 	kip->nused = 1;
 	kip->ngarbage = 0;
+	list_add(&kip->list, &c->pages);
 	return kip->insns;
 }
 
+
 kprobe_opcode_t __kprobes *get_insn_slot(void)
 {
-	kprobe_opcode_t *ret;
+	kprobe_opcode_t *ret = NULL;
 	mutex_lock(&kprobe_insn_mutex);
-	ret = __get_insn_slot();
+	ret = __get_insn_slot(&kprobe_insn_slots);
 	mutex_unlock(&kprobe_insn_mutex);
 	return ret;
 }
@@ -218,7 +232,7 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
 		 * so as not to have to set it up again the
 		 * next time somebody inserts a probe.
 		 */
-		if (!list_is_singular(&kprobe_insn_pages)) {
+		if (!list_is_singular(&kip->list)) {
 			list_del(&kip->list);
 			module_free(NULL, kip->insns);
 			kfree(kip);
@@ -228,7 +242,7 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
 	return 0;
 }
 
-static int __kprobes collect_garbage_slots(void)
+static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c)
 {
 	struct kprobe_insn_page *kip, *next;
 	int safety;
@@ -240,42 +254,48 @@ static int __kprobes collect_garbage_slots(void)
 	if (safety != 0)
 		return -EAGAIN;
 
-	list_for_each_entry_safe(kip, next, &kprobe_insn_pages, list) {
+	list_for_each_entry_safe(kip, next, &c->pages, list) {
 		int i;
 		if (kip->ngarbage == 0)
 			continue;
 		kip->ngarbage = 0;	/* we will collect all garbages */
-		for (i = 0; i < INSNS_PER_PAGE; i++) {
+		for (i = 0; i < slots_per_page(c); i++) {
 			if (kip->slot_used[i] == SLOT_DIRTY &&
 			    collect_one_slot(kip, i))
 				break;
 		}
 	}
-	kprobe_garbage_slots = 0;
+	c->nr_garbage = 0;
 	return 0;
 }
 
-void __kprobes free_insn_slot(kprobe_opcode_t * slot, int dirty)
+static void __kprobes __free_insn_slot(struct kprobe_insn_cache *c,
+				       kprobe_opcode_t *slot, int dirty)
 {
 	struct kprobe_insn_page *kip;
 
-	mutex_lock(&kprobe_insn_mutex);
-	list_for_each_entry(kip, &kprobe_insn_pages, list) {
-		if (kip->insns <= slot &&
-		    slot < kip->insns + (INSNS_PER_PAGE * MAX_INSN_SIZE)) {
-			int i = (slot - kip->insns) / MAX_INSN_SIZE;
+	list_for_each_entry(kip, &c->pages, list) {
+		long idx = ((long)slot - (long)kip->insns) / c->insn_size;
+		if (idx >= 0 && idx < slots_per_page(c)) {
+			WARN_ON(kip->slot_used[idx] != SLOT_USED);
 			if (dirty) {
-				kip->slot_used[i] = SLOT_DIRTY;
+				kip->slot_used[idx] = SLOT_DIRTY;
 				kip->ngarbage++;
+				if (++c->nr_garbage > slots_per_page(c))
+					collect_garbage_slots(c);
 			} else
-				collect_one_slot(kip, i);
-			break;
+				collect_one_slot(kip, idx);
+			return;
 		}
 	}
+	/* Could not free this slot. */
+	WARN_ON(1);
+}
 
-	if (dirty && ++kprobe_garbage_slots > INSNS_PER_PAGE)
-		collect_garbage_slots();
-
+void __kprobes free_insn_slot(kprobe_opcode_t * slot, int dirty)
+{
+	mutex_lock(&kprobe_insn_mutex);
+	__free_insn_slot(&kprobe_insn_slots, slot, dirty);
 	mutex_unlock(&kprobe_insn_mutex);
 }
 #endif


-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping
  2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 1/7] kprobes: use list instead of hlist for insn_pages Masami Hiramatsu
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 2/7] kprobes: introducing generic insn_slot framework Masami Hiramatsu
@ 2009-06-22 21:23 ` Masami Hiramatsu
  2009-06-27 20:13   ` Steven Rostedt
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 4/7] kprobes: kprobes jump optimization core Masami Hiramatsu
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-22 21:23 UTC (permalink / raw)
  To: Ingo Molnar, Ananth N Mavinakayanahalli, lkml
  Cc: systemtap, DLE, Masami Hiramatsu, Ananth N Mavinakayanahalli,
	Ingo Molnar, Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Steven Rostedt, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

Add CONFIG_DISABLE_CROSSJUMP option which disables gcc's cross-function
jumping. This option is required by the kprobes jump optimization.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Anders Kaseorg <andersk@ksplice.com>
Cc: Tim Abbott <tabbott@ksplice.com>
---

 Makefile          |    4 ++++
 lib/Kconfig.debug |    7 +++++++
 2 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/Makefile b/Makefile
index 2903e13..f73b139 100644
--- a/Makefile
+++ b/Makefile
@@ -524,6 +524,10 @@ else
 KBUILD_CFLAGS	+= -O2
 endif
 
+ifdef CONFIG_DISABLE_CROSSJUMP
+KBUILD_CFLAGS	+= -fno-crossjumping
+endif
+
 include $(srctree)/arch/$(SRCARCH)/Makefile
 
 ifneq ($(CONFIG_FRAME_WARN),0)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 8da7467..f88e6b8 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -673,6 +673,13 @@ config FRAME_POINTER
 	  larger and slower, but it gives very useful debugging information
 	  in case of kernel bugs. (precise oopses/stacktraces/warnings)
 
+config DISABLE_CROSSJUMP
+	bool "Disable cross-function jump optimization"
+	help
+	  This build option disables cross-function jump optimization
+	  (crossjumping) of gcc. Disabling crossjumping might increase
+	  kernel binary size a little.
+
 config BOOT_PRINTK_DELAY
 	bool "Delay each boot printk message by N milliseconds"
 	depends on DEBUG_KERNEL && PRINTK && GENERIC_CALIBRATE_DELAY


-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC][ PATCH -tip v2 4/7] kprobes: kprobes jump optimization core
  2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
                   ` (2 preceding siblings ...)
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping Masami Hiramatsu
@ 2009-06-22 21:23 ` Masami Hiramatsu
  2009-06-23 12:58   ` Srikar Dronamraju
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 5/7] kprobes: x86: cleanup save/restore registers Masami Hiramatsu
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-22 21:23 UTC (permalink / raw)
  To: Ingo Molnar, Ananth N Mavinakayanahalli, lkml
  Cc: systemtap, DLE, Masami Hiramatsu, Ananth N Mavinakayanahalli,
	Ingo Molnar, Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Steven Rostedt, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

Introduce kprobes jump optimization arch-independent parts.
Kprobes uses breakpoint instruction for interrupting execution flow, on
some architectures, it can be replaced by a jump instruction and
interruption emulation code. This gains kprobs' performance drastically.

Changes from v1:
 - Try to optimize kprobes again when unregistering aggregated kprobe.
 - Check the probe has been optimized when optimizing kprobe.
 - Select CONFIG_KALLSYMS_ALL when CONFIG_OPTPROBES=y.
 - Select CONFIG_DISABLE_CROSSJUMP when CONFIG_OPTPROBES=y.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Anders Kaseorg <andersk@ksplice.com>
Cc: Tim Abbott <tabbott@ksplice.com>
---

 arch/Kconfig            |   13 ++
 include/linux/kprobes.h |   38 ++++
 kernel/kprobes.c        |  407 ++++++++++++++++++++++++++++++++++++++++++-----
 3 files changed, 411 insertions(+), 47 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index c72f18f..b4d7785 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -44,6 +44,17 @@ config KPROBES
 	  for kernel debugging, non-intrusive instrumentation and testing.
 	  If in doubt, say "N".
 
+config OPTPROBES
+	bool "Kprobes jump optimization support (EXPERIMENTAL)"
+	depends on KPROBES
+	depends on !PREEMPT
+	depends on HAVE_OPTPROBES
+	select KALLSYMS_ALL
+	select DISABLE_CROSSJUMP
+	help
+	  This option will allow kprobes to optimize breakpoint to
+	  a jump for reducing its overhead.
+
 config HAVE_EFFICIENT_UNALIGNED_ACCESS
 	bool
 	help
@@ -79,6 +90,8 @@ config HAVE_KPROBES
 config HAVE_KRETPROBES
 	bool
 
+config HAVE_OPTPROBES
+	bool
 #
 # An arch should select this if it provides all these things:
 #
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index bcd9c07..0adf7af 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -122,6 +122,11 @@ struct kprobe {
 /* Kprobe status flags */
 #define KPROBE_FLAG_GONE	1 /* breakpoint has already gone */
 #define KPROBE_FLAG_DISABLED	2 /* probe is temporarily disabled */
+#define KPROBE_FLAG_OPTIMIZED	4 /*
+				   * probe is really optimized.
+				   * NOTE:
+				   * this flag is only for optimized_kprobe.
+				   */
 
 /* Has this kprobe gone ? */
 static inline int kprobe_gone(struct kprobe *p)
@@ -134,6 +139,12 @@ static inline int kprobe_disabled(struct kprobe *p)
 {
 	return p->flags & (KPROBE_FLAG_DISABLED | KPROBE_FLAG_GONE);
 }
+
+/* Is this kprobe really running optimized path ? */
+static inline int kprobe_optimized(struct kprobe *p)
+{
+	return p->flags & KPROBE_FLAG_OPTIMIZED;
+}
 /*
  * Special probe type that uses setjmp-longjmp type tricks to resume
  * execution at a specified entry with a matching prototype corresponding
@@ -249,6 +260,33 @@ extern kprobe_opcode_t *get_insn_slot(void);
 extern void free_insn_slot(kprobe_opcode_t *slot, int dirty);
 extern void kprobes_inc_nmissed_count(struct kprobe *p);
 
+#ifdef CONFIG_OPTPROBES
+/*
+ * Internal structure for direct jump optimized probe
+ */
+struct optimized_kprobe {
+	struct kprobe kp;
+	struct list_head list;	/* list for optimizing queue */
+	struct arch_optimized_insn optinsn;
+};
+
+/* Architecture dependent functions for direct jump optimization */
+extern int arch_prepared_optinsn(struct arch_optimized_insn *optinsn);
+extern int arch_check_optimized_kprobe(struct optimized_kprobe *op);
+extern int arch_prepare_optimized_kprobe(struct optimized_kprobe *op);
+extern void arch_remove_optimized_kprobe(struct optimized_kprobe *op);
+extern int  arch_optimize_kprobe(struct optimized_kprobe *op);
+extern void arch_unoptimize_kprobe(struct optimized_kprobe *op);
+extern int arch_detour_optimized_kprobe(struct optimized_kprobe *op,
+					struct pt_regs *regs);
+extern kprobe_opcode_t *get_optinsn_slot(void);
+extern void free_optinsn_slot(kprobe_opcode_t *slot, int dirty);
+extern int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+					unsigned long addr);
+
+extern void opt_pre_handler(struct kprobe *p, struct pt_regs *regs);
+#endif /* CONFIG_OPTPROBES */
+
 /* Get the kprobe at this addr (if any) - called with preemption disabled */
 struct kprobe *get_kprobe(void *addr);
 void kretprobe_hash_lock(struct task_struct *tsk,
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index bc9cfd0..13aea6d 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -298,6 +298,31 @@ void __kprobes free_insn_slot(kprobe_opcode_t * slot, int dirty)
 	__free_insn_slot(&kprobe_insn_slots, slot, dirty);
 	mutex_unlock(&kprobe_insn_mutex);
 }
+#ifdef CONFIG_OPTPROBES
+/* For optimized_kprobe buffer */
+static DEFINE_MUTEX(kprobe_optinsn_mutex); /* Protects kprobe_optinsn_slots */
+static struct kprobe_insn_cache kprobe_optinsn_slots = {
+	.pages = LIST_HEAD_INIT(kprobe_optinsn_slots.pages),
+	/* .insn_size is initialized later */
+	.nr_garbage = 0,
+};
+/* Get a slot for optimized_kprobe buffer */
+kprobe_opcode_t __kprobes *get_optinsn_slot(void)
+{
+	kprobe_opcode_t *ret = NULL;
+	mutex_lock(&kprobe_optinsn_mutex);
+	ret = __get_insn_slot(&kprobe_optinsn_slots);
+	mutex_unlock(&kprobe_optinsn_mutex);
+	return ret;
+}
+
+void __kprobes free_optinsn_slot(kprobe_opcode_t * slot, int dirty)
+{
+	mutex_lock(&kprobe_optinsn_mutex);
+	__free_insn_slot(&kprobe_optinsn_slots, slot, dirty);
+	mutex_unlock(&kprobe_optinsn_mutex);
+}
+#endif
 #endif
 
 /* We have preemption disabled.. so it is safe to use __ versions */
@@ -331,11 +356,271 @@ struct kprobe __kprobes *get_kprobe(void *addr)
 	return NULL;
 }
 
+static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs);
+
+/* Return true if the kprobe is an aggregator */
+static inline int kprobe_aggrprobe(struct kprobe *p)
+{
+	return p->pre_handler == aggr_pre_handler;
+}
+
+/*
+ * Keep all fields in the kprobe consistent
+ */
+static inline void copy_kprobe(struct kprobe *old_p, struct kprobe *p)
+{
+	memcpy(&p->opcode, &old_p->opcode, sizeof(kprobe_opcode_t));
+	memcpy(&p->ainsn, &old_p->ainsn, sizeof(struct arch_specific_insn));
+}
+
+#ifdef CONFIG_OPTPROBES
+/*
+ * Call all pre_handler on the list, but ignores its return value.
+ * This must be called from arch-dep optimized caller.
+ */
+void __kprobes opt_pre_handler(struct kprobe *p, struct pt_regs *regs)
+{
+	struct kprobe *kp;
+
+	list_for_each_entry_rcu(kp, &p->list, list) {
+		if (kp->pre_handler && likely(!kprobe_disabled(kp))) {
+			set_kprobe_instance(kp);
+			kp->pre_handler(kp, regs);
+		}
+		reset_kprobe_instance();
+	}
+}
+
+/* Change IP to optimized path. */
+static int __kprobes detour_optimized_kprobe(struct kprobe *p,
+					     struct pt_regs *regs)
+{
+	struct optimized_kprobe *op;
+	if (p->flags & KPROBE_FLAG_OPTIMIZED) {
+		op = container_of(p, struct optimized_kprobe, kp);
+		/* This kprobe is really able to run optimized path. */
+		return arch_detour_optimized_kprobe(op, regs);
+	} else
+		/* This kprobe is not optimized. */
+		return 0;
+}
+
+/* Return true(!0) if the kprobe is ready for optimization. */
+static inline int kprobe_optready(struct kprobe *p)
+{
+	struct optimized_kprobe *op;
+	if (kprobe_aggrprobe(p)) {
+		op = container_of(p, struct optimized_kprobe, kp);
+		return arch_prepared_optinsn(&op->optinsn);
+	}
+	return 0;
+}
+
+/* Return an optimized kprobe which replaces instructions including addr. */
+struct kprobe *__kprobes get_optimized_kprobe(unsigned long addr)
+{
+	int i;
+	struct kprobe *p = NULL;
+	struct optimized_kprobe *op;
+	for (i = 0; !p && i < MAX_OPTIMIZED_LENGTH; i++)
+		p = get_kprobe((void *)(addr - i));
+
+	if (p && kprobe_optready(p)) {
+		op = container_of(p, struct optimized_kprobe, kp);
+		if (arch_within_optimized_kprobe(op, addr))
+			return p;
+	}
+	return NULL;
+}
+
+/* Optimization staging list, protected by kprobe_mutex */
+static LIST_HEAD(optimizing_list);
+
+static void kprobe_optimizer(struct work_struct *work);
+static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
+#define OPTIMIZE_DELAY 5
+
+/* Kprobe jump optimizer */
+static __kprobes void kprobe_optimizer(struct work_struct *work)
+{
+	struct optimized_kprobe *op, *tmp;
+
+	/* Lock modules while optimizing kprobes */
+	mutex_lock(&module_mutex);
+	mutex_lock(&kprobe_mutex);
+	if (kprobes_all_disarmed)
+		goto end;
+
+	/* Wait quiesence period for ensuring all interrupts are done */
+	synchronize_sched();
+
+	mutex_lock(&text_mutex);
+	list_for_each_entry_safe(op, tmp, &optimizing_list, list) {
+		WARN_ON(kprobe_disabled(&op->kp));
+		if (arch_optimize_kprobe(op) < 0)
+			op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
+		list_del_init(&op->list);
+	}
+	mutex_unlock(&text_mutex);
+end:
+	mutex_unlock(&kprobe_mutex);
+	mutex_unlock(&module_mutex);
+}
+
+/* Optimize kprobe if p is ready to be optimized */
+static __kprobes void optimize_kprobe(struct kprobe *p)
+{
+	struct optimized_kprobe *op;
+	/* Check if the kprobe is disabled or not ready for optimization. */
+	if (!kprobe_optready(p) ||
+	    (kprobe_disabled(p) || kprobes_all_disarmed))
+		return;
+
+	/* Both of break_handler and post_handler are not supported. */
+	if (p->break_handler || p->post_handler)
+		return;
+
+	op = container_of(p, struct optimized_kprobe, kp);
+
+	/* Check there is no other kprobes at the optimized instructions */
+	if (arch_check_optimized_kprobe(op) < 0)
+		return;
+
+	/* Check if it is already optimized. */
+	if (op->kp.flags & KPROBE_FLAG_OPTIMIZED)
+		return;
+
+	op->kp.flags |= KPROBE_FLAG_OPTIMIZED;
+	list_add(&op->list, &optimizing_list);
+	if (!delayed_work_pending(&optimizing_work))
+		schedule_delayed_work(&optimizing_work, OPTIMIZE_DELAY);
+}
+
+/* Unoptimize a kprobe if p is optimized */
+static __kprobes void unoptimize_kprobe(struct kprobe *p)
+{
+	struct optimized_kprobe *op;
+	if ((p->flags & KPROBE_FLAG_OPTIMIZED) && kprobe_aggrprobe(p)) {
+		op = container_of(p, struct optimized_kprobe, kp);
+		if (!list_empty(&op->list))
+			/* Dequeue from the optimization queue */
+			list_del_init(&op->list);
+		else
+			/* Replace jump with break */
+			arch_unoptimize_kprobe(op);
+		op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
+	}
+}
+
+/* Remove optimized instructions */
+static void __kprobes kill_optimized_kprobe(struct kprobe *p)
+{
+	struct optimized_kprobe *op;
+	op = container_of(p, struct optimized_kprobe, kp);
+	if (!list_empty(&op->list)) {
+		/* Dequeue from the optimization queue */
+		list_del_init(&op->list);
+		op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
+	}
+	/* Don't unoptimize, because the target code will be freed. */
+	arch_remove_optimized_kprobe(op);
+}
+
+/* Try to prepare optimized instructions */
+static __kprobes void prepare_optimized_kprobe(struct kprobe *p)
+{
+	struct optimized_kprobe *op;
+	op = container_of(p, struct optimized_kprobe, kp);
+	arch_prepare_optimized_kprobe(op);
+}
+
+/* Free optimized instructions and optimized_kprobe */
+static __kprobes void free_aggr_kprobe(struct kprobe *p)
+{
+	struct optimized_kprobe *op;
+	op = container_of(p, struct optimized_kprobe, kp);
+	arch_remove_optimized_kprobe(op);
+	kfree(op);
+}
+
+/* Allocate new optimized_kprobe and try to prepare optimized instructions */
+static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
+{
+	struct optimized_kprobe *op;
+
+	op = kzalloc(sizeof(struct optimized_kprobe), GFP_KERNEL);
+	if (!op)
+		return NULL;
+
+	INIT_LIST_HEAD(&op->list);
+	op->kp.addr = p->addr;
+	arch_prepare_optimized_kprobe(op);
+	return &op->kp;
+}
+
+static void __kprobes init_aggr_kprobe(struct kprobe *ap, struct kprobe *p);
+
+/*
+ * Prepare an optimized_kprobe and optimize it
+ * NOTE: p must be a normal registered kprobe
+ */
+static __kprobes void try_to_optimize_kprobe(struct kprobe *p)
+{
+	struct kprobe *ap;
+	struct optimized_kprobe *op;
+
+	ap = alloc_aggr_kprobe(p);
+	if (!ap)
+		return;
+
+	op = container_of(ap, struct optimized_kprobe, kp);
+	if (!arch_prepared_optinsn(&op->optinsn)) {
+		/* If failed to setup optimizing, fallback to kprobe */
+		free_aggr_kprobe(ap);
+		return;
+	}
+
+	init_aggr_kprobe(ap, p);
+	optimize_kprobe(ap);
+	return;
+}
+#else /* !CONFIG_OPTPROBES */
+#define get_optimized_kprobe(addr)		(NULL)
+#define detour_optimized_kprobe(p, regs)	(0)
+#define optimize_kprobe(p)			do {} while (0)
+#define unoptimize_kprobe(p)			do {} while (0)
+#define kill_optimized_kprobe(p)		do {} while (0)
+#define prepare_optimized_kprobe(p)		do {} while (0)
+#define try_to_optimize_kprobe(p)		do {} while (0)
+
+static __kprobes void free_aggr_kprobe(struct kprobe *p)
+{
+	kfree(p);
+}
+
+static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
+{
+	return kzalloc(sizeof(struct kprobe), GFP_KERNEL);
+}
+#endif /* CONFIG_OPTPROBES */
+
+static void __kprobes __arm_kprobe(struct kprobe *kp)
+{
+	arch_arm_kprobe(kp);
+	optimize_kprobe(kp);	/* Try to re-optimize */
+}
+
+static void __kprobes __disarm_kprobe(struct kprobe *kp)
+{
+	unoptimize_kprobe(kp);	/* Try to unoptimize */
+	arch_disarm_kprobe(kp);
+}
+
 /* Arm a kprobe with text_mutex */
 static void __kprobes arm_kprobe(struct kprobe *kp)
 {
 	mutex_lock(&text_mutex);
-	arch_arm_kprobe(kp);
+	__arm_kprobe(kp);
 	mutex_unlock(&text_mutex);
 }
 
@@ -343,7 +628,7 @@ static void __kprobes arm_kprobe(struct kprobe *kp)
 static void __kprobes disarm_kprobe(struct kprobe *kp)
 {
 	mutex_lock(&text_mutex);
-	arch_disarm_kprobe(kp);
+	__disarm_kprobe(kp);
 	mutex_unlock(&text_mutex);
 }
 
@@ -363,7 +648,7 @@ static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs)
 		}
 		reset_kprobe_instance();
 	}
-	return 0;
+	return detour_optimized_kprobe(p, regs);
 }
 
 static void __kprobes aggr_post_handler(struct kprobe *p, struct pt_regs *regs,
@@ -413,7 +698,7 @@ static int __kprobes aggr_break_handler(struct kprobe *p, struct pt_regs *regs)
 void __kprobes kprobes_inc_nmissed_count(struct kprobe *p)
 {
 	struct kprobe *kp;
-	if (p->pre_handler != aggr_pre_handler) {
+	if (!kprobe_aggrprobe(p)) {
 		p->nmissed++;
 	} else {
 		list_for_each_entry_rcu(kp, &p->list, list)
@@ -537,21 +822,16 @@ static void __kprobes cleanup_rp_inst(struct kretprobe *rp)
 }
 
 /*
- * Keep all fields in the kprobe consistent
- */
-static inline void copy_kprobe(struct kprobe *old_p, struct kprobe *p)
-{
-	memcpy(&p->opcode, &old_p->opcode, sizeof(kprobe_opcode_t));
-	memcpy(&p->ainsn, &old_p->ainsn, sizeof(struct arch_specific_insn));
-}
-
-/*
 * Add the new probe to ap->list. Fail if this is the
 * second jprobe at the address - two jprobes can't coexist
 */
 static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
 {
 	BUG_ON(kprobe_gone(ap) || kprobe_gone(p));
+
+	if (p->break_handler || p->post_handler)
+		unoptimize_kprobe(ap);	/* Fall back to normal kprobe */
+
 	if (p->break_handler) {
 		if (ap->break_handler)
 			return -EEXIST;
@@ -566,7 +846,7 @@ static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
 		ap->flags &= ~KPROBE_FLAG_DISABLED;
 		if (!kprobes_all_disarmed)
 			/* Arm the breakpoint again. */
-			arm_kprobe(ap);
+			__arm_kprobe(ap);
 	}
 	return 0;
 }
@@ -575,12 +855,13 @@ static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
  * Fill in the required fields of the "manager kprobe". Replace the
  * earlier kprobe in the hlist with the manager kprobe
  */
-static inline void add_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
+static void __kprobes init_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
 {
+	/* Copy p's insn slot to ap */
 	copy_kprobe(p, ap);
 	flush_insn_slot(ap);
 	ap->addr = p->addr;
-	ap->flags = p->flags;
+	ap->flags = p->flags & ~KPROBE_FLAG_OPTIMIZED;
 	ap->pre_handler = aggr_pre_handler;
 	ap->fault_handler = aggr_fault_handler;
 	/* We don't care the kprobe which has gone. */
@@ -590,8 +871,9 @@ static inline void add_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
 		ap->break_handler = aggr_break_handler;
 
 	INIT_LIST_HEAD(&ap->list);
-	list_add_rcu(&p->list, &ap->list);
+	INIT_HLIST_NODE(&ap->hlist);
 
+	list_add_rcu(&p->list, &ap->list);
 	hlist_replace_rcu(&p->hlist, &ap->hlist);
 }
 
@@ -605,12 +887,12 @@ static int __kprobes register_aggr_kprobe(struct kprobe *old_p,
 	int ret = 0;
 	struct kprobe *ap = old_p;
 
-	if (old_p->pre_handler != aggr_pre_handler) {
-		/* If old_p is not an aggr_probe, create new aggr_kprobe. */
-		ap = kzalloc(sizeof(struct kprobe), GFP_KERNEL);
+	if (!kprobe_aggrprobe(old_p)) {
+		/* If old_p is not an aggr_kprobe, create new aggr_kprobe. */
+		ap = alloc_aggr_kprobe(old_p);
 		if (!ap)
 			return -ENOMEM;
-		add_aggr_kprobe(ap, old_p);
+		init_aggr_kprobe(ap, old_p);
 	}
 
 	if (kprobe_gone(ap)) {
@@ -629,6 +911,9 @@ static int __kprobes register_aggr_kprobe(struct kprobe *old_p,
 			 */
 			return ret;
 
+		/* Prepare optimized instructions if possible. */
+		prepare_optimized_kprobe(ap);
+
 		/*
 		 * Clear gone flag to prevent allocating new slot again, and
 		 * set disabled flag because it is not armed yet.
@@ -637,6 +922,7 @@ static int __kprobes register_aggr_kprobe(struct kprobe *old_p,
 			    | KPROBE_FLAG_DISABLED;
 	}
 
+	/* Copy ap's insn slot to p */
 	copy_kprobe(ap, p);
 	return add_new_kprobe(ap, p);
 }
@@ -748,16 +1034,23 @@ int __kprobes register_kprobe(struct kprobe *p)
 	p->nmissed = 0;
 	INIT_LIST_HEAD(&p->list);
 	mutex_lock(&kprobe_mutex);
+	mutex_lock(&text_mutex);
+
 	old_p = get_kprobe(p->addr);
 	if (old_p) {
+		/* Since this may unoptimize old_p, locking text_mutex. */
 		ret = register_aggr_kprobe(old_p, p);
 		goto out;
 	}
 
-	mutex_lock(&text_mutex);
+	/* Check collision with other optimized kprobes */
+	old_p = get_optimized_kprobe((unsigned long)p->addr);
+	if (unlikely(old_p))
+		unoptimize_kprobe(old_p); /* Fallback to unoptimized kprobe */
+
 	ret = arch_prepare_kprobe(p);
 	if (ret)
-		goto out_unlock_text;
+		goto out;
 
 	INIT_HLIST_NODE(&p->hlist);
 	hlist_add_head_rcu(&p->hlist,
@@ -766,9 +1059,11 @@ int __kprobes register_kprobe(struct kprobe *p)
 	if (!kprobes_all_disarmed && !kprobe_disabled(p))
 		arch_arm_kprobe(p);
 
-out_unlock_text:
-	mutex_unlock(&text_mutex);
+	/* Try to optimize kprobe */
+	try_to_optimize_kprobe(p);
+
 out:
+	mutex_unlock(&text_mutex);
 	mutex_unlock(&kprobe_mutex);
 
 	if (probed_mod)
@@ -810,7 +1105,7 @@ static int __kprobes __unregister_kprobe_top(struct kprobe *p)
 		return -EINVAL;
 
 	if (old_p == p ||
-	    (old_p->pre_handler == aggr_pre_handler &&
+	    (kprobe_aggrprobe(old_p) &&
 	     list_is_singular(&old_p->list))) {
 		/*
 		 * Only probe on the hash list. Disarm only if kprobes are
@@ -818,8 +1113,13 @@ static int __kprobes __unregister_kprobe_top(struct kprobe *p)
 		 * already have been removed. We save on flushing icache.
 		 */
 		if (!kprobes_all_disarmed && !kprobe_disabled(old_p))
-			disarm_kprobe(p);
+			disarm_kprobe(old_p);
 		hlist_del_rcu(&old_p->hlist);
+
+		/* If another kprobe was blocked, optimize it. */
+		old_p = get_optimized_kprobe((unsigned long)p->addr);
+		if (unlikely(old_p))
+			optimize_kprobe(old_p);
 	} else {
 		if (p->break_handler && !kprobe_gone(p))
 			old_p->break_handler = NULL;
@@ -834,8 +1134,13 @@ noclean:
 		list_del_rcu(&p->list);
 		if (!kprobe_disabled(old_p)) {
 			try_to_disable_aggr_kprobe(old_p);
-			if (!kprobes_all_disarmed && kprobe_disabled(old_p))
-				disarm_kprobe(old_p);
+			if (!kprobes_all_disarmed) {
+				if (kprobe_disabled(old_p))
+					disarm_kprobe(old_p);
+				else
+					/* Try to optimize this probe again */
+					optimize_kprobe(old_p);
+			}
 		}
 	}
 	return 0;
@@ -852,7 +1157,7 @@ static void __kprobes __unregister_kprobe_bottom(struct kprobe *p)
 		old_p = list_entry(p->list.next, struct kprobe, list);
 		list_del(&p->list);
 		arch_remove_kprobe(old_p);
-		kfree(old_p);
+		free_aggr_kprobe(old_p);
 	}
 }
 
@@ -1148,7 +1453,7 @@ static void __kprobes kill_kprobe(struct kprobe *p)
 	struct kprobe *kp;
 
 	p->flags |= KPROBE_FLAG_GONE;
-	if (p->pre_handler == aggr_pre_handler) {
+	if (kprobe_aggrprobe(p)) {
 		/*
 		 * If this is an aggr_kprobe, we have to list all the
 		 * chained probes and mark them GONE.
@@ -1157,6 +1462,7 @@ static void __kprobes kill_kprobe(struct kprobe *p)
 			kp->flags |= KPROBE_FLAG_GONE;
 		p->post_handler = NULL;
 		p->break_handler = NULL;
+		kill_optimized_kprobe(p);
 	}
 	/*
 	 * Here, we can remove insn_slot safely, because no thread calls
@@ -1259,6 +1565,11 @@ static int __init init_kprobes(void)
 		}
 	}
 
+#if defined(CONFIG_OPTPROBES) && defined(__ARCH_WANT_KPROBES_INSN_SLOT)
+	/* Init kprobe_optinsn_slots */
+	kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
+#endif
+
 	/* By default, kprobes are armed */
 	kprobes_all_disarmed = false;
 
@@ -1277,7 +1588,7 @@ static int __init init_kprobes(void)
 
 #ifdef CONFIG_DEBUG_FS
 static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p,
-		const char *sym, int offset,char *modname)
+		const char *sym, int offset, char *modname, struct kprobe *pp)
 {
 	char *kprobe_type;
 
@@ -1287,19 +1598,21 @@ static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p,
 		kprobe_type = "j";
 	else
 		kprobe_type = "k";
+
 	if (sym)
-		seq_printf(pi, "%p  %s  %s+0x%x  %s %s%s\n",
+		seq_printf(pi, "%p  %s  %s+0x%x  %s ",
 			p->addr, kprobe_type, sym, offset,
-			(modname ? modname : " "),
-			(kprobe_gone(p) ? "[GONE]" : ""),
-			((kprobe_disabled(p) && !kprobe_gone(p)) ?
-			 "[DISABLED]" : ""));
+			(modname ? modname : " "));
 	else
-		seq_printf(pi, "%p  %s  %p %s%s\n",
-			p->addr, kprobe_type, p->addr,
-			(kprobe_gone(p) ? "[GONE]" : ""),
-			((kprobe_disabled(p) && !kprobe_gone(p)) ?
-			 "[DISABLED]" : ""));
+		seq_printf(pi, "%p  %s  %p ",
+			p->addr, kprobe_type, p->addr);
+
+	if (!pp)
+		pp = p;
+	seq_printf(pi, "%s%s%s\n",
+		(kprobe_gone(p) ? "[GONE]" : ""),
+		((kprobe_disabled(p) && !kprobe_gone(p)) ?  "[DISABLED]" : ""),
+		(kprobe_optimized(pp) ? "[OPTIMIZED]" : ""));
 }
 
 static void __kprobes *kprobe_seq_start(struct seq_file *f, loff_t *pos)
@@ -1335,11 +1648,11 @@ static int __kprobes show_kprobe_addr(struct seq_file *pi, void *v)
 	hlist_for_each_entry_rcu(p, node, head, hlist) {
 		sym = kallsyms_lookup((unsigned long)p->addr, NULL,
 					&offset, &modname, namebuf);
-		if (p->pre_handler == aggr_pre_handler) {
+		if (kprobe_aggrprobe(p)) {
 			list_for_each_entry_rcu(kp, &p->list, list)
-				report_probe(pi, kp, sym, offset, modname);
+				report_probe(pi, kp, sym, offset, modname, p);
 		} else
-			report_probe(pi, p, sym, offset, modname);
+			report_probe(pi, p, sym, offset, modname, NULL);
 	}
 	preempt_enable();
 	return 0;
@@ -1447,7 +1760,7 @@ static void __kprobes arm_all_kprobes(void)
 		head = &kprobe_table[i];
 		hlist_for_each_entry_rcu(p, node, head, hlist)
 			if (!kprobe_disabled(p))
-				arch_arm_kprobe(p);
+				__arm_kprobe(p);
 	}
 	mutex_unlock(&text_mutex);
 
@@ -1479,7 +1792,7 @@ static void __kprobes disarm_all_kprobes(void)
 		head = &kprobe_table[i];
 		hlist_for_each_entry_rcu(p, node, head, hlist) {
 			if (!arch_trampoline_kprobe(p) && !kprobe_disabled(p))
-				arch_disarm_kprobe(p);
+				__disarm_kprobe(p);
 		}
 	}
 


-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC][ PATCH -tip v2 5/7] kprobes: x86: cleanup save/restore registers
  2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
                   ` (3 preceding siblings ...)
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 4/7] kprobes: kprobes jump optimization core Masami Hiramatsu
@ 2009-06-22 21:23 ` Masami Hiramatsu
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 6/7] kprobes: x86: support kprobes jump optimization on x86 Masami Hiramatsu
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-22 21:23 UTC (permalink / raw)
  To: Ingo Molnar, Ananth N Mavinakayanahalli, lkml
  Cc: systemtap, DLE, Masami Hiramatsu, Ananth N Mavinakayanahalli,
	Ingo Molnar, Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Steven Rostedt, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

Introduce SAVE/RESOTRE_REGS_STRING for cleanup kretprobe-trampoline asm
code. These macros will be used for emulating interruption.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Anders Kaseorg <andersk@ksplice.com>
Cc: Tim Abbott <tabbott@ksplice.com>
---

 arch/x86/kernel/kprobes.c |  128 ++++++++++++++++++++++++---------------------
 1 files changed, 67 insertions(+), 61 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index b77e050..979b7ed 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -568,6 +568,69 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
 	return 0;
 }
 
+#ifdef CONFIG_X86_64
+#define SAVE_REGS_STRING		\
+	/* Skip cs, ip, orig_ax. */	\
+	"	subq $24, %rsp\n"	\
+	"	pushq %rdi\n"		\
+	"	pushq %rsi\n"		\
+	"	pushq %rdx\n"		\
+	"	pushq %rcx\n"		\
+	"	pushq %rax\n"		\
+	"	pushq %r8\n"		\
+	"	pushq %r9\n"		\
+	"	pushq %r10\n"		\
+	"	pushq %r11\n"		\
+	"	pushq %rbx\n"		\
+	"	pushq %rbp\n"		\
+	"	pushq %r12\n"		\
+	"	pushq %r13\n"		\
+	"	pushq %r14\n"		\
+	"	pushq %r15\n"
+#define RESTORE_REGS_STRING		\
+	"	popq %r15\n"		\
+	"	popq %r14\n"		\
+	"	popq %r13\n"		\
+	"	popq %r12\n"		\
+	"	popq %rbp\n"		\
+	"	popq %rbx\n"		\
+	"	popq %r11\n"		\
+	"	popq %r10\n"		\
+	"	popq %r9\n"		\
+	"	popq %r8\n"		\
+	"	popq %rax\n"		\
+	"	popq %rcx\n"		\
+	"	popq %rdx\n"		\
+	"	popq %rsi\n"		\
+	"	popq %rdi\n"		\
+	/* Skip orig_ax, ip, cs */	\
+	"	addq $24, %rsp\n"
+#else
+#define SAVE_REGS_STRING		\
+	/* Skip cs, ip, orig_ax and gs. */	\
+	"	subl $16, %esp\n"	\
+	"	pushl %fs\n"		\
+	"	pushl %ds\n"		\
+	"	pushl %es\n"		\
+	"	pushl %eax\n"		\
+	"	pushl %ebp\n"		\
+	"	pushl %edi\n"		\
+	"	pushl %esi\n"		\
+	"	pushl %edx\n"		\
+	"	pushl %ecx\n"		\
+	"	pushl %ebx\n"
+#define RESTORE_REGS_STRING		\
+	"	popl %ebx\n"		\
+	"	popl %ecx\n"		\
+	"	popl %edx\n"		\
+	"	popl %esi\n"		\
+	"	popl %edi\n"		\
+	"	popl %ebp\n"		\
+	"	popl %eax\n"		\
+	/* Skip ds, es, fs, gs, orig_ax, and ip. Note: don't pop cs here*/\
+	"	addl $24, %esp\n"
+#endif
+
 /*
  * When a retprobed function returns, this code saves registers and
  * calls trampoline_handler() runs, which calls the kretprobe's handler.
@@ -581,65 +644,16 @@ static void __used __kprobes kretprobe_trampoline_holder(void)
 			/* We don't bother saving the ss register */
 			"	pushq %rsp\n"
 			"	pushfq\n"
-			/*
-			 * Skip cs, ip, orig_ax.
-			 * trampoline_handler() will plug in these values
-			 */
-			"	subq $24, %rsp\n"
-			"	pushq %rdi\n"
-			"	pushq %rsi\n"
-			"	pushq %rdx\n"
-			"	pushq %rcx\n"
-			"	pushq %rax\n"
-			"	pushq %r8\n"
-			"	pushq %r9\n"
-			"	pushq %r10\n"
-			"	pushq %r11\n"
-			"	pushq %rbx\n"
-			"	pushq %rbp\n"
-			"	pushq %r12\n"
-			"	pushq %r13\n"
-			"	pushq %r14\n"
-			"	pushq %r15\n"
+			SAVE_REGS_STRING
 			"	movq %rsp, %rdi\n"
 			"	call trampoline_handler\n"
 			/* Replace saved sp with true return address. */
 			"	movq %rax, 152(%rsp)\n"
-			"	popq %r15\n"
-			"	popq %r14\n"
-			"	popq %r13\n"
-			"	popq %r12\n"
-			"	popq %rbp\n"
-			"	popq %rbx\n"
-			"	popq %r11\n"
-			"	popq %r10\n"
-			"	popq %r9\n"
-			"	popq %r8\n"
-			"	popq %rax\n"
-			"	popq %rcx\n"
-			"	popq %rdx\n"
-			"	popq %rsi\n"
-			"	popq %rdi\n"
-			/* Skip orig_ax, ip, cs */
-			"	addq $24, %rsp\n"
+			RESTORE_REGS_STRING
 			"	popfq\n"
 #else
 			"	pushf\n"
-			/*
-			 * Skip cs, ip, orig_ax and gs.
-			 * trampoline_handler() will plug in these values
-			 */
-			"	subl $16, %esp\n"
-			"	pushl %fs\n"
-			"	pushl %es\n"
-			"	pushl %ds\n"
-			"	pushl %eax\n"
-			"	pushl %ebp\n"
-			"	pushl %edi\n"
-			"	pushl %esi\n"
-			"	pushl %edx\n"
-			"	pushl %ecx\n"
-			"	pushl %ebx\n"
+			SAVE_REGS_STRING
 			"	movl %esp, %eax\n"
 			"	call trampoline_handler\n"
 			/* Move flags to cs */
@@ -647,15 +661,7 @@ static void __used __kprobes kretprobe_trampoline_holder(void)
 			"	movl %edx, 52(%esp)\n"
 			/* Replace saved flags with true return address. */
 			"	movl %eax, 56(%esp)\n"
-			"	popl %ebx\n"
-			"	popl %ecx\n"
-			"	popl %edx\n"
-			"	popl %esi\n"
-			"	popl %edi\n"
-			"	popl %ebp\n"
-			"	popl %eax\n"
-			/* Skip ds, es, fs, gs, orig_ax and ip */
-			"	addl $24, %esp\n"
+			RESTORE_REGS_STRING
 			"	popf\n"
 #endif
 			"	ret\n");


-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC][ PATCH -tip v2 6/7] kprobes: x86: support kprobes jump optimization on x86
  2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
                   ` (4 preceding siblings ...)
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 5/7] kprobes: x86: cleanup save/restore registers Masami Hiramatsu
@ 2009-06-22 21:23 ` Masami Hiramatsu
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 7/7] kprobes: add documents of jump optimization Masami Hiramatsu
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-22 21:23 UTC (permalink / raw)
  To: Ingo Molnar, Ananth N Mavinakayanahalli, lkml
  Cc: systemtap, DLE, Masami Hiramatsu, Ananth N Mavinakayanahalli,
	Ingo Molnar, Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Steven Rostedt, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

Introduce x86 arch-specific optimization code, which supports both of
x86-32 and x86-64.

This code also supports safety checking, which decodes whole of a function
in which probe is inserted, and checks following conditions before
optimization:
 - The optimized instructions which will be replaced by a jump instruction
   don't straddle the function boundary.
 - There is no indirect jump instruction, because it will jumps into
   the address range which is replaced by jump operand.
 - There is no jump/loop instruction which jumps into the address range
   which is replaced by jump operand.

Changes from v1:
 - Add INT3_SIZE to regs->ip when hitting the optimized handler due to
   keeping compatibility with non-optimized kprobes.
 - Don't optimize kprobes if it is in functions into which fixup code will
   jumps.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Anders Kaseorg <andersk@ksplice.com>
Cc: Tim Abbott <tabbott@ksplice.com>
---

 arch/x86/Kconfig               |    1 
 arch/x86/include/asm/kprobes.h |   31 +++
 arch/x86/kernel/kprobes.c      |  406 ++++++++++++++++++++++++++++++++++++++--
 3 files changed, 415 insertions(+), 23 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7e5ec11..dc4e134 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -30,6 +30,7 @@ config X86
 	select ARCH_WANT_FRAME_POINTERS
 	select HAVE_DMA_ATTRS
 	select HAVE_KRETPROBES
+	select HAVE_OPTPROBES
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FUNCTION_TRACER
diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 4fe681d..cacc5ea 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -32,7 +32,10 @@ struct kprobe;
 
 typedef u8 kprobe_opcode_t;
 #define BREAKPOINT_INSTRUCTION	0xcc
-#define RELATIVEJUMP_INSTRUCTION 0xe9
+#define RELATIVEJUMP_OPCODE 0xe9
+#define RELATIVECALL_OPCODE 0xe8
+#define RELATIVE_ADDR_SIZE 4
+#define RELATIVE_JUMP_SIZE (sizeof(kprobe_opcode_t) + RELATIVE_ADDR_SIZE)
 #define MAX_INSN_SIZE 16
 #define MAX_STACK_SIZE 64
 #define MIN_STACK_SIZE(ADDR)					       \
@@ -44,6 +47,17 @@ typedef u8 kprobe_opcode_t;
 
 #define flush_insn_slot(p)	do { } while (0)
 
+/* optinsn template addresses */
+extern kprobe_opcode_t optprobe_template_entry;
+extern kprobe_opcode_t optprobe_template_val;
+extern kprobe_opcode_t optprobe_template_call;
+extern kprobe_opcode_t optprobe_template_end;
+#define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
+#define MAX_OPTINSN_SIZE 				\
+	(((unsigned long)&optprobe_template_end -	\
+	  (unsigned long)&optprobe_template_entry) +	\
+	 MAX_OPTIMIZED_LENGTH + RELATIVE_JUMP_SIZE)
+
 extern const int kretprobe_blacklist_size;
 
 void arch_remove_kprobe(struct kprobe *p);
@@ -64,6 +78,21 @@ struct arch_specific_insn {
 	int boostable;
 };
 
+struct arch_optimized_insn {
+	/* copy of the original instructions */
+	kprobe_opcode_t copied_insn[RELATIVE_ADDR_SIZE];
+	/* detour code buffer */
+	kprobe_opcode_t *insn;
+	/* the size of instructions copied to detour code buffer */
+	size_t size;
+};
+
+/* Return true (!0) if optinsn is prepared for optimization. */
+static inline int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->size;
+}
+
 struct prev_kprobe {
 	struct kprobe *kp;
 	unsigned long status;
diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index 979b7ed..35e1f2f 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -118,16 +118,36 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {
 };
 const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
 
-/* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
-static void __kprobes set_jmp_op(void *from, void *to)
+/*
+ * On pentium series, Unsynchronized cross-modifying code
+ * operations can cause unexpected instruction execution results.
+ * So after code modified, we should synchronize it on each processor.
+ */
+static void __local_serialize_cpu(void *info)
+{
+	sync_core();
+}
+
+void arch_serialize_cpus(void)
 {
-	struct __arch_jmp_op {
-		char op;
+	on_each_cpu(__local_serialize_cpu, NULL, 1);
+}
+
+static void __kprobes __synthesize_relative_insn(void *from, void *to, u8 op)
+{
+	struct __arch_relative_insn {
+		u8 op;
 		s32 raddr;
-	} __attribute__((packed)) * jop;
-	jop = (struct __arch_jmp_op *)from;
-	jop->raddr = (s32)((long)(to) - ((long)(from) + 5));
-	jop->op = RELATIVEJUMP_INSTRUCTION;
+	} __attribute__((packed)) *insn;
+	insn = (struct __arch_relative_insn *)from;
+	insn->raddr = (s32)((long)(to) - ((long)(from) + 5));
+	insn->op = op;
+}
+
+/* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
+static void __kprobes synthesize_reljump(void *from, void *to)
+{
+	__synthesize_relative_insn(from, to, RELATIVEJUMP_OPCODE);
 }
 
 /*
@@ -214,7 +234,7 @@ static int recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
 	/*
 	 *  Basically, kp->ainsn.insn has an original instruction.
 	 *  However, RIP-relative instruction can not do single-stepping
-	 *  at different place, fix_riprel() tweaks the displacement of
+	 *  at different place, __copy_instruction() tweaks the displacement of
 	 *  that instruction. In that case, we can't recover the instruction
 	 *  from the kp->ainsn.insn.
 	 *
@@ -292,21 +312,37 @@ static int __kprobes is_IF_modifier(kprobe_opcode_t *insn)
 }
 
 /*
- * Adjust the displacement if the instruction uses the %rip-relative
- * addressing mode.
+ * Copy an instruction and adjust the displacement if the instruction
+ * uses the %rip-relative addressing mode.
  * If it does, Return the address of the 32-bit displacement word.
  * If not, return null.
  * Only applicable to 64-bit x86.
  */
-static void __kprobes fix_riprel(struct kprobe *p)
+static int __kprobes __copy_instruction(u8 *dest, u8 *src, int recover)
 {
-#ifdef CONFIG_X86_64
 	struct insn insn;
-	kernel_insn_init(&insn, p->ainsn.insn);
+	int ret;
+	kprobe_opcode_t buf[MAX_INSN_SIZE];
+
+	kernel_insn_init(&insn, src);
+	if (recover) {
+		insn_get_opcode(&insn);
+		if (OPCODE1(&insn) == BREAKPOINT_INSTRUCTION) {
+			ret = recover_probed_instruction(buf,
+							 (unsigned long)src);
+			if (ret)
+				return 0;
+			kernel_insn_init(&insn, buf);
+		}
+	}
+	insn_get_length(&insn);
+	memcpy(dest, insn.kaddr, insn.length);
 
+#ifdef CONFIG_X86_64
 	if (insn_rip_relative(&insn)) {
 		s64 newdisp;
 		u8 *disp;
+		kernel_insn_init(&insn, dest);
 		insn_get_displacement(&insn);
 		/*
 		 * The copied instruction uses the %rip-relative addressing
@@ -320,20 +356,23 @@ static void __kprobes fix_riprel(struct kprobe *p)
 		 * extension of the original signed 32-bit displacement would
 		 * have given.
 		 */
-		newdisp = (u8 *) p->addr + (s64) insn.displacement.value -
-			  (u8 *) p->ainsn.insn;
+		newdisp = (u8 *) src + (s64) insn.displacement.value -
+			  (u8 *) dest;
 		BUG_ON((s64) (s32) newdisp != newdisp); /* Sanity check.  */
-		disp = (u8 *) p->ainsn.insn + INSN_DISPLACEMENT_OFFS(&insn);
+		disp = (u8 *) dest + INSN_DISPLACEMENT_OFFS(&insn);
 		*(s32 *) disp = (s32) newdisp;
 	}
 #endif
+	return insn.length;
 }
 
 static void __kprobes arch_copy_kprobe(struct kprobe *p)
 {
-	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
-
-	fix_riprel(p);
+	/*
+	 * Copy an instruction without recovering int3, because it will be
+	 * put by another subsystem.
+	 */
+	__copy_instruction(p->ainsn.insn, p->addr, 0);
 
 	if (can_boost(p->addr))
 		p->ainsn.boostable = 0;
@@ -829,8 +868,8 @@ static void __kprobes resume_execution(struct kprobe *p,
 			 * These instructions can be executed directly if it
 			 * jumps back to correct address.
 			 */
-			set_jmp_op((void *)regs->ip,
-				   (void *)orig_ip + (regs->ip - copy_ip));
+			synthesize_reljump((void *)regs->ip,
+				(void *)orig_ip + (regs->ip - copy_ip));
 			p->ainsn.boostable = 1;
 		} else {
 			p->ainsn.boostable = -1;
@@ -1057,6 +1096,329 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 	return 0;
 }
 
+
+#ifdef CONFIG_OPTPROBES
+
+/* Insert a call instruction at address 'from', which calls address 'to'.*/
+static void __kprobes synthesize_relcall(void *from, void *to)
+{
+	__synthesize_relative_insn(from, to, RELATIVECALL_OPCODE);
+}
+
+/* Insert a move instruction which sets a pointer to eax/rdi (1st arg). */
+static void __kprobes synthesize_set_arg1(kprobe_opcode_t *addr,
+					  unsigned long val)
+{
+#ifdef CONFIG_X86_64
+	*addr++ = 0x48;
+	*addr++ = 0xbf;
+#else
+	*addr++ = 0xb8;
+#endif
+	*(unsigned long *)addr = val;
+}
+
+void __kprobes kprobes_optinsn_template_holder(void)
+{
+	asm volatile (
+			".global optprobe_template_entry\n"
+			"optprobe_template_entry: \n"
+#ifdef CONFIG_X86_64
+			/* We don't bother saving the ss register */
+			"	pushq %rsp\n"
+			"	pushfq\n"
+			SAVE_REGS_STRING
+			"	movq %rsp, %rsi\n"
+			".global optprobe_template_val\n"
+			"optprobe_template_val: \n"
+			ASM_NOP5
+			ASM_NOP5
+			".global optprobe_template_call\n"
+			"optprobe_template_call: \n"
+			ASM_NOP5
+			/* Move flags to rsp */
+			"	movq 144(%rsp), %rdx\n"
+			"	movq %rdx, 152(%rsp)\n"
+			RESTORE_REGS_STRING
+			/* Skip flags entry */
+			"	addq $8, %rsp\n"
+			"	popfq\n"
+#else /* CONFIG_X86_32 */
+			"	pushf\n"
+			SAVE_REGS_STRING
+			"	movl %esp, %edx\n"
+			".global optprobe_template_val\n"
+			"optprobe_template_val: \n"
+			ASM_NOP5
+			".global optprobe_template_call\n"
+			"optprobe_template_call: \n"
+			ASM_NOP5
+			RESTORE_REGS_STRING
+			"	addl $4, %esp\n"	/* skip cs */
+			"	popf\n"
+#endif
+			".global optprobe_template_end\n"
+			"optprobe_template_end: \n");
+}
+
+#define TMPL_MOVE_IDX \
+	((long)&optprobe_template_val - (long)&optprobe_template_entry)
+#define TMPL_CALL_IDX \
+	((long)&optprobe_template_call - (long)&optprobe_template_entry)
+#define TMPL_END_IDX \
+	((long)&optprobe_template_end - (long)&optprobe_template_entry)
+
+#define INT3_SIZE sizeof(kprobe_opcode_t)
+
+/* Optimized kprobe call back function: called from optinsn */
+static void __kprobes optimized_callback(struct optimized_kprobe *op,
+					 struct pt_regs *regs)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	preempt_disable();
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		/* Save skipped registers */
+#ifdef CONFIG_X86_64
+		regs->cs = __KERNEL_CS;
+#else
+		regs->cs = __KERNEL_CS | get_kernel_rpl();
+		regs->gs = 0;
+#endif
+		regs->ip = (unsigned long)op->kp.addr + INT3_SIZE;
+		regs->orig_ax = ~0UL;
+
+		__get_cpu_var(current_kprobe) = &op->kp;
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__get_cpu_var(current_kprobe) = NULL;
+	}
+	preempt_enable_no_resched();
+}
+
+static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
+{
+	int len = 0, ret;
+	while (len < RELATIVE_JUMP_SIZE) {
+		ret = __copy_instruction(dest + len, src + len, 1);
+		if (!ret || !can_boost(dest + len))
+			return -EINVAL;
+		len += ret;
+	}
+	return len;
+}
+
+/* Check whether insn is indirect jump */
+static int __kprobes insn_is_indirect_jump(struct insn *insn)
+{
+	return (OPCODE1(insn) == 0xff || OPCODE1(insn) == 0xea);
+}
+
+/* Check whether insn jumps into specified address range */
+static int insn_jump_into_range(struct insn *insn, unsigned long start, int len)
+{
+	unsigned long target = 0;
+	switch (OPCODE1(insn)) {
+	case 0xe0:	/* loopne */
+	case 0xe1:	/* loope */
+	case 0xe2:	/* loop */
+	case 0xe3:	/* jcxz */
+	case 0xe9:	/* near relative jump */
+	case 0xeb:	/* short relative jump */
+		break;
+	case 0x0f:
+		if ((OPCODE2(insn) & 0xf0) == 0x80) /* jcc near */
+			break;
+		return 0;
+	default:
+		if ((OPCODE1(insn) & 0xf0) == 0x70) /* jcc short */
+			break;
+		return 0;
+	}
+	target = (unsigned long)insn->next_byte + insn->immediate.value;
+	return (start <= target && target <= start + len);
+}
+
+/* Decode whole function to ensure any instructions don't jump into target */
+static int __kprobes can_optimize(unsigned long paddr)
+{
+	int ret;
+	unsigned long addr, size = 0, offset = 0;
+	struct insn insn;
+	kprobe_opcode_t buf[MAX_INSN_SIZE];
+	/* Dummy buffers for lookup_symbol_attrs */
+	static char __dummy_buf[KSYM_NAME_LEN];
+
+	/* Lookup symbol including addr */
+	if (!kallsyms_lookup(paddr, &size, &offset, NULL, __dummy_buf))
+		return 0;
+
+	/* Check there is enough space for a relative jump. */
+	if (size - offset < RELATIVE_JUMP_SIZE)
+		return 0;
+
+	/* Decode instructions */
+	addr = paddr - offset;
+	while (addr < paddr - offset + size) { /* Decode until function end */
+		if (search_exception_tables(addr))
+			/*
+			 * Since some fixup code will jumps into this function,
+			 * we can't optimize kprobe in this function.
+			 */
+			return 0;
+		kernel_insn_init(&insn, (void *)addr);
+		insn_get_opcode(&insn);
+		if (OPCODE1(&insn) == BREAKPOINT_INSTRUCTION) {
+			ret = recover_probed_instruction(buf, addr);
+			if (ret)
+				return 0;
+			kernel_insn_init(&insn, buf);
+		}
+		insn_get_length(&insn);
+		/* Recover address */
+		insn.kaddr = (void *)addr;
+		insn.next_byte = (void *)(addr + insn.length);
+		/* Check any instructions don't jump into target */
+		if (insn_is_indirect_jump(&insn) ||
+		    insn_jump_into_range(&insn, paddr + INT3_SIZE,
+					 RELATIVE_ADDR_SIZE))
+			return 0;
+		addr += insn.length;
+	}
+
+	return 1;
+}
+
+/* Check optimized_kprobe can actually be optimized. */
+int __kprobes arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	int i;
+	for (i = 1; i < op->optinsn.size; i++)
+		if (get_kprobe(op->kp.addr + i))
+			return -EEXIST;
+	return 0;
+}
+
+/* Check the addr is within the optimized instructions. */
+int __kprobes arch_within_optimized_kprobe(struct optimized_kprobe *op,
+					   unsigned long addr)
+{
+	return ((unsigned long)op->kp.addr <= addr &&
+		(unsigned long)op->kp.addr + op->optinsn.size > addr);
+}
+
+/* Free optimized instruction slot */
+static __kprobes
+void __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
+{
+	if (op->optinsn.insn) {
+		free_optinsn_slot(op->optinsn.insn, dirty);
+		op->optinsn.insn = NULL;
+		op->optinsn.size = 0;
+	}
+}
+
+void __kprobes arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	__arch_remove_optimized_kprobe(op, 1);
+}
+
+/*
+ * Copy p st processing instructions
+ * Target instructions MUST be relocatable.
+ */
+int __kprobes arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
+{
+	u8 *buf;
+	int ret;
+
+	if (!can_optimize((unsigned long)op->kp.addr))
+		return -EILSEQ;
+
+	op->optinsn.insn = get_optinsn_slot();
+	if (!op->optinsn.insn)
+		return -ENOMEM;
+
+	buf = (u8 *)op->optinsn.insn;
+
+	/* Copy instructions into the out-of-line buffer */
+	ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr);
+	if (ret < 0) {
+		__arch_remove_optimized_kprobe(op, 0);
+		return ret;
+	}
+	op->optinsn.size = ret;
+
+	/* Backup instructions which will be replaced by jump address */
+	memcpy(op->optinsn.copied_insn, op->kp.addr + INT3_SIZE,
+	       RELATIVE_ADDR_SIZE);
+
+	/* Copy arch-dep-instance from template */
+	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
+
+	/* Set probe information */
+	synthesize_set_arg1(buf + TMPL_MOVE_IDX, (unsigned long)op);
+
+	/* Set probe function call */
+	synthesize_relcall(buf + TMPL_CALL_IDX, optimized_callback);
+
+	/* Set returning jmp instruction at the tail of out-of-line buffer */
+	synthesize_reljump(buf + TMPL_END_IDX + op->optinsn.size,
+			   (u8 *)op->kp.addr + op->optinsn.size);
+
+	flush_icache_range((unsigned long) buf,
+			   (unsigned long) buf + TMPL_END_IDX +
+			   op->optinsn.size + RELATIVE_JUMP_SIZE);
+	return 0;
+}
+
+/* Replace a breakpoint (int3) with a relative jump.  */
+int __kprobes arch_optimize_kprobe(struct optimized_kprobe *op)
+{
+	kprobe_opcode_t opcode = RELATIVEJUMP_OPCODE;
+	long rel = (long)(op->optinsn.insn) -
+		   ((long)(op->kp.addr) + RELATIVE_JUMP_SIZE);
+
+	/* Insert the destination address only */
+	text_poke((void *)((char *)op->kp.addr + INT3_SIZE), &rel,
+		   RELATIVE_ADDR_SIZE);
+	arch_serialize_cpus();
+
+	/* Overwrite breakpoint to reljump */
+	text_poke(op->kp.addr, &opcode, sizeof(kprobe_opcode_t));
+	arch_serialize_cpus();
+	return 0;
+}
+
+/* Replace a relative jump with a breakpoint (int3).  */
+void __kprobes arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	/* Change (the 1st byte of) jump to int3. */
+	arch_arm_kprobe(&op->kp);
+	arch_serialize_cpus();
+	/*
+	 * Recover the instructions covered by the destination address.
+	 * The int3 will be removed by arch_disarm_kprobe()
+	 */
+	text_poke((void *)((long)op->kp.addr + INT3_SIZE),
+		  (void *)op->optinsn.copied_insn, RELATIVE_ADDR_SIZE);
+	arch_serialize_cpus();
+}
+
+/* Switch to a bypass code */
+int __kprobes arch_detour_optimized_kprobe(struct optimized_kprobe *op,
+					   struct pt_regs *regs)
+{
+	/* Detour through copied instructions */
+	regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
+	reset_current_kprobe();
+	preempt_enable_no_resched();
+	return 1;		/* Already prepared */
+}
+#endif
+
 int __init arch_init_kprobes(void)
 {
 	return 0;


-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [RFC][ PATCH -tip v2 7/7] kprobes: add documents of jump optimization
  2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
                   ` (5 preceding siblings ...)
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 6/7] kprobes: x86: support kprobes jump optimization on x86 Masami Hiramatsu
@ 2009-06-22 21:23 ` Masami Hiramatsu
  2009-06-23 11:42 ` [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Ingo Molnar
  2009-06-23 12:09 ` Andi Kleen
  8 siblings, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-22 21:23 UTC (permalink / raw)
  To: Ingo Molnar, Ananth N Mavinakayanahalli, lkml
  Cc: systemtap, DLE, Masami Hiramatsu, Ananth N Mavinakayanahalli,
	Ingo Molnar, Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Steven Rostedt, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

Add documentations about kprobe jump optimization to Documentation/kprobes.txt.

Changes from v1:
 - Update description about safety checking.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Anders Kaseorg <andersk@ksplice.com>
Cc: Tim Abbott <tabbott@ksplice.com>
---

 Documentation/kprobes.txt |  174 ++++++++++++++++++++++++++++++++++++++++++---
 1 files changed, 161 insertions(+), 13 deletions(-)

diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt
index 053037a..ca6aaad 100644
--- a/Documentation/kprobes.txt
+++ b/Documentation/kprobes.txt
@@ -1,6 +1,7 @@
 Title	: Kernel Probes (Kprobes)
 Authors	: Jim Keniston <jkenisto@us.ibm.com>
 	: Prasanna S Panchamukhi <prasanna@in.ibm.com>
+	: Masami Hiramatsu <mhiramat@redhat.com>
 
 CONTENTS
 
@@ -14,6 +15,7 @@ CONTENTS
 8. Kprobes Example
 9. Jprobes Example
 10. Kretprobes Example
+11. Optimization Example
 Appendix A: The kprobes debugfs interface
 
 1. Concepts: Kprobes, Jprobes, Return Probes
@@ -42,13 +44,13 @@ registration/unregistration of a group of *probes. These functions
 can speed up unregistration process when you have to unregister
 a lot of probes at once.
 
-The next three subsections explain how the different types of
-probes work.  They explain certain things that you'll need to
-know in order to make the best use of Kprobes -- e.g., the
-difference between a pre_handler and a post_handler, and how
-to use the maxactive and nmissed fields of a kretprobe.  But
-if you're in a hurry to start using Kprobes, you can skip ahead
-to section 2.
+The next four subsections explain how the different types of
+probes work and how the optimization works.  They explain certain
+things that you'll need to know in order to make the best use of
+Kprobes -- e.g., the difference between a pre_handler and
+a post_handler, and how to use the maxactive and nmissed fields of
+a kretprobe.  But if you're in a hurry to start using Kprobes, you
+can skip ahead to section 2.
 
 1.1 How Does a Kprobe Work?
 
@@ -161,13 +163,107 @@ In case probed function is entered but there is no kretprobe_instance
 object available, then in addition to incrementing the nmissed count,
 the user entry_handler invocation is also skipped.
 
+1.4 How Does the Optimization Work?
+
+ If you configured kernel with CONFIG_OPTPROBES=y (currently this option is
+supported on x86/x86-64, non-preemptive kernel), kprobes tries to use a
+jump instruction instead of breakpoint instruction automatically.
+
+1.4.1 Init a Kprobe
+
+ Before preparing optimization, Kprobes inserts original(user-defined)
+kprobe on the specified address. So, even if the kprobe is not
+possible to be optimized, it just uses a normal kprobe.
+
+1.4.2 Safety check
+
+ First, Kprobes gets the address of probed function and checks whether the
+optimized region, which will be replaced by a jump instruction, does NOT
+straddle the function boundary, because if the optimized region reaches the
+next function, its caller causes unexpected results.
+ Next, Kprobes decodes whole body of probed function and checks there is
+NO indirect jump, NO instruction which will cause exception by checking
+exception_tables (this will jump to fixup code and fixup code jumps into
+same function body) and NO near jump which jumps into the optimized region
+(except the 1st byte of jump), because if some jump instruction jumps
+into the middle of another instruction, it causes unexpected results too.
+ Kprobes also measures the length of instructions which will be replaced
+by a jump instruction, because a jump instruction is longer than 1 byte,
+it may replaces multiple instructions, and it checks whether those
+instructions can be executed out-of-line.
+
+1.4.3 Preparing detour buffer
+
+ Then, Kprobes prepares "detour" buffer, which contains exception emulating
+code (push/pop registers, call handler), copied instructions(Kprobes copies
+instructions which will be replaced by a jump, to the detour buffer), and
+a jump which jumps back to the original execution path.
+
+1.4.4 Pre-optimization
+
+ After preparing detour buffer, Kprobes checks that the probe is *NOT* in
+the below cases;
+ - The probe has either break_handler or post_handler.
+ - Other probes are probing the instructions which will be replaced by
+   a jump instruction.
+ - The probe is disabled.
+In above cases, Kprobes just doesn't start optimizating the probe.
+
+ If the kprobe can be optimized, Kprobes enqueues the kprobe to optimizing
+list and kicks kprobe-optimizer workqueue to optimize it. To wait other
+optimized probes, kprobe-optimizer will delay to work.
+ When the optimized-kprobe is hit before optimization, its handler changes
+IP(instruction pointer) to copied code and exits. So, the instructions which
+were copied to detour buffer are executed on the detour buffer.
+
+1.4.5 Optimization
+
+ Kprobe-optimizer doesn't start instruction-replacing soon, it waits
+synchronize_sched for safety, because some processors are possible to be
+interrupted on the instructions which will be replaced by a jump instruction.
+As you know, synchronize_sched() can ensure that all interruptions which were
+executed when synchronize_sched() was called are done, only if
+CONFIG_PREEMPT=n. So, this version supports only the kernel with
+CONFIG_PREEMPT=n.(*)
+ After that, kprobe-optimizer replaces the 4 bytes right after int3
+breakpoint with relative-jump destination, and synchronize caches on all
+processors. And then, it replaces int3 with relative-jump opcode, and
+synchronize caches again.
+
+ After optimizing the probe, a CPU hits the jump instruction and jumps to
+the out-of-line buffer directly. Thus the breakpoint exception is skipped.
+
+1.4.6 Unoptimization
+
+ When unregistering, disabling kprobe or being blocked by other kprobe,
+an optimized-kprobe will be unoptimized. Before kprobe-optimizer runs,
+the kprobe just be dequeued from the optimized list. When the optimization
+has been done, it replaces a jump with int3 breakpoint and original code.
+ First it puts int3 at the first byte of the jump, synchronize caches
+on all processors, replaces the 4 bytes right after int3 with the original
+code and synchronize caches again.
+
+(*)This optimization-safety checking may be replaced with stop-machine method
+ which ksplice is done for supporting CONFIG_PREEMPT=y kernel.
+
+NOTE for geeks:
+The jump optimization changes the kprobe's pre_handler behavior.
+Without optimization, pre_handler can change kernel execution path by
+changing regs->ip and return 1. However, after optimizing the probe,
+that modification is ignored. Thus, if you'd like to tweak kernel
+execution path, you need to avoid optimization. In that case, you can
+choose either,
+ - Set empty function to post_handler or break_handler.
+ or
+ - Config CONFIG_OPTPROBES=n.
+
 2. Architectures Supported
 
 Kprobes, jprobes, and return probes are implemented on the following
 architectures:
 
-- i386
-- x86_64 (AMD-64, EM64T)
+- i386 (Supports jump optimization)
+- x86_64 (AMD-64, EM64T) (Supports jump optimization)
 - ppc64
 - ia64 (Does not support probes on instruction slot1.)
 - sparc64 (Return probes not yet implemented.)
@@ -193,6 +289,10 @@ it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
 so you can use "objdump -d -l vmlinux" to see the source-to-object
 code mapping.
 
+If you want to reduce probing overhead, set "Kprobes jump optimization
+support" (CONFIG_OPTPROBES) to "y". You can find this option under
+"Kprobes" line.
+
 4. API Reference
 
 The Kprobes API includes a "register" function and an "unregister"
@@ -387,9 +487,12 @@ the probe which has been registered.
 
 5. Kprobes Features and Limitations
 
-Kprobes allows multiple probes at the same address.  Currently,
-however, there cannot be multiple jprobes on the same function at
-the same time.
+Kprobes allows multiple probes at the same address even if it is optimized.
+Currently, however, there cannot be multiple jprobes on the same function
+at the same time. And also, optimized kprobes can not invoke the
+post_handler and the break_handler. So if you attempt to install the probe
+which has the the post_handler or the break_handler at the same address of
+an optimized kprobe, the probe will be unoptimized automatically.
 
 In general, you can install a probe anywhere in the kernel.
 In particular, you can probe interrupt handlers.  Known exceptions
@@ -453,6 +556,37 @@ reason, Kprobes doesn't support return probes (or kprobes or jprobes)
 on the x86_64 version of __switch_to(); the registration functions
 return -EINVAL.
 
+On x86/x86-64, since the Jump Optimization of Kprobes modifies instructions
+widely, there are some limitations for optimization. To explain it,
+we introduce some terminology. Image certain binary line which is
+constructed by 2 byte instruction, 2byte instruction and 3byte instruction.
+
+        IA
+         |
+[-2][-1][0][1][2][3][4][5][6][7]
+        [ins1][ins2][  ins3 ]
+	[<-     DCR       ->]
+	   [<- JTPR ->]
+
+ins1: 1st Instruction
+ins2: 2nd Instruction
+ins3: 3rd Instruction
+IA:  Insertion Address
+JTPR: Jump Target Prohibition Region
+DCR: Detoured Code Region
+
+The instructions in DCR are copied to the out-of-line buffer
+of the djprobe instance, because the bytes in JTPR are replaced by
+a jump instruction. So, there are several limitations.
+
+a) The instructions in DCR must be relocatable.
+b) The instructions in DCR must not include call instruction.
+c) JTPR must not be targeted by any jump or call instruction.
+d) DCR must not straddle the border betweeen functions.
+
+Anyway, these limitations are checked by in-kernel instruction decoder,
+so you don't need to care about that.
+
 6. Probe Overhead
 
 On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
@@ -476,6 +610,19 @@ k = 0.49 usec; j = 0.76; r = 0.80; kr = 0.82; jr = 1.07
 ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU)
 k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99
 
+6.1 Optimized Probe Overhead
+
+Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
+process. Here are sample overhead figures (in usec) for x86-64 architectures.
+k = unoptimized kprobe, b = boosted(single-step skipped), o = optimized kprobe,
+r = unoptimized kretprobe, rb = boosted kretprobe, ro = optimized kretprobe.
+
+i386: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
+k = 0.68 usec; b = 0.27; o = 0.06; r = 0.95; rb = 0.53; ro = 0.30
+
+x86-64: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
+k = 0.91 usec; b = 0.40; o = 0.06; r = 1.21; rb = 0.71; ro = 0.35
+
 7. TODO
 
 a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
@@ -523,7 +670,8 @@ is also specified. Following columns show probe status. If the probe is on
 a virtual address that is no longer valid (module init sections, module
 virtual addresses that correspond to modules that've been unloaded),
 such probes are marked with [GONE]. If the probe is temporarily disabled,
-such probes are marked with [DISABLED].
+such probes are marked with [DISABLED]. If the probe is optimized, it is
+marked with [OPTIMIZED].
 
 /sys/kernel/debug/kprobes/enabled: Turn kprobes ON/OFF forcibly.
 


-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
                   ` (6 preceding siblings ...)
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 7/7] kprobes: add documents of jump optimization Masami Hiramatsu
@ 2009-06-23 11:42 ` Ingo Molnar
  2009-06-23 14:06   ` Masami Hiramatsu
  2009-06-23 12:09 ` Andi Kleen
  8 siblings, 1 reply; 29+ messages in thread
From: Ingo Molnar @ 2009-06-23 11:42 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE


* Masami Hiramatsu <mhiramat@redhat.com> wrote:

> o Usage
>  Set CONFIG_OPTPROBES=y when building a kernel, then all *probes will be
> optimized if possible.

Should be default-y if KPROBES is enabled. I.e. we really only want 
to disable it to debug potential issues.

	Ingo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
                   ` (7 preceding siblings ...)
  2009-06-23 11:42 ` [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Ingo Molnar
@ 2009-06-23 12:09 ` Andi Kleen
  2009-06-23 13:50   ` Masami Hiramatsu
  8 siblings, 1 reply; 29+ messages in thread
From: Andi Kleen @ 2009-06-23 12:09 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE

Masami Hiramatsu <mhiramat@redhat.com> writes:
>
> The gcc's crossjumping unifies equivalent code by inserting indirect
> jumps which jump into other function body. It is hard to know to where
> these jumps jump, so I decided to disable it when setting
> CONFIG_OPTPROBES=y.

That sounds quite bad. Tail call optimization is an important optimization
that especially on kernel style code (lots of indirect pointers
and sometimes deep call chains) is very useful.  It would be quite
sad if production kernels would lose that optimization.

Also tail calls in C should always jump directly to another function,
so they shouldn't be particularly complex to manage.

> I also decided not to optimize probes when it is in functions which
> will cause exceptions, because the exception in the kernel will jump
> to a fixup code and the fixup code jumps back to the middle of the
> same function body.

Note that not only exceptions do that, there are a few other cases
where jumps in and out of out of line sections happen. You might
need a more general mechanism to detect this.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 4/7] kprobes: kprobes jump optimization core
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 4/7] kprobes: kprobes jump optimization core Masami Hiramatsu
@ 2009-06-23 12:58   ` Srikar Dronamraju
  2009-06-23 13:11     ` Masami Hiramatsu
  0 siblings, 1 reply; 29+ messages in thread
From: Srikar Dronamraju @ 2009-06-23 12:58 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, systemtap, DLE,
	Jim Keniston, Christoph Hellwig, Steven Rostedt,
	Frederic Weisbecker, H. Peter Anvin, Anders Kaseorg, Tim Abbott

> +	/* .insn_size is initialized later */
> +	.nr_garbage = 0,
> +};

> 
> +#if defined(CONFIG_OPTPROBES) && defined(__ARCH_WANT_KPROBES_INSN_SLOT)
> +	/* Init kprobe_optinsn_slots */
> +	kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
> +#endif
> +

What would be be the size of kprobe_optinsn_slots.insn_size if
CONFIG_OPTPROBES is defined but __ARCH_WANT_KPROBES_INSN_SLOT is not
defined? 

-Srikar

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 4/7] kprobes: kprobes jump optimization core
  2009-06-23 12:58   ` Srikar Dronamraju
@ 2009-06-23 13:11     ` Masami Hiramatsu
  0 siblings, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-23 13:11 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, systemtap, DLE,
	Jim Keniston, Christoph Hellwig, Steven Rostedt,
	Frederic Weisbecker, H. Peter Anvin, Anders Kaseorg, Tim Abbott

Srikar Dronamraju wrote:
>> +	/* .insn_size is initialized later */
>> +	.nr_garbage = 0,
>> +};
> 
>> +#if defined(CONFIG_OPTPROBES) && defined(__ARCH_WANT_KPROBES_INSN_SLOT)
>> +	/* Init kprobe_optinsn_slots */
>> +	kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
>> +#endif
>> +
> 
> What would be be the size of kprobe_optinsn_slots.insn_size if
> CONFIG_OPTPROBES is defined but __ARCH_WANT_KPROBES_INSN_SLOT is not
> defined? 

No problem. If __ARCH_WANT_KPROBES_INSN_SLOT is not defined,
kprobe_optinsn_slots doesn't exist.

Thank you,
> 
> -Srikar

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-23 12:09 ` Andi Kleen
@ 2009-06-23 13:50   ` Masami Hiramatsu
  2009-06-23 16:34     ` Andi Kleen
  0 siblings, 1 reply; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-23 13:50 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE

Hi Andi,

Andi Kleen wrote:
> Masami Hiramatsu <mhiramat@redhat.com> writes:
>> The gcc's crossjumping unifies equivalent code by inserting indirect
>> jumps which jump into other function body. It is hard to know to where
>> these jumps jump, so I decided to disable it when setting
>> CONFIG_OPTPROBES=y.
> 
> That sounds quite bad. Tail call optimization is an important optimization
> that especially on kernel style code (lots of indirect pointers
> and sometimes deep call chains) is very useful.  It would be quite
> sad if production kernels would lose that optimization.

I think the crossjumping is not the tail call optimization,
http://gcc.gnu.org/onlinedocs/gcc-3.4.6/gccint/Passes.html

> 
> Also tail calls in C should always jump directly to another function,
> so they shouldn't be particularly complex to manage.

Tail call jumps directly into the head of another function,
not the middle. Thus it is safe.

>> I also decided not to optimize probes when it is in functions which
>> will cause exceptions, because the exception in the kernel will jump
>> to a fixup code and the fixup code jumps back to the middle of the
>> same function body.
> 
> Note that not only exceptions do that, there are a few other cases
> where jumps in and out of out of line sections happen. You might
> need a more general mechanism to detect this.

As far as I can see (under arch/x86), Almost all fixup entries are
defined with ex_table entries, and others jump to the head of
symbols(or functions). The jumps which jump into the middle of
some functions are what I need to find, and, as far as I know,
those fixup jumps are used with exception tables. Of course,
I might miss some fixup codes, in that case, please let me know:-)

Thank you,

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-23 11:42 ` [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Ingo Molnar
@ 2009-06-23 14:06   ` Masami Hiramatsu
  2009-06-23 19:40     ` Ingo Molnar
  0 siblings, 1 reply; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-23 14:06 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE

Ingo Molnar wrote:
> * Masami Hiramatsu <mhiramat@redhat.com> wrote:
> 
>> o Usage
>>  Set CONFIG_OPTPROBES=y when building a kernel, then all *probes will be
>> optimized if possible.
> 
> Should be default-y if KPROBES is enabled. I.e. we really only want 
> to disable it to debug potential issues.

Sure, thanks!

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization  support
  2009-06-23 13:50   ` Masami Hiramatsu
@ 2009-06-23 16:34     ` Andi Kleen
  2009-06-23 17:28       ` Masami Hiramatsu
  0 siblings, 1 reply; 29+ messages in thread
From: Andi Kleen @ 2009-06-23 16:34 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE

Masami Hiramatsu <mhiramat@redhat.com> writes:

> Hi Andi,
>
> Andi Kleen wrote:
>> Masami Hiramatsu <mhiramat@redhat.com> writes:
>>> The gcc's crossjumping unifies equivalent code by inserting indirect
>>> jumps which jump into other function body. It is hard to know to where
>>> these jumps jump, so I decided to disable it when setting
>>> CONFIG_OPTPROBES=y.
>> 
>> That sounds quite bad. Tail call optimization is an important optimization
>> that especially on kernel style code (lots of indirect pointers
>> and sometimes deep call chains) is very useful.  It would be quite
>> sad if production kernels would lose that optimization.
>
> I think the crossjumping is not the tail call optimization,
> http://gcc.gnu.org/onlinedocs/gcc-3.4.6/gccint/Passes.html

Statement didn't make sense then. The RTL crossjump pass you're referring
AFAIK does not jump into other functions, it only optimizes jumps 
inside a function (unless you're talking about inlines)
 
>> Also tail calls in C should always jump directly to another function,
>> so they shouldn't be particularly complex to manage.
>
> Tail call jumps directly into the head of another function,
> not the middle. Thus it is safe.

cross jumping does neither.

>
>>> I also decided not to optimize probes when it is in functions which
>>> will cause exceptions, because the exception in the kernel will jump
>>> to a fixup code and the fixup code jumps back to the middle of the
>>> same function body.
>> 
>> Note that not only exceptions do that, there are a few other cases
>> where jumps in and out of out of line sections happen. You might
>> need a more general mechanism to detect this.
>
> As far as I can see (under arch/x86), Almost all fixup entries are
> defined with ex_table entries, and others jump to the head of
> symbols(or functions). The jumps which jump into the middle of
> some functions are what I need to find, and, as far as I know,
> those fixup jumps are used with exception tables. Of course,
> I might miss some fixup codes, in that case, please let me know:-)

One case for example are out of line sections generated by gcc itself
with the right options.

-andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-23 16:34     ` Andi Kleen
@ 2009-06-23 17:28       ` Masami Hiramatsu
  2009-06-23 20:37         ` Andi Kleen
  0 siblings, 1 reply; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-23 17:28 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE

Andi Kleen wrote:
> Masami Hiramatsu <mhiramat@redhat.com> writes:
> 
>> Hi Andi,
>>
>> Andi Kleen wrote:
>>> Masami Hiramatsu <mhiramat@redhat.com> writes:
>>>> The gcc's crossjumping unifies equivalent code by inserting indirect
>>>> jumps which jump into other function body. It is hard to know to where
>>>> these jumps jump, so I decided to disable it when setting
>>>> CONFIG_OPTPROBES=y.
>>> That sounds quite bad. Tail call optimization is an important optimization
>>> that especially on kernel style code (lots of indirect pointers
>>> and sometimes deep call chains) is very useful.  It would be quite
>>> sad if production kernels would lose that optimization.
>> I think the crossjumping is not the tail call optimization,
>> http://gcc.gnu.org/onlinedocs/gcc-3.4.6/gccint/Passes.html
> 
> Statement didn't make sense then. The RTL crossjump pass you're referring
> AFAIK does not jump into other functions, it only optimizes jumps 
> inside a function (unless you're talking about inlines)

If so, that's a good news for me. Then just drop the disable crossjumping
patch is enough.

>>> Also tail calls in C should always jump directly to another function,
>>> so they shouldn't be particularly complex to manage.
>> Tail call jumps directly into the head of another function,
>> not the middle. Thus it is safe.
> 
> cross jumping does neither.
> 
>>>> I also decided not to optimize probes when it is in functions which
>>>> will cause exceptions, because the exception in the kernel will jump
>>>> to a fixup code and the fixup code jumps back to the middle of the
>>>> same function body.
>>> Note that not only exceptions do that, there are a few other cases
>>> where jumps in and out of out of line sections happen. You might
>>> need a more general mechanism to detect this.
>> As far as I can see (under arch/x86), Almost all fixup entries are
>> defined with ex_table entries, and others jump to the head of
>> symbols(or functions). The jumps which jump into the middle of
>> some functions are what I need to find, and, as far as I know,
>> those fixup jumps are used with exception tables. Of course,
>> I might miss some fixup codes, in that case, please let me know:-)
> 
> One case for example are out of line sections generated by gcc itself
> with the right options.

Hmm, would you know what is the actual name of that section and option?

I think another possible solution is decoding those sections and
black-listing the target functions when making vmlinux or loading
modules.

Thank you,
-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-23 14:06   ` Masami Hiramatsu
@ 2009-06-23 19:40     ` Ingo Molnar
  2009-06-23 21:54       ` Masami Hiramatsu
  0 siblings, 1 reply; 29+ messages in thread
From: Ingo Molnar @ 2009-06-23 19:40 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE


* Masami Hiramatsu <mhiramat@redhat.com> wrote:

> Ingo Molnar wrote:
> > * Masami Hiramatsu <mhiramat@redhat.com> wrote:
> > 
> >> o Usage
> >>  Set CONFIG_OPTPROBES=y when building a kernel, then all *probes will be
> >> optimized if possible.
> > 
> > Should be default-y if KPROBES is enabled. I.e. we really only want 
> > to disable it to debug potential issues.
> 
> Sure, thanks!

Plus i'd suggest a runtime control (a sysctl) as well - if it's not 
too intrusive. Since this is an optional speedup feature, distros 
can have this enabled and if there's some problem with it then it 
can still be disabled in sysctl.conf, without having to rebuild the 
kernel.

	Ingo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-23 17:28       ` Masami Hiramatsu
@ 2009-06-23 20:37         ` Andi Kleen
  2009-06-23 20:48           ` Masami Hiramatsu
  2009-06-26 23:20           ` Masami Hiramatsu
  0 siblings, 2 replies; 29+ messages in thread
From: Andi Kleen @ 2009-06-23 20:37 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Andi Kleen, Ingo Molnar, Ananth N Mavinakayanahalli, lkml,
	H. Peter Anvin, Frederic Weisbecker, Jim Keniston,
	Srikar Dronamraju, Christoph Hellwig, Steven Rostedt,
	Anders Kaseorg, Tim Abbott, systemtap, DLE

On Tue, Jun 23, 2009 at 01:28:03PM -0400, Masami Hiramatsu wrote:
> Hmm, would you know what is the actual name of that section and option?

-freorder-blocks-and-partition

.text.unlikely

> I think another possible solution is decoding those sections and
> black-listing the target functions when making vmlinux or loading
> modules.

Note that this can be pretty much all functions if you're unlucky.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-23 20:37         ` Andi Kleen
@ 2009-06-23 20:48           ` Masami Hiramatsu
  2009-06-26 23:20           ` Masami Hiramatsu
  1 sibling, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-23 20:48 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE

Andi Kleen wrote:
> On Tue, Jun 23, 2009 at 01:28:03PM -0400, Masami Hiramatsu wrote:
>> Hmm, would you know what is the actual name of that section and option?
> 
> -freorder-blocks-and-partition
> 
> .text.unlikely

Thanks,

> 
>> I think another possible solution is decoding those sections and
>> black-listing the target functions when making vmlinux or loading
>> modules.
> 
> Note that this can be pretty much all functions if you're unlucky.

Sure, in that case, I have two options;
- Disable above option by kconfig.
- Black-listing the target addresses, instead of functions.

Thank you,

> 
> -Andi
> 

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-23 19:40     ` Ingo Molnar
@ 2009-06-23 21:54       ` Masami Hiramatsu
  2009-06-23 22:08         ` Masami Hiramatsu
  0 siblings, 1 reply; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-23 21:54 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE

Ingo Molnar wrote:
>> Ingo Molnar wrote:
>>> * Masami Hiramatsu <mhiramat@redhat.com> wrote:
>>>
>>>> o Usage
>>>>  Set CONFIG_OPTPROBES=y when building a kernel, then all *probes will be
>>>> optimized if possible.
>>> Should be default-y if KPROBES is enabled. I.e. we really only want 
>>> to disable it to debug potential issues.
>> Sure, thanks!
> 
> Plus i'd suggest a runtime control (a sysctl) as well - if it's not 
> too intrusive. Since this is an optional speedup feature, distros 
> can have this enabled and if there's some problem with it then it 
> can still be disabled in sysctl.conf, without having to rebuild the 
> kernel.

The runtime control is a good idea. Btw, current kprobes already has
a runtime disable interface under sysfs. Is there any reason that we'd
better to use sysctl instead of sysfs?


Thank you,

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-23 21:54       ` Masami Hiramatsu
@ 2009-06-23 22:08         ` Masami Hiramatsu
  0 siblings, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-23 22:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE

Masami Hiramatsu wrote:
>> Plus i'd suggest a runtime control (a sysctl) as well - if it's not 
>> too intrusive. Since this is an optional speedup feature, distros 
>> can have this enabled and if there's some problem with it then it 
>> can still be disabled in sysctl.conf, without having to rebuild the 
>> kernel.
> 
> The runtime control is a good idea. Btw, current kprobes already has
> a runtime disable interface under sysfs. Is there any reason that we'd
> better to use sysctl instead of sysfs?

Oops, s/sysfs/debugfs/g.

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support
  2009-06-23 20:37         ` Andi Kleen
  2009-06-23 20:48           ` Masami Hiramatsu
@ 2009-06-26 23:20           ` Masami Hiramatsu
  1 sibling, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-26 23:20 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, H. Peter Anvin,
	Frederic Weisbecker, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Steven Rostedt, Anders Kaseorg, Tim Abbott,
	systemtap, DLE

Andi Kleen wrote:
> On Tue, Jun 23, 2009 at 01:28:03PM -0400, Masami Hiramatsu wrote:
>> Hmm, would you know what is the actual name of that section and option?
> 
> -freorder-blocks-and-partition
> 
> .text.unlikely

Hi Andi,

I've built my kernel with -freorder-blocks-and-partition, and
I saw that .text.unlikely contains whole functions, not
pieces of codes. In that case, it doesn't generate jumps which
jump to the middle of other functions.

Thank you,

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 2/7] kprobes: introducing generic insn_slot framework
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 2/7] kprobes: introducing generic insn_slot framework Masami Hiramatsu
@ 2009-06-27 20:09   ` Steven Rostedt
  2009-06-29 21:09     ` Masami Hiramatsu
  0 siblings, 1 reply; 29+ messages in thread
From: Steven Rostedt @ 2009-06-27 20:09 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, systemtap, DLE,
	Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Frederic Weisbecker, H. Peter Anvin, Anders Kaseorg, Tim Abbott


Hi Masami,

I'm currently traveling so my responses are very slow this week.


On Mon, 22 Jun 2009, Masami Hiramatsu wrote:

> Make insn_slot framework support various size slots.
> Current insn_slot just supports one-size instruction buffer slot. However,
> kprobes jump optimization needs larger size buffers.
> 
> Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Jim Keniston <jkenisto@us.ibm.com>
> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Anders Kaseorg <andersk@ksplice.com>
> Cc: Tim Abbott <tabbott@ksplice.com>
> ---
> 
>  kernel/kprobes.c |   96 +++++++++++++++++++++++++++++++++---------------------
>  1 files changed, 58 insertions(+), 38 deletions(-)
> 
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 0b68fdc..bc9cfd0 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -100,26 +100,38 @@ static struct kprobe_blackpoint kprobe_blacklist[] = {
>   * stepping on the instruction on a vmalloced/kmalloced/data page
>   * is a recipe for disaster
>   */
> -#define INSNS_PER_PAGE	(PAGE_SIZE/(MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))
> -
>  struct kprobe_insn_page {
>  	struct list_head list;
>  	kprobe_opcode_t *insns;		/* Page of instruction slots */
> -	char slot_used[INSNS_PER_PAGE];
>  	int nused;
>  	int ngarbage;
> +	char slot_used[1];

I would recommend using [] instead of [1], that would help other 
developers know that it is a variable array.



> +};
> +
> +struct kprobe_insn_cache {
> +	struct list_head pages;	/* list of kprobe_insn_page */
> +	size_t insn_size;	/* size of instruction slot */
> +	int nr_garbage;
>  };
>  
> +static int slots_per_page(struct kprobe_insn_cache *c)
> +{
> +	return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
> +}
> +
>  enum kprobe_slot_state {
>  	SLOT_CLEAN = 0,
>  	SLOT_DIRTY = 1,
>  	SLOT_USED = 2,
>  };
>  
> -static DEFINE_MUTEX(kprobe_insn_mutex);	/* Protects kprobe_insn_pages */
> -static LIST_HEAD(kprobe_insn_pages);
> -static int kprobe_garbage_slots;
> -static int collect_garbage_slots(void);
> +static DEFINE_MUTEX(kprobe_insn_mutex);	/* Protects kprobe_insn_slots */
> +static struct kprobe_insn_cache kprobe_insn_slots = {
> +	.pages = LIST_HEAD_INIT(kprobe_insn_slots.pages),
> +	.insn_size = MAX_INSN_SIZE,
> +	.nr_garbage = 0,
> +};
> +static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c);
>  
>  static int __kprobes check_safety(void)
>  {
> @@ -149,32 +161,33 @@ loop_end:
>   * __get_insn_slot() - Find a slot on an executable page for an instruction.
>   * We allocate an executable page if there's no room on existing ones.
>   */
> -static kprobe_opcode_t __kprobes *__get_insn_slot(void)
> +static kprobe_opcode_t __kprobes *__get_insn_slot(struct kprobe_insn_cache *c)
>  {
>  	struct kprobe_insn_page *kip;
>  
>   retry:
> -	list_for_each_entry(kip, &kprobe_insn_pages, list) {
> -		if (kip->nused < INSNS_PER_PAGE) {
> +	list_for_each_entry(kip, &c->pages, list) {
> +		if (kip->nused < slots_per_page(c)) {
>  			int i;
> -			for (i = 0; i < INSNS_PER_PAGE; i++) {
> +			for (i = 0; i < slots_per_page(c); i++) {
>  				if (kip->slot_used[i] == SLOT_CLEAN) {
>  					kip->slot_used[i] = SLOT_USED;
>  					kip->nused++;
> -					return kip->insns + (i * MAX_INSN_SIZE);
> +					return kip->insns + (i * c->insn_size);
>  				}
>  			}
> -			/* Surprise!  No unused slots.  Fix kip->nused. */
> -			kip->nused = INSNS_PER_PAGE;
> +			/* kip->nused is broken. */
> +			BUG();

Does this deserve a bug, or can we get away with a WARN and find a way to 
fail nicely? Is it already too late to recover?

>  		}
>  	}
>  
>  	/* If there are any garbage slots, collect it and try again. */
> -	if (kprobe_garbage_slots && collect_garbage_slots() == 0) {
> +	if (c->nr_garbage && collect_garbage_slots(c) == 0)
>  		goto retry;
> -	}
> +
>  	/* All out of space.  Need to allocate a new page. Use slot 0. */
> -	kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_KERNEL);
> +	kip = kmalloc(sizeof(struct kprobe_insn_page) + slots_per_page(c) - 1,

Why the '- 1'?  Is it because of the char [1] above?

Would be better to make the size of the kprobe_insn_page a macro:

#define KPROBE_INSN_SIZE offsetof(struct kbrobe_insn_page, slot_used)

and then you can do the following:

	kip = kmalloc(KPROBE_INSN_SIZE + slots_per_page(c));

-- Steve


> +		      GFP_KERNEL);
>  	if (!kip)
>  		return NULL;
>  
> @@ -189,19 +202,20 @@ static kprobe_opcode_t __kprobes *__get_insn_slot(void)
>  		return NULL;
>  	}
>  	INIT_LIST_HEAD(&kip->list);
> -	list_add(&kip->list, &kprobe_insn_pages);
> -	memset(kip->slot_used, SLOT_CLEAN, INSNS_PER_PAGE);
> +	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
>  	kip->slot_used[0] = SLOT_USED;
>  	kip->nused = 1;
>  	kip->ngarbage = 0;
> +	list_add(&kip->list, &c->pages);
>  	return kip->insns;
>  }
>  
> +
>  kprobe_opcode_t __kprobes *get_insn_slot(void)
>  {
> -	kprobe_opcode_t *ret;
> +	kprobe_opcode_t *ret = NULL;
>  	mutex_lock(&kprobe_insn_mutex);
> -	ret = __get_insn_slot();
> +	ret = __get_insn_slot(&kprobe_insn_slots);
>  	mutex_unlock(&kprobe_insn_mutex);
>  	return ret;
>  }
> @@ -218,7 +232,7 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
>  		 * so as not to have to set it up again the
>  		 * next time somebody inserts a probe.
>  		 */
> -		if (!list_is_singular(&kprobe_insn_pages)) {
> +		if (!list_is_singular(&kip->list)) {
>  			list_del(&kip->list);
>  			module_free(NULL, kip->insns);
>  			kfree(kip);
> @@ -228,7 +242,7 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
>  	return 0;
>  }
>  
> -static int __kprobes collect_garbage_slots(void)
> +static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c)
>  {
>  	struct kprobe_insn_page *kip, *next;
>  	int safety;
> @@ -240,42 +254,48 @@ static int __kprobes collect_garbage_slots(void)
>  	if (safety != 0)
>  		return -EAGAIN;
>  
> -	list_for_each_entry_safe(kip, next, &kprobe_insn_pages, list) {
> +	list_for_each_entry_safe(kip, next, &c->pages, list) {
>  		int i;
>  		if (kip->ngarbage == 0)
>  			continue;
>  		kip->ngarbage = 0;	/* we will collect all garbages */
> -		for (i = 0; i < INSNS_PER_PAGE; i++) {
> +		for (i = 0; i < slots_per_page(c); i++) {
>  			if (kip->slot_used[i] == SLOT_DIRTY &&
>  			    collect_one_slot(kip, i))
>  				break;
>  		}
>  	}
> -	kprobe_garbage_slots = 0;
> +	c->nr_garbage = 0;
>  	return 0;
>  }
>  
> -void __kprobes free_insn_slot(kprobe_opcode_t * slot, int dirty)
> +static void __kprobes __free_insn_slot(struct kprobe_insn_cache *c,
> +				       kprobe_opcode_t *slot, int dirty)
>  {
>  	struct kprobe_insn_page *kip;
>  
> -	mutex_lock(&kprobe_insn_mutex);
> -	list_for_each_entry(kip, &kprobe_insn_pages, list) {
> -		if (kip->insns <= slot &&
> -		    slot < kip->insns + (INSNS_PER_PAGE * MAX_INSN_SIZE)) {
> -			int i = (slot - kip->insns) / MAX_INSN_SIZE;
> +	list_for_each_entry(kip, &c->pages, list) {
> +		long idx = ((long)slot - (long)kip->insns) / c->insn_size;
> +		if (idx >= 0 && idx < slots_per_page(c)) {
> +			WARN_ON(kip->slot_used[idx] != SLOT_USED);
>  			if (dirty) {
> -				kip->slot_used[i] = SLOT_DIRTY;
> +				kip->slot_used[idx] = SLOT_DIRTY;
>  				kip->ngarbage++;
> +				if (++c->nr_garbage > slots_per_page(c))
> +					collect_garbage_slots(c);
>  			} else
> -				collect_one_slot(kip, i);
> -			break;
> +				collect_one_slot(kip, idx);
> +			return;
>  		}
>  	}
> +	/* Could not free this slot. */
> +	WARN_ON(1);
> +}
>  
> -	if (dirty && ++kprobe_garbage_slots > INSNS_PER_PAGE)
> -		collect_garbage_slots();
> -
> +void __kprobes free_insn_slot(kprobe_opcode_t * slot, int dirty)
> +{
> +	mutex_lock(&kprobe_insn_mutex);
> +	__free_insn_slot(&kprobe_insn_slots, slot, dirty);
>  	mutex_unlock(&kprobe_insn_mutex);
>  }
>  #endif
> 
> 
> -- 
> Masami Hiramatsu
> 
> Software Engineer
> Hitachi Computer Products (America), Inc.
> Software Solutions Division
> 
> e-mail: mhiramat@redhat.com
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping
  2009-06-22 21:23 ` [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping Masami Hiramatsu
@ 2009-06-27 20:13   ` Steven Rostedt
  2009-07-17 20:28     ` Sam Ravnborg
  0 siblings, 1 reply; 29+ messages in thread
From: Steven Rostedt @ 2009-06-27 20:13 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, systemtap, DLE,
	Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Frederic Weisbecker, H. Peter Anvin, Anders Kaseorg, Tim Abbott,
	sam


A change like this requires an ACK from Sam Ravnborg.

-- Steve



On Mon, 22 Jun 2009, Masami Hiramatsu wrote:

> Add CONFIG_DISABLE_CROSSJUMP option which disables gcc's cross-function
> jumping. This option is required by the kprobes jump optimization.
> 
> Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Jim Keniston <jkenisto@us.ibm.com>
> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Anders Kaseorg <andersk@ksplice.com>
> Cc: Tim Abbott <tabbott@ksplice.com>
> ---
> 
>  Makefile          |    4 ++++
>  lib/Kconfig.debug |    7 +++++++
>  2 files changed, 11 insertions(+), 0 deletions(-)
> 
> diff --git a/Makefile b/Makefile
> index 2903e13..f73b139 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -524,6 +524,10 @@ else
>  KBUILD_CFLAGS	+= -O2
>  endif
>  
> +ifdef CONFIG_DISABLE_CROSSJUMP
> +KBUILD_CFLAGS	+= -fno-crossjumping
> +endif
> +
>  include $(srctree)/arch/$(SRCARCH)/Makefile
>  
>  ifneq ($(CONFIG_FRAME_WARN),0)
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 8da7467..f88e6b8 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -673,6 +673,13 @@ config FRAME_POINTER
>  	  larger and slower, but it gives very useful debugging information
>  	  in case of kernel bugs. (precise oopses/stacktraces/warnings)
>  
> +config DISABLE_CROSSJUMP
> +	bool "Disable cross-function jump optimization"
> +	help
> +	  This build option disables cross-function jump optimization
> +	  (crossjumping) of gcc. Disabling crossjumping might increase
> +	  kernel binary size a little.
> +
>  config BOOT_PRINTK_DELAY
>  	bool "Delay each boot printk message by N milliseconds"
>  	depends on DEBUG_KERNEL && PRINTK && GENERIC_CALIBRATE_DELAY
> 
> 
> -- 
> Masami Hiramatsu
> 
> Software Engineer
> Hitachi Computer Products (America), Inc.
> Software Solutions Division
> 
> e-mail: mhiramat@redhat.com
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 2/7] kprobes: introducing generic insn_slot framework
  2009-06-27 20:09   ` Steven Rostedt
@ 2009-06-29 21:09     ` Masami Hiramatsu
  0 siblings, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-06-29 21:09 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, Ananth N Mavinakayanahalli, lkml, systemtap, DLE,
	Jim Keniston, Srikar Dronamraju, Christoph Hellwig,
	Frederic Weisbecker, H. Peter Anvin, Anders Kaseorg, Tim Abbott

Thank you for reviewing.

Steven Rostedt wrote:
> Hi Masami,
> 
> I'm currently traveling so my responses are very slow this week.
> 
> 
> On Mon, 22 Jun 2009, Masami Hiramatsu wrote:
> 
>> Make insn_slot framework support various size slots.
>> Current insn_slot just supports one-size instruction buffer slot. However,
>> kprobes jump optimization needs larger size buffers.
>>
>> Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
>> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
>> Cc: Ingo Molnar <mingo@elte.hu>
>> Cc: Jim Keniston <jkenisto@us.ibm.com>
>> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
>> Cc: Christoph Hellwig <hch@infradead.org>
>> Cc: Steven Rostedt <rostedt@goodmis.org>
>> Cc: Frederic Weisbecker <fweisbec@gmail.com>
>> Cc: H. Peter Anvin <hpa@zytor.com>
>> Cc: Anders Kaseorg <andersk@ksplice.com>
>> Cc: Tim Abbott <tabbott@ksplice.com>
>> ---
>>
>>  kernel/kprobes.c |   96 +++++++++++++++++++++++++++++++++---------------------
>>  1 files changed, 58 insertions(+), 38 deletions(-)
>>
>> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
>> index 0b68fdc..bc9cfd0 100644
>> --- a/kernel/kprobes.c
>> +++ b/kernel/kprobes.c
>> @@ -100,26 +100,38 @@ static struct kprobe_blackpoint kprobe_blacklist[] = {
>>   * stepping on the instruction on a vmalloced/kmalloced/data page
>>   * is a recipe for disaster
>>   */
>> -#define INSNS_PER_PAGE	(PAGE_SIZE/(MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))
>> -
>>  struct kprobe_insn_page {
>>  	struct list_head list;
>>  	kprobe_opcode_t *insns;		/* Page of instruction slots */
>> -	char slot_used[INSNS_PER_PAGE];
>>  	int nused;
>>  	int ngarbage;
>> +	char slot_used[1];
> 
> I would recommend using [] instead of [1], that would help other 
> developers know that it is a variable array.

Sure.

[...]
>> -	list_for_each_entry(kip, &kprobe_insn_pages, list) {
>> -		if (kip->nused < INSNS_PER_PAGE) {
>> +	list_for_each_entry(kip, &c->pages, list) {
>> +		if (kip->nused < slots_per_page(c)) {
>>  			int i;
>> -			for (i = 0; i < INSNS_PER_PAGE; i++) {
>> +			for (i = 0; i < slots_per_page(c); i++) {
>>  				if (kip->slot_used[i] == SLOT_CLEAN) {
>>  					kip->slot_used[i] = SLOT_USED;
>>  					kip->nused++;
>> -					return kip->insns + (i * MAX_INSN_SIZE);
>> +					return kip->insns + (i * c->insn_size);
>>  				}
>>  			}
>> -			/* Surprise!  No unused slots.  Fix kip->nused. */
>> -			kip->nused = INSNS_PER_PAGE;
>> +			/* kip->nused is broken. */
>> +			BUG();
> 
> Does this deserve a bug, or can we get away with a WARN and find a way to 
> fail nicely? Is it already too late to recover?

No, WARN() is enough here.

> 
>>  		}
>>  	}
>>  
>>  	/* If there are any garbage slots, collect it and try again. */
>> -	if (kprobe_garbage_slots && collect_garbage_slots() == 0) {
>> +	if (c->nr_garbage && collect_garbage_slots(c) == 0)
>>  		goto retry;
>> -	}
>> +
>>  	/* All out of space.  Need to allocate a new page. Use slot 0. */
>> -	kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_KERNEL);
>> +	kip = kmalloc(sizeof(struct kprobe_insn_page) + slots_per_page(c) - 1,
> 
> Why the '- 1'?  Is it because of the char [1] above?
> 
> Would be better to make the size of the kprobe_insn_page a macro:
> 
> #define KPROBE_INSN_SIZE offsetof(struct kbrobe_insn_page, slot_used)
> 
> and then you can do the following:
> 
> 	kip = kmalloc(KPROBE_INSN_SIZE + slots_per_page(c));

Good idea!

Thanks


-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping
  2009-06-27 20:13   ` Steven Rostedt
@ 2009-07-17 20:28     ` Sam Ravnborg
  2009-07-17 21:13       ` Masami Hiramatsu
  0 siblings, 1 reply; 29+ messages in thread
From: Sam Ravnborg @ 2009-07-17 20:28 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Masami Hiramatsu, Ingo Molnar, Ananth N Mavinakayanahalli, lkml,
	systemtap, DLE, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

On Sat, Jun 27, 2009 at 04:13:04PM -0400, Steven Rostedt wrote:
> 
> A change like this requires an ACK from Sam Ravnborg.
> 
> -- Steve

Overdue review...

I do not know the gcc option so I cannot comment on theeffect of it.

> 
> > 
> > diff --git a/Makefile b/Makefile
> > index 2903e13..f73b139 100644
> > --- a/Makefile
> > +++ b/Makefile
> > @@ -524,6 +524,10 @@ else
> >  KBUILD_CFLAGS	+= -O2
> >  endif
> >  
> > +ifdef CONFIG_DISABLE_CROSSJUMP
> > +KBUILD_CFLAGS	+= -fno-crossjumping
> > +endif
> > +
> >  include $(srctree)/arch/$(SRCARCH)/Makefile

Why do we add this option _before_ we include the arch specific Makefile?
If we do not want the arch specific Makefile to undo this then move it lower.

Also please add a comment what the effect of the disabling is.
There is no reason I should fire up info gcc to find out
what -fno-crossjumping is about.


> >  
> >  ifneq ($(CONFIG_FRAME_WARN),0)
> > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > index 8da7467..f88e6b8 100644
> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.debug
> > @@ -673,6 +673,13 @@ config FRAME_POINTER
> >  	  larger and slower, but it gives very useful debugging information
> >  	  in case of kernel bugs. (precise oopses/stacktraces/warnings)
> >  
> > +config DISABLE_CROSSJUMP
> > +	bool "Disable cross-function jump optimization"
> > +	help
> > +	  This build option disables cross-function jump optimization
> > +	  (crossjumping) of gcc. Disabling crossjumping might increase
> > +	  kernel binary size a little.

So the above comment tells me that is I disable crossjumping I will
increase the kernel image a little.
That is by no means a good explanation.

Please explain the potential advantage of disabling crossjumping.

	Sam

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping
  2009-07-17 20:28     ` Sam Ravnborg
@ 2009-07-17 21:13       ` Masami Hiramatsu
  2009-07-17 21:15         ` Masami Hiramatsu
  2009-07-17 21:19         ` Sam Ravnborg
  0 siblings, 2 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-07-17 21:13 UTC (permalink / raw)
  To: Steven Rostedt, Ingo Molnar, Ananth N Mavinakayanahalli, lkml,
	systemtap, DLE, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

Sam Ravnborg wrote:
> On Sat, Jun 27, 2009 at 04:13:04PM -0400, Steven Rostedt wrote:
>> A change like this requires an ACK from Sam Ravnborg.
>>
>> -- Steve
> 
> Overdue review...
> 
> I do not know the gcc option so I cannot comment on theeffect of it.

Thanks for comment. however, as far as I can see, I couldn't find
in the difference between binary with -fno-crossjump and without it.
So I decided to drop this patch...

> 
>>> diff --git a/Makefile b/Makefile
>>> index 2903e13..f73b139 100644
>>> --- a/Makefile
>>> +++ b/Makefile
>>> @@ -524,6 +524,10 @@ else
>>>  KBUILD_CFLAGS	+= -O2
>>>  endif
>>>  
>>> +ifdef CONFIG_DISABLE_CROSSJUMP
>>> +KBUILD_CFLAGS	+= -fno-crossjumping
>>> +endif
>>> +
>>>  include $(srctree)/arch/$(SRCARCH)/Makefile
> 
> Why do we add this option _before_ we include the arch specific Makefile?
> If we do not want the arch specific Makefile to undo this then move it lower.
> 
> Also please add a comment what the effect of the disabling is.
> There is no reason I should fire up info gcc to find out
> what -fno-crossjumping is about.

Sure, I'll write a comment on, next time.

> 
> 
>>>  
>>>  ifneq ($(CONFIG_FRAME_WARN),0)
>>> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
>>> index 8da7467..f88e6b8 100644
>>> --- a/lib/Kconfig.debug
>>> +++ b/lib/Kconfig.debug
>>> @@ -673,6 +673,13 @@ config FRAME_POINTER
>>>  	  larger and slower, but it gives very useful debugging information
>>>  	  in case of kernel bugs. (precise oopses/stacktraces/warnings)
>>>  
>>> +config DISABLE_CROSSJUMP
>>> +	bool "Disable cross-function jump optimization"
>>> +	help
>>> +	  This build option disables cross-function jump optimization
>>> +	  (crossjumping) of gcc. Disabling crossjumping might increase
>>> +	  kernel binary size a little.
> 
> So the above comment tells me that is I disable crossjumping I will
> increase the kernel image a little.
> That is by no means a good explanation.
> 
> Please explain the potential advantage of disabling crossjumping.

Agreed.

Thank you,
-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping
  2009-07-17 21:13       ` Masami Hiramatsu
@ 2009-07-17 21:15         ` Masami Hiramatsu
  2009-07-17 21:19         ` Sam Ravnborg
  1 sibling, 0 replies; 29+ messages in thread
From: Masami Hiramatsu @ 2009-07-17 21:15 UTC (permalink / raw)
  To: Steven Rostedt, Ingo Molnar, Ananth N Mavinakayanahalli, lkml,
	systemtap, DLE, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

Masami Hiramatsu wrote:
> Sam Ravnborg wrote:
>> On Sat, Jun 27, 2009 at 04:13:04PM -0400, Steven Rostedt wrote:
>>> A change like this requires an ACK from Sam Ravnborg.
>>>
>>> -- Steve
>> Overdue review...
>>
>> I do not know the gcc option so I cannot comment on theeffect of it.
> 
> Thanks for comment. however, as far as I can see, I couldn't find

any inter-function jumps populated by crossjump optimization

> in the difference between binary with -fno-crossjump and without it.
> So I decided to drop this patch...

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@redhat.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping
  2009-07-17 21:13       ` Masami Hiramatsu
  2009-07-17 21:15         ` Masami Hiramatsu
@ 2009-07-17 21:19         ` Sam Ravnborg
  1 sibling, 0 replies; 29+ messages in thread
From: Sam Ravnborg @ 2009-07-17 21:19 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Steven Rostedt, Ingo Molnar, Ananth N Mavinakayanahalli, lkml,
	systemtap, DLE, Jim Keniston, Srikar Dronamraju,
	Christoph Hellwig, Frederic Weisbecker, H. Peter Anvin,
	Anders Kaseorg, Tim Abbott

On Fri, Jul 17, 2009 at 05:13:22PM -0400, Masami Hiramatsu wrote:
> Sam Ravnborg wrote:
> > On Sat, Jun 27, 2009 at 04:13:04PM -0400, Steven Rostedt wrote:
> >> A change like this requires an ACK from Sam Ravnborg.
> >>
> >> -- Steve
> > 
> > Overdue review...
> > 
> > I do not know the gcc option so I cannot comment on theeffect of it.
> 
> Thanks for comment. however, as far as I can see, I couldn't find
> in the difference between binary with -fno-crossjump and without it.
> So I decided to drop this patch...

OK, thanks or info.

	Sam

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2009-07-17 21:19 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-06-22 21:22 [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Masami Hiramatsu
2009-06-22 21:23 ` [RFC][ PATCH -tip v2 1/7] kprobes: use list instead of hlist for insn_pages Masami Hiramatsu
2009-06-22 21:23 ` [RFC][ PATCH -tip v2 2/7] kprobes: introducing generic insn_slot framework Masami Hiramatsu
2009-06-27 20:09   ` Steven Rostedt
2009-06-29 21:09     ` Masami Hiramatsu
2009-06-22 21:23 ` [RFC][ PATCH -tip v2 3/7] Kbuild: disable gcc crossjumping Masami Hiramatsu
2009-06-27 20:13   ` Steven Rostedt
2009-07-17 20:28     ` Sam Ravnborg
2009-07-17 21:13       ` Masami Hiramatsu
2009-07-17 21:15         ` Masami Hiramatsu
2009-07-17 21:19         ` Sam Ravnborg
2009-06-22 21:23 ` [RFC][ PATCH -tip v2 4/7] kprobes: kprobes jump optimization core Masami Hiramatsu
2009-06-23 12:58   ` Srikar Dronamraju
2009-06-23 13:11     ` Masami Hiramatsu
2009-06-22 21:23 ` [RFC][ PATCH -tip v2 5/7] kprobes: x86: cleanup save/restore registers Masami Hiramatsu
2009-06-22 21:23 ` [RFC][ PATCH -tip v2 6/7] kprobes: x86: support kprobes jump optimization on x86 Masami Hiramatsu
2009-06-22 21:23 ` [RFC][ PATCH -tip v2 7/7] kprobes: add documents of jump optimization Masami Hiramatsu
2009-06-23 11:42 ` [RFC][ PATCH -tip v2 0/7] kprobes: Kprobes jump optimization support Ingo Molnar
2009-06-23 14:06   ` Masami Hiramatsu
2009-06-23 19:40     ` Ingo Molnar
2009-06-23 21:54       ` Masami Hiramatsu
2009-06-23 22:08         ` Masami Hiramatsu
2009-06-23 12:09 ` Andi Kleen
2009-06-23 13:50   ` Masami Hiramatsu
2009-06-23 16:34     ` Andi Kleen
2009-06-23 17:28       ` Masami Hiramatsu
2009-06-23 20:37         ` Andi Kleen
2009-06-23 20:48           ` Masami Hiramatsu
2009-06-26 23:20           ` Masami Hiramatsu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).